Eric03 committed on
Commit 65c50fc · verified · 1 Parent(s): 4338e65

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. 2012.12631/main_diagram/main_diagram.drawio +1 -0
  2. 2012.12631/main_diagram/main_diagram.pdf +0 -0
  3. 2012.12631/paper_text/intro_method.md +17 -0
  4. 2106.11485/main_diagram/main_diagram.drawio +0 -0
  5. 2106.11485/paper_text/intro_method.md +49 -0
  6. 2106.11539/paper_text/intro_method.md +99 -0
  7. 2110.05357/main_diagram/main_diagram.drawio +1 -0
  8. 2110.05357/main_diagram/main_diagram.pdf +0 -0
  9. 2110.05357/paper_text/intro_method.md +25 -0
  10. 2110.06848/main_diagram/main_diagram.drawio +1 -0
  11. 2110.06848/main_diagram/main_diagram.pdf +0 -0
  12. 2110.06848/paper_text/intro_method.md +19 -0
  13. 2202.06242/main_diagram/main_diagram.drawio +1 -0
  14. 2202.06242/main_diagram/main_diagram.pdf +0 -0
  15. 2202.06242/paper_text/intro_method.md +98 -0
  16. 2208.10531/main_diagram/main_diagram.drawio +0 -0
  17. 2208.10531/paper_text/intro_method.md +109 -0
  18. 2212.06301/main_diagram/main_diagram.drawio +0 -0
  19. 2212.06301/paper_text/intro_method.md +88 -0
  20. 2212.08094/main_diagram/main_diagram.drawio +0 -0
  21. 2212.08094/paper_text/intro_method.md +71 -0
  22. 2304.11436/main_diagram/main_diagram.drawio +1 -0
  23. 2304.11436/paper_text/intro_method.md +150 -0
  24. 2305.05560/main_diagram/main_diagram.drawio +1 -0
  25. 2305.05560/main_diagram/main_diagram.pdf +0 -0
  26. 2305.05560/paper_text/intro_method.md +290 -0
  27. 2306.00196/main_diagram/main_diagram.drawio +1 -0
  28. 2306.00196/main_diagram/main_diagram.pdf +0 -0
  29. 2306.00196/paper_text/intro_method.md +56 -0
  30. 2306.01364/main_diagram/main_diagram.drawio +0 -0
  31. 2306.01364/paper_text/intro_method.md +95 -0
  32. 2306.02913/main_diagram/main_diagram.drawio +1 -0
  33. 2306.02913/paper_text/intro_method.md +228 -0
  34. 2306.14672/main_diagram/main_diagram.drawio +1 -0
  35. 2306.14672/main_diagram/main_diagram.pdf +0 -0
  36. 2306.14672/paper_text/intro_method.md +175 -0
  37. 2309.16585/main_diagram/main_diagram.drawio +1 -0
  38. 2309.16585/main_diagram/main_diagram.pdf +0 -0
  39. 2309.16585/paper_text/intro_method.md +38 -0
  40. 2310.15517/main_diagram/main_diagram.drawio +0 -0
  41. 2310.15517/paper_text/intro_method.md +85 -0
  42. 2310.19180/main_diagram/main_diagram.drawio +130 -0
  43. 2310.19180/main_diagram/main_diagram.pdf +0 -0
  44. 2310.19180/paper_text/intro_method.md +105 -0
  45. 2312.11792/main_diagram/main_diagram.drawio +336 -0
  46. 2312.11792/paper_text/intro_method.md +99 -0
  47. 2401.09192/main_diagram/main_diagram.drawio +116 -0
  48. 2401.09192/main_diagram/main_diagram.pdf +0 -0
  49. 2401.09192/paper_text/intro_method.md +113 -0
  50. 2402.14606/main_diagram/main_diagram.drawio +0 -0
2012.12631/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile modified="2020-06-23T17:38:41.943Z" host="app.diagrams.net" agent="5.0 (X11)" etag="5s2JvfiLF-JOUIsW82gv" version="13.3.0" type="dropbox" pages="13"> [compressed <diagram> payloads omitted; page names visible in this view: "Pnn", "simple PNN", "PSSN-2cols", "PSSN-3cols", "PSSN-3cols - sum", "RSNET PSSN-3cols - sum", "PSSN-3cols - forward only", "Overall_flow", "Detailed_flow", "SS Gen", "Arch search", "Page-10"; remaining payload data truncated in this view] </mxfile>
0DsLI5jgFg8WwUK3EWlKRRcuTXP2eYAMVzLt/2o7nAQOySOXkOsSZqOSCUIyHj6UyHvIGYOOYvfEAtUinVa3IbxCWKHW7N+QwwIQjIjc2udNolw7TfahWdT2cZ6Wq67YX1a3umkHvjh85NiV87OLLux/KT267Gt+eXIjs42oXResMKv9SSjLP7v9mWUzpsEm843mcMdf68vIw5GfOOXaocb/H5jrEDMVg+fUczDjNPhBrbnGHO0Qolw6SGPj1zgnwBjjUopJ3GX45NB3Pbyn2qKyk+vKIxWnzB2uEHqOcYSLf2VBZ39w7jtHgs1GNHSTkzlHcS87R4LNRxRzDQlMkvg/KMxP3IBcgKMNSICBENSvsDaP4jbXoAwTFG+M5RqTIV/Fzze9vqDYYnKT788pRhDc/V49OxW5+oh8+zmPw==</diagram><diagram name="Copy of Page-10" id="Jahk3Z4OaUyisuWzoDz-">7Vxdc6M2FP01eVyNvj8eu9mknWk702mm092nDrGJzdYxHkwSu7++F4NswHiRHWyEd3mJJbCIj8493HsQ3LDb59XPSbCY/h6Pw9kNxePVDft0QynhRMOfrGed9yhJ8o5JEo2Lg3YdD9F/YdGJi96XaBwuKwemcTxLo0W1cxTP5+EorfQFSRK/VQ97imfVsy6CSbjX8TAKZvu9f0fjdJr3aoF3/b+E0WRqz0xwsec5sAcXHctpMI7fSl3s7obdJnGc5p+eV7fhLAPP4vIXnXzGqz8nbww//vP112kyfVh+yAe7P+Yr25+QhPO026FpPvRrMHsp8Cp+a7q2AE6S+GXh+B8U/+lrmKThqml6g0c77A5BoF4YP4dpsobj7LdkAXrBOtt8200hNwxJlndPSzPILPOCgjmT7eg7dOBDAdARYLF2sGAUIDY0Pr5NozR8WASjbM8bxBb0TdNnOOEnAh+D5SJn+1O0CuGkH5M4DdIonkOXwdB2RJweQPwwssogqQmjFXwpdBKlGWeGG8y53IObGKS4oPt4V3Z0jjl3wHw+/ikTCmiNZsFyGY2qUAOAyfozNDAStvmlvO9TBg3ettZF6ymazW7jWZxsTsOenkI5yoZepkn8b1jak23399n3V1FaOhG0vtiR4fPuNFnDnuW4eV7GL8ko/FYEWo0Nkkn4rQELMofjinzu06ZEAauZ5dm3fUk4A/q+VkW3iQ3FGf6II/i9W1Z+IEwjiHpeYaUwiDIhpVbaaKxNjWQ5GMVIZeWrDc4MykhdGZpohYQyO9JXh87h2xt6Q+EtOKezWlyDkkAqgDijehhCIockJMdNWWusC69inRiFtvO8roQoF1oYraisDeoa6Y0qQrRAEgtshOCEY2IZe6FYV9cQ60wJRIQeSKzr7zjWlVexTo1EQJFzxDplEhlOObZbVVH6vcCbawh6rhjSmustwlVd9Tb8bQHYR/yfUgKcWzNaiwblqC3GK20hQiGIb1khZVclA4OcQWqfagZb2Q1bU5RG2bwMREjoNUDe7PgcILMHoLvYbH1nb2NlHvGpet+xelt/ulW+LZ090e9mzwfoiqBCBH6BLlBjmDxRwRtMH8YFKmpAxbQW9bHPreAuZqb3ctJo+/irJi5WmzdqctyctQc89yrgm4yfos/GOsPqtHBv1BKgIirlgqQ29rnD3cVy9D7cG50ff8PdxW272nCXXoV7k/fTVbi3mD99X+hdDEjvI7/F/vFXA1zMtyHaPyfqRvs9Y+kqMNorgWk0gDqrH5ocoJ51hboYm97rSpMF5K2Y0Ktw3Q4s+pES0uPSnVHuDerUAfW+0ziflv1Q4yjhls+eSPgBDyjPHOuGcBcWEOSMqJx/XljAr2IJYfPCH3/FZFBrCI+btPZ492udX5MFlNOJEUowMQRrdlq0NyoJpxQSNqUUVhhLJk1t8HPH+1Us9Gte/ONvvH/PS/2oX2v9mjygjuK9xQLq+UJ/Fav+2hYA+SsBPa4AHOQKICsb7fri1
/rCZguoo/KhyQHqWVauYl1h4xogb7WEdbqa8DJOBOnLh7BLTAZXlxCR+Y9CQz6hIKtgqspOiHpQFM6p4pwYfaqkUEwRI5pARm0oFkqJ6mm0ybxrJkC6IAiEuKi4MBer8wfRq0srhpaQE7ioMUjsDhGdM2Qgs5ZCKk4YVzUKOhMd6m6ot9mW6NUrNMM6y92FgpKcY00u+0SOtcF/EL3Lm4PeZYYCGGjMluj1fINlyR1sEsMB5OSFCJwgyGXEAUVnVCOplTKQL0JSo8ll/SfG9okN0/NQNOMkncaTeB7M7na9kAa+zMdZQrih0+6Y3+J4UTDxa5im6+JdD8FLGndTOY2D5XRzXtI9jZmrXhPXGyTO/Hxf9ulgXlcnrCXrr0nQ7WY7R7JfCnODsue89kO9KAQMUcUd+3OB6GDDvgfE+812DhDtS1egyt2Um0LBxVvzgm0OEMPl/FsP23UHsYPp6jXEmmQZiVSCgJBnTxD5B7GDp+g3xAKVYp3aezE+QezgG/oNMSAI6Yws/HXqH8KmBdC+8o16aq5HYXNqnjvhZ7hzg7tOPJoz1g0DhLEbreSrkH6jjQW3uVnKqZvP0VW6yvEeOcrp6jyeDzU/bZ9912drzzr7DCMmsWGME02F69Oznb0Vx2F1zCXV99DbzA4vs6u9tYwI5vsVT5y5vugBc+X5JVCcuRo5tqR7N+TU6rZHJZ/wrB55N8iM1GpA/3jtWX3yfshprSYU/mHuWcHSAebeF4nC4Sb8wDD3vWqUDvfghwU58z4ztANfEeb9ZYbQ3L0oOq+Xdq/bZnf/Aw==</diagram></mxfile>
2012.12631/main_diagram/main_diagram.pdf ADDED
Binary file (6.72 kB). View file
 
2012.12631/paper_text/intro_method.md ADDED
@@ -0,0 +1,17 @@
1
+ # Method
2
+
3
+ We provide a more complete description of the two learning algorithms for MNTDP-S and MNTDP-D. In the deterministic case, all the architectures (given how we restrict the search space, we have $7$ of them) are trained over the training set, and the best path is retained based on its score on the validation set. To avoid overfitting when using MNTDP-S, we split the training set into two halves $\mathcal{D}^{t}_1$ and $\mathcal{D}^{t}_2$. The first part is used to update the module parameters $\theta$, while the second is used to update the parameters of the distribution over paths, $\Gamma$. If both sets of parameters were trained on the same dataset, $\Gamma$ would favor paths prone to overfitting, since they would result in a large decrease of its training loss and therefore a larger reward. When they are trained on different sets, $\Gamma$ has to select paths with a reasonable number of free parameters, allowing $\theta$ to learn and generalize well enough to decrease the loss on both training sets. Then, the most promising architecture is selected based on $\arg\max \Gamma$, and fine-tuned over the whole $\mathcal{D}^t$. In this case, the validation set is used only for hyper-parameter selection.
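The two-split training of MNTDP-S can be sketched with a toy stand-in (hypothetical feature-subset "paths" in place of module compositions; the `masks`, learning rates, and REINFORCE-style baseline are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: theta is updated on D1 while the path logits Gamma are
# updated on D2, so Gamma can only reward paths whose modules generalize.
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 1.5, 0.8, -1.2])
y = X @ w_true
X1, y1, X2, y2 = X[:100], y[:100], X[100:], y[100:]      # D1 / D2 halves

masks = [np.arange(2), np.arange(4), np.arange(5)]       # candidate "paths"
theta = [np.zeros(5) for _ in masks]                     # module parameters
gamma = np.zeros(len(masks))                             # path logits
baseline = 0.0

def mse(w, m, X, y):
    return float(np.mean((X[:, m] @ w[m] - y) ** 2))

# theta phase: each candidate path fits its modules on D1
for _ in range(200):
    for k, m in enumerate(masks):
        grad = 2 * X1[:, m].T @ (X1[:, m] @ theta[k][m] - y1) / len(y1)
        theta[k][m] -= 0.05 * grad

# Gamma phase: REINFORCE-style updates, the D2 loss acting as negative reward
for _ in range(200):
    probs = np.exp(gamma - gamma.max())
    probs /= probs.sum()
    k = int(rng.choice(len(masks), p=probs))
    reward = -mse(theta[k], masks[k], X2, y2)
    gamma[k] += 0.5 * (reward - baseline)                # advantage vs. baseline
    baseline = 0.9 * baseline + 0.1 * reward
```

Because the reward is measured on held-out $\mathcal{D}^t_2$, the logits end up favoring the path that generalizes rather than the one with the lowest training loss.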
4
+
5
+ The model used on Permuted-MNIST is presented in table [\[tab:fc-arch-details\]](#tab:fc-arch-details){reference-type="ref" reference="tab:fc-arch-details"}. On all other tasks, the base architecture for all baselines and MNTDP is the same CNN, a ResNet convolutional neural network as reported in Table [\[tab:resnet-arch-details\]](#tab:resnet-arch-details){reference-type="ref" reference="tab:resnet-arch-details"} or AlexNet as reported in Table [\[tab:alexnet-arch-details\]](#tab:alexnet-arch-details){reference-type="ref" reference="tab:alexnet-arch-details"}. The table also shows how layers are grouped into modules for MNTDP and PNN.
6
+
7
+ As presented in section [5.2](#sec:exp_modeling){reference-type="ref" reference="sec:exp_modeling"}, the stream can be visited only once, preventing stream-level hyper-parameter tuning. Exceptions are made for HAT [@HAT], because we have been using the authors' implementation, and for EWC and Online-EWC, since these approaches fail in the proposed setting. The constraint strength hyper-parameter $\lambda$ must be tuned at the stream level, since a task-level tuning of $\lambda$ results in little or no constraint at all, leading to severe catastrophic forgetting. The stream-level hyper-parameter optimization considers 9 values for $\lambda$ {1, 5, 10, 50, 100, 500, $10^3$, $5\times10^3$, $10^4$}. Note that this gives an unfair advantage to EWC, Online-EWC and HAT, as all other methods including MNTDP use task-level cross-validation as described in §[5.2](#sec:exp_modeling){reference-type="ref" reference="sec:exp_modeling"}.
8
+
9
+ For all methods and experiments, we use the Adam optimizer [@DBLP:journals/corr/KingmaB14] with $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$.
10
+
11
+ For each task and each baseline, two learning rates {$10^{-2}$, $10^{-3}$} and 3 weight decay strengths {$0$, $10^{-5}$, $10^{-4}$} are considered. Early stopping is performed on each task to identify the best step to stop training. When the current task validation accuracy stops increasing for 300 iterations, we restore the learner to its state after the best iteration and stop training on the current task.
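The early-stopping rule above can be sketched as follows (the quadratic `validation_accuracy` curve is a toy stand-in for a real train-and-evaluate step):

```python
# Early-stopping sketch: keep the best-validating iteration, and stop once
# accuracy has not improved for `patience` = 300 iterations, as above.
patience = 300
best_acc, best_iter, since_best = -1.0, None, 0

def validation_accuracy(it):
    # toy curve that peaks at iteration 50, then degrades (overfitting)
    return 1.0 - (it - 50) ** 2 / 1e6

for it in range(10_000):
    acc = validation_accuracy(it)      # one training step + eval would go here
    if acc > best_acc:
        best_acc, best_iter, since_best = acc, it, 0
    else:
        since_best += 1
        if since_best >= patience:
            break                      # restore the state saved at best_iter
```

The learner is then restored to its state after `best_iter` before moving on to the next task.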
12
+
13
+ For MNTDP-S, we consider two additional learning rates for the $\Gamma$ optimization {$10^{-2}$, $10^{-3}$}. An entropy regularization term on $\Gamma$ is added to the loss to encourage exploration, preventing an early convergence towards a sub-optimal path. The weight for this regularization term is set to 1 throughout our experiments.
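The entropy term can be sketched as follows (a minimal sketch; the softmax parameterization of the path distribution is an assumption):

```python
import numpy as np

# Entropy bonus on the path distribution: adding H(p) to the objective
# (weight 1, as above) keeps the softmax over Gamma from collapsing onto
# a single path too early in training.
def entropy_bonus(gamma):
    p = np.exp(gamma - gamma.max())
    p /= p.sum()
    return -float(np.sum(p * np.log(p + 1e-12)))

uniform = entropy_bonus(np.zeros(7))                       # maximal over 7 paths
peaked = entropy_bonus(np.array([10.0, 0, 0, 0, 0, 0, 0]))  # nearly collapsed
```

A uniform distribution over the 7 candidate paths attains the maximal bonus $\log 7$, while a peaked distribution is penalized.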
14
+
15
+ Finally, since small tasks in $\mathcal{S}^{\mbox{\tiny{long}}}$ have very few examples in the validation sets, we use test-time augmentation to prevent overfitting during the grid search. For each validation sample, we add four augmented copies following the same data augmentation procedure used during training.
16
+
17
+ In the main paper we report the overall memory consumption by the end of training. However, modular architectures like MNTDP use only a sparse subset of modules for a given task. In this section, we report the memory complexity both at training and test time. Table [\[tab:mem_complex\]](#tab:mem_complex){reference-type="ref" reference="tab:mem_complex"} shows that all methods but DEN and RCL are as fast to evaluate at test time as an independent network because they all use a single path.
2106.11485/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2106.11485/paper_text/intro_method.md ADDED
@@ -0,0 +1,49 @@
1
+ # Introduction
2
+
3
+ Recent advancements in satellite technology have enabled granular insight into the evolution of human activity on the planet's surface. Multiple satellite sensors now collect imagery with spatial resolution less than 1m, and this high-resolution (HR) imagery can provide sufficient information for various fine-grained tasks such as post-disaster building damage estimation, poverty prediction, and crop phenotyping [18, 3, 45]. Unfortunately, HR imagery is captured infrequently over much of the planet's surface (once a year or less), especially in developing countries where it is arguably most needed, and was historically captured even more rarely (once or twice a decade) [8]. Even when available, HR imagery is prohibitively expensive to purchase in large quantities. These limitations often result in an inability to scale promising HR algorithms and apply them to questions of broad social importance. Meanwhile, multiple sources of publicly-available satellite imagery now provide sub-weekly coverage at global scale, albeit at lower spatial resolution (e.g. 10m resolution for Sentinel-2). Unfortunately, such coarse spatial resolution renders small objects like residential buildings, swimming pools, and cars unrecognizable.
4
+
5
+ ![](_page_0_Figure_9.jpeg)
6
+
7
+ Figure 1: Given a 10m low resolution (LR) image from 2016 and a 1m high resolution (HR) image from 2018, we generate a photo-realistic and accurate HR image for 2016.
8
+
9
+ In the last few years, thanks to advances in deep learning and generative models, we have seen great progress in image processing tasks such as image colorization [48], denoising [7, 38], inpainting [38, 30], and super-resolution [13, 24, 19]. Furthermore, pixel synthesis models such as neural radiance field (NeRF) [29] have demonstrated great potential for generating realistic and accurate scenes from different viewpoints. Motivated by these successes and the need for high-resolution images, we ask whether it is possible to synthesize high-resolution satellite images using deep generative models. For a given time and location, can we generate a high-resolution image by interpolating the available low-resolution and high-resolution images collected over time?
10
+
11
+ To address this question, we propose a conditional pixel synthesis model that leverages the fine-grained spatial information in HR images and the abundant temporal availability of LR images to create the desired synthetic HR images of the target location and time. Inspired by the recent development of pixel synthesis models pioneered by the NeRF model [29, 44, 2], each pixel in the output images is generated conditionally independently by a perceptron-based generator given the encoded input image features associated with the pixel, the positional embedding of its spatial-temporal coordinates, and a random vector. Instead of learning to adapt to different viewing directions in a single 3D scene [29], our model learns to interpolate across the time dimension for different geo-locations with the two multi-resolution satellite image time series.
12
+
13
+ To demonstrate the effectiveness of our model, we collect a large-scale paired satellite image dataset of residential neighborhoods in Texas using high-resolution NAIP (National Agriculture Imagery Program, 1m GSD) and low-resolution Sentinel-2 (10m GSD) imagery. This dataset consists of scenes in which housing construction occurred between 2014 and 2017 in major metropolitan areas of Texas, with construction verified using CoreLogic tax and deed data. These scenes thus provide a rapidly changing environment on which to assess model performance. As a separate test, we also pair HR images (0.3m to 1m GSD) from the Functional Map of the World (fMoW) dataset [10] crop field category with images from Sentinel-2.
14
+
15
+ To evaluate our model's performance, we compare to state-of-the-art methods, including super-resolution models. Our model outperforms all competing models in sample quality on both datasets as measured by both standard image quality assessment metrics and human perception (see example in Figure 1). Our model also achieves Pearson's $r^2$ of 0.92 and 0.62 in reconstructing the correct numbers of buildings and swimming pools respectively in the images, outperforming other models in these tasks. Results suggest our model's potential to scale to downstream tasks that use these object counts as input, including societally-important tasks such as population measurement, poverty prediction, and humanitarian assessment [8, 3].
16
+
17
+ # Method
18
+
19
+ Given $I_{lr}^{(t)}$ and $I_{hr}^{(t')}$ of the target location and target time t, our method generates $\hat{I}_{hr}^{(t)} \in \mathbf{R}^{C \times H \times W}$ with a four-module conditional pixel synthesis model. Figure 2 is an illustration of our framework.
20
+
21
+ The generator G of our model consists of three parts: image feature mapper $F: \mathbf{R}^{C \times H \times W} \to \mathbf{R}^{C_{fea} \times H \times W}$ , positional encoder E, and the pixel synthesizer $G_p$ . For each spatial coordinate (x, y) of the target HR image, the image feature mapper extracts the neighborhood information around
22
+
23
+ ![](_page_3_Figure_0.jpeg)
24
+
25
+ Figure 2: An illustration of our proposed framework (discriminator omitted). The input images are processed by the image feature mapper F to obtain $I_{fea}^{(t)}$ . Then with its spatial-temporal coordinate (x,y,t) encoded by E, each pixel is synthesized conditionally independently given the image feature associated with its spatial coordinate $I_{fea}^{(t)}(x,y)$ and a random vector z.
26
+
27
+ $(x,y) \in \{0,1,...,H\} \times \{0,1,...,W\}$ from $I_{lr}^{(t)}$ and $I_{hr}^{(t')}$ , as well as the global information associated with the coordinate in the two input images. The positional encoder learns a representation of the spatial-temporal coordinate (x,y,t), where t is the temporal coordinate of the target image. The pixel synthesizer then uses the information obtained from the image feature mapper and the positional encoding to predict the pixel value at each coordinate. Finally, we incorporate an adversarial loss in our training, and thus include a discriminator D as the final component of our model.
28
+
29
+ **Image Feature Mapper** Before extracting features, we first perform nearest-neighbor resampling of the LR image $I_{lr}^{(t)}$ to match the dimensionality of the HR image and concatenate $I_{lr}^{(t)}$ and $I_{hr}^{(t')}$ along the spectral bands to form the input $I_{cat}^{(t)} = \operatorname{concat}[I_{lr}^{(t)}, I_{hr}^{(t')}] \in \mathbf{R}^{2C \times H \times W}$. Then the mapper processes $I_{cat}^{(t)}$ with a neighborhood encoder $F_E : \mathbf{R}^{2C \times H \times W} \to \mathbf{R}^{C_{fea} \times H' \times W'}$, a global encoder $F_A : \mathbf{R}^{C_{fea} \times H' \times W'} \to \mathbf{R}^{C_{fea} \times H' \times W'}$, and a decoder $F_D : \mathbf{R}^{C_{fea} \times H' \times W'} \to \mathbf{R}^{C_{fea} \times H \times W}$. The neighborhood encoder and decoder learn the fine structural features of the images, and the global encoder learns the overall inter-pixel relationships as it observes the entire image.
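A minimal sketch of this input preparation (toy channel and size constants; `nearest_upsample` is an illustrative helper, not the paper's code):

```python
import numpy as np

# Nearest-neighbor upsampling of the LR image to the HR grid, followed by
# channel concatenation into I_cat of shape (2C, H, W).
C, H, W, scale = 3, 40, 40, 10
I_lr = np.arange(C * (H // scale) * (W // scale), dtype=float).reshape(
    C, H // scale, W // scale)
I_hr = np.zeros((C, H, W))

def nearest_upsample(img, s):
    # repeat each pixel s times along both spatial axes
    return img.repeat(s, axis=1).repeat(s, axis=2)

I_cat = np.concatenate([nearest_upsample(I_lr, scale), I_hr], axis=0)
```

Each LR pixel is simply replicated over a `scale x scale` block, so the two inputs share one coordinate grid before encoding.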
30
+
31
+ $F_E$ uses sliding window filters to map a small neighborhood of each coordinate into a value stored in the neighborhood feature map $I_{ne}^{(t)} \in \mathbf{R}^{C_{fea} \times H' \times W'}$ and $F_D$ uses another set of filters to transform the global feature map $I_{gl}^{(t)} \in \mathbf{R}^{C_{fea} \times H' \times W'}$ back to the original coordinate grid. $F_A$ is a self-attention module that takes $I_{ne}^{(t)}$ as the input and learns functions $Q, K, V : \mathbf{R}^{C_{fea} \times H' \times W'} \to \mathbf{R}^{C_{fea} \times H' \times W'}$ and a scalar parameter $\gamma$ to map $I_{ne}^{(t)}$ to $I_{gl}^{(t)}$.
32
+
33
+ The image feature mapper $F = F_E \circ F_A \circ F_D$ and we denote $I_{fea}^{(t)} = F(I_{cat}^{(t)})$ and the image feature associated with coordinate (x,y) as $I_{fea}^{(t)}(x,y) \in \mathbf{R}^{C_{fea}}$ . Details are available in Appendix A.
34
+
35
+ **Positional Encoder** Following [2], we include both the Fourier feature and the spatial coordinate embedding in the positional encoder E. The Fourier feature is calculated as $e_{f_o}(x,y,t)=\sin(B_{f_o}(\frac{2x}{H-1}-1,\frac{2y}{W-1}-1,\frac{t}{u}))$ where $B_{f_o}\in\mathbf{R}^{3\times C_{fea}}$ is a learnable matrix and u is the time unit. This encoding of t allows our model to handle time series of various lengths and to extrapolate to time steps that are not seen at training time. E also learns a $C_{fea}\times H\times W$ matrix $e_{co}$, and the spatial coordinate embedding for (x,y,t) is extracted from the vector at (x,y) in $e_{co}$. The positional encoding of (x,y,t) is the channel concatenation of $e_{f_o}(x,y,t)$ and $e_{co}(x,y,t)$: $E(x,y,t)=\mathrm{concat}[e_{f_o}(x,y,t),e_{co}(x,y,t)]\in\mathbf{R}^{2C_{fea}}$.
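The Fourier feature can be sketched as follows (toy sizes; we store $B_{f_o}$ as a $(C_{fea}, 3)$ matrix so the product is a plain matrix-vector multiply, and the time unit `u` is an assumed constant):

```python
import numpy as np

# e_fo(x, y, t) = sin(B_fo . coords), with spatial coordinates normalized
# to [-1, 1] and the temporal coordinate scaled by the time unit u.
rng = np.random.default_rng(0)
C_fea, H, W, u = 16, 32, 32, 12.0        # toy sizes; u = time unit (assumed)
B_fo = rng.normal(size=(C_fea, 3))       # learnable in the real model

def fourier_feature(x, y, t):
    coords = np.array([2 * x / (H - 1) - 1, 2 * y / (W - 1) - 1, t / u])
    return np.sin(B_fo @ coords)

e = fourier_feature(5, 7, 24.0)          # one C_fea-dim feature per coordinate
```

Because `t / u` is continuous, the same encoder handles time steps never seen during training.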
36
+
37
+ **Pixel Synthesizer** The pixel synthesizer $G_p$ can be viewed as simulating a conditional 2+1D neural radiance field with a fixed viewing direction and camera ray using a perceptron-based
38
+
39
+ model. Instead of learning the breadth representation of the location, $G_p$ learns to scale in the time dimension on a fixed spatial coordinate grid. Each pixel is synthesized conditionally independently given $I_{fea}^{(t)}$, E(x,y,t), and a random vector $z \in \mathbf{R}^Z$. $G_p$ first learns a function $g_z$ to map E(x,y,t) to $\mathbf{R}^{C_{fea}}$, then obtains the input to the fully-connected layers $e(x,y,t) = g_z(E(x,y,t)) + I_{fea}^{(t)}(x,y)$. Following [2, 22], we use an m-layer perceptron-based mapping network M to map the noise vector z into a style vector w, and use n modulated fully-connected layers (ModFC) to inject the style vector into the generation, maintaining style consistency among the different pixels of the same image. We map the intermediate features to the output space every two layers and accumulate the output values as the final pixel output.
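A minimal sketch of the per-pixel style modulation (the one-layer `mapping_network` and the ReLU nonlinearity are illustrative simplifications of the m-layer network M and the ModFC stack):

```python
import numpy as np

# ModFC sketch: the style vector w scales the layer input before a plain
# linear map, so every pixel of one image shares a style while still being
# synthesized independently of the other pixels.
rng = np.random.default_rng(1)
d_in, d_out, Z = 8, 8, 4
W_fc = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)

def mapping_network(z):                  # toy stand-in for the m-layer MLP M
    return np.tanh(z).repeat(d_in // Z)  # broadcast style to d_in channels

def modfc(feature, style):
    return np.maximum(W_fc @ (feature * style), 0.0)  # modulate, then FC + ReLU

z = rng.normal(size=Z)
style = mapping_network(z)
pixel_feature = rng.normal(size=d_in)    # e(x, y, t) for a single pixel
out = modfc(pixel_feature, style)
```

Applying the same `style` to every pixel of an image is what keeps the synthesized pixels stylistically consistent.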
40
+
41
+ With all components combined, the generated pixel value at (x, y, t) can be calculated as
42
+
43
+ $$\hat{I}_{hr}^{(t)}(x,y) = G(x,y,t,z|I_{lr}^{(t)},I_{hr}^{(t')}) = G_p(E(x,y,t),F(I_{cat}^{(t)}),z)$$
44
+
45
+ **Loss Function** The generator is trained with the combination of the conditional GAN loss and $L_1$ loss. The objective function is
46
+
47
+ $$G^* = \arg\min_{G} \max_{D} \mathcal{L}_{cGAN}(G, D) + \lambda \mathcal{L}_{L_1}(G)$$
48
+
49
+ $\mathcal{L}_{cGAN}(G,D) = \mathbb{E}[\log D(I_{hr}^{(t)},X,I_{lr}^{(t)},I_{hr}^{(t')})] + \mathbb{E}[1 - \log D(G(X,z|I_{lr}^{(t)},I_{hr}^{(t')}),X,I_{lr}^{(t)},I_{hr}^{(t')})]$ where X is the temporal-spatial coordinate grid $\{(x,y,t)|0 \leq x \leq H, 0 \leq y \leq W\}$ for $I_{hr}^{(t)}$ . $\mathcal{L}_{L_1}(G) = \mathbb{E}[||I_{hr}^{(t)} - G(X,z|I_{lr}^{(t)},I_{hr}^{(t')})||_1].$
2106.11539/paper_text/intro_method.md ADDED
@@ -0,0 +1,99 @@
1
+ # Introduction
2
+
3
+ The task of Visual Document Understanding (VDU) aims at understanding digital documents, whether born as PDFs or as images. VDU covers varied document-related tasks such as entity grouping, sequence labeling, and document classification. While modern OCR engines [34] have become good at predicting text from documents, VDU often requires understanding both the structure and layout of documents. The use of text, or even text and spatial features alone, is not sufficient for this purpose. For the best results, one needs to exploit the text, the spatial features, and the image. One way to exploit all these features is using transformer models [5, 15, 53]. Transformers have recently been used for VDU [26, 56, 57]. These models differ in how the unsupervised pre-training is done, the way self-attention is modified for the VDU domain, or how they fuse modalities (text and/or image and spatial). There have been text-only [15] and text-plus-spatial-features-only [26, 56] approaches for VDU. However, the holy grail is to fuse all three modalities (text,
4
+
5
+ ![](_page_0_Figure_16.jpeg)
6
+
7
+ Figure 1: **Snippet of a Document**: Various VDU tasks on this document may include labeling each text token into fixed classes or grouping tokens into a semantic class and finding relationships between tokens e.g. ("DATE PREPARED" → Key and "1/29/74" → Value) or classifying the document into different categories. Note a document could have "other" text e.g. "C-5" which the model should ignore or classify as "other" depending on the task.
8
+
9
+ visual and spatial features). This is desirable since there is some information in text that visual features miss out (language semantics), and there is some information in visual features that text misses out (text font and visual layout for example).
10
+
11
+ Multi-modal training in general is difficult since one has to map a piece of text to an arbitrary span of visual content. For example in Figure 1, "ITEM 1" needs to be mapped to the visual region. Said a different way, text describes semantic high-level concept(s) e.g. the word "person" whereas visual features map to the pixels (of a person) in the image. It is not easy to enforce feature correlation across modalities from text $\leftrightarrow$ image. We term this issue as *cross-modality feature correlation* and reference it later to show how DocFormer presents an approach to address this.
12
+
13
+ DocFormer follows the now-common pre-training and fine-tuning strategy. DocFormer incorporates a novel multi-modal self-attention with shared spatial embeddings in an encoder-only transformer architecture. In addition, we propose three pre-training tasks, of which two are novel unsupervised multi-modal tasks: *learning-to-reconstruct* and *multi-modal masked language modeling*. Details are provided in Section 3. To the best of our knowledge, this is the first approach to VDU that does not use bulky pre-trained object-detection networks for visual feature extraction. DocFormer instead uses plain ResNet50 [22] features along with shared spatial (between text and image) embeddings, which not only saves memory but also makes it easy for DocFormer to correlate text and visual features via spatial features. DocFormer is trained end-to-end with the visual branch trained from scratch. We now highlight the contributions of our paper:
14
+
15
+ - A novel multi-modal attention layer capable of fusing text, vision and spatial features in a document.
16
+ - Three unsupervised pre-training tasks which encourage multi-modal feature collaboration. Two of these are novel unsupervised multi-modal tasks: a *learning-to-reconstruct* task and a *multi-modal masked language modeling* task.
17
+ - DocFormer is end-to-end trainable and does not rely on a pre-trained object detection network for visual features, simplifying its architecture. On four varied downstream VDU tasks, DocFormer achieves state-of-the-art results. On some tasks it outperforms large variants of other transformers almost 4x its size (in the number of parameters). In addition, DocFormer does not use custom OCR, unlike some of the recent papers [57, 26].
18
+
19
+ Document understanding methods in the literature have used various combinations of image, spatial and text features in order to understand and extract information from structurally rich documents such as forms [19, 59, 13], tables [46, 58, 25], receipts [28, 27] and invoices [36, 44, 39]. Finding the optimal way to combine these multi-modal features is an active area of research.
20
+
21
+ Grid based methods [30, 14] were proposed for invoice images where text pixels are encoded using character or word vector representations and classified into field types such as Invoice Number, Date, Vendor Name and Address etc. using a convolutional neural network.
22
+
23
+ BERT [15] is a transformer-encoder [53] based neural network that has been shown to work well on language understanding tasks. LayoutLM [56] modified the BERT architecture by adding 2D spatial coordinate embeddings along with 1D position and text token embeddings. They also added visual features for each word token, obtained using a Faster-RCNN and its bounding box coordinates.
24
+
25
+ ![](_page_1_Figure_8.jpeg)
26
+
27
+ d) **Ours (DocFormer)**: Discrete Multi-Modal
28
+
29
+ Figure 2: Conceptual Comparisons of Transformer Multi-Modal Encoder Architectures: The mechanisms differ in how the modalities are combined. Type A) Joint Multi-Modal: like VL-BERT[48], LayoutLMv2[57], VisualBERT [33], MMBT[31], UNITER [9] Type B) Two-stream Multi-Modal: CLIP[42], VilBERT[38], Type C) Single-stream Multi-Modal, Type D) Ours: Discrete Multi-modal. e.g. DocFormer . Note: in each transformer layer, each input modality is self-attended separately. Best viewed in color.
30
+
31
+ LayoutLM was pre-trained on 11 million unlabeled pages and was then finetuned on several document understanding tasks - form processing, classification and receipt processing. This idea of pre-training on large datasets and then finetuning on several related downstream tasks is also seen in general vision and language understanding work [48, 38, 31, 33] etc. Figure 2 shows a comparison of multimodal transformer encoder architectures.
32
+
33
+ Recently, LayoutLMv2 [57] improved over LayoutLM by changing the way visual features are input to the model - treating them as separate tokens as opposed to adding visual features to the corresponding text tokens. Further, additional pre-training tasks were explored to make use of unlabeled document data.
34
+
35
+ BROS [27] also uses a BERT-based encoder, with a graph-based classifier based on SPADE [29], which is used to predict entity relations between text tokens in a document. They also use 2D spatial embeddings added along with text tokens and evaluate their network on form and receipt document images. Multi-modal transformer encoder-decoder architectures based on T5 [43] have been proposed recently. Tanaka et al. propose Layout-T5 [50] for a question answering task on a database of web article document images, whereas Powalski et al. propose TILT [41], combining convolutional features with the T5 architecture to perform various downstream document understanding tasks.
36
+
37
+ # Method
38
+
39
+ **Conceptual Overview**: We first present a conceptual overview of architectures used in transformer-encoder multi-modal training, illustrated in Figure 2. (a) Joint Multi-Modal: VL-BERT [48], LayoutLMv2 [57], VisualBERT [33], MMBT [31]: In this type of architecture, vision and text are concatenated into one long sequence, which makes transformer self-attention hard due to the cross-modality feature correlation referenced in the introduction. (b) Two-Stream Multi-Modal: CLIP [42], VilBERT [38]: It is a plus that each modality is a separate branch, which allows one to use an arbitrary model for each branch. However, text and image interact only at the end, which is not ideal; it might be better to do early fusion. (c) Single-stream Multi-Modal: treats vision features also as tokens (just like language) and adds them to the other features. Combining visual features with language tokens this way (simple addition) is unnatural, as vision and language features are different types of data. (d) Discrete Multi-Modal: In this paper, DocFormer unties visual, text and spatial features, i.e. spatial and visual features are passed as residual connections to each transformer layer. We do this because spatial and visual dependencies might differ across layers. In each transformer layer, visual and language features separately undergo self-attention with shared spatial features. To pre-train DocFormer, we use a subset of 5 million pages from the IIT-CDIP document collection [32]. To do multi-modal VDU, we first extract OCR, which gives us text and corresponding word-level bounding boxes for each document. We next describe the model architecture, followed by the pre-training tasks.
40
+
41
+ DocFormer is an encoder-only transformer architecture. It also has a CNN backbone for visual feature extraction. All components are trained end-to-end. DocFormer enforces deep multi-modal interaction in transformer layers using novel multi-modal self-attention. We describe how three modality features (visual, language and spatial) are prepared before feeding them into transformer layers.
42
+
43
+ **Visual features:** Let $v \in \mathbb{R}^{3 \times h \times w}$ be the image of a document, which we feed through a ResNet50 convolutional neural network $f_{cnn}(\theta,v)$. We extract a lower-resolution visual embedding at layer 4, i.e. $v_{l_4} \in \mathbb{R}^{c \times h_l \times w_l}$. Typical values at this stage are c = 2048 and $h_l=\frac{h}{32}, w_l=\frac{w}{32}$ (c is the number of channels, and $h_l$ and $w_l$ are the height and
44
+
45
+ width of the features). The transformer encoder expects a flattened sequence as input of d dimension. So we first apply a $1 \times 1$ convolution to reduce the channels c to d. We then flatten the ResNet features to $(d, h_l \times w_l)$ and use a linear transformation layer to further convert it to (d, N) where d = 768, N = 512. Therefore, we represent the visual embedding as $\overline{V} = linear(conv_{1\times 1}(f_{cnn}(\theta, v)))$ .
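The shape bookkeeping of the visual branch can be sketched with random stand-in tensors. This is only an illustration: the input resolution (224×224), the random weights, and the `einsum`-based 1×1 convolution are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 768, 512                       # target embedding dims from the paper
h, w = 224, 224                       # assumed input resolution (illustrative)
c, h_l, w_l = 2048, h // 32, w // 32  # ResNet50 layer-4 feature map shape

feat = rng.standard_normal((c, h_l, w_l))            # stand-in for f_cnn(theta, v)
W_conv = 0.01 * rng.standard_normal((d, c))          # 1x1 conv = per-pixel channel mixing
reduced = np.einsum('dc,chw->dhw', W_conv, feat)     # (d, h_l, w_l)
flat = reduced.reshape(d, h_l * w_l)                 # flatten spatial dims
W_lin = 0.01 * rng.standard_normal((h_l * w_l, N))   # linear layer over sequence axis
V_bar = flat @ W_lin                                 # visual embedding V-bar, shape (d, N)
```

The key point is that the 1×1 convolution acts only on the channel axis, while the final linear layer maps the flattened spatial positions (here 7×7 = 49) to the fixed sequence length N = 512.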
46
+
47
+ Language features: Let t be the text extracted via OCR from a document image. In order to generate language embeddings, we first tokenize text t using a word-piece tokenizer [55] to get $t_{tok}$, which is then fed through a trainable embedding layer $W_t$ . $t_{tok}$ looks like $[CLS], t_{tok_1}, t_{tok_2}, \ldots, t_{tok_n}$ where n=511. If the number of tokens in a page is >511, we ignore the rest. For a document with fewer than 511 tokens, we pad the sequence with a special [PAD] token, and we ignore the [PAD] tokens during self-attention computation. We ensure that the text embedding $\overline{T} = W_t(t_{tok})$ is of the same shape as the visual embedding $\overline{V}$ . Following prior art [57], we initialize $W_t$ with LayoutLMv1 [56] pre-trained weights.
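A minimal sketch of the truncation/padding scheme described above; the helper name and the returned attention mask format are our own, not from the paper.

```python
def prepare_tokens(tokens, max_len=512, cls="[CLS]", pad="[PAD]"):
    """Keep at most max_len - 1 word pieces, prepend [CLS], pad to max_len.

    The mask marks real tokens (1) vs. [PAD] tokens (0); padded positions
    are ignored during self-attention computation."""
    toks = [cls] + tokens[: max_len - 1]
    mask = [1] * len(toks) + [0] * (max_len - len(toks))
    toks = toks + [pad] * (max_len - len(toks))
    return toks, mask
```

For a two-word document this yields a 512-long sequence of which only the first three positions ([CLS] plus two word pieces) are attended to.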
48
+
49
+ **Spatial Features:** For each word k in the document, OCR also gives us bounding box coordinates $b_k = (x_1, y_1, x_2, y_2, x_3, y_3, x_4, y_4)$ . 2D spatial coordinates $b_k$ provide additional context to the model about the location of a word in relation to the entire document, which helps the model make better sense of the content. For each word, we encode the top-left and bottom-right coordinates using separate layers $W^x$ and $W^y$ for the x- and y-coordinates respectively. We also encode more spatial features: bounding box height h, width w, the Euclidean distance from each corner of a bounding box to the corresponding corner of the bounding box to its right, and the distance between centroids of the bounding boxes, e.g. $A_{rel} = \{A_{num}^{k+1} - A_{num}^k\}; A \in (x, y); num \in (1, 2, 3, 4, c),$ where c is the center of the bounding box. Since transformer layers are permutation-invariant, we also use absolute 1D positional encodings $P^{abs}$ . We create separate spatial embeddings for visual $\overline{V_s}$ and language $\overline{T_s}$ features since spatial dependency could be modality specific. Final spatial embeddings are obtained by summing up all intermediate embeddings. All spatial embeddings are trainable.
50
+
51
+ $$\begin{split} \overline{V_s} &= W_v^x(x_1, x_3, w, A_{rel}^x) + \\ & W_v^y(y_1, y_3, h, A_{rel}^y) + P_v^{abs} \quad \text{(1)} \\ \overline{T_s} &= W_t^x(x_1, x_3, w, A_{rel}^x) + \\ & W_t^y(y_1, y_3, h, A_{rel}^y) + P_t^{abs} \quad \text{(2)} \end{split}$$
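The hand-crafted spatial inputs (widths, heights, centroids, and the relative offsets $A_{rel}$ to the box on the right) can be sketched on toy boxes. The (x1, y1, x3, y3) top-left/bottom-right coordinate layout below is an assumption made for illustration.

```python
import numpy as np

# toy boxes as (x1, y1, x3, y3): top-left and bottom-right corners of three words
boxes = np.array([[10., 20., 50., 40.],
                  [60., 20., 110., 40.],
                  [120., 22., 150., 42.]])

w = boxes[:, 2] - boxes[:, 0]              # bounding box widths
h = boxes[:, 3] - boxes[:, 1]              # bounding box heights
cx = (boxes[:, 0] + boxes[:, 2]) / 2.0     # centroid x-coordinates
cy = (boxes[:, 1] + boxes[:, 3]) / 2.0     # centroid y-coordinates

# A_rel (x-branch shown): per-coordinate differences between box k+1 and box k,
# i.e. A^{k+1}_{num} - A^k_{num} for num in (x1, x3, center)
A_rel_x = np.diff(np.stack([boxes[:, 0], boxes[:, 2], cx], axis=1), axis=0)
```

In the model, vectors like `(x1, x3, w, A_rel_x)` are fed through the trainable layers $W^x_v$/$W^x_t$ and summed with the y-branch and the absolute 1D positional encoding, as in Eq. 1 and Eq. 2.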
52
+
53
+ **Multi-Modal Self-Attention Layer**: We now describe in detail our novel multi-modal self-attention layer. Consider a transformer encoder $f_{enc}(\eta, \overline{V}, \overline{V_s}, \overline{T}, \overline{T_s})$ , where $\eta$ are trainable parameters of the transformer, $\overline{V}$ , $\overline{V_s}$ , $\overline{T}$ and $\overline{T_s}$ are visual, visual-spatial, language and language-spatial
54
+
55
+ features respectively, and are obtained as described previously. Transformer $f_{enc}$ outputs a multi-modal feature representation M of the same shape d = 768, N = 512 as each of the input features.
56
+
57
+ Self-attention, i.e., scaled dot-product attention as introduced in [53], for a single head is defined as querying a dictionary with key-value pairs; below, $l$ indexes the transformer layer and $i$ the input token in a sequence of length L.
58
+
59
+ $$\overline{M}_{i}^{l} = \sum_{j=1}^{L} \frac{\exp\left(\alpha_{ij}\right)}{\sum_{j'=1}^{L} \exp\left(\alpha_{ij'}\right)} \left(x_{j}^{l} W^{V,l}\right) \tag{3}$$
60
+
61
+ where $\alpha_{ij}$ is the self-attention score in layer $l$ between tokens $x_i$ and $x_j$, computed as:
62
+
63
+ $$\alpha_{ij} = \frac{1}{\sqrt{d}} \left( x_i^l W^{Q,l} \right) \left( x_j^l W^{K,l} \right)^T \tag{4}$$
67
+
68
+ Here, d is the dimension of the hidden representation, $W^{Q,l}, W^{K,l} \in \mathbb{R}^{d \times d_K}$ , and $W^{V,l} \in \mathbb{R}^{d \times d_V}$ are learned parameter matrices which are not shared among layers or attention heads. Without loss of generality, we drop the dependency on the layer $l$ and get a simplified view of Eq. 4 as:
69
+
70
+ $$\alpha_{ij} = \left(x_i W^Q\right) \cdot \left(x_j W^K\right)^T \tag{5}$$
71
+
72
+ We modify this attention formulation for the multimodal VDU task. DocFormer tries to infuse the following inductive bias into self-attention formulation: for most VDU tasks, local features are more important than global ones. We modify Eq. 5, to add relative features. Specifically, the attention distribution for visual features is:
73
+
74
+ $$\alpha_{ij}^{v} = \underbrace{(x_{i}^{v}W_{v}^{Q})(x_{j}^{v}W_{v}^{K})^{T}}_{\text{key-query attn.}} + \underbrace{(x_{i}^{v}W_{v}^{Q}a_{ij})}_{\text{query 1D relative attn.}} + \underbrace{(x_{j}^{v}W_{v}^{K}a_{ij})}_{\text{key 1D relative attn.}} + \underbrace{(\overline{V_{s}}W_{s}^{Q})(\overline{V_{s}}W_{s}^{K})}_{\text{visual spatial attn.}}$$
75
+ (6)
76
+
77
+ Here, $x^v$ denotes visual features, $W_v^K, W_v^Q$ denote learnable matrices for key and query visual embeddings respectively, and $W_s^K, W_s^Q$ denote learnable matrices for key and query spatial embeddings respectively. $a_{ij}$ is a 1D relative position embedding between tokens i, j, i.e. $a_{ij} = W_{j-i}^{rel}$ where $W^{rel}$ learns how token i attends to j. We clip the relative distance so that DocFormer gives more importance to local features. We get a similar equation for language attention $\alpha_{ij}^{t}$ :
78
+
79
+ $$\begin{split} \alpha_{ij}^t &= (x_i W_t^Q) (x_j W_t^K)^T + (x_i W_t^Q a_{ij}) + \\ & (x_j W_t^K a_{ij}) + (\overline{T_s} W_s^Q) (\overline{T_s} W_s^K) \quad (7) \end{split}$$
80
+
81
+ Here, x is the output of the previous encoder layer, or the word embedding layer if l = 1. An important aspect of Eq. 6 and Eq. 7 is that we share spatial weights in each layer, i.e. the spatial attention weights $(W_s^Q, W_s^K)$ are shared across vision and language. This helps the model correlate features across modalities.
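A toy NumPy sketch of the visual attention scores in Eq. 6, with randomly initialized stand-in weights. The clipping window `k_clip` for the 1D relative embedding table is an assumed hyper-parameter chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 8, 16   # toy sequence length and hidden size (the paper uses N=512, d=768)

x_v = rng.standard_normal((N, d))                # visual token features x^v
V_s = rng.standard_normal((N, d))                # visual-spatial embeddings V_s
Wq, Wk = 0.1 * rng.standard_normal((2, d, d))    # stand-ins for W_v^Q, W_v^K
Wsq, Wsk = 0.1 * rng.standard_normal((2, d, d))  # stand-ins for W_s^Q, W_s^K

k_clip = 2                                       # assumed clipping window for locality
W_rel = 0.1 * rng.standard_normal((2 * k_clip + 1, d))
idx = np.clip(np.arange(N)[None, :] - np.arange(N)[:, None], -k_clip, k_clip) + k_clip
a = W_rel[idx]                                   # a_ij = W^rel_{j-i}, shape (N, N, d)

q, k = x_v @ Wq, x_v @ Wk
alpha = (q @ k.T                                 # key-query attention
         + np.einsum('id,ijd->ij', q, a)         # query 1D relative attention
         + np.einsum('jd,ijd->ij', k, a)         # key 1D relative attention
         + (V_s @ Wsq) @ (V_s @ Wsk).T)          # visual spatial attention
```

Clipping `j - i` means all token pairs farther apart than `k_clip` share the same relative embedding, which is what biases the model toward local context.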
82
+
83
+ Using the visual self-attention computed via Eq. 6 in Eq. 3 gets us spatially aware, self-attended visual features $\hat{V}_l$ . Similarly, using Eq. 7 in Eq. 3 gets us language features $\hat{T}_l$ . The multi-modal feature output is given by $\overline{M}_l = \hat{V}_l + \hat{T}_l$ . It should be noted that for layers l > 1, features x in Eq. 7 are multi-modal because we combine visual and language features at the output of layer l-1. The final $\overline{M}_{12}$ is consumed by downstream linear layers.
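Putting Eq. 3, 6 and 7 together, the fusion step of one layer can be sketched as follows; the function names and the toy value projections are ours, not the paper's.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mm_layer_output(alpha_v, alpha_t, x_v, x_t, Wv_val, Wt_val):
    """Eq. 3 applied to the visual (Eq. 6) and language (Eq. 7) attention
    scores, then fused by simple addition: M = V_hat + T_hat."""
    V_hat = softmax(alpha_v) @ (x_v @ Wv_val)   # spatially aware visual features
    T_hat = softmax(alpha_t) @ (x_t @ Wt_val)   # spatially aware language features
    return V_hat + T_hat
```

Because the fused output M feeds the next layer's language branch, the features there are already multi-modal for l > 1, as noted above.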
84
+
85
+ Why do multi-modal attention this way? We untie the visual and spatial information and pass them to each layer of the transformer. We posit that making visual and spatial information accessible across layers acts as an information residual connection [23, 54] and is beneficial for generating a superior multi-modal feature representation, hence better addressing the issue of cross-modality feature correlation. This is verified in our experiments (Section 4), where we show that DocFormer obtains state-of-the-art performance even when compared to models having four times the number of parameters in some cases. Further, sharing spatial weights across modalities in each layer gives DocFormer an opportunity to learn cross-modal spatial interactions while also reducing the number of parameters. In Sec. 4, we show that DocFormer is the smallest amongst its class of models, yet it shows superior performance. Code is provided in the supplementary material.
86
+
87
+ Run-time Complexity: The run-time complexity of DocFormer is of the same order as that of the original self-attention model [53] (for details see the supplemental material).
88
+
89
+ The design of new and effective unsupervised pre-training strategies is still an open problem. Our pre-training process involves passing the document image, its extracted OCR text, and the corresponding spatial features to the model. All pre-training tasks were designed such that the network needs the collaboration of both visual and language features, thereby learning a representation truly superior to training with only one of the modalities. See Figure 3 for a high-level overview of the pre-training tasks.
90
+
91
+ Multi-Modal Masked Language Modeling (MM-MLM): This is a modification of the original masked language modeling (MLM) pre-text task introduced in BERT [15], and may be thought of as a text de-noising task, i.e. for a text sequence t, a corrupted sequence $\widetilde{t}$ is generated. The transformer encoder predicts $\hat{t}$ and is trained with an objective to reconstruct the entire sequence. In our case, we use the multi-modal feature embedding $\overline{M}$ for reconstruction of the text sequence. In prior art [57, 56], for a masked text token, the corresponding visual region was also masked to prevent "cheating". Instead, we intentionally do not mask visual regions corresponding to [MASK] text. This is to encourage visual features to supplement text features and thus minimize the text reconstruction loss. The masking percentage is the same as originally proposed [15]. Cross-entropy loss
92
+
93
+ is used for this task ($L_{MM\text{-}MLM}$).
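The BERT-style corruption referenced above can be sketched as follows. The 80/10/10 split among selected tokens follows Devlin et al.; the helper name and toy vocabulary are illustrative.

```python
import random

def corrupt(tokens, vocab, mask_prob=0.15, mask_tok="[MASK]", seed=0):
    """Select ~15% of tokens for prediction; of those, 80% become [MASK],
    10% are replaced with a random token, 10% stay unchanged.

    Returns the corrupted sequence and per-position labels
    (None = position not predicted)."""
    rnd = random.Random(seed)
    corrupted, labels = [], []
    for t in tokens:
        if rnd.random() < mask_prob:
            labels.append(t)
            r = rnd.random()
            corrupted.append(mask_tok if r < 0.8
                             else rnd.choice(vocab) if r < 0.9 else t)
        else:
            corrupted.append(t)
            labels.append(None)
    return corrupted, labels
```

The DocFormer-specific twist is only in what is *not* done: the image regions aligned with masked tokens are left visible, so the visual branch can help recover the masked text.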
94
+
95
+ Learn To Reconstruct (LTR): In this novel pre-text task, we do the image analogue of the MM-MLM task, i.e. an image reconstruction task. The multi-modal feature predicted by DocFormer is passed through a shallow decoder to reconstruct the image (at the same dimension as the input image). The task is similar to auto-encoder image reconstruction, but with multi-modal features. The intuition is that in the presence of both image and text features, the image reconstruction would need the collaboration of both modalities. We employ a smooth-L1 loss between the reconstructed image and the original input image ($L_{LTR}$).
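The smooth-L1 (Huber-style) loss can be written element-wise over pixels as below; this is a pure-Python sketch, and the `beta` threshold of 1.0 is the common default, assumed here.

```python
def smooth_l1(pred, target, beta=1.0):
    """Mean smooth-L1 loss: quadratic for |diff| < beta, linear beyond it.

    pred, target: flat sequences of pixel values of equal length."""
    total = 0.0
    for p, t in zip(pred, target):
        diff = abs(p - t)
        total += 0.5 * diff * diff / beta if diff < beta else diff - 0.5 * beta
    return total / len(pred)
```

The quadratic region keeps gradients small for near-correct pixels, while the linear region limits the influence of large reconstruction errors.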
96
+
97
+ Text Describes Image (TDI): In this task, we teach the network to predict whether a given piece of text describes a document image. For this, we pool the multi-modal features using a linear layer to predict a binary answer. This task differs from the above two in that it infuses globally pooled features into the network (as opposed to MM-MLM and LTR, which focus purely on local features). In a batch, 80% of the time the correct text and image are paired; for the remaining 20% the wrong image is paired with the text. A binary cross-entropy loss ($L_{TDI}$) is used for this task. Since the 20% negative-pair scenario interferes with the LTR task (for a mismatched text-image pair the reconstruction loss would be high), the LTR loss is ignored for cases where there is a mismatch.
98
+
99
+ The final pre-training loss is $L_{pt} = \lambda L_{MM\text{-}MLM} + \beta L_{LTR} + \gamma L_{TDI}$ . In practice, $\lambda = 5$, $\beta = 1$ and $\gamma = 5$. DocFormer is pre-trained for 5 epochs, then we remove all three task heads. We add one linear projection head and fine-tune all components of the model for all downstream tasks.
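The weighted combination of the three task losses, with the coefficients stated above as defaults, is then simply:

```python
def pretraining_loss(l_mlm, l_ltr, l_tdi, lam=5.0, beta=1.0, gamma=5.0):
    """L_pt = lambda * L_MM-MLM + beta * L_LTR + gamma * L_TDI,
    with lambda = 5, beta = 1, gamma = 5 as stated in the paper."""
    return lam * l_mlm + beta * l_ltr + gamma * l_tdi
```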
2110.05357/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-11-15T03:22:35.205Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.9.6 Chrome/89.0.4389.128 Electron/12.0.16 Safari/537.36" version="14.9.6" etag="TrJoYnnGAI0qq4f4GsMo" type="device"><diagram id="HgmNZkJChOOaM86NJ45O">7X1Zc+NIkuavkVn3g8LiPh4zq7N217bbttaqdnv2qYySkJnslkgtybzm148HiACBiAAQJAMAmWL22JQEgSDp/vkZftyxX16+/7fN4vXzP9ZPxfMdxU/f79jf7iglBHP4j73yY39FMLO/8GmzfKpuOlz4ffmfRXURV1e/LJ+KbevG3Xr9vFu+ti8+rler4nHXurbYbNbf2rd9XD+33/V18al6R3y48Pvj4rkIbvvn8mn3eX+Vmsb1/14sP32u3pkTtv/Dy8LdW925/bx4Wn9rvBX7cMd+2azXu/1PL99/KZ4t7RxZ9p/n146/1u+/KVa7lBfQ/Qu+Lp6/VF/tjspneOn7j2t4AnzA3Y/qS8v//2Xt/nC/LVnyDm4g8vX74Y/w06f9f4HDfPcn2f/gHgofZP9cdxdtvQXdFd/t9c+7l2e4QODH7W6z/nfxy/p5vYErq/WqsB9h+fzsXVo8Lz+t4NdH+OIFXH//tdjslsCxd9UfXpZPT/Zt3n/7vNwVv78uHu17fgN8wrXN+svqqbA0wfY9HVPsL4/rl+Vj9XNFLHhy8b2T4KRmI8C/WL8Uu80PuKV6wT2X+5dUyOdCI1I95dsBSrzC3ecGihwWFxV4P9UPPzAYfqh4HOc3C0hePAGyq18rWrap8a8vL6+VBEr4db3ZfV5/Wq8Wz39fr18rLv2r2O1+VDctvuzWbR4Wq6d3Vugse54X2+3y8Y/Py9X+D78un9usdvJUvbB6KENMUHX4p8v7F5td9Xe+v9u9WCHONFVEYm7KHzzQ3FH2UT8Wj4/ly4CG/2G/LMKYuAv/r7xAtHAX/va9osf+tx/N334rNktghcVdAyeWsm1hX3/ZPFaXVMVM+A6filpNJKNpUzwvdsuv7efHgFG99Lf1shRnB1vNkTEeEGX7KftPVr2wqUL8Z2EP0czD6f5rBw8qoVp/pST08k5t9bT8GlVWVqHcV7rBaqtKPUT0VUvrfVw8tp/zuXj+Wlh90nxpgnKk3crxTvzy8PFOvf/fd+pvf8J/l3f0l6/w84DGrC+XX7lfj8ZUXUMuM6gzikUAJEKQCBUajSg04W48R6WJCCg8cnwCjfba94UrP2Hx4F6BjyUEaAqfEELggAxSuq/cJISg55NBDpOhrdcHwBGxsUl4Ef3mj7nvXxHpPoSKitg+mcH2qWEKtSz/02L7uabWkEvyvHgonn9bb5e75Trqh/zdu+FhvdutXyKOys5a1ffrL7vn5QrexTmwOObkwOd9tR/95fsn62ij9cePy8cCPS0Ayottsa1/+vNluVr+yUrjvltUH+He4NAuGvPh3a/v8nDb+Mw2Ec1AIpqB9Vi0VHbrG7tD7qpf378DVdyCQYmCp+UG3nh/4Vux3eUBACdzIsDcEDCzwMsJue1yA2+c3Rcm8JMigNwQcB4CtqvF6x/rfah2YvrjSHwwPCVAYumuNw8QihV9r2OWYgSLMC2/2TC/G8mpio8100mb6fb6b4sdkH1VXqGYRbNWXsLsLp4MSiBhg0IiEhO5a2fmgkIWmfYjOvI3ww9yEddARumERBCJZYIuIuiXXAfw7gj6vXzbSQBPSH5MEvXvGXJE2M8DKo0V9pOEzMgbUPp5w4ABfs8a+JNbomeKQGAAAvOG/uSW/Zld6CcN/W65nkuU+SkhQG/5n/mj/2MBMmk4SG/5ofzh/5FGYVqGJ+R7LiH+
76LhLQHQydlbIZOT3wdcsELeNQqZ7m0lk75rVzLJ+pZjKpmK78vdf7gvAT/vHyaq3w5Psr+4B3UAfqj6SUbkIi4W5xY/KRmUanCThNTgWdqv5vMKWvLVPrnipxvi2cePH6lXutcGOx2s2psO1i5FNwWug6I+qU7ENcEGniUY1Vpiw7GkPs5BxVDJObwD57XwjgB7cYN9APsKvRgZRVsIZkL3Ytj+Eqj4DiFSQ0KUIBAE61klQrdhq8ip4kBUnzhILVviQEaTBnmThs767dLtMU0QE8uVc9wekDDdthFMHS9hpwrKlPXgNCjjNafWgw/JiplIVtRNVoIQoRvX1GSyHGRA4E6VhuniA82C+MDoc4QBH/6RtjCY0XolbvWvSScgH97J9/KQpjmrGYLR0AEX1e9TZLtuZyCTc5ybOTnuOq2OMXGn2DRnNRoWw12PW4sWzEKTGJjD2pK0++/OtyR1VfD4ZoNxCALArErKhMAcu+IzhwsskFDm8O80g8K4QNwzTpL6fVT5zIiD6UQQOyYhcyzIUhwPEfodzpCOjyCqCeK6Bghhqo0gxX0+50uos9BWnM3ULgZVoDlwBzsPN2RheTmmKY7yOQ9+r2gCrBdeKdrFmdwmXsRkjioTofUhJzqqVGMPbKN5pqz/7OYAtA+HqxegYVoArSDcROexps5Lt52YbWtpqilVFQ7MEBHIeZwD6AMaLX40bnu1N2z73o233oorOQDm9v2iIswBs/tP0PFqESZD1GgJbtZ/rvOTS0MHqifQnURGITI2dIWiI0PXF0vlyWRG6N7OZo6aJsI5H/JThpLRLfcFMTlKMtrMO5tEBV4Nlyef25C+XLSQfJpcNLud2/SXqxDpi8pJg3caomJmE5UJC1mCfKPw3fZskiImkpTbqU1ymUtylQuwzpMGraaShlnrXxS+dnGIneIcOaCvYwZVjqf8Xqy26839c/HVDnjcP/Fh4/76ulk/FiBMq09wZfFis/6rh+1r49296VYgcvBR4E74LD/HxED72Z3uyjRyixi/WpfGSqp5FUdknyAYO2W6+iFs//O6h7ApRL2w0dbv0eDYeZKBbDyhEWee5uy6PLclKGSs7mwXM83enc1xL6Xu2ydV9zIg0li92Q6EtyPrjG2aA9wOerOj8xrHOrLmCSM53gDHx27UHMBA2Jw9LQgSxne8ARDMKvZ8Sn4njCl5A/y+NKGfFAK3KSzzd2cfCxCqp0TIbWxL/u7sI43CtAxPqFAevTu7hai7eIozhaBztWp7Hdant2qPVlnGYwmjQTYH8ux4SH2mrSFcXu7s12DYx0cCQzuSCNPwk3AVpKz9uDqVpZFnscQenhO4KmJJnjyJ6WZe7j+Py8vFM3CXmhxO0uBDLQc47PkNJxCOtU5GxDJc+Q4WwiOF4uWheHranyj8DBBong/kggRmfg1QfQzWTOkx5LiXHRUJWT2XAPbcnadFoT8+BlyCv8hHXYBKAL5sX/ce18fld0vgMXPIjIQDPuN1/M1xn7je6NSkLMlB2YTs2UAOOSCfV9bnkX1/fBCXkCSw7sHQa2BtvU/znw6J13HLWbRMSEJdCS3roQ2IYKUwNZxpjGlkr9hMpE7I/1wXqakldaPXx4FpflIn5Fmui9SsJLW0Y3skBVibMEKdidQJCYvrIjW3pDaAa6KUANfsYvRHQqbguigtSkoLjoXWEPHhyMH0TKTuLu84z6N/t9sBcWwCy7phvl//rfxW25tX32lwZDiUIBLoqUMOILdXLzNE/L27YuktuPfqMlyG9MBwhENHY6zgXmYI7nsZ/seN4R7DXVatHg9iUMSzHI3hCXF73oQ8vbv8HO0985giTS2Fx+Zog2fZmQHes/LlaOXk62+ugZ9OoOp0KUWYHXyoEw9U/Kdy0/fUjDxOyGK4XNuUmbN7jRhvzD1y8wVc6QETiAqGFcWGKmpCT4YIhhQLnduW/yuRynAQKRPSE/CK5eu2y0T0BRpnFfXIjsnQdc81EgJDqMaw
7VGOOIQUIy6I0EZrIhSpG5nbtiN+z1k0DTveZphD0+jGrJtlUibLDA2rSNBxTjCb/TGyUgOtbrH+/WkEXD4tuRAEE0OJaZfqUtsSyhQWigohBeYZyrdlmNb4Mp44kIcFKWjlObabn+DfuWLigieONCUYKyWUgVhbtalIyi5QIYzURAkCEfnliFFKjcRlqSZnEA3ilFHNQc0zhl3BSD1QARmhpIYgAwNHeFhyPhvFTUjgiQYeHTFT68jxC6MrMR3HQrKrlrzDPUxCfL0SgTAMUQPUk+AWGc5JO5lzwTrIuVhzjABLtO5N2z40C+pi5Cgy41R1VJfll6OIGbmQsW4ndztPp+662DSbz6aOi/QP03gavPq0WTwtC88JW4jgFKS8/vhEdZCAs/Ubyjzgo6jdk/FGoCyJVlyCViTCi7DvQdMRYpUltwqTEBU5GOQccQm+B/zAwLK7kdBNjVneIjRhmHBuFFbd3OgopSxVruJMgcHATDOvQhMjogi4OpLDtzDGbyxMrthEskkO8KhCh0pjwzQQimg5XuWfSsg4XBvW7kFkmbRbLzDwUOF6v9DBOgvJQWVTMKuMCPeVJ4Yasz614lhgQIEhngvBMZKacS01l4AHN3/06EyWQpTAt8RYAFy19s4gQCpNKWyMcClx2mCeU2B26xnK3yM2IAbagK5SACTGqKBKeu1CHGREGXBigfea6+iZNTUgQoA/JRn8yCIeKji6HEvGhVKAslqUzjJ/t+aiybECdo1BCK+YXf9DcHtUbWmNwMExoEFAS8jLQUpCzc6Q8eqvHKmtUpcd67N9Z1iyaQ5PuJ1bhCn4H2DClJG87ZQRhcA7wWA0JAADC+axLNUGGYK0AQ+ZKeA914Ye9S4ZjdCo6bccYkgwRVppZZhkgll3uEUqqa2XDE6nMEBK7lq5LiGrEObZZkiqnXpAUFb47D+GHi/2jO2TU1Ol2nSs3ueisM80MmAZQANQA865Vy2jJKLWBklMGGOEhTZoLujrhLkwN3clc/YVFKF1RzgnYLyUy3+5ZIJAXEPcpzBnBKDiJvfM77C4DXZvHCvJXdGjKBpgPgGmYkkEsawVPngYBE0MgmKlMRM66u1KwYXhRNiwnUeSAxY8WDNQMxSCK65UhmJefcKW5HPO5VvbFfoN7zQG1AUeTQOqp8qx64RE2SylOQri+OZKMm90nK1NV6DitMTUAKoDNJNYAaIBKeAZiDZZLc4x1fx6oAzHOgkSIl7BOFPM9T+2XQ0GtzEGIq6E5HFPI3bLWcR860U4OqITdORARw8U4ShwNMEAMAiwCGV+1EsR0TYcpRAYG/iR9yR2k1k3aRXOca5WlzQcqm2kVFxKgTEn3PfLCUMEzBxTUmoMJlFET7rnkJYpa23I+3fkA81EcWEQFlYBASOB9N6SSS6R1pIQW2wpIZgJfduZ6H2rtDlBU00V/psLrrQZEAdDkXXSrSE23Cgnxpeuf8ytzmYUKYrU2ZipYgCXub7V2Zyi7LrYNJ9bZq6x0GZAXd4TAxrRLkzXxB7/G+kVPxBk4ItLUFCKgXfhaiPaxQ9EC6qZZARrrCLJMXuLfQgzzGa1jN/2lNC4I4DdzH4IzLRg2CtMYEgZzW3phm1g135ZVerRE9M20aftWxBO4cv4A9eQPXcyyuaCDMZeii/f0ZNJKbMZN/0+gBt79EQ5MJRIcCyBI22GSFtHYvNlylphQaP59znMbLi17E0fPMWUcWyRi+lAQ36bmVDNMS/yQUeAEoLIttSLxrTjLSUQJtjYQwJCeGRA1Ey4v81XneAkYQA52iBtrDciwcQQ4x1DcbtEiinOtSKcieikGzBJ1tcBS8cO5Swt6DCkhdF2A4wkHIxVhvpik5IauTa3BzQvkBprqRXFHNeHfrX7CFZNSSGFPVtQ2oQ9SpO4PXbrN7OeFwf02NrUNmQ4wkpozI0UChvXwn+s16MlwqBoNLwHuNGYeyXrGhxEDK42F9hwErxLRq8nYVjsG1BSeY/Gh/K1
BnHbpsAVt/XDUrZr+exxERXWoGF4Un3w3VZKHDw9Yk+VbEqFxpWSYfAOYC41Z0ZkOCoiOKFC5QaWzLk1GysqiI40BuvCXf335WNl2uTavEfh0fxK5Ch8b9ImcOmJs1oXdxauEWsOc/V264EuRMocxlSknoVL5L7feVRLyTZNfhheMbPvNBx8VGUgdJGKdpw3cW6PBrUQhhkpI7M9uu45j6A8oN/bOhCvBbGpGgiO6IYKfN3ZH9vkqLC0oS74tUz7yVcBmpuAh82ohFAZZ4hC6s95iYfinVLRmkGhGbjqVIBQqOBUHK7C/6SAkMRwHZrO+aRm1ITMqFQvp1AIUN2YcIUF9WqdbDuDsYfitqdOQoQVDlubj+onbML+aY/Hj9Fb0zk1YVZkxCPy4ypGhgSjnEYhlFGEMojoSdvvuWhtNF/ZyM9yTN4hTTImTVPVmxC3H+F2VH6q4utg1YwOm/P9rypvPKg6JbJzfCizQ/eo3yB/T5FkTEuFGcW2AjJ6Ws4lBEhGaowJcyej/qgAzrU0FDNpFGXdzIgndA1GClOsCOPqMNTK8Vohw2wDP8bSekdeE39q0lhRZE/jBbhWAsjmssLOtxII3p1KIrVSjEg52kQK4toar7ertxNz07T1lv22TAIflZaCeCf5RNmjEGbn8GOu61n1x8Kl7B0m2h6laMk0oeKYd8kJl8nSKtnjmbLdV1PKBKbECOz5bbIUbKoFZVpJReI9j7O4bWTE1Ms1ll10GPZIsWKF10l8sFG7b7JIQNn0S3XZx6lthUVLABQYHSUYODlSUmPiRUczCcBtSEn+06pBfUkNYgw8XrA6YFWUN9eJ265eApqSYeulkOh5lTGMC6ktFoTTYn7npm2UI5QIzJXyawJPQ8utWmeCap1BV1+DtTUQDGEF/984HjROO6U2oCIIA51k4mNuJIWwi4NCMrh+gHfcaSAUEFgK6+nLHKedJKFe5w2gZ1pdU05EUpiDETcMPDTDQ7RIIci+TculsS5B19yKbqZHizZIaC40ePOMCb9JiYNSwMBhQ0C7aNAcsf2C86CFTpunm7eSosNLZzEvvaOHOL+XTmNZtTy7t3M85X89bIvN11KU7p+LrwV8drl4saK7eti+Nt7msP7tdbN+LLbbcq8zXtoVcVt4Bbx512oo/wE/yQ6p5uK4w/7683ZKWQ9YNMY9eC4wMxSFfguR4y2SI07Xjbw7/rPdFy8+wE92t9xtj3wfSpTfTGVngASoGGvxGKGxjNy4iPjjhoh+RMgWIiRXyK1hnQQR1zhleqjWUCPOysQ345jVHYr1mTvInNREU/AKKddhfDDFiGniJ9ROXXUXPGi8KZ31ScI4acun5QYu7AODYrEtdQ886/9mtNGEcWQZCoG7bb+QnvApBWwNc1OOwk0AkCwOZkrB16TCB+9WOfYsiygyGxrJRt2vt75PHip4E0/gjlgYQJBsLLbT3jYqpQyStDGe6zT5K/PkzQV8wZsY3PkmOWUzlsg80rTjmGn/BwQMC0CA5XsVOvwUtrsRxJJMtpzJFuKMp10kYIEFWGjgXo8WCUw6cmlo1dwUil4KZGXCKXrti+XEij4h85f/JHwKOguMKJmVtizWnZZD8f1erLbrzfam7rq5b89U2gnVUsk1/wVAgABHxnKoOfSc2374xjPsnkJQv75/x+ldK/NeJt6bCuJbUSqI7Wrx+sd675GEIDrx9K8XRXbVjbGlBhBN2TIAD08KI1fw1gBRJCSzdZcZxjcQ9wGy65M/li/Wi9oUr5tiC8TZs+KmXbpwIUFRyH5vigZrq5sWB2PEYuPDs2iahNq2N6BphusDzjvc60cI1UiRg+Yg7eSqAsUC8RjGwo4L5zJEyJhaJCGv9gbwMXZdST8+sOqzLHbOmS351uDE0kgacEx0xBJpOWzM34vFZgX0o7h4eSieniBcvzmwPSaGIG8KhxI8dkyHxYimJCELOEvPvqSIiYYB1u1Eug30+vMZJJLHI1lIlrJe7OR4
OkiP7r9fHPtpMGQdzZKTEGvU5E8asZrXz6OYS7UhbwPX4UKzCBlFen7qi+dRNZbSyaHC/7a2L37drJ++PO5uurtTO4Gi7tVOg9lWyZDb+pNdp/NcI5OuxtNLiAQ60sGDSIWbf/1VH6M8epFDuB0923ALw4xDM4mf7BjmaF7gsQRWDq3yz/2HtaR73AFVb3qlCx2gFoznEyoIJHVfqkHKqHuYBRGxdJTHpRN1QbsrsyXwINvLR/S0Ke/zyj09Oa7terInUE9YGCPou7czFltHPx4vKVK9aaPIGRyhSOcI+3iG8isr3aVYd1eDwm1/Wa5ev+z+ehPzbpjYvZWkoeZ9mbdVQiE4JBKRg6w8PsOl7jGrF2m6onmKEW7+C+lkO51dSXebUBjl2A9AeEIp0gFdZDiwifSl/7P6PqTpn4XCARrtgUuBuXWv4PnLnaWTLn2gzaPDrYg4Ne9/hf+D659sYY97638Xu8fPd8fUF/L+UZKSIG/cXujSEKxRTPGRnoEI6bwK16M1OxkOTQsfDlczNDac2IKcNBvm+aH8s1N2Vl992XytAXLK8JM9jfqLrZyKaHVJ6FiXhO6Q6fxdEjwhFzOS2hoAvjZIMlsGTwSXqh6/4WqvQf8zm0rCihuOeUQoBOIE26MDI4mgmEWSCV33nEfSC8jYHJXe2mOgP701GzFHrb0ZK/3VRVE3bw8ppZVkmBAsFHaDWX70/rXBDTu7ngqspbIV0hhHdH/HLWfxQoQ9bbM0sI1pG07X45HNTBPqcefX/pxzwara6T8+L1d3zf5Ecg7HXGFKi2Pi3P7Ejjp/qhEDibajg4Tm2hsrRkB3as4k4/YICzSnJ64ZK41FRH9eOFDo3EjhsXkzerIJmmKiBeVpVDxV+x5D/Rah+bnDFeMiqYVdJW+wIiB9oh4k2+9fDghkODKs/VQWL4oa6Fo4ScxvFSwjTCsYCFmIzZto2zEDlp/b9Xgt7iuNIKphhtt9i4zyMPFCCJLwUsKoZgKLmF+HESaMCyw5N8LwHElqMVjP8lhT7JC4dD5xkMv8mpq3PKNmvuMwalzuaoawBE1BlOSsttHOhGMKnAGeYGN3SnEVSauB9AvwxeHlioO117GVZh33nMfg/hROJdSzWvnJBwB3h35h+vBkx0HEgoKzXcx0vo9aaDOusGGMbOxqF9cawaXXBcxt9Z9ttzYaTDVTke3UswnblN1amYhek1UhTDGmdm8Rr4flXARV++eZx7PQrazujNPNh2Pdc9Rbn7M+iZaLBtJ5vHb4josfjRterW+87XbqFfaMsux1z73bq86mAyb3b3+yEy7DhNkkq3NmNZbN+FCTthhgNmTny99+KzZLoLN1+s9LysVTPOPEkwx8fyIlk5gxZf1Dbz94abKUtNrVrpCU3jlqxhSPTMoFTjxJeDp9OJ+7F4dbx1lAfnfPvf00i58GG8Wz+IFUIgHM4vZ/dgcLbSdULjnqkrcmsymaiAYAxERZ7QJRF/izzGCvoKzcfAUhBrH4ITL0eIldEcG1Vem23FS6NjUPPoqBy0y5YFgxKXLUXciEDN7+ZL6LTDlO4YVEdqsygUCLaU39QCxSTCmQAok1RFEgt3aDkQJpUyCxShksDomw88iVUFL09qTtqObxZu3SKJ3ksh9r3PbnKS1t0h0LbrwEG0GAFE4Vt6lVjaM9w4gq8Lwk3EcwYc4RaoEPDAqXdq8LZkAfk8Xw5hpafzUwu6h+gQFYUWbrUpSdGmcX2PsrJS8YVt1Dobavi1ULX10Fx2R43uPGznj8Gkx23L/FRZccZ4FHGw404D6HuM4Iyrm2PZP1HDJvE4YA1xPeSSklCOtbRJXO/VgeMT/3Xy33l3fiw+7NIoBR64/YRSZGSh5bhzIbBjK0KaZPfN1d+6TXLAEDURIZzBXEC4raxdzeiFeKwG1VAqIBKu057zBaSKyNdQy4qIS+xbEXEw3FExRp2+0JkRgvlzy0jbHUCKy0UhJrQrDkF3Suo+arsBsnx5bY
GJZjP0D8cDRWVbWH8BTZMhVmyy5NWIgSCDOwSASiZKG1FxBdsrBMVLI2YfXw7OIS22a7B/Ek4nIrKJuhoIwZxEED2G53LG302tYAQiBKjQFPAxtlE26BBpinokzdknHTg8ViRWEsFQVlToXxXCsG1gLbKfCcCUATjaS658FKUnHa2abC70JO0N+j1JKrWEmYnGx3qUooCXtjAwmGxIojBeEZuGAQqdn2jLYO5nZqOGHlunjNhYmMH4vNThbI+Z/nsbM7STRmguDLW04QtFt3NEFumFTT8ZaIxQ4KwefM0RWpxs0Lfb/xuh5UZu2pLKtQDcHMc8A4Q+CDganUlGCIncLqg9lyQTolFzSwnSQyVSG2dqRrUUnfcpN0+xnyJnHXyJn1XH59P2DhhN6dsCPIO34SuPexGcvC3ByQtKEbeDjpERHgqKBnkUTCrXfaph1Y0VD1ghccM7iZNK+bOTdvVYZmyBYYck4BO6zuTnMmiSKqFQNXhSoiBQs9fRabVmmQoJgyAa+wZ/BZVizrhBqoozAXaCEmmWGR8jTQQvAvGX26I5FyLNFy0CyWacln3IvasP/cxzxdHHVVAeCgcSnAsDMsjcDK69REmlBb0KVA2LiUkWY9KpDzBdtTd2rX7zwYJORQxtc0hNpB/Y1pUbRFJqERDlWwPeSJDGEMfJ7mLTnGSumEOpzxtA18zWRsdrSAVVQNaEPTNXgOMmZYRtYfVC6eXz8vmkEGvORnV0cdLHeBRky96I5gIYsjc37mQPdtx35wF2xDlv1m1QjTmrMP/iuug9uZxxwOqF/lebyYIOfGN/VtbBpTjmyh7s4zPEQRUjWKW3RsPj38BZQU3kt346e/2h8tfXCJp4+Ll+Xzj/1r6h3oVtVyy+zi+WthWRX8pf2Q6pPYZ6zWm5fFc/3n52IHPL/f2tF+q0/RWyyw7iuM2D9XMGn9eQkQsEyzfy9H9Db/uNssVtuP8FD3+FVR3/BtvXlqv3vz5TGJch98uSruHUfr1/nitoBvt6rGhvaJWFyWBsxfFgjb7Quq6UG0Ic1kbFKn1MjE8mc5QG26V87FQX02k/7HqoRguZHuGPa8KVVHwHHWrA8nUoT+4jBmsihCE8udHOkSmSRzubh+E5l61JYGm17Q3FPTbuFlOsx20IivnGXmr9ueOD4qHm6oOGq7hPRAEUnMjwaKhJTXjIn3Q8kUJuyuObZF3vWWTcEvp/Rlxxk1TQr/vmxfI4rbbiLJuPbGcQlOkGFGCSEwNcwfkpma0r+nLkCvh4eNl8U342YHvRNf+Gn3J7EtAldeGt7py2RQN0q1Fkx4O3U50SjWUjCa/rnVXqXUXn14J99Lq/JG2V3IDWK84ct6g+Ds8kJ9/G7LLJ5smCP9fbd4/He5dnDdWkPRXEJ4XUJd+hrVp80k5ERo5GpgHRuJHfkR8I7YFN1Ywt2dml1GVf9cCZlvjZhY7d/sCtIxDyAIn0qM3XuEo7bWsSIYteet7mdhydflv2eyyC+L3We7p0C9/33frddvgOvLy0nyP/1iQ6zYdEf1Okx8R482sghPd8r7Jjw34bk84WEMI28KwrQCk7B8IcMRdC8NIIYzyKNB5DwULGNsBQ7PcFRWivsoIdfvoATgdU4ZrB62r5UuSLq0KR5BILdWcj8X9ecrb91njzbujdYf7Yd0+WfvOcHvTiPWIjH8WUKpWVi67JYvka/nf7qfJJJsOp3pCmAvY52lAJHI0TYok0iAkMPBpLg70X2zkTcbOYuN7BIRV49vUDP74hXYhqZiLHNJXW3UTXbejOwc2ylxWZJDpIoUUI8nH90LVG/y8XPKx5erkgdt94B35ioiZSrjiUr3Kc9NVH5OUQnnb12yqBDOIQjprAufVFamKZTvJQezC9zajic31i8NyCAxIqSpYyItBDILWRKK4d1mV+9s7GlR6I+PQXwNf5GPuniwYxzz7oKteNgZBUdamEo6joWohCb5o1eN9pZ0RHvbj2mfq9jdW2lhkDGR
QKlJvo5bzqNmwkKUa6FmXdOK7OAlTA23y1XrGsMLIHZCyvbKiE0tsTWRlAmBOa5X281PbKdsfiJis5LY0iYZJbV7g8P841zETmhfvjJic0tsY6e3KSXsuNaLoXUsk3XdtBYlrQXHQmu7hUVcDrATCkYv3lGbyDMjsbj4woFJOoY7XQDyEiKna6HmxXtmJCEeuzJiX7BndoUR3ACxL9gzu8IAb4DYl+uZXWF8N0Dry/XMaEJ8d/GeWd06HlB1TEeNXmG0Rjtmj18AEK8wHuui5sU7ajQhHrsyYl+uo0avMKAbIPblOmr0CuO9AWJfrKNGrzDcG6D1BTtqCeFeuJihbnwkbbrb67/ZwSqbVXnFFi7F9gT/68vLa9XRLmsa93ZDp6TPRuyHtrRvztCg7bZERtsPTO1/HnisX5ecrxm6Xt52mSNNB13ysO9zrK5g4NElU2qoDUYgb5uBdJvRBsZ8ZIllWHIYeE4AN9QG440Ck5FGIKIFUs0CtsieOiWQo915RDlqmjAZBtRR9qylutOEkQ1HdwRcNzdBMaZ+zyNYLIYbZarinXr/l+e/7uvWdn+yxvSGu1HGLKbMxZx84GInvw8hkWzWr3lr/mw9V7NtNVLP1jGfMYtwdVdKj4oVesNKfA1TL1bMrFAZeVr0ASMHaEyPi3KS9+XhgnMEwWdgb90YoDDlaofYx6Yh5zLKIiD8GLuUtuDC77xgqrx23J672rFqLkiirJKf5oKkCuUhI7IvSKrffmx5uic3ieqQqGY3XzjmEUSIRU4t7BrLHKkENtnw67bpJSOY3pzsLQc/gbJ4KjY9/dZNM94NxyxQkcDvSB7pML+/aZNxbAnGmFY5ITGRNyFFj9K5c2WgNEWCdDONIq1C4T82HyXPeZOM2SkWy7lMoEr+uHnx0e6+clfL4V97mBcEfGw+L97l/TqzcSN5aSctUG54lIeP9eFw9eSN5Kf7jGQMnzGuXQjHyO23dbk4b2Fbhw4BXix+NG57tTdsu99Jio436lJ8Nn/V+wr4Yf8hTtZpPJb2G2Xk8WNDWT34d13HqJg5B2FLUFciPGAYaxY25bH05uRJcgleYXvGryQa6TCKIBrXd7b8HIN400hkWEtUr+XO7gf8n9enxa5wKrOxAOK0eVJpl5Z2NOiZk6vK4VDPix+23fs2HarHpdlLVecJIuXgkjTQGlkj1AV0xpHIAu7bpIK3Nqng0gdGdUqNCwQIMjiMAJ1MMbAAjb+GRS6jTTLgsfqsmyz9zLJ08vnKZYkUoRoRGZ6ETCI13Rn7m9T8nFLzfF3CwWnfrBxKkWrtvQqjtvFkJ6Fwcuyojer2cbowofKI1cWxLATongkOErG6KP3hy0brb1e6APFU7VK9/Gm5fX1eVIRdruwqkQNdnteLXfMDDaac7n75cPee3vXnnfawmEKrqF6xSbOweYSk+yDj4c4PDcr1QUMbgx6GKDhzGH9CfWEXu9yykCSddkIqDn7drC2uDylduyPnH+unwt7xXw==</diagram></mxfile>
2110.05357/main_diagram/main_diagram.pdf ADDED
Binary file (97.4 kB). View file
 
2110.05357/paper_text/intro_method.md ADDED
@@ -0,0 +1,25 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Multivariate time series are prevalent in a variety of domains, including healthcare, space science, cyber security, biology, and finance (Ravuri et al., 2021; Sousa et al., 2020; Sezer et al., 2020; Fawaz et al., 2019). Practical issues often exist in collecting sensor measurements that lead to various types of irregularities caused by missing observations, such as cost savings, sensor failures, external forces in physical systems, and medical interventions, to name a few (Choi et al., 2020). While temporal machine learning models typically assume fully observed and fixed-size inputs, irregularly sampled time series raise considerable challenges (Shukla & Marlin, 2021; Hu et al., 2021). For example, observations of different sensors might not be aligned, time intervals among adjacent observations are different across sensors, and different samples have different numbers of observations for different subsets of sensors recorded at different time points (Horn et al., 2020; Wang et al., 2011).
4
+
5
+ Prior methods for dealing with irregularly sampled time series involve filling in missing values using interpolation, kernel methods, and probabilistic approaches (Schafer & Graham, 2002). However, the absence of observations can be informative on its own (Little & Rubin, 2014) and thus imputing missing observations is not necessarily beneficial (Agniel et al., 2018). While modern techniques involve recurrent neural network architectures (*e.g.*, RNN, LSTM, GRU) (Cho et al., 2014) and transformers (Vaswani et al., 2017), they are restricted to regular sampling or assume aligned measurements across modalities. For misaligned measurements, existing methods tend to rely on a two-stage approach that first imputes missing values to produce a regularly-sampled dataset and then optimizes a model of choice for downstream performance. This decoupled approach does not fully exploit informative missingness patterns or deal with irregular sampling, thus producing suboptimal
6
+
7
+ performance (Wells et al., 2013; Li & Marlin, 2016). Thus, recent methods circumvent the imputation stage and directly model irregularly sampled time series (Che et al., 2018; Horn et al., 2020).
8
+
9
+ Previous studies (Wu et al., 2021; Li et al., 2020a; Zhang et al., 2019) have noted that inter-sensor correlations bring rich information in modeling time series. However, only a few studies consider the relational structure of irregularly sampled time series, and those which do have limited ability in capturing inter-sensor connections (Wu et al., 2021; Shukla & Marlin, 2018). In contrast, we integrate recent advances in graph neural networks to take advantage of relational structure among sensors. We learn latent graphs from multivariate time series and model time-varying inter-sensor dependencies through neural message passing, establishing graph neural networks as a way to model sample-varying and time-varying structure in complex time series.
10
+
11
+ **Present work.** To address the characteristics of irregularly sampled time series, we propose to model temporal dynamics of sensor dependencies and how those relationships evolve over time. Our intuitive assumption is that the observed sensors can indicate how the unobserved sensors currently behave, which can further improve the representation learning of irregular multivariate time series. We develop RAINDROP<sup>1</sup>, a graph neural network that leverages relational structure to embed and classify irregularly sampled multivariate time series. RAINDROP takes samples as input, each sample containing multiple sensors and each sensor consisting of irregularly recorded observations (e.g., in clinical data, an individual patient's state of health is recorded at irregular time intervals with different subsets of sensors observed at different times). The RAINDROP model is inspired by how raindrops hit a surface at varying times and create ripple effects that propagate through the surface. Mathematically, in RAINDROP, observations (i.e., raindrops) hit a sensor graph (i.e., surface) asynchronously
12
+
13
+ ![](_page_1_Figure_4.jpeg)
14
+
15
+ **Figure 1:** The RAINDROP approach. For sample $S_i$ , sensor u is recorded at time $t_1$ as value $x_{i,u}^{t_1}$ , triggering a propagation and transformation of neural messages along edges of $S_i$ 's sensor dependency graph.
16
+
17
+ and at irregular time intervals. Every observation is processed by passing messages to neighboring sensors (*i.e.*, creating ripples), taking into account the learned sensor dependencies (Figure 1). As such, RAINDROP can handle misaligned observations, varying time gaps, arbitrary numbers of observations, and produce multi-scale embeddings via a novel hierarchical attention.
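The ripple intuition can be illustrated with a toy propagation step: one observation updates its sensor's state and then spreads along weighted edges of the sensor graph. The update rule below is a deliberately simplified stand-in for RAINDROP's learned message passing, not the actual model.

```python
import numpy as np

def ripple_step(h, adj, obs_value, u):
    """One toy propagation step on a sensor graph.

    h:   (n_sensors, dim) current sensor states
    adj: (n_sensors, n_sensors) edge weights of the (learned) sensor graph
    Sensor u receives observation obs_value; its neighbours are then
    updated in proportion to their edge weight to u."""
    h = h.copy()
    h[u] = h[u] + obs_value              # the "raindrop" hits sensor u
    return h + adj[:, [u]] * h[u]        # the "ripple" to u's neighbours
```

In RAINDROP proper, the edge weights are learned per sample, the messages are transformed by neural layers, and attention decides how much each ripple contributes.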
18
+
19
+ We represent dependencies with a separate sensor graph for every sample wherein nodes indicate sensors and edges denote relationships between them. Sensor graphs are latent in the sense that graph connectivity is learned by RAINDROP purely from observational time series. In addition to capturing sensor dependencies within each sample, RAINDROP i) takes advantage of similarities between different samples by sharing parameters when calculating attention weights, and ii) considers importance of sequential sensor observations via temporal attention.
20
+
21
+ RAINDROP adaptively estimates observations based on both neighboring readouts in the temporal domain and similar sensors as determined by the connectivity of optimized sensor graphs. We compare RAINDROP to five state-of-the-art methods on two healthcare datasets and an activity recognition dataset across three experimental settings, including a setup where a subset of sensors in the test set is malfunctioning (*i.e.*, have no readouts at all). Experiments show that RAINDROP outperforms baselines on all datasets with an average AUROC improvement of 3.5% in absolute points on various classification tasks. Further, RAINDROP improves prior work by a 9.3% margin (absolute points in accuracy) when varying subsets of sensors malfunction.
22
+
23
+ # Method
24
+
25
+ Taking experimental Setting 1 (*i.e.*, classic time series classification) as an example, we conduct extensive experiments to compare Raindrop with ODE-RNN (Chen et al., 2020), DGM<sup>2</sup>-O (Wu et al., 2021), EvoNet (Hu et al., 2021), and MTGNN (Wu et al., 2020c). As IP-Net (Shukla & Marlin, 2018) and mTAND (Shukla & Marlin, 2021) are from the same authors, we only compare with mTAND, which is the latest model. For the baselines, we follow the settings provided in their public code. For methods that cannot deal with irregular data (*e.g.*, EvoNet and MTGNN), we first impute the missing data using mean imputation and then feed the data into the model. For forecasting models (*e.g.*, MTGNN), which are strictly not comparable with the proposed classification model, we formulate the task as single-step forecasting, concatenate the learned representations from all sensors, feed them into a fully-connected layer (which works as a classifier) to make the prediction, and use cross-entropy to quantify the loss.
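The mean-imputation preprocessing used for baselines that cannot handle irregular data can be sketched as below, assuming a dense (time, sensor) array paired with a binary observation mask; the helper name and data layout are ours.

```python
import numpy as np

def mean_impute(x, mask):
    """Fill unobserved entries (mask == 0) of a (time, sensors) array with
    each sensor's mean over its observed values; a sensor with no
    observations at all is filled with 0."""
    x = x.astype(float).copy()
    for s in range(x.shape[1]):
        obs = mask[:, s].astype(bool)
        x[~obs, s] = x[obs, s].mean() if obs.any() else 0.0
    return x
```

After imputation the series is regularly shaped, so it can be fed directly to models such as EvoNet or MTGNN; the informative missingness pattern, however, is discarded, which is exactly the limitation RAINDROP avoids.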
2110.06848/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2022-07-14T08:19:47.864Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" version="20.1.1" etag="LWUf2V4ZDAGLr8YH7MN0" type="google"><diagram id="fcdZ6rTaGfZAiKDnHWsA">7V1bk5s4Fv4t++Cq7ge7QEJcHmN3svMwM5vabG0yTylsY5sMNg7G6e78+pEAARICCyyw23HPxSBACJ3vXHTOkTSCs+3LvyN3v/kjXHrBCGjLlxF8GgEAHE3DP6TkNS3RNdNJS9aRv8zKioJP/k+P3piVHv2ld2BujMMwiP09W7gIdztvETNlbhSFz+xtqzBg37p319kbtaLg08INvMptn/1lvElLbWAV5b95/npD36zT79u69OasisPGXYbPpXfB9yM4i8IwTo+2LzMvIL1H+yVt0Ieaq3nDIm8XyzwA0gd+uMEx+7YRMAP86HSOD9bk4MF9pGW4nrw4a3/8Sjsl9l5I+SbeBrhAx4eHOAr/9mZhEEa4ZBfu8J3TlR8EXJEb+OsdPl3gRnu4fPrDi2Ifd/e77MLWXy7Ja6bPGz/2Pu3dBXnnM0YXLovC427pke/R8Fngzr1g6i7+Xifl/MvDXZzBCaD8I8jrvJfaHtRzumBEe+HWi6NXfEv2wNhw7AlKn8rwjCjAnwtwmFnRpoQLmJW5GRzXeeUFxfBBRjQxAaEEAed3AjYR8HLEMwTE44hCemFf+5WZgHPnAS+vpL8e2Brz/Ybg8w3B9xuGgg5AteglhGZ6wvx+DOmF8SGBwDt8g27sX4qLBbYN8q81/fnVH1lPI/QeHz/oj+Q4vVQwRPqmGp7AzcVKpQ65JVZxD/tU06z8F4LljHeoeoAC7JYZaQTgCpF/KlyHr5jJH76yjtyljwlbujZP/pHlBKMRC7rFQkG3q1gQQkEBEsxBkQCuGglL5NlLQ4QEG8xhHRIca6lZlhokIMAAQSQU+gKCdVomDsmU9sJbLESkmNvIQJqYFJ5rzhMyhbhpfkx60dTUUAboHGkG5FH7Tpom0kCdtSQgGI40zp00jaSxdNbIsQejDB2z3kkjJo3B65oBBZqu32nTqGw42ugDSjSq5+60keMbXTBu7Y02Iq8DT5vd8h1xtY3ysXuJHuyQn3T9B3frB+RTfvOCHx5xHrA00Y0K1XDnTL8dt3t6B6kKd170+iV7S3LyV/nk6SV7ZXr2Ss9e/PhL6fiv0nHxCDmhT9ShowkD9chZuodN0ht6dvLRjWMv2iUleGTDAMgqAOQtqZ+yDj6YJOExWnD+09iN1h51D9TArAQjJIARLYu8wI39H2wzRNjK3vAx9JPhHB1oaCyKEeDgmTY/ewqUHJl8RRw7GBZXUfrNlYoSqOefLYd+CbfN25FMDPZUmFqQ0+dDyiWRP+nScsmsl0uarFyaaJpVlk0Ty7avVT6ZtDEfvcjHJCQO4aczZBa4NpllmNbERpZtGtAygWk7Bgt4BzGXrW4CDdocH/EOZ4UCTeR8uzTbsOpcm5gaKLPOGPMEOKnXyRkPwwuyBurOBs7VcQFo5ALdNCa6gS1FZOLRN6QWY2susEymGs1kmUJHwkb0wCMyfsnL8EimJPSSgqDHYuVQcFUbZTQ433TUFzQh4Go4BdqNnGKzjEI5vTWnmJy+4C1phbwg4wi+LC9UDCZgSLLERDMthi0mtg5OsIbI2hla9xRaxtEUs5R1bRwFnEatoEr3GM123nC6R8a7f1F+0yZI2nVSYjXUxklzQbvN7m632VfHO5yegJozcUp/dldmsVnu4Mfz6tgBiCIq7XIFgL5/SUgqThcYj9Bs584D92uWOFDOHyGJA9rvXCFuE5rto3Afh/j4O3kO1zWrJp+Q+xbLkLTy
Af9XVJ6kJJAXH45b8niQ3uvv0l9cksASzEByhquafSPN9L6TriE3kbZN05c8f/2WVRuQan+yp4/tMh+uIy+MYdjTSWKnBBwQOC1GZ3i/KDeZVW+XqQm42Vbg7gIy4atCLywC93DwF+eqBqDJebwy1VAM1AvnlQGaNYTAnKpLkWlKhKlPn+kmy9El5bYOOLmtd/SUO9w4AfUnqMEpQb3IaVMIYkqlimx+F8eRu4ilE7YwdX8ncoIFvLxMijysMLLUSgLBPemSpJPQdISeBFzRyD4lHE6pFRKFMcZESF47hrSGT9k3nBZ0tQJLJi/0VGIoHjPpOmvyQiVIHlPkvtJzzkAPV6uDdzb42sUoBxaO4nBAYRC3CVTmtvcQoQBZyXkxZwuEJXPWYjOy8EiKsXU7ulr4HDzdaaxWoUCViTzeMX1rmCb+jFpM67YSTBuQrdUcCtInpwAIbYSMdhUb4b/enkw2e/MWgg2mqaVashB0yekxuflwlnXQHOXXJrppsjmVYzXWASdbx5w/Qo1tUD/bYOn/EDoQyCB4nCGDeBACb5ViSuNdC0atayGflyWof56TdJwBnlQWrecPAGEWmZGaSweP3d7dzuyW8KPUvfBjePATEJA2kgEU0B7c1H5/rGXPtPiwd3fCd3P9Yhhpb6Ds14RtOyW5/J94g5kcY8H3nvFPuML/w0WkBe42+9kH3gQ/hQ/wY7v5YV9qLv6KtMXsV7Az7fKyBF5s6a+LuKoQPwNxf3prl0dcRLTBKbw1tRAsoAlXilqYXGZB9Dk1FrB4w2KaTDrOAOgm4iqF4a74sBSKh8mJL5LB3i/h4zOqivVD8jfqIyMXj2O5qZQj3hVoWQIrETr1OlLaFShKTzghT+sFTHLFx4QhH04uaRmkW7N/7cuHkT5iyZfUssrA8y6hRoGfhrdkF9L6yJVdGG3dIL0WeCR+ND4QWO3W1etJn2IFuDtg8bSl1zM3jvYcRkv2Wa3m0z5vvCQk0MDrmURbuQv2u4vPVCTQTveyRC2FfGuUXiJBd4Pk5TVEJvKJeIgS3eYSY8V9dongWUXhNtcT7m6xoXyUd1CkhMCDmY0udfFSHV5HfqaP/EMy7FgeF1h54Ga6x0NyMH/NO+fPj7MGA+6Nc9D2SJY+CfzEjv1+UvS2luHH85rHmMrHvK/LVkwLC7rG4BHVoAj9ldrFttOJRH+RXSLjCdCSP1kLpdmnDxHnD6Xel3K0kt5TNlHMhmG8tIkiyhrjlypZ3JcqaaAfsq0LLjUjM8m47+VKoMmmmVDbmQGwaHaJCiMbnkw8EWrIjIPbaYzmBJXGBBNSLckU0cajJHVkhVUqyQxBM+9l/4B/Ane3Drz0IlmRar4iaS74aT6pZVZ7AyiyWaKstg/4OHaPj1keivq3CV5FADRNq87TZpKcmSRhhs2Wwbd9yy6S1v0L/5Ab/iSO2uT0iUupUfcdTPZNQ689tdRczSCpVZpXKUz7yO+pC7t3Gfun0q8+8QdVJREyzaokUpH4A69r3voQ69XUdX8eIuNSgAXOFyDQC3ySfCdqiNJcLkeNM+eDKqEGYK0Ue0BiyKR93NaiXqeI4XCsoQ1IjZuaKq2EGhobNdWN6hCsN2rITI/+pagBuJkj5oCsITPp9pciBoSs1rAG5IybWnVOCWfwWgMOSI2bWmhOCTUgl56FBqSGhNvnsHH35HAVeC9Z9uG0p0TEOvfo6QRFjqx5ilWFrNMGsk5bJA3WEXWgCfp05QkqTrtmBnL16IirSF0yoFHvYLtHsW8qzJlFsZPg3K2G5fZ5Dtw9sF0X2CbMT6K3iyA8kJNk+uRF49lKk9KiNB35Hs1uUcs9mn2PZsv53POJZKeif8g83ww2RD73dhm8eZhEmQ3AGR1JAiKxNXRi7CUHGmhpdNSn4IjwK8hnHqyll+9+2VqW3iLy3EOSxvXsY6jiyrCCDTgR149kak9BmdSLq5YPdOTisCMXIPC16ghW5YWKtSiN64oKXWGMDtKlugZw
YhhXFha6vEuJZgnlmxUMSIwriwpdPkbHxyGABYejxj0qxOsNboGC3CM3BDXuYSE+6ZEjBl2/aghi3MNClY1U2EAE1AZUG/ewUCVIx1IDoAHVxk1tOaSEN3QuN1mQXd8XNZAocnIDQbo83NZq2JKH9kZvLUgHELeKaNcoHYRcReqidOgaPWFvPQTQ06IB53T0YF10+042YYxPmT9fYmL/23DXsfqTbprOeOtAVRQr2YhYwlvX+1bMJr+llyC+AU2r2gOQX8OxUxfUbyWeSQK5CSk1s1G+fPUvt+GuFDzT3q93XyLO2HaqQx+aGsNYCgpmniEZj1lrU67M8dU+67Ium4S9RTHOLKwKTzAGa9uMoaDnG2y0FotPcXl2EHIWnaw5xm/G4HD1KLTG6hdQa7WyUO0kMo5nm5TNwAuqGU2cLSOP0alFzij5KfDAeUa/0kXM0PlbpjfJ67i6bvnB347y6Y+YfPjy/9rvoz48Rkpr6IHT87b7RZTBCgY1a+YBTtr0sWYearcBUgcVVHIF2I0aiV1N/LY1Ur7X6rkaCXI7bfapkupXelApmcBNSSZo9C2ams1bbEhMHMd0ENAMRzf43AkInImmGchBmubYdmVTnU5WrciF3BInWj1OMAjIoINfIOFSY5BO64GcGpOwLC0IjYkWAzEVjBZNkcNZHY/ztAOXpZ0KWkE67/YS1Oplp5Aymqt9pmKvqK7au9nRNJz2Huuce7+z9uZmDTu9bdpBUXmzUKmHha6ZE2RrxR/LsGO96pBUAxMIDXa19RpJ0Ro1zfXqNhhmkXezX9fihl+B5+2tftPDUmKtTAddkOIkXEdMgdlHJ9n3joYORsOvigbIb6FJk20GWFfOlElAHEzjGA51NRCdM9YmJObUdreyrntcdrR3qHwt7zKZ2jjXYAMBbv6w1nEiMmcDWf2ZQDI5mFdoAg2AOgZh9klzioUY7cYe7Cc0QVbFkMmXriCui+Kvqwne+BLE7aJqc1POFKKzfql3RSPvn1fsNblA7n6d6iyrRr0KYxVr5pr9elJ5Wl/Yy3IFU5hkzaQhaK/AO9pE+3xh2PVDtidxsXTr/3Ev4efXI7JLMb2YFn/NW0ESmLIa72a2HH64FBKrOuhyBAqRt3S6oMnq2V97R9PwaOLjvv3BCZ9GISF7YbPgLtj8ES5JtOz9Pw==</diagram></mxfile>
2110.06848/main_diagram/main_diagram.pdf ADDED
Binary file (94 kB). View file
 
2110.06848/paper_text/intro_method.md ADDED
@@ -0,0 +1,19 @@
1
+ # Introduction
2
+
3
+ As a fundamental task in machine learning, representation learning aims to extract information useful for downstream tasks from raw data, and has been a long-standing goal over the past decades. Recent progress in representation learning has reached a significant milestone through self-supervised learning (SSL), which facilitates feature learning by exploiting massive amounts of raw data without any annotated supervision. In the early stage of SSL, representation learning focused on pretext tasks, which are addressed by generating pseudo-labels for unlabeled data through different transformations, such as solving jigsaw puzzles [@noroozi2016unsupervised], colorization [@zhang2016colorful] and rotation prediction [@gidaris2018unsupervised]. Though these approaches succeeded in computer vision, a large gap remained between them and supervised learning. Recently, there has been significant advancement in using contrastive learning [@wu2018unsupervised; @oord2018representation; @tian2019contrastive; @he2020momentum; @chen2020simple] for self-supervised pre-training, which substantially closes the gap between SSL methods and supervised learning. Contrastive SSL methods, e.g., SimCLR [@chen2020simple], in general try to pull different views of the same instance close and push different instances far apart in the representation space.
4
+
5
+ Despite the evident progress of state-of-the-art contrastive SSL methods, several challenges remain for future development in this direction: 1) the SOTA models, *e.g.*, [@he2020momentum], may require specific structures such as a momentum encoder and large memory queues, which can complicate the underlying representation learning; 2) contrastive SSL models, *e.g.*, [@chen2020simple], often depend on large batch sizes and many training epochs to achieve competitive performance, posing a computational challenge for academia to explore this direction; 3) they tend to be sensitive to hyperparameters and optimizers, making it harder to reproduce results on various benchmarks.
6
+
7
+ Through the analysis of the widely adopted InfoNCE loss in contrastive learning, we identified a negative-positive-coupling (NPC) multiplier $q_B$ in the gradient as shown in Proposition [1](#prop:coupling){reference-type="ref" reference="prop:coupling"}. The NPC multiplier modulates the gradient of each sample, and it reduces the learning efficiency due to easy SSL classification tasks: 1) when a positive sample is very close to the anchor; 2) when negative samples are far away from the anchor; and 3) when there is only a small number of negative samples (i.e., a small batch size). A less-informative (nearby) positive view would reduce the gradient from a batch of informative negative samples or vice versa. Such a coupling exacerbates when smaller batch sizes are used.
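The three cases above can be illustrated numerically. The sketch below is a hypothetical simplification (our own, not the paper's code): it takes $q_B$ to be one minus the softmax probability that InfoNCE assigns to the positive pair, so easy positives, far-away negatives, and small batches all shrink $q_B$ and hence the gradient.

```python
import numpy as np

# Toy illustration of the NPC multiplier (a hypothetical simplification of
# Proposition 1): q_B is taken as one minus the softmax probability that
# InfoNCE assigns to the positive pair.
def npc_multiplier(sim_pos, sim_negs, tau=0.5):
    logits = np.concatenate(([sim_pos], sim_negs)) / tau
    p = np.exp(logits) / np.exp(logits).sum()
    return 1.0 - p[0]                       # small q_B -> weak gradient

# Case 1: positive close to the anchor, few far-away negatives -> small q_B.
q_easy = npc_multiplier(0.95, np.full(8, -0.5))
# Case 2: same positive, many negatives (large batch) -> q_B grows.
q_more_negs = npc_multiplier(0.95, np.full(255, -0.5))
# Case 3: positive not yet aligned with the anchor -> q_B close to 1.
q_hard = npc_multiplier(0.10, np.full(255, 0.0))
```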
8
+
9
+ Meanwhile, we also investigate the relationship between $q_B$ and batch size through the baseline, SimCLR. As can be seen in Figure [1](#fig:coupling){reference-type="ref" reference="fig:coupling"}, the distribution of $q_B$ is strongly correlated with the batch size. Figure [1](#fig:coupling){reference-type="ref" reference="fig:coupling"}(a) shows that as the batch size increases, $q_B$ not only approaches $1$ but also has a smaller coefficient of variation $C_v$; a distribution with larger $C_v$ has higher statistical dispersion, and vice versa. Figure [1](#fig:coupling){reference-type="ref" reference="fig:coupling"}(b) indicates that the mode of $q_B$ also shifts from $0$ to $1$ as the batch size becomes larger. Hence, it is reasonable to fix the value of $q_B$, alleviating the influence of batch size.
10
+
11
+ By removing the coupling term from the InfoNCE loss, we reach a new formulation, *decoupled contrastive learning* (DCL). The new objective significantly improves training efficiency, is less sensitive to sub-optimal hyper-parameters, and requires neither large batches, momentum encoding, nor many training epochs to achieve competitive performance on various benchmarks. The main contributions of the proposed DCL can be characterized as follows:
12
+
13
+ - We provide both theoretical analysis and empirical evidence to show the NPC effect in the InfoNCE-based contrastive learning;
14
+
15
+ - We introduce the DCL objective, which casts off the NPC coupling phenomenon, significantly improves training efficiency, and is less sensitive to sub-optimal hyper-parameters;
16
+
17
+ - Extensive experiments demonstrate that DCL achieves competitive performance **without** large batch sizes, many training epochs, momentum encoding, or additional tricks such as stop-gradient and multi-cropping, making it a plug-and-play improvement to the widely adopted InfoNCE-based contrastive learning;
18
+
19
+ - We show that DCL can be easily combined with the SOTA contrastive methods, e.g. NNCLR [@dwibedi2021little], to achieve further improvements.
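The decoupled objective described above can be sketched per sample as follows (an assumed form based on the description, not the authors' implementation): the only change from InfoNCE is dropping the positive pair from the denominator.

```python
import numpy as np

# Minimal per-sample sketch (assumed form, not the authors' implementation):
# DCL drops the positive pair from the InfoNCE denominator, removing the
# negative-positive coupling.
def infonce(sim_pos, sim_negs, tau=0.1):
    z = np.exp(sim_pos / tau) + np.exp(sim_negs / tau).sum()
    return -sim_pos / tau + np.log(z)

def dcl(sim_pos, sim_negs, tau=0.1):
    z = np.exp(sim_negs / tau).sum()        # positive term removed
    return -sim_pos / tau + np.log(z)

sim_pos, sim_negs = 0.8, np.array([0.1, -0.2, 0.3])
l_info, l_dcl = infonce(sim_pos, sim_negs), dcl(sim_pos, sim_negs)
```

With the coupling removed, the gradient of the positive term is a constant $-1/\tau$, independent of the negatives, so an easy positive no longer suppresses the learning signal from informative negatives.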
2202.06242/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2022-01-25T16:22:23.779Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36" etag="nKPTfVvGquZMg4ifRaUR" version="16.4.6" type="google"><diagram id="4qL40A6LAgkbrwE97hGc" name="Page-1">7X1Zl6JY0/Wv6bWe76JzMQ+XDIqgoCgKevMuBASUSWb49d85mlmVDlmd1Z1VlXZ1dldVMgcROyJ2xAHOH7gQt1JuZ4Gaul70B4a47R+4+AeG0SgF/oYruvMKCiXOK/w8dM+r0K8rFmHvPa9EntdWoesVFzuWaRqVYXa50kmTxHPKi3V2nqfN5W67NLq8amb73s2KhWNHt2vN0C2D81qGRL6uH3mhHzxfGUPJ84bYftn3ec8isN20ebUKH/yBC3maluff4lbwIqi6F7Wcjxu+sfWLXLmXlO85gKgm8iaeLGaGueAlZNwu1t6fz7ao7ah6vt9nYcvuRQGeC/TxvJjmZZD6aWJHg69r+TytEteDl0HA0td9JmmagZUoWLn3yrJ7Nq5dlSlYFZRx9Lz19lae765Iq9x5lqNyJhhRICnByWnFM5TNzus/mWdtl3bue8/HBt7u/9w/MT86Flv6IFu14K//JJ5hCO/m1SWeVSV5aeyVeQd2yL3ILsP60vr2M4j8L/t9OXSWhkBoDHnGO0VRTyiDfPl5PsEz+FEUf6LYy3Oe7/H5NK/td31mlvyLM2Ps160IdnmVs4JursLlud292i2DOxTfuD0GubwsQn5b6G/vD345S/Cy9MoUX1edgHwf1HdBQd+AWk6yCkhFRQAi/DYHv/nwt/+hKNzRAn++/ob/vxsXKL22vERsUebpwRPSKM3BmiRNoB/swii6WmVHoZ+ARQcg2wPr+drLyxBEF+55Qxy67smJmiAsvUVmn+DegFB641i7NCmfXQjE0OflZyHvRIPnVfCCXvtNyD9vxclLU1H0E/L659mDmq9REEWerRu8ioA48rbPvLL795sVI+8Eq7NJqwt7UccKBtWTgv4sThrjoImJrD0Fm5ftLzB4DYx/cBqsBVuFNKkxcPBw7k2WH3Ri1QZulQL9UHYMYZFsi+zjxH45DZC5+rrut/QAjHliL36oC49gnij8tUPgtw6B0k80/TN9gvrrBA6IRwZ/DeMT1eFP/3JFdmZLUK/2y8IubKGyv9hoYm+9aJYWYRmm0FbbtCzT+I4RS5jpb+zzfEXRtUsboPC8iA2L2v8D41sAJUyYjTRs0/HE1mwrp0dCezRHHDGtJ7iLux2Jqx1ZO7FTq3uuUQW2d2MnlEduthnN09lCRqbGspqKHKIaKqEafq8Zh17rl6EsBZFtuqkrIiFYh0/2DjYVD2CfQ6/2S1LtfXAeDfHMNpotlL1rKf0WlylZQms3XtLyaF67ppZOzOHelaJ6u0/9bcxWG6PwJ5YbreMycwUUWVtcaUtRYktttsECZGJq9VZiu2nIdxtMCcYiV8hf/2TTTvbdOIpcRKk9KFt/aCb7Q6vuB7gq6q0qDlBtL/uqwXWTPUeC7djU4FDNGOCa4RCyqJPagiBO+/Z+OxVVdGqouLbgwL9OpfVrYmqA9YZ60sVUkIGuZ5LjuyMl2CZavMWVEtwx4sTDysHawJWW6cQYIFpHtNpeBxpdgyPXjbrXcS1sSrBcqeIBAxKS2l5FNXHdT41BBrS3XwPtjcVBpe4dBB4DJAdSyiTQNDZdHMLZvm3W1jyVJa1YW1oPrlvYJpnLIV+uLSXfWHK5MckDsCSwmM7KBwQcz1Xaomm1HmoMWEsg0Imotqog+w4+J7fSkpUTDcjP5psFitvmHLHF1Af7ERqwDtin22JlNDHZZm1qmTs6ULI4oKG
1T8cYqe/EbgjkBtppfK3XwXUG6NgowklPMI40RGyB32+lYQ8RuZWiygZW25rDzsZW3cRcVRtgOXAPQG5FA2gDFlwWquE0E4EDyDrJD3TIvUZYD/Rf29iydLHo4Eo+kIkDyF1+lWuB1uDae9uak9OQa6cd/3IMJY+UZB360OJAVgcB12m1xYGWQwZad8QDK/r+GljTwVVWjttsGxdAo2jgxGXkhGi0jYF2Q9mHkhuGg561CjAHNACwcr4DgEVwV+CqUbExkPCLJq2zvwGpwH4++kqycguSwhqLEA9oVX2l/TP+5RT6JZAW18B1VGMJ8V9/HCr0/sFQgQD5Pz0q5PYSFYPPjooORMvXsQL99KjYDy5ixfQBYsUlKjRRfTBU6PjnR8XyAhXqI6ACvYwVy0+PCu0yVpAPhgpEfYQMgl7GCufTo2JqPBjb3B8ejm3K5IOxzStUPADbvETFQ7BNtX8wttmr/YOxzV59OLZ5iYoHYJtXqHgAttk7D8c2VfzB2GZ/1cX6/GzzEhUPwTYvUfEAbBNRH4xtauLg4dim3j4Y27xCxednm1eoeAi2qaMPxjYR7cF6m5q4fDi2qaMPxjavUPH52aYmHh6Oberkg7FNZPpgvc0rVDwE27xExQOwTVR9sJF0zVAfjm0u+wdjm1eoeAC2eYmKT802X7QqnypT1Ri0n933viB3f6r/WyDzZ896L0jozj0WoGPj8zPOt5Dxef3vda10iYxPWzu9ZpkXyPjMmeR1fr6IGZ+Wxb2FjE9cP73mFBfI+Mz1yOva/wIZn7YX8LpmuowZn5YXvYWMT12TvK70Hoxn9IdH4xn91Hg8nnGNjM/PM2D2eDCegaj94/EMvX80nnGNjAfgGch1bfIIPEO/rlo/Pc+AXbgH4xnXyPjUPONLlwt/sNEzVHuwZ7W0q7eIPjPHuI+KTxyV30DFA4yeXb0v8pmzyAsq1u2DjZ5h6qONnl2i4iEyyLp9sNGzqzcOl5+2sn7j3TL0EWLF1fsixqfnFVdvBjwAKi6fAX8IVFw91/n5UXH1BN/nR8XVs1oPgYqr5y8+PyquRtofABWXNchDoOKKV3xeVLzW6sN1hpbXY5Oftp5+jYQH6wxdI+NT8/o3n2f4tDX126PWn7YWeXNs8hGegboZgfr0yLgdZ/j0yLjpJj8CMq6zyedFxhtf8+k+s5bf+JrP/tNi+Y0vdMifHxWXb90/BCqu3rr//Ki4epP286Pi6u24h0DF1dtxnx8VV2+8PAAqLp9ifwhUXI36fn5UXI3k/DxUgHvsgH4jeSSz8n6AAH2Fr+Svttg8AvJnUL/qkIF63J+/AnpCB9yXBPfTXqEits22eNGVZpyuSWghvL6f3UFAr4JtICuB/WTyb8rWviFbd9Wn+BWydW/Jpv162fq3ZJv+etmQN2Tr1V+PN/Qt2T6BL2BvyIZ8Al/A35LtE/gC8ZZsn8AXyDdkQz+BL5xGCNa4D/Y95+CTrAb8kvVl7t3EUbEVX3LS8pTxp+IaZn9ifC/P9vJJLrBfN/3KDCQtc2Kt2Cz47pR9nq/4hdvE7GGzkH3bBFl2z+9AxQ6lJ8C+zxkXSnsA29aXYxvxJttKzYtm0HN29zGY6afC4U5WVbuzZCcNvrAAwHUUIMvSd/ZAO4Z+vt4XFkPWGwlqM4JMoLWg9c6yPXOsF9muWMAvkW3whmzqJ9Cb/JZs6K+XTX1LNvLXy6a/IdvVk/a/RLblW7J9Al9YvyHb8lf7Qvcqhrz0gL8ypV+duaKvzPylc/qSHa6fsfsV2UHZAV99zl0v3ehP463dq0hybdnrUbZfYVnkLctej0B8kGVH88gb6V+1lZyWgYRzqJFK7VfRFmjzzJIG6En60SnPX+eEepvopYNF1OZU4SuNJ5414Eorwn3VJzkvy/4Gnickes1cleDOoEUIOJcHtN7pjvfXWGYrV0B728oi2PdQE554RgCysQLklVbPyyF/gOeZgPy1MREg9wmRDWB7f+MaTb0B97K
RVrED/n3p8ZxRpQROotTu1bweqsCV2gKcA44jCV/7QqtzXwcyu6+dl8OruUpgDwlXso1JIvDcrjWvZWhFc7M+sdOQQKfGgbz8o7+aaePUCWkmAoIasAMp8qoqNOi9js0LOqanuUl82JciVOh50rNnSmS9hXOhnNEisC99l/PSAcjUQ904+CaZ+XCyGfj/27O5fcdEPCjyxLAXk++Q2BNK35mBinh6mXHv9YQ7KEY+UdSPmnTnHbPmXU5cdG9qo1dzKd3Ma/RqGqU/MFwQKApBPkizyOWEbCTyRJI3asXwOyr9cZMY3U7Y9jD6xBHqCSduJ4X6ot5b3d6ZMw3FfphumRvd/g99numOxinmd53sDjsFiDen+kJJ9Im9nd+L+Znz3eG38919MR2BYxTxu9oOpdGnq7kKaeKJJF5Zk/nFprudlu2L6TDq97UcRoNoib+y05UV0V9stp+ciHaM4znOja3Bli1DEuTHpagLNTP01SyHd1LUE3VnFkP6R+mdwP9a717icnA6Z4j0yC6K0Pm2qoG+8s56NtNpYQ0XQOB4XhTb1xvF7mWpDcvTYU8swTwvn48kEOx5+euhcOH1kTMvD4FOoB+K3zTf65mN705YjL9Mdv1qZuO7yiOfY91fzmz8ysDkHa96WfcPJ0DG8UuCec3C3zvdMU6zTyz5KqLTF6f9k32iXm1licurfPd0xxf4/b45iFOTinGq9L1jOOH4YUeTUvUyB/HPnUf7BbvoBW7Jb6P2b/rJu5B9H7DIO5FNoe9E9vdB9m0Q3EciQV/NYY1i30Tuzf7P1dQPnfP6Ral3Jkfevszqy72a6nf75lS/j5fgCOQyNrC3fQL8TsDD6C+l74dnNBL7PBntTQ3/dWpBPldqwWCtfeWLIE9fnei9GYZAr/36it+8kUQ+zGVvWc/cO0/WfDNT/Q14fo96ASeYJ4S8oaovfo493bauiHue/qOI6wu5+fxp/r3BhP6BeZ/8tXn/JgCQ1CXFvKpGKeSyVr3KFd9NMd+IQtRVFCLpb0et6/0J8iewi9tW1A272P58drHzqPvswqXZ7Ud1eAkEoOSqO0gQN3HnXlsXJ55o9kcxjNsW03/Z4zp7UE/M66bgtRXRSwfHfm0yId/RffrZXZDvSTHvygz3ex3Yewkp+bkIKXJLSNlvZ413tz/I21MTfyshfVQOoIh/G9fBfiDXYT4d1yG/wXUgaN/XQbs9MYiib5/4T5R4goOtLz8YdnWZN/zhw4jL7fDnDXHJfjpx8VCX9Oh7xIWlaNymfhhxIe70+rE7vAV7Yj7g4Ym7jVHyUaLIl6jxJ/KEoNRl6GCYv0qgcOlvjQZ8M0z8ZTxhsB8ST763qiFfhvJechfCXCHnB1QpL0r6lrNPs1IDCnyPx/+Fj9tF5jnQFLuwhdj8K6d3GQSh8XtOz5EIQrwkrov6Bv58UDBg4INSr8L0ZcuUZp8I+tVW6iZOMPQTcudBq1erP5wQ0/cegGh/39FznMS+0Q2jYNC+fRDpV1Uz9L1q5ux1RWYnF/ajjlVaPivsz+ZZOg7sQj8X8chpS3HSLVyPEll72vBy5IsT/49AzgD54t/niz1UHfwRUPlm4xRFsaeLp2VuacHPxQr7S7CCtgTyH1Rwgvxml4Rknljstur8VVh5ITjf6pJ8d+Z+o694re0f2mckWaDoy8faaOyJem2a23HNe6onfpjq3/GYzkOqnkKIJ/a1or/Jjd4XLH+cFd7x9PljWgFFn+g3cxZz+0jnz1X7vSGQf4fa2Secfkvt7F8+WPujrHC3U/zNCu8tlvCKC5CAC9wQAVBTx2Fy0sgL66iyl43Ga4LwdbUO9kX+hLpvb8dbimq7h5YG9XoK/uLgviexEFht2jHERbItsrMMN4M176hN/7VshKDQJ+T1zxUbYZ+Yi1h8y0ZQHH/CmVtMfn1o/MNReeeFIAzafFhFEbw3IU0SgAegNwyZ2B1QO6RHkKvOvcnyQYz7upolP8DSKEM+EeRtaPk
y2gECz61xyTs9CAzUOh8wzPrNx3B/yCPhyG0O+Dmt4etHwkG1iN8b0r77GDiB3Txy9XHqfge7/0h1/5xHCK7Vjd59LeyusrGPeYTgvrLvMclz8oFqe1dCJe4l1C8PIgxv09si3ZWx3d7dNozssvSSVxnwLMdDJcEP7/qh+NNlMx1jLl85vNPqQ0GiZO+g6Ufxsve8OXPGBC6W9vak2ld2u6florTzF8VCPTtAr3aYnAZT0NNyFNlZEZ5Odt4jCCMX5Ne0Kl8u87J0YSXmB1iJhoNtt0Xiy2ug+BNF3mE1rywGiDb1yqgodWs9Fv2Qd5rvG/DeQOkbBgSKKkM7mgNKYyf+e2x5ays3TzPjZTALrjiNKHn5oPbgwNKza916bnka8YMbI2/3cuw2Lcs0fl7InxX25aQnVZE8+B+gX4DDfiS4GwEso1+XSfG0e14CrgaiiB2ekODZRdl4RXkFmGu3Z+6miW87yl8j6/vA8WKvD0fGN94s+HZz9lWWYO5lCfS9rdd/grsT8ba/4u4fQyoFlttFpyewApAkQLb6OGwwfwcbL8XHe8HwfLKvWvzus9kRcNTELj0eJsJ/+vrUfVXcjgjgj42LazL6koReUc7hEGfwH4OX9tK6f/Gph38An3ec7SfAh7iNWbdPrz0UfH5OWPnyBtstTLAPhck7zvYzYHJb3H8s6/l3w4S68zTZdzKTv4DJO872M2DyjqbEf9z4p3Lj7hIM30mVibcB+c+o8m08+Y+1vOlN/5zlfsOOf4PlXp3tZwSW2wH135emfAcuvpPN/n2YvONsPwMmd4aX/oPJh7PZvw+Td5ztZ8Dk9gkF9D+YfDib/fswecfZfgZMbl8c/Y/Nfgo2S/ytUYFvlVf/jM3elj2/MTuh3m3Nf3tvlrgd6/uN6cj7cfG79WBvRxR/Yzry3TD5bXqw/7qRnp8Kk9+lB0vefmnpP5L6K0gqdfsE+C9tuf7ryMgPaLmSH0deH73leueDEL8vK/kOXPxmLVfyX8dKfm1Y+c1asdR/z5/8E5j8Lq1Y6t4Ltd/zOhv2xqvtouek7sWHAP/ByR70hTTsLkK///VIinli0DffSsWeiFtCfPdr0x/y+tndTyqht4z3J35T6et3lNavtvyDTwR+czKcv/wAEv7eTwS++wNI/8w2b7v4d71jc88x/3td5o6/svdnN3l5nfl2hhmUwJ/uvC76Ea/K3EfE229duWH9Yj0l9Au7AbstKjc9VPCXNKrKMH1t7lf73znF/4iTVjDk4ns4F8f8lgihWBC0rz5PRt4GcRRjn14+NvLRsw/dB8bbH1l+ZbRp4v05SmHgGCQwybv/DBr/QeTut1ho6gmlX7W0LtGC3UML+UTdeTeZJj9kkgnK74luOpmUyf+tJk04knxi9+c7PgTyE3jAm+r+y1T9wrh/9LdP0ZeHQb58/up9H+X9GzT/rp3e8eWQl1owjG2oC/70L/fyvRBomZuPh7y4z8TeetEsLcJTBMDFlxLvxr9OBSH/fAXRtUsb8IvzIjbMEv8PTAhX/HTeIGPJTznwoy2WwWDpg99GcJFvBE6G/yp+rDrgF3s/iAb6ak4kU9Q10blhbkCpAT+ZwROkvJD0QDCcJkMGA0HiOD0SlqtQDiN0IQyD1DZYhN0cqwClWaTUZlvLAXKANXZpHqkMs+MjupuZERbl5mqTHk3TlpNxu6hSnFsgOjcfBDoXDASBt1SC2Yq9dFBrZW5VxxiasC3EKEl6hrZg3cKe3jcebuCrZoPpiFAYq27AbvxilTfGjuzXJK7L0tRdTnacOu4O1czHg4aOyOmiwcxuDO5pGHGUBE7EAzn5vd/oWutY4Kz8UCBKw1mBy7cEM2XkcDlfWMNWHajGdGSURztOiRFHrTLKX3i16HO6IIdE3fusgGjD1XC+XGnGEnWtFSroxcSuULBqS9sZNw7UmZHMzsKTExz+3bGivhQZD6e1RGyGvtL43EIOZzmFZWtXzKwMiBTOx/5+3y2j1iB
W4z6lqBLF8QZRhGU0XkSZp6VgL21vZJHAFRMcvmM+rJEmZp2kMHNxMxm740xbmsLaDCyBGnXUalltem6dS3ZMgvwwpFhtVtFHmltw1SQkyKmb5Et0nlPqXg55NuJ8gA9nc1CjkW3nq01u2Iko5u5myO4sDC2TpTynWBYgS5i09rHMd3XTNGzv4DN9qFOjgl4DT+dl3bBwfIWVx2NCouVW6pyYZEa+YixXY3a2hPM9S4tDtU2GNaZk5XKZrg29ckcunjtIMxUDhK3ajZaVK3MVkzbrVQeF02c9FR3jDN+NWsdUQr1vPX/cN96kg1M2c5LPDP14NhPJGuioC9siVzpS9ecEowiLRO72nj0sXNfTUW2xymqFyRVyO+2UCmsSgySW4dLYYKsEBfe6DFB9oGRqMEwrdxJvDpuxOGBmAsBVj3UebxsDdtaDlDKcEX1YLY8HhsT7EulswRxgjSl0pDVdVxhrLNnZLBk0nH9AEpx2C2sUkFNw3GKKYOF4vuB0rqCnUpIRHjcWDduMMtvbV6xXQOQIOD2mA5Khtto2cMkxG+QKvdinjRrxnMJvY8qrsP4syG43mlGUqyUcz7kje7Uch4lNwi3gL353AGGHbEH4HuZqzzVcq66PcigfqPFE2czGNCPTKQV2LLajeTQ/sqY3pSzLqPbQfWhk6SDklAIhQKq2MXk8bDNWaiaramwKjckJTRlvCMb00EV6nO6XptI4ahAXVLLCVsdSCxcDEql5ay1qm2RKWirDInG128JpqbWZEaFUMh+2xnoatMViEMiLJirX5mhhmTYQ2MKw1uSkPkePSD5mSpwue3Sr9CLhQVkUZI3N9uR4Me0B++YDxAwy2y7YZDkO4PlPzuhbms6g8QH1xFHAmcNgHUQGefbSoEG5xCUGVGUlGeWVkwUCILKMlPNk3X1BVCJ5UHwf+IckjvYB6tU4G6wX0gaN0wlqZqg70pcH5jBoFG4+TGtlfcSnVG1R2BLkH35utqqozHiJZ3Ys2ZcGvKtIw3C7HNX41mc2cXa0j8NiIjaM2GjCbiUcJ3uCTdTowEnymvfmOItvaXe2bxh6qM31ZilUXh+N+Vwg2bpgj1IgO8C3bCxQgOizzJIE251q68UooM+3CcNF0cWLBlkJvhnCSIgi02FLxau4nuM91JJT+6HAdZyq1FpIVBP29M2bIZ7bTXHA1L27WZjRMV8g8SbehOtEATqZLHBtK65hLVv3awZVodLxBMbuej/punqWIKxTW0e0yMcLkPT5FcNqI2AVvs6pCdqmllbtsuPisBR3WxR3IZ9NgE0NzG1sTuTkTSDNDxtbE9aJsXRmdT+kxz2/Z+WwVXcjtT2o8Uo/HiiiClGnGB7ZWe0O3FQbzQtDS/oMyJzag2C8SLJe2vRj4GTOiG92NHSMTA+mfU0n1s5LTVrfhjupZXbKLkTY2QJJNriVYW6ykfRwu0s16gClKw1Gjzbx+b9kf4rEyVnFO5+XjUF3gF8LAlZdxFFmQlSKYusJ8k7kU2E1dOoMoVy2tNlFvqRjdnfcVPiu5oekr/TcRpRWRyqdjsQeP6TLRbpudke2PS5b1RiQqrMDl+kLLlJMdjRXDaktthD4q25kqOu9Xic9HAsZMlOBNzA7Qvywm9EoSrHbxfIoAjvzW3DMzkBcZe4B1Qgc349sLMv3LUJrex2ZK/ykGe4Ck1R3QPNKMuQWxhKzeF2BaC7WEkxYJVW7pKX5kl/O5uU0QwBhsMXUVveCZvVdaeZtasJkO/EJgpkIGLBFKyntzkLY/XJD787Y8g6dO8MrGJggm4iXW8KAqa/mdrO+DQrcz0Wzh9A5ZGO9Ss5HpSNO1MecdXw+yWwQxozZmUKL+ZmPS3uwkmZINgf3Vbh1uWdp6wj25G3TkXfMjsO0wJc4ZBs0JEI0HgyhUTRfdrk7y6A5Z4bR40mGzFMH13r4R1ZX1JoOOtLJi+2Ay1NfbEVUEc92XyTQgWhvvtoPq6Agu6GfaOqe44aUWI0PADb
8eBzoyHBtAuxW+RZeBR5iCjrSof5BU2uSzw6IPOE3xpg3YrJaZ5EdMbOA4ejyfJ/Q9CQDfI2H9zL1qS1YYZ+3QZNYUNCzQNWybtPlUOwTVjb0QF4OUpFTOT/ncTfZUqaz8eUOkgsF1KKLo8o1MlfPqd0qWHujjOJTeh80zE5weWIVrc2BPOdkdT/u4iNIY6hoBIw81HMRk1rMXEWb1VA/BIw65GTZb4qZaKKgvlht6ynFLAdzxK3NU4CoaiunPbRcQRHHYZFD4WszNxvMxoyDsyq3MUIlBRoWhDcMzne2LfcgFu2JWJHnWAhBMhKDcukm4rB2j/4CnmK73VJesmnCoV54ExAzV8M45YnZJOp6yhqsRWUzltlNPR4vjhbgq4UVM1h85NHBeiKtpRSTenNzDnc8SrbtuleN0QZhAc1ThJl7HAjcfDfiTLh9e6BrxFsqK86OQLIawOjdFIrQTreCn4NkDgEE+xO8ewiKidAcu2IrsuN4AwMO2EiU8G95whzs8T7FdyVWWIEj0bs6wXM0xI8usMryOF9QG4R22S02Q4YbFNmNSaNxeEKMSNOcaDCUylmbj2FWH8C8mY/D1KGYumFmIogwzpA7DMN1JdM0uegxs+GMcVHS07zLjenqFImPpQWEL7JU4xQO8lronQEzy2BsZGvA6DvziC8NA6W0UY9QGlgtBQFV1vZKHyvcROPFhtflOGyWnACsRFarqCcP7GCbkQ425HV9RNJMPnYO5YSiUdoF/HeXnJgGrfaTkKwaYqNynD+DiIiWcxCLtPlRWwr7gvZAkW42jhjvQBRHwL9uta1oIdALeLx+4JRspiibghvwg8DZeijt7yp8Wyfams31dCUQG4L1Ojm0ghFvhp0Z954o081x3NXbaiTtUfPIQlNAW3G8DiJzdFB9PgUBcjA01umAOw6nKrUF/ExxBnq43lkihcQJgmgwsU9MgORJtmjPOAWygl9gbWarwCa8PxfArbjWBtN8ASknRupOjiAmNIeMjr4kZRrHcXaWsgPZ94NxOZfHgRFODcC3ACYxfdqlY6G21gIWrPPRIYPuzR696So/UlUPyKMCUs1Qmg3DgOLGIWA/dr6zy5IlBG5Su301azU+02zzkC2TUjPg3haoXEC1s0BK2hk5sFCiQW00Mw7pccD5A1RU6QGQPz8MY7bcmlFu07NNMrIcThUjotITUSemXo0dDoqg56sKh3kwTdhMG8LMsMvJfZCNF4NU9iEBnenVei6kDtcwA7UnfLCSTzpmEW45C+ZGqvZkSy8nc6RadVMktI3j1sNL2kR7k8SbhW109rHIh3pK+CK3zV0ycff76YwyYNJtgKhDnTMG7VT0CU0AZs9WyqTzqNl8CggVrtXbs1+Hc4i70XCFzKV5tY1I3fIcJjioOQl4OAc9ldktBguW4C1Bw/qsPQKQe9vZaFt09IH06XrE1Eq3WLiEjZX5lmKswSLU5oEq87w9IZ3jaowC3Zp2H8NklrTMVoKkeZbkyHEcZ53JNbywd9pkwMCENhM7FfqhGWeY56/W2jRr6oHiC7rFamGxaIoqWIWHhGO7A6sak2A/YHZtrARrl1/wINTmq4404rVeA08C52GKQwLqppJZmht1r/Y6oomT+Wayxqb7DWNGKyIYpHnH4ErnWGLqjjetR5WCWAl2PB1azXqBzdbzcNvbVJEPWs3a4XOsLGsLVpt84hx7fjBSexgclsrBHc43phn5gPQi7siX2nE2H4DQ3B8ccT9L9ig8aCYGfuPMYEiy4zrnxkJ+hKCALreb8tJoN2Cw3ewozXs5ma+y7jgH1kmOANor6E2A3UhjauqnoBZDNw5EyTwhexTRC95iZglDVdlRShEM1I1E782m+5Unz8fjcVghBcjjBWZMD/tkN4lCZM0D4s5hu2nC4YOpPK5hVezkgUKQDmVDDsAjIExgVke7NY6S64qPXBJX2jlMOnPcJGtQc0GpdlOpY0cgIM/H2YpLFS4MYTxrFHSM8pKxnSl
ErYDY3YWrLgMcyi6tvNmEsQOqN1vqir3MhXKnZsoEnHm4lOnpOYTMbS2cjoyC9wdUkqORPkjWtD4XMQxIZNF13LCULTe6wLlKy55cWdSbwTGLWwxDBmvTazER1sQpcJiQmAuUdfL42W4206T1jB+M+TaY+Rb8nJsUrrFxSo52lf5cI+z2HO6i8tE1SNI2RJWtt94IWADGNIBgNyk29CpU+f22jJGJqOMrcH4Qm/EeuPW+LUDtCu5qq5WmFZHe/kgirrWUMts0DoPA3MOMqENpYlcwljZJMPDEq3K88I+kjDndxFii1orcmDrNaDCEKuFKylaSDVmrQ7lTpvbAlVOiqmqjBYcPgOLBdaikJxhJarJ8cDjwgCHFSsjM6vroQcs5OEmCTEESa2cXFlYIap8ahjfDDbdiFeGOFDN4DM2fQJXuYBOjmB3EJqt7LySKyQxenj11XNZQcTjLjCmrp2H9qo4IEjaschKPWAP3XdmuTRsL9WJJpoeZJNo7pZb4oxYUWtx2zmHGEx6oAk5VeVwBIx0hhSRZQVp2sXlEp+Eg4Bxz0xiCKsv7HhKJIUhru9kWRlMZdspUghQnsDXmQzeSZ0415NIhf+D0+dSaJjrfMJWh7FJ2ogC+xA92tTGfd+M59McZrIdhfyloPIGDMJytAFWbhYhv7ze2FdEuyWDrAyh5uUmelBi9m2wmY38yOMzXsbY/lR/7EVQS7cYo4s6gJZxliU86QHUc8QipIh9oIL5w4+PxGE9qwGoaYj2m2x5HU3PczgV9wbV8rlO6f9QAC1nlpWZTigxytVzkkw45YD2IotaRSdg5Agof6bDs5g01WlAy4k3T4Uy3c9MOIzWF7NwhZr1uzEcIH6vGpsC2boX1K0Qqe3PZmYtVlG1CiecG2j4/YpWZLQWfUwHTqOf2MgDXB5jzlKOmjkQYsnajORIN7D3s78FqegF+YFeArsiK8tdsIzXAyZgZym5N7sAP547JQiB7zCqSJb8Z1N6pRqHx3TL0OCIBFAtccB0cQeW5jfNMIpThuh4u4uVRd0SuUUWEVRa5OK5Bmea61TTJKAfUfxBexCSwJ9EyjZgahIIdyDv7DTIUCHral+uAm3DVObmB/ztMKHaFBxhGV0dHL8lOHHS3zvoKhlJ3KkoRwh4cABApWVBGR1X5sUdGu0TTxjLdH48ZxoFkdGRWirN0hXS9H6py4zdDh28noJJueN+WQtnSDMMW9a5XFPhh3OFOiX0njZx8OaRBXcatxVOhIoDko0PE+YTPwLpnelzZ0bIFGjVKPDdzPy7HgRxi2U5n1Cxil0aLR0dXJ/mBQEMP9VPJJyUrahxuk09B8EVYR1YRKUSyMaCmRz9b9WyywZaVNYmGy7T3SQufLe18ODhkgi7z+viwAAKNfObU2oHkJGdotRJUfDZh+ZQ5ouYCmYnADDNLBCxTbdVq1Vbl6lhZXCxk2Gi0D5B0bXBrJVzH3N5dw56k42lSJNrbfsUyjGM5CXThOXYAUV/FhoFZ7Z1ymJZjdFsS+ZDI1ibZmi3T+fXwUIEUgx8lF2sFu/bsWDnaZSaO0ly3xbk8whoshjmvmSK8LwDkJIdW1Xx1CHnCToP9B80f+ih8C4MnBVoyzSA9TuxmOQnjGb6M3RkIFkbj1TlIO3HZUwS7HANNzBcNB7CDKcvVYROvQ249BGlzuFAaPrXjdGwJtnXIlIW1wjTsuPHyHn7fF+RvFxbn9MiuZSDCcC9sXqoyh6ZhT1pm1xNemrS6zsN6abTPRvnWlmVO4JjBQRIbgPxguRwKXCL7vGocKCkgnD5eNmpDH+mKitNJj3HATwF9ysfdIT24rb+WZvvqjO0hk2eFjJMEXeJJvwFsjeNE6dTVpF2ohcNGmssgcnA6AGhYa4ScDVMQcLR9Md6smtAF/N2WWuDDB8C/V0vgAfN0LXEc6s6SHoWtcZhrpotDNe+m6y2PanPVBqTMFFqSnQZzEy3yCFAIsrIXW07mKD1fUjB1wX4IH7R
Nti5NJiRMzhYJdxwtGqkTdH5vkJQ6gvy68iQrDRzmqOwzimYp1uKbmdCUfT/lmLGCUOhG2MR2srITyclQ/Qg8fsXJo1AJJzJpjZFVsJjMQV0AWTG+JbXOtnjC4hDJRzjYg11GK/qsrE0ZMivBgW00nsM5R2EsEwQ0F6RWzSqshvMmsI0zCmZbaE9qnpKNkswz20tMyOW3JboY8/OwWIVln0hMeIpC5r5GIRuorcryYF0+Xm8CgJ8BvI4GDiRm+apQ2npMVWK6Gpcrv9AwGjgKWiBxiBiAECxAWrbhEA5pLUAiSanpPiUo1tsDrrC0DYPgnN0o5GFeixmkiQs6JCSON1eRbUaku19jKq8Lx9W0IjCAnpgdDeOCPSQo8NhtSGz29ShF7ClfjmDYSQiGB+XJoZLrPAIcTN4M5XjQxmBTWq1CXmOwCYiBnYrGSaP6ACAIMZIbph53ZVdqrMh0pt4BdqWWEw9fUgMUVNdjaHSdQzTcpS3F2nPHtSFtwI1NUMB3Pb+RxbE+EH3REryWkVd2vORUI0yLeIbN5QWqq1zl7hsO8LKwt9t9cOrd77fxYp1NU3bkeZDFs6WMHlQ+/BLZCdpqSokouNEIWbDH6cbTt0pmRkfK2U1josOOwVQmDGq14Exxt0vq6b7fDgkaUYP82BQiR4DaqFdpDfYij+hyfFiGSwsUAYdAFpTdJLer/HylhWH0LRIOx9lQQlBzjJsN4cz4Tj4Fg2izGsy0TQ2HMHa7rkXglcfZcgH7bVhU2uGRqgFRXE3ow3G23gdt5XalCXjCxly5NXDeE8kW6ARb996aExmcpb39PD2OQ6JSwuowg2yV2YkN5MX5Onf2GbEfruMI1XQcX1jcBMAY1nKxBlhFl6FmRntJPk2MOkGdKFCzBP5uYQBf7paXIP/rYUd3lojJ3mIGDuJuTbIqF5sFlg72Kaqt8TipSXavRrBLV+L17uCwNYvCqQN4xYmzagu7uTUK3NQRpkpb6GLrN0nI7LPmKCaEVu5Y5jCSpsch31Z12/WnngK+3lP9Ejl3AoO10hsyMx7MIic6aIaJrs2RkC0sY3Qqv0ddMT0sU7GVByK0wywk9s+kfMzy+4GjcoBDZscwFdQdjI78oHOnvCZ1PMqFQ9UlSc+K4s1hK3Ag9OaBQyd1vTIcRBOaGsaFoyf7XDpeRPxhvd8T1VjQw3BKMInSLVeg8DhpCuTSUAB1cKEIIGVMWmO8Rzs1UaE5RmLbEKpY0BCj/WRCIt2ykxEF8VhiZQ0NWFRGR3scBvNC40C5yQVTUMUp3W7EuIstYvqMVy9SvNmAUDTLAfnQHbfJVxTlj4fBqcIEABbm1IouRrgfrY64Nl6UhOBPqHNog72IEuR7BxT7IG1idc/4OdxW+9y2rmY510zrJI4yG8AtWhJWRVc0x0ZdGfnFsk/daU1FMZqhFpft2IVKgvrabyWuEfxiourt1hzuYdVRFcY0w6frfMB4hjzlpP0eg6lIkAPgDQqv8kiQukAQ17JQrCrtfLe1TiOsIHKPmp5q1wo/VlqQjgCznWs5qY5j8zzcxCueoKxkRRoG6+UW+o0y2E+KyQgEYNr1NT2Yzmge3ELsqy2sLWcDX+UAf0wFLhs7kWVG2mIJ0mSBHRCsnnRM5O526xVWN+lYiHjNARqQIJIHi1CfszuLFuRJSk3CPYJgVe8JxNZr8kS2fW48xQR5JRR5XIAT+i6mw7uPY3lI8dWBmQXRBGfLCERiqisSqU0lvrG5VJBDLowW/pIjKytSV7tqCCsTainYtsN2lOvUGT0aCvq+R9d5JA40pk1BgdKp054biUHjW4eI3IyzXY3TyK5DtzE53Zwa2tvxup+gqGvv891M9Ll5HAUgEaU7uJn1slVxKNFAK2xQdo/VXu4EdTIckwenaUViD2jAwFIYkRbxCHjzRhTI/OxEfFwbHeypwcTgDmBXHdV2lszuCmY6G4m2Oz1sd3qHOoqfCPAhTr7CgqOiwwbVbE8wMxEy3a6
BI1FyNCLCdkLn5LwfaSxIrjumDKfzhXwcr0oYuKb7JTrfkoqgSjN/eJw7SbQx51FYUB5esiveYjwfVjUDH7Dl3KScxiSVvc7O5qgiApgnxhLzQczWskKjFjTLlgeq8Ri7q3b9aUDWDdschSdgVm6kQuUH69kM5rZZQFQs7iEuzPmDlug4rg+ZVB/MaV8qhWwlQ1XGiXXcWwbCzjKX2/C6xNZ7NilgIRrhsMTdVho12GoYceBdm+/49fyQjRdsBa4M40Tn5iVrJym1ZdWYmwoNNq/6AjBkpdI7yc6bibDu5R53Sb3Md1oxI3fqMO43pV4lGOtT237rNGG4EJCSGurhYJNhLRw6iuKByqLxCjbHVqu5Vc7XTcin05YhlqrELavDIY6OnLYxh20xg6HTw81DhBKGVMlxDQOZv5QxyC1mY9V0CycaTXf9DvrDGtSiR0SYuGtRIDzWCVmDQOYUGuftMIAp0skTRznZ21qhLgg2KGEOgxApJy1BuDBzia41nx3VpPNWwlEL06OALtUt2vihMNh5XJDsOk9lxBbmSkuobd12fUSW0tZZDWGCZQZUlWSO1SYZyC/b57GsYhiXkrzd0KM1lawpppiAuEmS+SEuhfMuOVUOjyXRmvPFpK1xnEAOvbcxBqRYI4xM+BNF2YKydz3ZV4NxoM9yAw7vwkoZPqng4dSRZq2sWRY6L894YOS5PLcVerShc5B+lTGhKLoqt4UxgHoIlwtIPeqcOvCcByrUQasuVpD6UINC2MytYStk/jRLTUhtj9owXShIPKjHc8o4exuzZnnBmaq4T1YgI9u1s6LcAZ6WZm3Jnbo/kOMFYAo4oCRTOVE5DBTx20E5CVkPz/EjRzRy1sxixjg6E5Oe5Totk5phrRBWNSrZ33CgApfwvB6Syz0HpC9XaTnsoGcMW8SFXfhhY+WcAvk3oENR1IykyloeNFVckSzAIeQyyLQ+HKeBDJ8cSTVxjWjcYmjoghyplXEgFXWTgAKWmFm9RCWGofbIOR/356d7eDeinQxUhVK/tEEEK0ScUkbrPjuy+shE4Cjz2paColeNiTED90aEylzBXdxt9HQcrHjDbAoOFv9DP5+Xe1zTCVRaE6KbLYsBo6M4NZ27S47eeY64ABUv11MuIccIm6YavpvZGxGWhIyJO3u+TfKAW0Pm3Dg1VfharCMEzp2eAVPmS3KQwwcYfPjoGvz/I16zwK4+mH9nXp+X1xxeP175ZeV3PFwJPSCFD9J/fcYvt7NATV0P7vH/AQ==</diagram></mxfile>
2202.06242/main_diagram/main_diagram.pdf ADDED
Binary file (40.9 kB). View file
 
2202.06242/paper_text/intro_method.md ADDED
@@ -0,0 +1,98 @@
1
+ # Introduction
2
+
3
+ There is a recent trend of incorporating optimization and equation solvers as the *final layer* in a neural network, where the *penultimate layer* outputs the parameters of the optimization problem or the equation set to be solved [@amos2017optnet; @donti2017task; @NEURIPS2019_9ce3c52f; @wilder2019melding; @wang2019satnet; @perrault2020end; @li2020end; @paulus2021comboptnet]. Learning and optimization are performed jointly by differentiating through the optimization layer, which by now is incorporated into standard libraries. Novel applications of this method have appeared for decision-focused learning, solving games, and clustering after learning, with deployment in real-world autonomous driving [@xiao2022differentiable] and scheduling [@wang2022decision]. In this work, we explore a novel attack vector that is applicable in this setting, though the core concepts of the attack apply to other settings as well. While much work exists on attacks against machine learning, we focus on a new attack that forces the decision output to be meaningless via specially crafted inputs. The failure of the decision system to produce meaningful output can lead to catastrophic outcomes in critical domains such as autonomous driving, where decisions are needed in real time. Moreover, such inputs, when present in training data, cause training to fail abruptly. Our work *exploits the failure conditions of the optimization layer* of the joint network to induce such failure. This vulnerability has not been exploited in prior literature.
4
+
5
+ *First*, we present a *numerical instability attack*. Typically, an optimization solver or an equation-set solver takes in parameters $\theta$ as input. In the joint network, this parameter $\theta$ is output by the learning layers and feeds into the last optimization layer (see Fig. [1](#fig:general_architecture){reference-type="ref" reference="fig:general_architecture"}). At its core, the issue lies in using functions that are prone to numerical stability issues in their parameters (Appendix [\[appendix:additional_attacks\]](#appendix:additional_attacks){reference-type="ref" reference="appendix:additional_attacks"}). Most optimization or equation solvers critically depend on the matrix $A$---part of the parameter $\theta$---being sufficiently far from a singular matrix to solve the problem. Our attack proceeds by searching for input(s) that cause the matrix $A$ to become singular. The instability produces *NaNs*---undefined values in floating-point arithmetic---which may result in undesired behavior in downstream systems that consume them. We perform this search via gradient descent and test three different ways of finding a singular matrix in the neighborhood of $A$, only one of which works consistently in practice.
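The failure mode being exploited can be reproduced in a toy 2x2 setting (our own illustration, not the paper's experiments): the attack's gradient-descent search over inputs effectively drives $A$ toward singularity, which a scalar `eps` stands in for below.

```python
import numpy as np

# Toy 2x2 illustration (hypothetical, not the paper's experiments): as A is
# driven toward singularity, the solution of A x = b becomes numerically
# meaningless; a scalar eps stands in for the gradient-descent search.
def solve_layer(eps):
    A = np.array([[1.0, 1.0],
                  [1.0, 1.0 + eps]])        # eps -> 0 makes A singular
    b = np.array([1.0, 2.0])
    return np.linalg.solve(A, b)

x_good = solve_layer(1.0)                   # well-conditioned: x = (0, 1)
x_bad = solve_layer(1e-12)                  # near-singular: entries blow up
```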
6
+
7
+ <figure id="fig:general_architecture" data-latex-placement="t">
8
+ <embed src="images/optimization_overview.pdf" style="width:85.0%" />
9
+ <figcaption>Optimization layers in neural networks. The neural network takes input <span class="math inline"><em>u</em></span>. Some parameters (<span class="math inline"><em>Q</em>, <em>p</em>, <em>A</em>, <em>b</em>, <em>G</em>, <em>h</em></span>) of the optimization then depend on the output <span class="math inline"><em>θ</em> = <em>f</em><sub><em>w</em></sub>(<em>u</em>)</span>. </figcaption>
10
+ </figure>
11
+
12
+ *Second*, to tackle the numerical instability attack, we propose a novel, powerful defense via an efficiently computable intermediate layer in the neural network. This layer utilizes the singular value decomposition (SVD) of the matrix $A$ and, if needed, closely approximates $A$ with a matrix $A'$ whose condition number is bounded; the bound is a hyperparameter. A large condition number implies closeness to singularity, so the bounded condition number guarantees numerical stability in the forward pass through the optimization (or equation) solver. Surprisingly, we find that the training performance with our defense in place surpasses that of the undefended model, even in the absence of attack, perhaps due to more stable gradients.
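A minimal sketch of this repair, assuming a square $A$ and a hyperparameter `kappa_max` standing in for the condition-number bound (not the paper's implementation): small singular values are floored so that the repaired matrix satisfies the bound while staying close to $A$.

```python
import numpy as np

# Minimal sketch of the SVD-based defense (assumed form for a square A,
# kappa_max standing in for the bound hyperparameter): floor the small
# singular values so that cond(A') <= kappa_max.
def bound_condition_number(A, kappa_max=100.0):
    U, s, Vt = np.linalg.svd(A)
    s = np.maximum(s, s.max() / kappa_max)  # floor the small singular values
    return U @ np.diag(s) @ Vt

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-9]])           # nearly singular
A_safe = bound_condition_number(A)          # condition number now bounded
```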
13
+
14
+ *Finally*, we show the efficacy of our attack and defense on (1) a synthetic data problem designed in @amos2017optnet, (2) a variant of the Sudoku experiment used in @amos2017optnet, and (3) an autonomous driving scenario from @temp_Speedprofileplanning, where failures can occur even without attacks, and we show how augmenting the network with our defense prevents these failures. Lastly, we identify other sources of failure in these optimization layers by invoking edge cases in the solver (Appendix [\[appendix:additional_attacks\]](#appendix:additional_attacks){reference-type="ref" reference="appendix:additional_attacks"}) and list serious bugs in the solvers that we encountered.
+
+ # Method
+
+ **Threat Model:** We are given a trained neural network which is a composition of two functions $f_w$ and $s$, where $w$ represents the neural network weights, and $w$ is known to the adversary (i.e., the adversary has whitebox access to the model). The function $f_w$ takes input $u$ and produces $\theta = f_w(u)$, which defines some of the parameters of our solver (Fig. [1](#fig:general_architecture){reference-type="ref" reference="fig:general_architecture"}). In this paper, we analyze a specific component of $\theta$ which corresponds to the intermediate matrix $A$ (in $Ax =b$). For example, if $\theta$ consists only of $A$, then $A$ can be formed by reshaping $\theta$, with the $i,j$ entry of $A$ being $\theta_{i,j}$. The solver layer takes $A$ as input and produces a solution $s(A)$. The attacker's goal is to craft any input $u^*$ such that $s(f_w(u^*))$ fails to evaluate successfully due to numerical instability in evaluating $s$, effectively causing a denial of service. Note that the *existence* of any such input $u^*$ is problematic, and we allow the attacker latitude to produce *any* such input as long as syntactic properties are maintained, e.g., image pixel values bounded in $[0, 1]$. In this setting, an attacker can also craft an attack input that is close to some original input if needed (Fig. [2](#fig:attack_images){reference-type="ref" reference="fig:attack_images"}), e.g., to foil a human-in-the-loop defense. Even in this worst-case scenario of allowing the attacker to provide any input, our proposed defense prevents NaNs in all cases.
+
+ We emphasize the distinction between the goal of our attack inputs and that of adversarial examples. In traditional adversarial examples, small perturbations to the input image are sought in order to show the surprising effect that two images that appear identical to the human eye are assigned different class labels; these misclassified labels can still be consumed by downstream systems. In contrast, in our work, the *surprise* is the *existence* of inputs that cause a complete failure in the outcome of the system, which to our knowledge has not been previously studied. Here, we show the existence of specially crafted inputs, which may be semantically close to a valid input, that evaluate to outputs causing a complete denial of service, i.e., NaNs are produced, leading to undefined behavior in the system. A naive remediation of a default safe action for NaN outputs can fail in complex domains (e.g., autonomous driving) that have context-dependent safe actions (e.g., the safest action on a highway with a speed-limit road sign depends on various conditions such as the speed of the car in front, the need to change lanes, etc.). It is thus impossible to provide a rule-based safe default action, since there can be infinitely many contexts.
+
+ <figure id="fig:attack_images" data-latex-placement="t">
+
+ <figcaption>Left shows the original image <span class="math inline"><em>u</em></span>; right shows <span class="math inline"><em>u</em><sup>′</sup> = <em>u</em> + <em>δ</em></span>, which is semantically close. All attacks were found using <span class="math inline">$\sf{AllZeroRowCol}$</span> with an upper bound on the perturbations.</figcaption>
+ </figure>
+
+ In our attack, we seek to find an input $u^*$ that evaluates to a rank-deficient intermediate matrix $A$ (Fig. [1](#fig:general_architecture){reference-type="ref" reference="fig:general_architecture"}). For any $m \times n$ matrix $A$, $A$ is rank-deficient if its rank is strictly less than $\min(m,n)$. A rank-deficient matrix is also singular, hence the system of equations $A x = b$ ($b \neq 0$) produces undefined values (NaN) when solved directly or as constraints in an optimization. Even matrices close enough to singularity can produce errors due to the limited precision of computers. Depending on the neural network $f_w$ (Fig. [1](#fig:general_architecture){reference-type="ref" reference="fig:general_architecture"}), finding $u$ that produces an arbitrary singular $A$ (e.g., $0_{m \times n}$) is not always possible (see Appendix [\[appendix:targetzero\]](#appendix:targetzero){reference-type="ref" reference="appendix:targetzero"}). Our approach is guided by the following known result:
+
+ ::: {#prop:demko .proposition}
+ **Proposition 1** (@demko1986condition). *For any matrix $A$, the distance to the closest singular matrix is $\min_B\{\left\lVert A-B\right\rVert_2: B \mbox{ is singular}\} = \left\lVert A\right\rVert_2 / \kappa_2(A) = \sigma_{\min}$.*
+ :::
+
+ Thus, increasing the condition number of $A$ moves $A$ closer to singularity; at singularity, $\kappa_2(A)$ is $\infty$. Following Alg. [\[alg:algorithm_attack\]](#alg:algorithm_attack){reference-type="ref" reference="alg:algorithm_attack"}, we start with a given $u$ producing a well-conditioned matrix $A$ and aim to obtain $u^*$ producing a singular $A'$ in the *vicinity* of $A$, using three approaches: $\sf{AllZeroRowCol}$, $\sf{ZeroSingularValue}$, and $\sf{ConditionGrad}$.
+
+ :::: algorithm
+ **Input**: input features $u$, loss function $\ell$, victim model $f_w$\
+ **Parameters**: learning rate $\alpha$\
+ **Output**: attack input $u^*$
+
+ ::: algorithmic
+ Let $u^* = u$. Repeat until convergence: compute $l = \ell(f_w(u^*))$ and update $u^*$ based on $\alpha$, $\frac{\partial l}{\partial u^*}$, and $\ell$. **return** $u^*$
+ :::
+ ::::
+
+ $\sf{AllZeroRowCol}$: One approach to obtaining a rank-deficient matrix $A'$ from $A$ is to zero out a row (resp. column) in case $m < n$ (resp. $m > n$) in $A$. Then, we use $A'$ as a target matrix for which a gradient descent-based search is performed to find an input $u^*$ that yields $A' = f_w(u^*)$. In our experiments, we choose the first row/column to zero out, though choosing other rows/columns is equally effective.
+
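As a concrete illustration (not the paper's code), the sketch below runs the AllZeroRowCol idea against a toy *linear* stand-in for the learning layers; the names `W`, `f_w`, the $3 \times 3$ shape, and the step-size choice are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(9, 9))       # toy linear stand-in for the learning layers f_w

def f_w(u):
    return (W @ u).reshape(3, 3)  # theta = W u, reshaped into the matrix A

def all_zero_row_col_attack(u, steps=5000):
    # Target A': a copy of A = f_w(u) with its first row zeroed out,
    # i.e. an exactly rank-deficient matrix in the vicinity of A.
    A_target = f_w(u).copy()
    A_target[0, :] = 0.0
    lr = 0.4 / np.linalg.norm(W, 2) ** 2   # stable step size for this quadratic loss
    u_star = u.copy()
    for _ in range(steps):
        resid = f_w(u_star) - A_target
        grad = 2.0 * W.T @ resid.reshape(9)  # gradient of ||f_w(u) - A'||_F^2 w.r.t. u
        u_star -= lr * grad
    return u_star

u = rng.normal(size=9)
u_star = all_zero_row_col_attack(u)
# Condition number of f_w(u*) grows as the search approaches the singular target.
print(np.linalg.cond(f_w(u_star)))
```
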
+ $\sf{ZeroSingularValue}$: From Prop. [1](#prop:demko){reference-type="ref" reference="prop:demko"}, $A'$ is a *closest singular matrix* if $\left\lVert A - A'\right\rVert_2 = \sigma_{\min}$. To obtain this rank-deficient matrix $A'$ from $A$, we compute the SVD $A = U \Sigma V^T$, zero out the smallest singular value in $\Sigma$ to get $\Sigma'$, and construct $A' = U \Sigma' V^T$. It follows from the construction that $\left\lVert A - A'\right\rVert_2 = \sigma_{\min}$. Then, using $A'$ as a target matrix, a gradient descent-based search is performed to find $u^*$ that yields $A' = f_w(u^*)$. In theory, since $A'$ is a *closest singular matrix*, it should be easier to find by gradient descent than the target of $\sf{AllZeroRowCol}$. However, this approach fails in practice because precision errors make $A'$ non-singular even though $\Sigma'$ has a zero singular value.
+
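The floating-point failure mode of this construction is easy to reproduce; a minimal numpy sketch (illustrative only): zeroing $\sigma_{\min}$ and reconstructing $A' = U \Sigma' V^T$ typically leaves $A'$ only *approximately* singular because of round-off in the matrix products.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))

# Zero out the smallest singular value and reconstruct.
U, s, Vt = np.linalg.svd(A)
s_prime = s.copy()
s_prime[-1] = 0.0
A_prime = U @ np.diag(s_prime) @ Vt

# Round-off in the two matrix products makes A' only approximately singular:
# its smallest singular value is tiny (near machine precision) but usually nonzero.
s_min_new = np.linalg.svd(A_prime, compute_uv=False)[-1]
print(s_min_new)
```
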
+ $\sf{ConditionGrad}$: From Prop. [1](#prop:demko){reference-type="ref" reference="prop:demko"}, we can also use gradient descent to find $u^*$ such that the matrix $A$ has a very high condition number. The overall gradient we seek is $\frac{\partial \log \kappa_2(A)}{\partial u}$, where we use $\log$ since condition numbers can be large. By the chain rule, $\frac{\partial \log \kappa_2(A)}{\partial u} = \frac{1}{\kappa_2(A)} \frac{\partial \kappa_2(A)}{\partial \theta} \frac{\partial \theta}{\partial u}$. Since $\theta = f_w(u)$, the third term is simply the gradient through the neural network. The second term can be obtained component-wise in $\theta$ as $\frac{\partial \kappa_2(A)}{\partial \theta_{i,j}}$ for all $i,j$. The following result provides a closed-form formula for it (proof in Appendix [\[appendix:proofs\]](#appendix:proofs){reference-type="ref" reference="appendix:proofs"}).
+
+ ::: {#lemma:gradcond .lemma}
+ **Lemma 1**. *Let $A \in \mathbb{R}^{m \times n}$ with thin SVD $A = U \Sigma V^T$ and $\sigma_{\max} = \sigma_1 \geq \ldots \geq \sigma_r = \sigma_{\min}$ for $r = \min(m,n)$. Then, $\frac{\partial \kappa_2(A)}{\partial \theta_{i,j}}$ is given by $\mathop{\mathrm{tr}}\Big( \frac{\partial (\lvert\lvert A^{+}\rvert\rvert_2 * \lvert\lvert A\rvert\rvert_2)}{\partial A} \cdot\frac{\partial A}{\partial \theta_{i,j}}\Big)$ where $$\begin{align*}
+ &\frac{\partial (\lvert\lvert A^{+}\rvert\rvert_2 * \lvert\lvert A\rvert\rvert_2)}{\partial A} = B^T - (A^{+} C A^{+})^T + \\
+ & \quad (A^+)^T A^+ C (I - A^+A) + (I - AA^+) C A^+ (A^+)^T
+ \end{align*}$$ with $B \!=\! \lvert\lvert A^+ \rvert\rvert_2 V e_1 e^T_1 U^T$, $C \!=\! \lvert\lvert A\rvert\rvert_2 U e_r e^T_r V^T$ and $e_i$ is the unit vector with one in the $i^{th}$ position.*
+ :::
+
+ $\sf{ConditionGrad}$ still works less consistently than $\sf{AllZeroRowCol}$, mainly because the gradient descent often saturates at a condition number that is high but not large enough to induce instability.
+
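For intuition, a ConditionGrad-style search can be sketched with the well-known singular-value derivatives $\partial\sigma_1/\partial A = u_1 v_1^T$ and $\partial\sigma_r/\partial A = u_r v_r^T$ (a simpler identity, for distinct singular values, than the full formula of Lemma 1). The toy linear `f_w` below is an assumption for the example, and the best candidate found during the ascent is kept:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(9, 9))       # toy linear stand-in for f_w

def f_w(u):
    return (W @ u).reshape(3, 3)

def condition_grad_attack(u, steps=400, lr=0.005):
    """Gradient ascent on log kappa_2(A):
    d(log kappa_2)/dA = u1 v1^T / sigma_max - ur vr^T / sigma_min."""
    u_star = u.copy()
    best, best_cond = u.copy(), np.linalg.cond(f_w(u))
    for _ in range(steps):
        A = f_w(u_star)
        U, s, Vt = np.linalg.svd(A)
        g_A = np.outer(U[:, 0], Vt[0]) / s[0] - np.outer(U[:, -1], Vt[-1]) / s[-1]
        u_star += lr * (W.T @ g_A.reshape(9))   # ascent step via the chain rule
        c = np.linalg.cond(f_w(u_star))
        if c > best_cond:                       # keep the best candidate found
            best, best_cond = u_star.copy(), c
    return best

u = rng.normal(size=9)
u_star = condition_grad_attack(u)
print(np.linalg.cond(f_w(u_star)))  # at least as large as the initial condition number
```
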
+ A low-dimensional illustration of the approaches is given in Fig. [3](#fig:explaination){reference-type="ref" reference="fig:explaination"}, which shows the 2D space of the two singular values $\sigma_1, \sigma_2$ of all $2 \times 2$ matrices. The condition number ($\sigma_{\max} / \sigma_{\min}$) approaches infinity near the axes, and the $\infty$ condition number on the axes is difficult to reach with $\sf{ConditionGrad}$. The illustration also shows why $\sf{AllZeroRowCol}$ works more consistently than $\sf{ZeroSingularValue}$: recovering a matrix from $\Sigma'$ involves multiplications that lead to a loss of singularity (more so in high dimensions), whereas $\sf{AllZeroRowCol}$ directly obtains a singular matrix. This is reflected in our experiments later.
+
+ We note that simple approaches, such as using gradient descent or other existing methods to directly push the model output to very high values, fail due to saturation (see results in Appendix [\[appendix:maxpgd\]](#appendix:maxpgd){reference-type="ref" reference="appendix:maxpgd"}). Further, the optimization output and the ill-conditioning of $A$ can have no relation at all:
+
+ ::: {#lemma.1 .lemma}
+ **Lemma 2**. *For an optimization $\min_{\{x|Ax = b\}} f(x)$ with $f$ convex, the solution value (if it exists) can be made arbitrarily large by changing $\theta = \{A,b\}$ while keeping $A$ well-conditioned.*
+ :::
+
+ Lemma [2](#lemma.1){reference-type="ref" reference="lemma.1"} implies that $A$ can remain well-conditioned even though the output $\min_{\{x|Ax = b\}} f(x)$ is large. Thus, specifically targeting a singular matrix $A$ is important for a successful NaN attack (proof in Appendix [\[appendix:proofs\]](#appendix:proofs){reference-type="ref" reference="appendix:proofs"}).
+
+ <figure id="fig:explaination" data-latex-placement="t">
+ <img src="images/heatmap.png" />
+ <figcaption>Left is a heatmap of condition numbers (<span class="math inline"><em>σ</em><sub>max</sub>/<em>σ</em><sub>min</sub></span>) for 2D singular value space (<span class="math inline"><em>σ</em><sub>1</sub>, <em>σ</em><sub>2</sub></span>) of <span class="math inline">2 × 2</span> matrices (high condition number only near axes, as one of <span class="math inline"><em>σ</em><sub>1</sub></span> or <span class="math inline"><em>σ</em><sub>2</sub></span> approaches 0 since <span class="math inline"><em>σ</em><sub>min</sub> = min {<em>σ</em><sub>1</sub>, <em>σ</em><sub>2</sub>}</span>). Right is an enlarged version of the smaller dashed circle. </figcaption>
+ </figure>
+
+ :::: algorithm
+ **Input**: model $f_w$, input features $u$\
+ **Parameter**: condition number bound $B$\
+ **Output**: well-conditioned $A'$
+
+ ::: algorithmic
+ Let $A' = A = f_w(u) = U\Sigma{}V^T$. For all $i$, let $\Sigma'_{i,i} = \max(\sigma_i, \sigma_{\max}/B)$. $A' = U\Sigma{}'V^T$. **return** $A'$.
+ :::
+ ::::
+
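The defense reduces to one clamping step on the singular values. A minimal numpy sketch (illustrative; the example matrix and bound $B$ are ours), which also exhibits the two guarantees of Proposition 2 below:

```python
import numpy as np

def condition_bound_defense(A, B=1e3):
    """Return A' close to A with kappa_2(A') <= B by clamping the
    singular values from below at sigma_max / B."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_clamped = np.maximum(s, s[0] / B)     # s[0] is sigma_max (svd sorts descending)
    return U @ np.diag(s_clamped) @ Vt

A = np.diag([1.0, 1e-2, 1e-9])              # nearly singular: cond ~ 1e9
A_def = condition_bound_defense(A, B=1e3)
print(np.linalg.cond(A_def))                # now bounded by B = 1e3
```

In the forward pass of a network, the same clamping would be applied to the differentiable SVD of the intermediate matrix before it reaches the solver.
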
+ First, we note that our goal is to fix the instability in the optimization used in the final layer, which is distinctly different from the general problem of instability in training neural networks [@colbrook2022difficulty]. Next, we discuss the defense for *square matrices* $A$. For symmetric square matrices, the condition number can be stated in terms of eigenvalues: $\kappa_2(A) = \frac{|\lambda|_{\max}}{|\lambda|_{\min}}$, where $|\lambda|_{\max}$ is the largest eigenvalue by magnitude. A typical heuristic to avoid numerical instability for square matrices is to add $\eta I$ for some small $\eta$ [@StableDNN]. However, this approach *only works for square* positive semi-definite (PSD) matrices. If some eigenvalue of $A$ happens to be $-\eta$, this heuristic actually makes the resultant matrix non-invertible (i.e., of infinite condition number). Moreover, this heuristic clearly *does not apply to non-square matrices*.
+
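This failure mode of the $\eta I$ heuristic can be checked in a few lines of numpy (illustrative example, with $\eta$ chosen to hit the bad eigenvalue exactly):

```python
import numpy as np

# Adding eta*I to "stabilize" fails when -eta is an eigenvalue of A:
A = np.diag([1.0, -1e-6])
eta = 1e-6
A_reg = A + eta * np.eye(2)      # = diag(1 + 1e-6, 0.0), i.e. singular
print(np.linalg.cond(A_reg))     # infinite condition number
```
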
+ As a consequence, we propose a differentiable technique (Alg. [\[alg:algorithm_defense\]](#alg:algorithm_defense){reference-type="ref" reference="alg:algorithm_defense"}) that directly *guarantees* the condition number of *any* intermediate matrix to be bounded by a hyperparameter $B$. In the forward pass, we perform an SVD $A = U \Sigma V^T$; the computation steps in the SVD are differentiable, and the matrix $\Sigma$ gives the singular values $\sigma_i$. Recall that the condition number is $\kappa_2=\sigma_{\max} / \sigma_{\min}$. The condition number can thus be controlled by clamping the $\sigma_i$'s from below at $\sigma_{\max}/B$ to obtain a modified $\Sigma'$. Then, we recover the approximation $A' = U \Sigma' V^T$. We present the following proposition (proof in Appendix [\[appendix:proofs\]](#appendix:proofs){reference-type="ref" reference="appendix:proofs"}).
+
+ <figure id="fig:network_diagram_jigsaw_sudoku" data-latex-placement="t">
+ <embed src="images/jigsaw_arch.pdf" style="width:85.0%" />
+ <figcaption>The Jigsaw Sudoku network architecture. The input is an image of the Jigsaw Sudoku puzzle (0 indicates a blank cell that needs to be filled with a value in <span class="math inline">{1, 2, 3, 4}</span>) and the output is the solution to the puzzle under the constraint that no two numbers in the same colored region are the same. The solution to the blank cells given by the neural network is indicated in red.</figcaption>
+ </figure>
+
+ ::: {#prop:cond_def .proposition}
+ **Proposition 2**. *For the approximate $A'$ obtained from $A$ as described above and $x'$ a solution for $A' x = b$, the following hold: (1) $\left\lVert A' - A\right\rVert_2 \leq \sigma_{\max} /B$ and (2) $\frac{||x^* - x'||_2}{||x'||_2} \leq \kappa_2(A) / B$ for some solution $x^*$ of $A x = b$.*
+ :::
+
+ Item (2) shows that the quality of the approximate solution obtained from the solver depends on $\kappa_2(A)$, which can be large when $A$ is close to singular. This error estimate can be provided to downstream systems and used to decide whether to trust the solver's output.
2208.10531/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2208.10531/paper_text/intro_method.md ADDED
@@ -0,0 +1,109 @@
+ # Introduction
+
+ <figure id="fig:compare" data-latex-placement="htp">
+ <embed src="fft2_002.pdf" style="width:95.0%" />
+ <figcaption>Comparisons between the conventional MixUp <span class="citation" data-cites="zhang2018mixup"></span> and our Phase MixUp. (Best viewed in color.) <span class="math inline"><em>μ</em></span> is the ratio of the “hammer” image fused into the “television” image. Note that when <span class="math inline"><em>μ</em> = 0.50</span>, in conventional MixUp the dark shadow outweighs the “television”, while in Phase MixUp both core objects remain highlighted. Moreover, when <span class="math inline"><em>μ</em> = 0.75</span>, the “television” is completely unseen in conventional MixUp, while it still exists in Phase MixUp. These examples demonstrate that Phase MixUp can alleviate the interference of background and style information.</figcaption>
+ </figure>
+
+ Domain adaptation [@wang2018deep] is proposed to transfer knowledge from the labeled training data that form the *source* domain to the unlabeled test data that form the *target* domain. Owing to concerns about data privacy and security, a new setting --Source-Free domain adaptation [@liang2020we]-- has emerged, where source data are completely unavailable when adapting to the target. Even so, there are still potential risks if the source models are visible: works such as dataset distillation [@wang2018dataset] and DeepInversion [@yin2020dreaming] may recover data from the model through adversarial attacks. In response, **Black-Box domain adaptation** [@zhang2021unsupervised] considers source models that are completely unseen, with only the model outputs accessible for target adaptation, which is a stricter version of Source-Free domain adaptation.
+
+ Due to the limited access to the source model, the only way to obtain source information is through knowledge distillation between the source model and the target model. In the original Source-Free setting, domain shifts can be alleviated by updating the source model gradually, keeping the transferable parameters while replacing the untransferable ones. In the Black-Box scenario, however, only the source model's outputs are exposed, i.e., the source information cannot be disentangled into target-relevant and target-irrelevant parts as in the Source-Free setting. Overfitting on the source domain is therefore much stronger in Black-Box than in Source-Free adaptation, which greatly undermines the target model's performance. [@liang2021distill] observes this problem and uses conventional MixUp [@zhang2018mixup] as regularization for better generalization. However, regularization methods such as MixUp or CutMix [@yun2019cutmix] are problematic here because *they are conducted on both the input and label levels*. In the target adaptation process, we do not have accurate labels but only pseudo-labels as substitutes, and the linear behaviors learned from these pseudo-labels, which are noisy due to domain shifts, can harm generalization and degrade the model's performance (details in Table [\[tab:ab-aug\]](#tab:ab-aug){reference-type="ref" reference="tab:ab-aug"}).
+
+ Based on the discussion above, we develop a new and effective data augmentation method for domain adaptation named Phase MixUp, which regularizes model training only from the *input aspect*. Apart from inhibiting potential noise from pseudo-labels, Phase MixUp can also highlight task-relevant objects and avoid negative impacts from background information. Conventional MixUp directly blends one image's pixels into another image [@wu2020dual; @liang2021distill], but this kind of combination is too simple to highlight the objects that we want to classify, especially when the image has complex background and style information. For example, the top of Fig. [8](#fig:compare){reference-type="ref" reference="fig:compare"} shows the conventional MixUp between the "television" image and the "hammer" image. The "hammer" image has a very strong dark shadow that outweighs the other object, "television", when $\mu = 0.50$; when $\mu = 0.75$, the "television" is completely unseen. In such cases, background elements like the dark shadow negatively affect conventional MixUp, as they distract attention from the core objects, and the model tends to associate the "hammer" with the shadow instead. Frequency decomposition proves to be a useful tool to disentangle object information and background information in an image [@yang2020fda; @liu2021feddg], *since amplitude spectra contain most background information, while phase spectra are related to object information.* Therefore, we propose Phase MixUp (Fig. [2](#fig:framework){reference-type="ref" reference="fig:framework"}b) to capture the key objects and reduce background interference at the same time, as the bottom of Fig. [8](#fig:compare){reference-type="ref" reference="fig:compare"} shows. By mixing their phase spectra, we can focus more on the two core objects "television" and "hammer" and weaken background information like the shadow. Even in a more extreme case like $\mu = 0.75$, the "television" still exists in the image mixed by Phase MixUp. The augmented image is then used in further training to enhance the target model's class consistency.
+
+ Furthermore, while input- and label-level regularizations have received ample attention in domain adaptation, regularization from the *network aspect* is overlooked. Specifically, in the Black-Box setting, only the source model's outputs are accessible, which leads to more severe overfitting of target networks on source information, because target networks have to learn from the outputs produced by source models without the detailed calibration of network weights that the Source-Free setting allows. Therefore, a network-level regularization technique on target networks is necessary. To this end, we propose a novel method for domain adaptation called Subnetwork Distillation, which aims to regularize the full target network with the help of its subnetwork and to calibrate the full target network gradually. We slim the widths of the target network to obtain its subnetwork, which has a smaller scale than the full network and is hence less likely to overfit to the source domain. By transferring knowledge from the target subnetwork to the full target network, the full network captures diverse representations of the target domain with better generalization ability.
+
+ Our contributions are summarized in three aspects:
+
+ - We propose Phase MixUp as a new *input-level* regularization scheme that helps the model enhance class consistency with more task-relevant object information, thus obtaining more robust representations for the target domain.
+
+ - We introduce a novel *network-level* regularization technique called Subnetwork Distillation that assists the target model to learn diverse representations and transfers knowledge from the model's partial structures to avoid overfitting on Black-Box source models.
+
+ - We report extensive experimental results on several benchmark datasets with both Single-Source and Multi-Source settings, showing that our approach achieves state-of-the-art performance compared with the latest methods for Black-Box domain adaptation.
+
+ # Method
+
+ Our proposed method, RAIN, tackles Black-Box domain adaptation from the perspective of regularization (both input-level and network-level). The overall framework is presented in Fig. [2](#fig:framework){reference-type="ref" reference="fig:framework"}a. In the following subsections, we elaborate on the key components of the framework.
+
+ For a typical domain adaptation problem, we have a source domain dataset $\mathcal{S} = \{(x^{s}_{i},y^{s}_{i})\}^{n_{s}}_{i=1}$ with $n_{s}$ labeled samples. The target domain dataset $\mathcal{T} = \{x^{t}_{i}\}^{n_{t}}_{i=1}$ contains $n_{t}$ unlabeled samples, which share the same label space $\mathcal{D} = \{1,2,\cdots,K\}$ with the source but follow a different distribution, where $K$ is the number of classes. The goal of domain adaptation is to seek the best target model $f_{t}$ with the help of the source model $f_{s}$.
+
+ For the Black-Box paradigm, learning starts with supervised learning on the source as $\mathcal{L}^{s} = - \mathbb{E}_{(x^{s}_{i},y^{s}_{i}) \in \mathcal{S}} \sum_{k \in \mathcal{D}} l^{s}_{i,k}\log f_{s}(x^{s}_{i})_{k}$ with label smoothing [@muller2019does]: $l^{s}_{i} = \alpha / K + (1-\alpha)y^{s}_{i}$, where $\alpha$ is the smoothing parameter, empirically set to $0.1$. Moreover, the source model's details are completely unseen except for the model's outputs. In this case, knowledge distillation [@hinton2015distilling] is applied to transfer from source to target:
+
+ $$\begin{equation}
+ \label{eq:ps}
+ \mathcal{L}_{kd} = \mathrm{D_{KL}}(\hat{y}_{i}^{t}||f_{t}(x^{t}_{i})),
+ \end{equation}$$ where $\mathrm{D_{KL}}(\cdot)$ denotes the Kullback-Leibler (KL) divergence and $\hat{y}_{i}^{t}$ is the pseudo-label, obtained as follows. Let $q$ be the one-hot encoding of the source prediction $\mathop{\mathrm{arg\,max}}f_{s}(x_{i}^{t})$; the smoothed pseudo-label is then $l^{t}_{i} = \alpha' / K + (1-\alpha')q$, where $\alpha'$ is the smoothing parameter, empirically set to $0.1$. Based on this, the pseudo-label $\hat{y}_{i}^{t}$ is represented as $\hat{y}_{i}^{t} = \eta l^{t}_{i} + (1-\eta)f_{t}(x_{i}^{t})$, where $\eta$ is a momentum hyperparameter set to 0.6.
+
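The pseudo-label construction can be sketched in a few lines of numpy (an illustration; the function name and the example probability vectors are ours):

```python
import numpy as np

def smoothed_pseudo_label(p_src, p_tgt, alpha=0.1, eta=0.6):
    """Pseudo-label: smooth the source model's argmax class, then blend
    it with the current target prediction using momentum eta."""
    K = p_src.shape[0]
    q = np.zeros(K)
    q[np.argmax(p_src)] = 1.0                 # one-hot source prediction
    l_t = alpha / K + (1 - alpha) * q         # label smoothing
    return eta * l_t + (1 - eta) * p_tgt      # momentum blend

p_src = np.array([0.1, 0.7, 0.2])             # source model output
p_tgt = np.array([0.2, 0.5, 0.3])             # current target model output
y_hat = smoothed_pseudo_label(p_src, p_tgt)
print(y_hat, y_hat.sum())                     # a valid distribution summing to 1
```
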
+ Conventional MixUp is a very popular input- and label-level regularization technique in domain adaptation [@wu2020dual; @liang2021distill], whose goal is to enhance class-wise consistency and linear behavior, thus helping the model learn better representations on the target domain. Nevertheless, the pseudo-labels applied in MixUp contain noise that is harmful to the adaptation process, which is why we propose an input-level (only) regularization method here. We now present the details of the proposed Phase MixUp process, beginning with the standard form of the Fourier transform. Consider a target sample $x_{i}^{t} \in \mathbb{R}^{C\times H\times W}$, where $C$, $H$ and $W$ correspond to the number of channels, the height and the width. We transform it from the spatial domain to the frequency domain and then decompose its frequency spectrum into amplitude and phase: $$\begin{equation}
+ \label{eq:ft}
+ \footnotesize
+ \mathcal{F}_{i}^{t} = \sum_{h=0}^{H-1} \sum_{w=0}^{W-1}x_{i}^{t}\exp[{-j2\pi(\frac{h}{H}u + \frac{w}{W}v)}] = \mathcal{A}_{i}^{t} \exp{(j\mathcal{P}_{i}^{t})}.
+ \end{equation}$$ Here we ensure a one-to-one correspondence between channels of the two domains: $x_{i}^{t}$ is a spatial image representation indexed by image pixel $(h,w)$, and $\mathcal{F}_{i}^{t}$ is a frequency representation indexed by spectrum unit $(u,v)$. $\mathcal{A}_{i}^{t}$ is the amplitude spectrum and $\mathcal{P}_{i}^{t}$ is the phase spectrum of the target sample $x_{i}^{t}$. According to [@yang2020fda; @liu2021feddg], the amplitude spectrum reflects low-level distributions like style, *while high-level semantics like object shape are stored in the phase spectrum.* Since our task is domain adaptation for object recognition, we want the mixup procedure to focus on the key objects rather than background information. Hence, we interpolate between phase spectra as: $$\begin{equation}
+ \label{eq:mixup}
+ \mathcal{P}_{mix}^{t} = \mu \mathcal{P}_{j}^{t} + (1 - \mu) \mathcal{P}_{i}^{t},
+ \end{equation}$$ where $\mathcal{P}_{j}^{t}$ is the phase spectrum of a randomly selected target sample $x_{j}^{t}$ with $\mathcal{F}(x_{j}^{t}) = \mathcal{A}_{j}^{t} \exp{(j\mathcal{P}_{j}^{t})}$, and $\mu$ is sampled from the Beta distribution $\mathbf{Beta}(0.3, 0.3)$. The Phase MixUp augmented sample produced by the inverse Fourier transform is then: $$\begin{equation}
+
+ \footnotesize{
+ \begin{multlined}
+ x_{mix}^{t} = \mathcal{F}^{-1}(\mathcal{A}_{i}^{t}, \mathcal{P}_{mix}^{t}) \\= \frac{1}{HW}\sum_{u=0}^{H-1} \sum_{v=0}^{W-1} \mathcal{A}_{i}^{t} \exp{(j\mathcal{P}_{mix}^{t})}\exp[{j2\pi(\frac{u}{H}h + \frac{v}{W}w)}].
+ \end{multlined}
+ }
+ \end{equation}$$ The Phase MixUp procedure is depicted in Fig. [2](#fig:framework){reference-type="ref" reference="fig:framework"}b. After obtaining the synthesized sample, we can enhance class consistency by comparing the outputs of the original and synthesized samples. Here the $\ell_1$-norm is used to compute the Phase MixUp loss: $$\begin{equation}
+ \label{eq:mix}
+ \mathcal{L}_{pm} = \mathbb{E}_{x_{i}^{t} \in T}\left\Vert f_{t}(x_{i}^{t}) - f_{t}(x_{mix}^{t}) \right\Vert_{\ell_1}.
+ \end{equation}$$ Phase MixUp differs from conventional MixUp in two ways. First, Phase MixUp is conducted on the phase spectra related to the core objects, not on the whole image. Second, conventional MixUp operates on both the input and label levels, while Phase MixUp is an input-level augmentation only.
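Phase MixUp reduces to a few FFT calls. A numpy sketch (illustrative, mirroring Eqs. (2)--(4); linearly interpolating the principal phase values ignores angle wrap-around, as in the formulas above, and the random test images are ours):

```python
import numpy as np

def phase_mixup(x_i, x_j, mu=0.3):
    """Mix the phase spectra of two images while keeping x_i's amplitude."""
    F_i = np.fft.fft2(x_i, axes=(-2, -1))   # per-channel 2D FFT
    F_j = np.fft.fft2(x_j, axes=(-2, -1))
    amp_i = np.abs(F_i)                     # amplitude spectrum of x_i
    phase_mix = mu * np.angle(F_j) + (1 - mu) * np.angle(F_i)
    F_mix = amp_i * np.exp(1j * phase_mix)  # recombine amplitude and mixed phase
    return np.real(np.fft.ifft2(F_mix, axes=(-2, -1)))

x_i = np.random.default_rng(0).random((3, 32, 32))
x_j = np.random.default_rng(1).random((3, 32, 32))
x_mix = phase_mixup(x_i, x_j, mu=0.3)
print(np.allclose(phase_mixup(x_i, x_j, mu=0.0), x_i))  # mu = 0 recovers x_i; prints True
```
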
+
+ ![The training process of Subnetwork Distillation (Best viewed in color.) The orange arrows associate with the full target network adaptation and the pink arrows correspond to the target subnetwork adaptation. During the optimization of JS Divergence and Gradient Dissimilarity, the model obtains knowledge of diverse target representations that benefit the generalization.](mutual_002.pdf){#fig:mutual width="97%"}
+
+ During knowledge transfer from source models to target models with knowledge distillation (Eq. [\[eq:ps\]](#eq:ps){reference-type="ref" reference="eq:ps"}), overfitting on source information is an obvious side effect, especially in the Black-Box setting, where only the source model's outputs are visible while its weights are completely unseen. In other words, the careful calibration of the target network's weights possible in the Source-Free setting cannot be achieved here, so a dedicated network-level regularization method is necessary. To this end, we propose the Subnetwork Distillation approach to domain adaptation, which utilizes self-knowledge transfer from the target subnetwork to the full target network. We expect this structure to help the model obtain more target information from diverse representations far from the support of the source, thus mitigating overfitting on the source. We denote the full target network's weights as $W_{full}$ and the target subnetwork's as $W_{sub}$. If the complete network is represented as $W_{full} = W_{0:1}$, then by slimming the network width with a ratio $\alpha \in (0,1]$, the subnetwork is generated as $W_{sub} = W_{0:\alpha}$, i.e., a subnetwork with $W_{0:\alpha}$ selects the first $\alpha \times 100\%$ of the weights of each layer of the full network. The network's output with width $\alpha$ is accordingly denoted $f_{t}(x_{i}^{t}; W_{0:\alpha})$. The Subnetwork Distillation objective is defined as $$\begin{equation}
+ \label{eq:ml}
+ \mathcal{L}_{sd} = \mathrm{D_{JS}}(f_{t}(x_{mix}^{t}; W_{sub})|| f_{t}(x_{mix}^{t}; W_{full})).
+ \end{equation}$$ After obtaining the outputs of the subnetwork with smaller width, we compare them with the full network's outputs using the Jensen--Shannon (JS) divergence, as shown in Fig. [3](#fig:mutual){reference-type="ref" reference="fig:mutual"}. To prevent adverse influence on inference with the original images and the full network, we use the images produced by Phase MixUp here, which also adds perturbations to the regularization for more robust representations.
+
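A toy numpy sketch of the width-slimmed subnetwork and the $\mathcal{L}_{sd}$ regularizer for a single sample (the two-layer network, its sizes, and the function names are assumptions made for this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, 8))    # hidden layer weights (32 units)
W2 = rng.normal(size=(4, 32))    # output layer weights (K = 4 classes)

def forward(x, alpha=1.0):
    """Full network for alpha = 1; a width-slimmed subnetwork keeps the
    first alpha fraction of hidden units (W_{0:alpha})."""
    h = int(32 * alpha)
    z = np.maximum(W1[:h] @ x, 0.0)          # ReLU hidden layer, sliced
    logits = W2[:, :h] @ z
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()

def js_divergence(p, q):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

x = rng.normal(size=8)
p_full, p_sub = forward(x, 1.0), forward(x, 0.5)
print(js_divergence(p_sub, p_full))          # the L_sd term for one sample
```
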
+ We now provide a theoretical analysis to illustrate why the JS divergence can benefit the adaptation process. Assume that Phase MixUp is a mapping function $f_{PM}:x\rightarrow z$ and the subsequent neural network is $f_{network}:z\rightarrow y$. Here we have three types of networks: the source network $f_{s}$, the full target network $f_{t}$, and the target subnetwork $f_{sub}$. Besides, we make two assumptions that are intuitive to understand:
+
+ ::: assumption
65
+ **Assumption 1**. *Based on observed latent variables $z$ and an empirical predictor $\hat{p}$, $-\log\hat{p}(y|z)$ can be bounded by $C$, which is a constant.*
66
+ :::
67
+
68
+ ::: assumption
69
+ **Assumption 2**. *The target subnetwork is superior to the source network on the target datasets. Mathematically, $$\begin{equation}
70
+ p_{s}(y,z)\log\hat{p}(y|z) \leq p_{sub}(y,z)\log\hat{p}(y|z).
71
+ \end{equation}$$*
72
+ :::
73
+
74
+ Moreover, the following lemma supports our main conclusion, Theorem [1](#theorem){reference-type="ref" reference="theorem"}:
75
+
76
+ ::: lemma
77
+ **Lemma 1**. *The source loss and target loss are bounded by joint distributions and empirical predictions as: $$\begin{equation}
78
+ l_{s} \leq \mathbb{E}_{p_{s}(y,z)}[-\log\hat{p}(y|z)], ~~~
79
+ l_{t} \leq \mathbb{E}_{p_{t}(y,z)}[-\log\hat{p}(y|z)].
80
+ \end{equation}$$*
81
+ :::
82
+
83
+ Based on the above assumptions and lemma, we derive the following bound on the target loss:
84
+
85
+ ::: {#theorem .theorem}
86
+ **Theorem 1**. *The target loss can be bounded by source loss and the JS divergence between the outputs of the full target network and target subnetwork as: $$\begin{equation}
87
+ l_{t} \leq l_{s} + C\sqrt{2\mathrm{D_{JS}}(p_{t}(y,z)||p_{sub}(y,z))}.
88
+ \end{equation}$$*
89
+ :::
90
+
91
+ $\mathrm{D_{JS}}(p_{t}(y,z)||p_{sub}(y,z))$ corresponds to Eq. [\[eq:ml\]](#eq:ml){reference-type="ref" reference="eq:ml"}, which shows that minimizing the JS divergence can benefit the adaptation process on the target domain.
92
+
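A sketch of how the bound can be assembled from the stated ingredients (our reading, hedged: it treats $l_{s}$ as the expected negative log-likelihood under $p_{s}(y,z)$ and invokes a Pinsker-type inequality for the JS divergence; it is not the paper's full proof): $$\begin{aligned}
l_{t} &\leq \mathbb{E}_{p_{t}(y,z)}[-\log\hat{p}(y|z)] \\
&\leq \mathbb{E}_{p_{sub}(y,z)}[-\log\hat{p}(y|z)] + C\,\mathrm{TV}(p_{t}, p_{sub}) \\
&\leq \mathbb{E}_{p_{s}(y,z)}[-\log\hat{p}(y|z)] + C\sqrt{2\mathrm{D_{JS}}(p_{t}(y,z)||p_{sub}(y,z))},
\end{aligned}$$ where the first step is Lemma 1, the second uses the bound $-\log\hat{p}(y|z) \leq C$ from Assumption 1 together with the total variation distance $\mathrm{TV}$, and the third uses Assumption 2 plus $\mathrm{TV} \leq \sqrt{2\mathrm{D_{JS}}}$; identifying the remaining expectation with $l_{s}$ then yields the theorem.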
93
+ Two extremes hinder our goal. One occurs at the *beginning* of adaptation: the subnetwork is very different from the full network, but it has much larger errors, so the knowledge transfer is harmful to adaptation. The other occurs at the *end* of adaptation: the subnetwork is so similar to the full network that not enough knowledge can be transferred from the subnetwork to the full network, and Subnetwork Distillation becomes meaningless. Given that the gradient of the full network is $g_{full} = \frac{\partial \mathcal{L}_{sd}}{\partial W_{full}}$ and the gradient of the subnetwork is $g_{sub} = \frac{\partial \mathcal{L}_{sd}}{\partial W_{sub}}$, we propose a weighted gradient discrepancy loss to balance these two extremes: $$\begin{equation}
94
+ \footnotesize{
95
+ \label{eq:grad}
96
+ \mathcal{L}_{wg} = (1 + \exp{(-\mathbf{Entropy}(f_{t}(x_{i}^{t}; W_{sub})))})\frac{g_{full}^{T} g_{sub}}{\left\Vert g_{full} \right\Vert_2 \left\Vert g_{sub} \right\Vert_2}.
97
+ }
98
+ \end{equation}$$ Here the left term is the weight and the right term is the cosine similarity between $g_{sub}$ and $g_{full}$. Minimizing the cosine similarity enlarges the discrepancy between the two gradients, which introduces a disturbance and ensures that the full network and the subnetwork learn divergent knowledge. The weight constrains this gradient dissimilarity: if the subnetwork has not yet learned sufficiently confident distributions, a smaller weight is assigned, and vice versa.
99
+
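The pieces of this loss are simple to compute once the two gradients are flattened into vectors; a plain-Python sketch (the gradient vectors and the subnetwork output below are illustrative placeholders; a real implementation would obtain them from an autograd framework):

```python
import math

def entropy(p):
    """Shannon entropy of a discrete output distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cosine_sim(u, v):
    """Cosine similarity between two flattened gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def weighted_grad_loss(p_sub, g_full, g_sub):
    """(1 + exp(-Entropy(f_sub))) * cos(g_full, g_sub): minimizing this pushes
    the two gradients apart, weighted more when the subnetwork is confident."""
    weight = 1.0 + math.exp(-entropy(p_sub))
    return weight * cosine_sim(g_full, g_sub)

# Illustrative values only.
p_sub = [0.8, 0.1, 0.1]      # confident subnetwork prediction
g_full = [0.3, -0.2, 0.5]    # flattened dL_sd / dW_full
g_sub = [0.2, -0.1, 0.4]     # flattened dL_sd / dW_sub
loss_wg = weighted_grad_loss(p_sub, g_full, g_sub)
```

A confident (low-entropy) subnetwork yields a weight close to 2, while an uncertain one yields a weight closer to 1, matching the description above.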
100
+ Based on the above discussion, we conclude the final objective for Black-Box domain adaptation: $$\begin{equation}
101
+ \label{eq:total-bbda}
102
+ \mathcal{L}_{bb} = \mathcal{L}_{kd} + \beta \mathcal{L}_{pm} + \gamma \mathcal{L}_{sd} + \theta \mathcal{L}_{wg},
103
+ \end{equation}$$ where $\beta, \gamma$, and $\theta$ are the trade-off hyperparameters for corresponding loss functions.
104
+
105
+ ::: table*
106
+ :::
107
+
108
+ ::: table*
109
+ :::
2212.06301/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2212.06301/paper_text/intro_method.md ADDED
@@ -0,0 +1,88 @@
1
+ # Introduction
2
+
3
+ In recent years, the introduction of large-scale video datasets (*e.g.*, Kinetics [6, 33] and Something-Something [22]) has enabled the application of powerful deep learning models to video understanding and has led to dramatic advances. These third-person datasets, however, have overwhelmingly focused on the single task of action recognition in trimmed clips [12, 36, 47, 64]. Unlike curated third-person videos, our daily life involves frequent and heterogeneous interactions with other humans, objects, and environments in the wild. First-person videos from wearable cameras capture the observer's perspective and attention as a continuous stream. As such,
4
+
5
+ <span id="page-0-1"></span>![](_page_0_Figure_9.jpeg)
6
+
7
+ Figure 1. Given a set of diverse egocentric video tasks, the proposed EgoT2 leverages synergies among the tasks to improve each individual task performance. The attention maps produced by EgoT2 offer good interpretability on inherent task relations.
8
+
9
+ they are better equipped to reveal these multi-faceted, spontaneous interactions. Indeed, egocentric datasets, such as EPIC-Kitchens [9] and Ego4D [23], provide suites of tasks associated with varied interactions. However, while these benchmarks have promoted a broader and more heterogeneous view of video understanding, they risk perpetuating the fragmented development of models specialized for each individual task.
10
+
11
+ In this work, we argue that the egocentric perspective offers an opportunity for *holistic perception* that can beneficially leverage synergies among video tasks to solve all problems in a unified manner. See Figure 1.
12
+
13
+ Imagine a cooking scenario where the camera wearer actively interacts with objects and other people in an environment while preparing dinner. These interactions relate to each other: a hand grasping a knife suggests the upcoming action of cutting; the view of a tomato on a cutting board suggests that the object is likely to undergo a state transition from whole to chopped; the conversation may further reveal the camera wearer's ongoing and planned actions.
14
+
15
+ <span id="page-0-0"></span><sup>\*</sup>Work done during an internship at FAIR, Meta AI.
16
+
17
+ <sup>&</sup>lt;sup>1</sup>Project webpage: https://vision.cs.utexas.edu/projects/egot2/.
18
+
19
+ <span id="page-1-1"></span>Apart from the natural relation among these tasks, egocentric video's *partial observability* (*i.e*., the camera wearer is largely out of the field of view) further motivates us to seek synergistic, comprehensive video understanding to leverage complementary cues among multiple tasks.
20
+
21
+ Our goal presents several technical challenges for conventional transfer learning (TL) [\[65\]](#page-10-1) and multi-task learning (MTL) [\[63\]](#page-10-2). First, MTL requires training sets where each sample includes annotations for all tasks [\[15,](#page-8-5) [24,](#page-8-6) [48,](#page-9-3) [53,](#page-10-3) [55,](#page-10-4) [62\]](#page-10-5), which is often impractical. Second, egocentric video tasks are heterogeneous in nature, requiring different modalities (audio, visual, motion), diverse labels (*e.g*., temporal, spatial or semantic), and different temporal granularities (*e.g*., action anticipation requires long-term observations, but object state recognition operates on a few sparsely sampled frames), all of which makes a unified model design problematic and fosters specialization. Finally, while existing work advocates the use of a shared encoder across tasks to learn general representations [\[3,](#page-8-7) [18,](#page-8-8) [26,](#page-9-4) [32,](#page-9-5) [39,](#page-9-6) [44,](#page-9-7) [45,](#page-9-8) [51\]](#page-9-9), the diverse span of egocentric tasks poses a hazard to parameter sharing, which can lead to negative transfer [\[21,](#page-8-9) [24,](#page-8-6) [38,](#page-9-10) [53\]](#page-10-3).
22
+
23
+ To address the above limitations, we propose EgoTask Translation (EgoT2), a unified learning framework to address a diverse set of egocentric video tasks together. EgoT2 is flexible and general in that it can handle separate datasets for the different tasks; it takes video heterogeneity into account; and it mitigates negative transfer when tasks are not strongly related. To be specific, EgoT2 consists of specialized models developed for individual tasks and a *task translator* that explicitly models inter-task and inter-frame relations. We propose two distinct designs: (1) task-specific EgoT2 (EgoT2-s) optimizes a given primary task with the assistance of auxiliary tasks (Figure [2\(](#page-1-0)c)) while (2) taskgeneral EgoT2 (EgoT2-g) supports task translation for multiple tasks at the same time (Figure [2\(](#page-1-0)d)).
24
+
25
+ Compared with a unified backbone across tasks [\[62\]](#page-10-5), adopting task-specific backbones preserves peculiarities of each task (*e.g*. different temporal granularities) and mitigates negative transfer since each backbone is optimized on one task. Furthermore, unlike traditional parameter sharing [\[51\]](#page-9-9), the proposed task translator learns to "translate" all task features into predictions for the target task by selectively activating useful features and discarding irrelevant ones. The task translator also facilitates interpretability by explicitly revealing which temporal segments and which subsets of tasks contribute to improving a given task.
26
+
27
+ We evaluate EgoT2 on a diverse set of 7 egocentric perception tasks from the world's largest egocentric video benchmark, Ego4D [\[23\]](#page-8-4). Its heterogeneous tasks extend beyond mere action recognition to speaker/listener identification, keyframe localization, object state change classification, long-term action anticipation, and others, and provide a perfect fit for our study.
28
+
29
+ <span id="page-1-0"></span>![](_page_1_Figure_5.jpeg)
30
+
31
+ Figure 2. (a) Conventional TL uses a backbone pretrained on the source task followed by a head transferring supervision to the target task; (b) Traditional MTL consists of a shared backbone and several task-specific heads; (c) EgoT2-s adopts task-specific backbones and optimizes the task translator for a given primary task; (d) EgoT2-g jointly optimizes the task translator for all tasks.
32
+
33
+ Our results reveal inherent task synergies, demonstrate consistent performance improvement across tasks, and offer good interpretability in task translation. Among all four Ego4D challenges involved in our task setup, EgoT2 outperforms all submissions to three Ego4D-CVPR'22 challenges and achieves state-of-the-art performance in one Ego4D-ECCV'22 challenge.
34
+
35
+ # Method
36
+
37
+ We are given K video tasks, $\mathcal{T}_k$ for $k=1,\cdots,K$ . We note that our approach does not require a common training set with annotations for all tasks. Let the dataset for task $\mathcal{T}_k$ be $\mathcal{D}^{\mathcal{T}_k} = \{(\mathbf{x}_i^{\mathcal{T}_k}, y_i^{\mathcal{T}_k})\}_{i=1}^{N_k}$ , where $(\mathbf{x}_i^{\mathcal{T}_k}, y_i^{\mathcal{T}_k})$ denotes the i-th pair of (input video, output label) and $N_k$ represents the number of given examples. Note that "labels" $y_i^{\mathcal{T}_k}$ can be a variety of output types, and are not limited to category labels. For simplicity we omit the subscript i hereafter.
40
+
41
+ We consider two formulations with distinct advantages: (1) task-specific translation, where we partition the tasks into one primary task $\mathcal{T}_p$ and K-1 auxiliary tasks, and optimize the objective to improve performance on $\mathcal{T}_p$ with the assistance of the auxiliary tasks (EgoT2-s, Sec. 3.1); (2) task-general translation, where we treat all K tasks equally, and the goal is to maximize the collective performance of all the tasks (EgoT2-g, Sec. 3.2). As demonstrated in our experiments, objective (1) leads to the strongest performance on the primary task, while objective (2) offers the benefit of a single unified model addressing all tasks at once.
42
+
43
+ The training of EgoT2-s is split over two stages.
44
+
45
+ Stage I: Individual-Task Training. We train a separate model $f_k$ on each individual task dataset $\mathcal{D}^{\mathcal{T}_k}$ , obtaining K task-specific models $\{f_k\}_{k=1}^K$ . We do not place any restrictions on the task-specific model designs, nor do we require a unified design (*i.e.*, identical encoder-decoder architecture) across tasks. Therefore, any available model checkpoint developed for task $\mathcal{T}_k$ can be adopted as $f_k$ within our framework, offering maximum flexibility.
46
+
47
+ Stage II: Task-Specific Translation. We train a task translator that takes features produced by task-specific models as input and outputs predictions for the primary task. Formally, let $\mathbf{h}_k \in \mathbb{R}^{T_k \times D_k}$ be features produced by the k-th task-specific model $f_k$ , where $T_k$ is the temporal dimension and $D_k$ is the per-frame feature dimension for model $f_k$ . Following the feature extraction step, we design a projection layer $\mathbf{P}_k \in \mathbb{R}^{D_k \times D}$ for each $f_k$ to map task-specific features to a shared latent feature space. This yields a temporal sequence of task-specific tokens $\tilde{\mathbf{h}}_k \in \mathbb{R}^{T_k \times D}$ .
48
+
49
+ We process this collection of task-specific temporal sequences using a transformer encoder [58] of L layers to capture both *inter-frame* and *inter-task* dependencies. We denote the propagation rule of layer l by $\mathbf{z}^{l+1} = Encoder^l(\mathbf{z}^l)$ . Finally, we adopt a decoder head $Decoder^{\mathcal{T}_p}$ to obtain predictions for the primary task $\mathcal{T}_p$ .
50
+
51
+ In all, this stage has four major steps: (1) feature extraction; (2) feature projection; (3) transformer fusion; and (4) feature decoding. The procedure is summarized below:
52
+
53
+ <span id="page-2-1"></span>
54
+ $$\mathbf{h}_k = f_k(\mathbf{x}^{\mathcal{T}_p}), \quad \forall k \in \{1, 2, \cdots, K\}$$
55
+ (1)
56
+
57
+ $$\tilde{\mathbf{h}}_k = \mathbf{P}_k \mathbf{h}_k, \quad \forall k \in \{1, 2, \cdots, K\}$$
58
+ (2)
59
+
60
+ $$\mathbf{z}^{0} = [\tilde{\mathbf{h}}_{1}, \tilde{\mathbf{h}}_{2}, \cdots, \tilde{\mathbf{h}}_{K}]$$
61
+
62
+ $$\mathbf{z}^{l+1} = Encoder^{l}(\mathbf{z}^{l}), \forall l \in \{0, 1, \cdots, L-1\}$$
63
+ (3)
64
+
67
+ $$y_{pred}^{\mathcal{T}_p} = Decoder^{\mathcal{T}_p}(\mathbf{z}^L)$$
68
+ (4)
69
+
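Steps (1)-(4) are largely shape bookkeeping: each task contributes a $(T_k \times D_k)$ feature sequence, which is projected to a shared width $D$ and concatenated along the token dimension before the transformer. A stdlib-only sketch of how the input $\mathbf{z}^0$ is assembled (constant matrices stand in for the learned projections; the transformer encoder and decoder are omitted, and all values are illustrative):

```python
def matmul(a, b):
    """Multiply an (n x m) nested-list matrix by an (m x p) one."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

# Task-specific features h_k with heterogeneous temporal/feature dims.
h1 = [[1.0] * 4 for _ in range(8)]   # task 1: T1=8 frames, D1=4
h2 = [[1.0] * 6 for _ in range(2)]   # task 2: T2=2 frames, D2=6
D = 3                                # shared latent width

# Projection matrices P_k of shape (D_k x D); learned in the real model.
P1 = [[0.1] * D for _ in range(4)]
P2 = [[0.1] * D for _ in range(6)]

# Project each task sequence to width D, then concatenate all tokens.
z0 = matmul(h1, P1) + matmul(h2, P2)  # (T1 + T2) tokens of width D
```

The transformer encoder then attends across all $T_1 + T_2$ tokens, which is what lets the translator model both inter-frame and inter-task relations.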
70
+ <span id="page-3-2"></span><span id="page-3-1"></span>![](_page_3_Figure_0.jpeg)
71
+
72
+ Figure 3. An illustration of EgoT2-s (left) and EgoT2-g (right) on three candidate tasks. The left figure illustrates EgoT2-s on three social interaction tasks, where the input to each model is unimodal (*i.e.*, video) or multimodal (*i.e.*, video and audio). The right figure shows the design of EgoT2-g on three example tasks that focus on different aspects of human-object interactions (*i.e.*, localization, object state change classification, and action recognition). EgoT2-s learns to "translate" auxiliary task features into predictions for the primary task and EgoT2-g conducts task translation conditioned on the task of interest.
73
+
74
+ where $y_{pred}^{\mathcal{T}_p}$ denotes the prediction given by EgoT2-s. During the second stage of training, we freeze the task-specific models and optimize the task translator with respect to the primary task dataset $\mathcal{D}^{\mathcal{T}_p}$ .
75
+
76
+ Figure 3 (left) illustrates the design of EgoT2-s using three social interaction tasks from Ego4D [23] as an example. EgoT2-s allows heterogeneity in the task-specific models (i.e., $f_1$ is unimodal while $f_2$ and $f_3$ are multimodal; also the three task-specific models can be associated with different frame rates and temporal durations) and utilizes a transformer encoder to model inter-frame and inter-task relations. The resulting EgoT2-s learns to adaptively utilize auxiliary task features for the primary task prediction.
77
+
78
+ EgoT2-s optimizes performance for a single primary task. Therefore, in the event all K tasks must be addressed, it requires K separate training runs and K distinct translators. This motivates us to extend EgoT2-s to perform task translation for all K tasks at once. In EgoT2-g, the task translator processes features from all K tasks and learns to "translate" features conditioned on the task of interest.
79
+
80
+ The first stage of EgoT2-g is identical to EgoT2-s. For the second stage, we propose two main modifications. First, we replace the task-specific decoder in EgoT2-s with a "generalist" decoder that outputs predictions conditioned on the task of interest. Natural language provides us with a flexible scheme to specify all tasks as a sequence of symbols. Inspired by [8], we tokenize all task outputs and replace the original task-specific decoder with a sequence decoder [50] for a unified interface. Specifically, we first transform the original label $y^{\mathcal{T}_k}$ to a target output sequence $\mathbf{y}_{seq}^{\mathcal{T}_k} \in \mathbb{R}^M$ , where M is the target sequence length. For the task translator to produce task-dependent outputs, we prepend a task prompt token $\mathbf{y}_{prompt}$ to the target output, i.e., $\mathbf{y}_{seq_1}^{\mathcal{T}_k} = \mathbf{y}_{prompt}$ . We then let the sequence decoder generate a sentence answering the requested task. Figure 3 (right) illustrates how we express task outputs as sequences of discrete tokens and attach task prompts.
81
+
82
+ With the transformed output, we treat the problem as a language modeling task and train the task translator to predict subsequent tokens (one token at a time) conditioned on the input video and its preceding tokens. The training objective is $\mathcal{L}^{\mathcal{T}_k} = \sum_{j=1}^M \mathbf{w}_j \log P(\mathbf{y}_{seq_j}^{\mathcal{T}_k} | \mathbf{x}^{\mathcal{T}_k}, \mathbf{y}_{seq_{1:j-1}}^{\mathcal{T}_k})$ . Note that the maximum likelihood loss is weighted to mask the loss corresponding to the task prompt token: $\mathbf{w}_j$ is set to 0 for j=1, and to 1 for any other j. During inference, the task prompt is prepended, and the task translator predicts the remaining output tokens. We use argmax sampling (i.e., take the token with the largest likelihood) to sample tokens from the model likelihood and transform the output tokens back to the original label space. Detokenization is easy as we simply reverse the tokenization process.
83
+
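The masking of the prompt position is the only non-standard part of this objective; a small sketch (plain Python; the token ids and probabilities are made up for illustration, and a real implementation would read per-token probabilities from the sequence decoder's softmax):

```python
import math

def weighted_log_likelihood(token_probs, prompt_len=1):
    """Sum_j w_j * log P(y_j | ...), with w_j = 0 on the task-prompt
    position(s) and w_j = 1 on the answer tokens."""
    return sum(math.log(p) for j, p in enumerate(token_probs)
               if j >= prompt_len)

# Target sequence: a task-prompt token followed by the tokenized label
# (hypothetical ids; e.g. 901 = "object state change" prompt, 1 = "yes").
y_seq = [901, 1]
token_probs = [0.05, 0.9]   # model likelihood of each target token

loss = -weighted_log_likelihood(token_probs)  # only the answer token counts
```

Because $\mathbf{w}_1 = 0$, the (possibly very unlikely) prompt token contributes nothing to the loss; at inference the prompt is prepended anyway, so only answer tokens are ever scored.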
84
+ The second modification lies in the training strategy. While EgoT2-s adopts the primary task dataset for training, EgoT2-g requires joint training on all *K* task datasets. <span id="page-4-5"></span>Similar to the training strategy in [8, 20], we sample one batch from each task, compute the task loss, aggregate the K gradients, and perform model updates in one training iteration. The final training objective is $\mathcal{L} = \sum_{k=1}^K \mathcal{L}^{\mathcal{T}_k}$ .
87
+
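The joint update described above (one batch per task, K gradients aggregated, one step) can be sketched with a scalar stand-in for the model; everything here is a toy illustration, since in practice each gradient comes from backpropagating one task's loss:

```python
def one_iteration(w, task_batches, grad_fn, lr):
    """One EgoT2-g iteration: one batch per task, K gradients summed,
    a single parameter update."""
    total_grad = sum(grad_fn(w, batch) for batch in task_batches)
    return w - lr * total_grad

# Toy per-task loss (w - t_k)^2 with gradient 2 (w - t_k), one scalar
# "batch" t_k per task.
grad_fn = lambda w, t: 2.0 * (w - t)
w = 0.0
for _ in range(200):
    w = one_iteration(w, [1.0, 2.0, 3.0], grad_fn, lr=0.05)
# With summed task losses, w settles at the minimizer of the joint objective.
```

Summing the K gradients before the update is what makes each iteration optimize the joint objective $\mathcal{L} = \sum_k \mathcal{L}^{\mathcal{T}_k}$ rather than any single task.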
88
+ Figure 3 contrasts the design of EgoT2-s and EgoT2-g. They both provide a flexible framework that can incorporate multiple heterogeneous task-specific models (*e.g.*, the three example tasks we give here focus on different aspects of human-object interactions). With a design and an optimization that are specialized to a single primary task, EgoT2-s is expected to lead to superior individual task performance while EgoT2-g brings the efficiency and compactness benefits of a single translator addressing all tasks.
2212.08094/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2212.08094/paper_text/intro_method.md ADDED
@@ -0,0 +1,71 @@
1
+ # Introduction
2
+
3
+ Language models that have been pretrained for the next word prediction task using millions of text documents can significantly predict brain recordings of people comprehending language (Wehbe et al., 2014; Jain & Huth, 2018; Toneva & Wehbe, 2019; Caucheteux & King, 2020; Schrimpf et al., 2021; Goldstein et al., 2022). Understanding the reasons behind the observed similarities between language comprehension in machines and brains can lead to more insight into both systems.
4
+
5
+ While such similarities have been observed at a coarse level, it is not yet clear whether and how the two systems align in their information processing pipeline. This pipeline has been studied separately in both systems. In natural language processing (NLP), researchers use probing tasks to uncover the parts of the model that encode specific linguistic properties (e.g. sentence length, tree depth, top constituents, tense, bigram shift, subject number, object number) (Adi et al., 2016; Hupkes et al., 2018; Conneau et al., 2018; Jawahar et al., 2019; Rogers et al., 2020). These techniques have revealed a hierarchy of information processing in multi-layered language models that progresses from simple to complex with increased depth. In cognitive neuroscience, traditional experiments study the processing of specific linguistic properties by carefully controlling the experimental stimulus and observing the locations or time points of processing in the brain that are affected the most by the controlled stimulus (Hauk & Pulvermüller, 2004; Pallier et al., 2011).
6
+
7
+ More recently, researchers have begun to study the alignment of these brain language regions with the layers of language models, and found that the best alignment was achieved in the middle layers of these models (Jain & Huth, 2018; Toneva & Wehbe, 2019; Caucheteux & King, 2020). This has been
8
+
9
+ <sup>1</sup> <https://github.com/subbareddy248/linguistic-properties-brain-alignment>
10
+
11
+ ![](_page_1_Picture_0.jpeg)
12
+
13
+ Figure 1: Approach to directly test for the effect of a linguistic property on the alignment between a language model and brain recordings. First, we remove the linguistic property from the language model representations by learning a simple linear function f that maps the linguistic property to the language model representations, and use this estimated function to obtain the residual language model representation without the contribution of the linguistic property. Next, we compare the brain alignment (i.e. encoding model performance) before and after the removal of the linguistic property by learning simple linear functions g and h that map the full and residual language model representations to the brain recordings elicited by the corresponding words. Finally, we test whether the differences in brain alignment before and after the removal of the linguistic property are significant and, if so, conclude that the respective linguistic property affects the alignment between the language model and brain recordings.
14
+
15
+ hypothesized to be because the middle layers may contain the most high-level language information as they are farthest from the input and output layers, which contain word-level information due to the self-supervised training objective. However, this hypothesis is difficult to reconcile with the results from more recent NLP probing tasks (Conneau et al., 2018; Jawahar et al., 2019), which suggest that the deepest layers in the model should represent the highest-level language information. Taken together, these findings open up the question of what linguistic information underlies the observed alignment between brains and language models.
16
+
17
+ Our work aims to examine this question via a direct approach (see Figure 1 for a schematic). For a number of linguistic properties, we analyze how the alignment between brain recordings and language model representations is affected by the elimination of information related to each linguistic property. For the purposes of this work, we focus on one popular language model, BERT (Devlin et al., 2018), which has both been studied extensively in the NLP interpretability literature (i.e., BERTology; Jawahar et al., 2019) and has been previously shown to significantly predict fMRI recordings of people processing language (Toneva & Wehbe, 2019; Schrimpf et al., 2021). We test the effect of a range of linguistic properties that have been previously shown to be represented in pretrained BERT (Jawahar et al., 2019). We use a dataset of fMRI recordings that are openly available (Nastase et al., 2021) and correspond to 18 participants listening to a natural story.
18
+
19
+ Using this direct approach, we find that the elimination of each linguistic property results in a significant decrease in brain alignment across all layers of BERT. We additionally find that the syntactic properties (Top Constituents and Tree Depth) have the highest effect on the trend of brain alignment across model layers. Specifically, Top Constituents is responsible for the bump in brain alignment in the middle layers for all language regions, whereas Tree Depth has an impact on temporal (ATL and PTL) and frontal language regions (IFG and MFG). Performing the same analyses with a second popular language model, GPT2 (Radford et al., 2019), yielded similar results (see Appendix section F).
20
+
21
+ Our main contributions are as follows:
22
+
23
+ - 1. We propose a direct approach to evaluate the joint processing of linguistic properties in brains and language models.
24
+ - 2. We show that removing specific linguistic properties leads to a significant decrease in brain alignment. We find that the tested syntactic properties are the most responsible for the trend of brain alignment across BERT layers.
25
+ - 3. Detailed region and sub-region analysis reveal that properties that may not impact the whole brain alignment trend, may play a significant role in local trends (e.g., Object Number for
26
+
27
+ ATL and IFGOrb regions, Tense for PCC regions, Sentence Length and Subject Number for PFm sub-region).
28
+
29
+ We make the code publicly available<sup>1</sup>.
30
+
31
+ # Method
32
+
33
+ **Removal of Linguistic Properties** To remove a linguistic property from the pretrained BERT representations, we use a ridge regression method in which the probing task label is considered as input and the word features are the target. We compute the residuals by subtracting the predicted feature representations from the actual features resulting in the (linear) removal of a linguistic property from pretrained features (see Figure 1 for a schematic). Because the brain prediction method is also a linear function (see next paragraph), this linear removal limits the contribution of the linguistic property to the eventual brain prediction performance.
34
+
35
+ Specifically, given an input matrix $\mathbf{T}_i$ with dimension $\mathbf{N} \times 1$ for probing task i, and target word representations $\mathbf{W} \in \mathbb{R}^{\mathbf{N} \times \mathbf{d}}$ , where $\mathbf{N}$ denotes the number of words (8267) and $\mathbf{d}$ denotes the dimensionality of each word (768 dimensions), the ridge regression objective function is $f(\mathbf{T}_i) = \min_{\theta_i} \|\mathbf{W} - \mathbf{T}_i \theta_i\|_F^2 + \lambda \|\theta_i\|_F^2$ , where $\theta_i$ denotes the learned weight coefficients over the embedding dimensions for the input task i, $\|.\|_F^2$ denotes the Frobenius norm, and $\lambda > 0$ is a tunable hyper-parameter representing the regularization weight for each feature dimension. Using the learned weight coefficients, we compute the residuals as follows: $r(\mathbf{T}_i) = \mathbf{W} - \mathbf{T}_i \theta_i$ .
38
+
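Because each probing input $\mathbf{T}_i$ is a single column, the ridge solution has a simple closed form per feature dimension; a plain-Python sketch of the removal step (toy numbers; the paper would use standard ridge-regression tooling, and the function name here is ours):

```python
def remove_property(t, W, lam=1.0):
    """Fit theta = argmin ||W - t theta||^2 + lam ||theta||^2 for a single
    property column t, then return the residual features W - t theta."""
    n, d = len(t), len(W[0])
    denom = sum(x * x for x in t) + lam
    # Closed-form ridge solution for a one-column input: one theta per dim.
    theta = [sum(t[i] * W[i][j] for i in range(n)) / denom for j in range(d)]
    residual = [[W[i][j] - t[i] * theta[j] for j in range(d)]
                for i in range(n)]
    return residual, theta

# Toy features: dimension 0 encodes the property exactly, dimension 1 does not.
t = [1.0, 2.0, 3.0]
W = [[1.0, 0.5], [2.0, 0.1], [3.0, 0.9]]
residual, theta = remove_property(t, W, lam=0.0)
# With lam = 0, the perfectly predictable dimension is removed entirely.
```

Because the brain encoding model is also linear, whatever of the property survives this linear removal can no longer (linearly) contribute to the brain prediction.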
39
+ We verified that another popular method of removing properties from representations—Iterative Null Space Projection (INLP) (Ravfogel et al., 2020)—leads to similar results (see Appendix Table 4). We present the results from the ridge regression removal in the main paper due to its simplicity.
40
+
41
+ Similar to the probing experiments with pretrained BERT, we also remove linguistic properties from GPT-2 representations and observe similar results (see Appendix Table 11).
42
+
43
+ **Voxelwise Encoding Model** To explore how linguistic properties are encoded in the brain when listening to stories, we use layerwise pretrained BERT features, as well as the residuals obtained by removing each linguistic property, in a voxelwise encoding model to predict brain responses. If a linguistic property is a good predictor of a specific brain region, information about that property is likely encoded in that region. In this paper, we train fMRI encoding models using banded ridge regression (Tikhonov et al., 1977) on stimulus representations from the feature spaces mentioned above. Before performing the regression, we first z-scored each feature channel separately for training and testing. This was done to match the features to the fMRI responses, which were also z-scored for training and testing. The solution to the banded regression approach is given by $\hat{\beta} = \operatorname{argmin}_{\beta} \|\mathbf{Y} - \mathbf{X}\beta\|_F^2 + \lambda \|\beta\|_F^2$ , where $\mathbf{Y}$ denotes the voxel matrix across TRs, $\hat{\beta}$ denotes the learned regression coefficients, and $\mathbf{X}$ denotes the stimulus or residual representations. To find the optimal regularization parameter for each feature space, we explore a range of regularization parameters using cross-validation. The main goal of each fMRI encoding model is to predict the brain responses associated with each voxel given a stimulus.
46
+
47
+ Table 1: Word-Level Probing task performance for each BERT layer before and after removal of each linguistic property using the $21^{st}$ year stimuli.
48
+
49
+ | Layers | Sentenc | e Length | TreeI | Depth | TopCor | stituents | Ter | ise | Subject | Number | Object : | Number |
50
+ |--------|-----------|----------|-------------|-------|-------------|-----------|------------|-------|------------|--------|------------|--------|
51
+ | | 3-classes | | 3-classes | | 2-classes | | 2-classes | | 2-classes | | 2-cla | asses |
52
+ | | (Surface) | | (Syntactic) | | (Syntactic) | | (Semantic) | | (Semantic) | | (Semantic) | |
53
+ | | before | after | before | after | before | after | before | after | before | after | before | after |
54
+ | 1 | 74.67 | 43.28 | 76.30 | 42.93 | 77.15 | 47.28 | 87.00 | 59.25 | 92.10 | 49.95 | 93.28 | 47.31 |
55
+ | 2 | 69.83 | 42.44 | 76.72 | 38.88 | 78.60 | 42.75 | 87.18 | 48.25 | 92.32 | 55.50 | 93.47 | 54.59 |
56
+ | 3 | 72.31 | 46.19 | 75.76 | 40.33 | 77.81 | 48.85 | 87.42 | 44.26 | 93.04 | 48.55 | 93.80 | 49.76 |
57
+ | 4 | 71.34 | 46.43 | 75.94 | 38.63 | 78.36 | 48.00 | 88.09 | 42.56 | 93.50 | 50.12 | 94.90 | 50.06 |
58
+ | 5 | 72.67 | 46.97 | 76.00 | 40.88 | 78.60 | 45.28 | 88.39 | 44.26 | 94.05 | 49.88 | 93.59 | 51.45 |
59
+ | 6 | 70.38 | 44.37 | 79.02 | 41.89 | 80.23 | 43.47 | 87.17 | 44.44 | 94.98 | 55.08 | 94.50 | 54.17 |
60
+ | 7 | 72.98 | 46.55 | 77.93 | 41.23 | 80.23 | 46.43 | 88.69 | 42.62 | 95.88 | 50.24 | 94.62 | 47.58 |
61
+ | 8 | 72.67 | 44.67 | 76.07 | 40.08 | 78.90 | 46.86 | 87.42 | 44.56 | 96.10 | 50.24 | 95.10 | 50.18 |
62
+ | 9 | 70.50 | 45.28 | 77.15 | 42.62 | 79.87 | 44.55 | 88.27 | 47.22 | 96.38 | 52.78 | 94.56 | 49.27 |
63
+ | 10 | 72.91 | 47.93 | 76.90 | 41.78 | 78.17 | 47.76 | 88.94 | 45.47 | 96.06 | 53.68 | 94.50 | 50.30 |
64
+ | 11 | 70.07 | 46.67 | 77.27 | 45.47 | 77.69 | 45.77 | 87.24 | 48.43 | 96.94 | 53.44 | 94.92 | 49.52 |
65
+ | 12 | 71.77 | 42.93 | 76.39 | 46.61 | 78.29 | 48.67 | 86.88 | 45.10 | 94.03 | 51.45 | 93.95 | 48.73 |
66
+
67
+ **Cross-Validation** The ridge regression parameters were fit using 4-fold cross-validation. All the data samples from K-1 folds were used for training, and the generalization was tested on samples from the left-out fold.
68
+
69
+ **Evaluation Metrics** We evaluate our models using Pearson Correlation (PC) which is a popular metric for evaluating brain alignment (Jain & Huth, 2018; Schrimpf et al., 2021; Goldstein et al., 2022). Let TR be the number of time repetitions. Let $Y = \{Y_i\}_{i=1}^{TR}$ and $\hat{Y} = \{\hat{Y}_i\}_{i=1}^{TR}$ denote the actual and predicted value vectors for a single voxel. Thus, $Y \in R^{TR}$ and also $\hat{Y} \in R^{TR}$ . We use Pearson Correlation (PC) which is computed as $corr(Y, \hat{Y})$ where corr is the correlation function.
70
+
71
+ **Implementation Details for Reproducibility** All experiments were conducted on a machine with one NVIDIA GeForce GTX GPU with 16GB of GPU RAM. We used banded ridge regression with the following parameters: MSE loss function, and L2-decay ($\lambda$) varied from $10^{-1}$ to $10^{-3}$; the best $\lambda$ was chosen by tuning on validation data; the number of cross-validation runs was 4.
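The fitting procedure above can be sketched as follows. This is a simplified, single-band version (the paper uses banded ridge regression), with the $\lambda$ grid $\{10^{-1}, 10^{-2}, 10^{-3}\}$ and 4 folds; all names and data are illustrative only:

```python
import numpy as np

def ridge_fit(X, Y, lam):
    """Closed-form ridge regression: W = (X^T X + lam I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def cross_val_ridge(X, Y, lambdas=(1e-1, 1e-2, 1e-3), k=4):
    """k-fold CV: train on k-1 folds, measure MSE on the left-out fold,
    and refit on all data with the lambda of lowest mean validation MSE."""
    folds = np.array_split(np.arange(len(X)), k)
    scores = {}
    for lam in lambdas:
        mses = []
        for i in range(k):
            val = folds[i]
            trn = np.concatenate([folds[j] for j in range(k) if j != i])
            W = ridge_fit(X[trn], Y[trn], lam)
            mses.append(np.mean((X[val] @ W - Y[val]) ** 2))
        scores[lam] = np.mean(mses)
    best = min(scores, key=scores.get)
    return best, ridge_fit(X, Y, best)

# Synthetic stimulus features X and voxel responses Y
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
W_true = rng.standard_normal((10, 3))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 3))
best_lam, W = cross_val_ridge(X, Y)
```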
2304.11436/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ <mxfile host="app.diagrams.net" modified="2022-09-14T02:17:47.507Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" etag="Nh_TtpU2zRz0oShTP-_H" version="20.3.0" type="github"><diagram id="04zzCcIzVrgk2XMb4qFc" name="Page-1">7F3XlpvIun6auRwvcrgUAoTIAgSIm7OIIokk8tMfSu52aPc4zLY93rPdy+0WCAqq/vT9oar+QPe3+dD5TarUUVz+gUDR/AfK/oEgCEzS2x9wZnl7BkbopzPXLouezr0/YWZr/HQSejo7ZFF8/+jCvq7LPms+PhnWVRWH/Ufn/K6rp48vS+ry46c2/vXpidD7E2bol/EnlzlZ1Kdvz1II+f68EGfX9PnJMPHUv5v/fPFTE/fUj+rpg2eh3B/ovqvr/u2n27yPSzB6z+Py9oX4v/j23Yt1cdV/zQ3Xs6AolPB/Udae8ov8J4MPpz+Rt62Mfjk8dfjpZfvleQSuXT00nz7s6flj3PXx/Bop/KCMX/Z245O4vsV9t2zXPd31J4w+Df0Tj2DPx9P7AYeJp3Pph4P9TDP/icjXd42/H4ftw9NQvD4szr4/ObiqszLS4Am6YJ4t/Yl8xbiAnmcbo8h+EJd6fc/6rK62r4K67+vbHyjzfMGuzK7gi75utrNpf9tehYW3jxtHNKCx23wF0vMm8O9Z+Kapy+UKWmLAp31dd9Gjeziz/YPeEH8ge+gPfHuD/SvHMDh6g6IvThDkizvgLxy/+oTt39uXKrMqfupDkpXlvi7r7jEoqI9AEIKD83XVf3A+efyAPvddXcQffEPw0PazfbONQZTF7++qavAQpm78MOsBa+Dgqk8Z8bN8/ZI7P+XCD5iMxt7AMElgJIRTFILi2CssB9Nv6I9+fhAHovA/x4HdQ5G+Jy3ynhpvyfesCR9NvFXZCCBOVkVvyQI4oKt7/+mF6B9CuWf9gSJvCPxjDYJ/qkBg5A39CkHRH6VCcOhTAhLbO+PcHyQD/UGyfzyrtA8ounW5f0Ghj+TliQofCt3TKf+JxOE2unH3Cu1vWRSBxzBTmvWxuUkVeOa0kf1Bq2EjHegO9CS75gdk/VGUIz4iGv6p1kfJN++lcRNP+FP6/TATgL9iAggo+b+Nevvf1AP0It9gH6nLX4SaRxvuO+Z0lC/4/x2sweLIjvsTJj+hVhxtQO/psO76tN6Mrl9y78++GNf318g10KMPAudx3y9Po+0Pff0x+eM5690PPl9AU2/wpyN2fmr5cbA8H1Rbf90PDz64Cxy+v+1x9Hxf5N/Tx7vC30zzez10YfwVuqz3u2vcf4XQgLH9LA91cbkZh/FjkP2fUP3V/hG/BLpFPhIMhPyZ6PbVYSF/g9v/anBL/AVz/org9tUO0P+j2PbrCffLYNtX+wH/z2Hbb6bcr4JtX6ffK87lE7YtflPvF8a2r1PzU+35G9s+0/xDbPtZXfYhtv2s0Pwi2JZ8VYbLtzr466X466Wzizdz+AR7oQdWyqr+HXgD6AkwyZPJ/BYF8VKoH8dP7/xKlPv7SDj1ccwZ/USiqVcgOfbDRBj/MiJ666l8jQ78caF54h/3XeCv8Ol+Oy+/sPPyjtX/a70XmPrnWPAfdV++gXS/uP/yigP67/Zfvp10v7QDg7zqgD4cGPg3+f7rPJh3JRK/PZhXiP5lF+ZJnX3RhXkm4C/iwiDYlw3psxkMAbSJ4g79CzH7gK7BW8aQg3cn/LC4PthFG/oPENKzr4L/ULQUxYk/lP03IfW/NqjvCqOWj92UDyT5NUBO/TDR/QrP5TcNP09D7BV1/HOJ+BVO1W8ifp6I/7wkfkVa7x8iIhz4cIx8CxEhiOB2/I8mIvaChP+0HOLfYB
Cz26Nylfmip1mCL5h3RPt0IF93Rf1787aeNslmgFyYxwN3z2eh5zOgKb/3/0B3bw8RPm/iK4hQIDydb/9hu93JLDzRuO6Y3Wm3HW1/d1tXeGRi2N3uuJ3ltt/H3/3uetx+T+zbv+ftC4VjrucDc73w2++BmYojcw1FZroc9rv7cX+qpf21lpjrVWZ3d4ndFdtx99z66fjU4pndTWdpu0sAV/9nv6D1abcz9rv99vo7dmced7t0e9WZ27iSp047YQL9PD/1jNu9/9l6OJ22noTgzeW3V/fG9sWV297wyG1vyb8dJp7e3n4bn5PBGMdUOXMHDuZTZhFnnpWYwueOR0iaJ8M2oWRXKIi4XK+FxKfh5WDUpZiFtWTWkGpdUI0tiBNk8EYRHc1zY9m87dhw6jk3tfAcr/IPZRuiBhxVERa7LZsenUOKZWLWSFYpOk6JZ96tlfJGcm4NkTVtJ6297CA9meHDXWZnxT3MVC4uvWzBquvAdO4hg5Bt3MxYJT4kNRF3MzXqEJ0E6HZSr4jncdrGZuKex+ltxz8zThwYp/12GcvswrfjdBLRx9UzxzEmNzOpyJxPURoaSqbU3JHnJKMOBGZvKut53NmXSjb26aVQNa6kjMdY3baRa6T8jKjrFdcKlTPOnmBypXSGDdsuo4tjN7l3sG8ekjZ+pUKR66GxUBLXWuPSiy9kx5tUNNq59Hz3Jt78utWKxverVrq1906D+sBHB/lGTHedm8NAWJRKgnr9DEeBi6iVjw16gcdBRWhV+8ko0fXGYMomMUdj6+NOn1j6TuaT1yhQfXOYfUifJJhmIQe+X3lH2BG2CyVn0vcGfrFI+N7FUz0cscnRLVUkd7Ge9EiHJ5MOc8fBPOZ3Tav4JPeoUF7MBRL5RCyUjs9dcrsyn0y/ofht/BlUtB2qycLtlcxxdrY/vgPbjLJ9VZy42OMVBPMICsZjv0U4n6wa3PI1aQgbTIRNXh602FGES+xvZoohqXPTassB7jNNx3zqRskoWpxI/Ly1m02WW7tpdb5AZV6YChfa2y0XG+sv+Wn7/hKO1C6eaCi+JexoyG1siE19O5CD4fpEVe9iZW9E7YWA8bVxs9XGbCWqwpu825o5Y60MHsLrOXzTkfx8n7lEzoZB9ZWjf1snCl9gbRmiPSUi3G278tzVF11A28nLsfA2HKjMiLIKTxqXJdLE78I71dOxcDyclOp2XyY4xqlpIhFc75ygM/0g5c6UvgR32rnC1ul4zPfUwnGh18p2HfElRK2npZm2geSdUYPv6vl2MmdLOTlkdesiLEDmHqYWTDyzeMm0x0sWNo63LDdsua/gBQWVxKNlmcerN8+urOy8zR9kYgvtTOriGjHurdB6uxNQVt7m492x9bguGbPjsIpLhlsRNZRGmmZw3PM9erGJEZMh9CxLWyN5tg2Qq++6dcS8efQCcceUw451OyElz0eva2QryuyTViYRzlv5gQFP9poz3lQEGR/oY3lkBDUShIabjt56ZUdbnmD+elQbwh6I69oEXr6SMG1COe7Q+UzSjWDK61lPt561hayY7Um9xiOfpMbWNnwoKXTm4J0xmPzgTgyCdzQFmAblAvTkYbdLOKuVbjUuFieb4AWNlm1fH9xZwPPufi3DwjsoRQHjB7/fGpzvcV4im+RW+ZiKqnG7TpOmIj6tUQfk3vUCt93NXK6sGgcxDx7kYVbep4FxDSZEk8hqmiljZMMw0yVSOhnX1nCn8C5rkb3ck05zL0tq1pJStTHk65jhdQpnpFmNq1LC0qvOimwwkzeLRGL4rO+FmetRCB/vjW/F41KWnQ2ts6A2SOrGjVUEFTNXvQMp7CHG0XWG4ZhZgQxuv0xvUTaBwA3jCZeOU2zB7wEqZHYQnLN0Q12BBGCz01Mi6IncuKm/JrZqcJ6M66o3zoftfHCkZD86wu0p3enH8xUmD+zU73MzdHpruie5dCdazYVZgyRN/pbHx/QCHnLS6LDstBbdn272TXQ1dj7Dm6
1koNwj8Mg6yX4j3fMir3L+TAfUhWDUwBSjWeQ8prQXCaPs69FIVH7fgAYVjBrPuOVGp3QQpIJt4NXm3M6hO3wzd0rEOFF3vgweLt/mm3gSatZ2lqGi0mJwNIaAsfEe+4Nn+zg6Xq0Z6sfIkOOV3Y9zQft0Qm2dZURZPkex40g2UDIn76gmUkXX96GZu+B+UlFBjfN4l2Sit30vRJcsh7mSJi5S2m4nyiVLxPM9OQLoyIfRBniYzJ1wNlNDtnHSFiGDu2AJqgJliMkYucibRDEpsqJbu460ift2gzteKPBX9gaX0lAUOXP3fsAtIcQ66gigDJq29IwhlNGHwuI5g4AqaFJs4Jahq9xHYn8Z4H3OpIFIa5vc8lxC7TBpQ2EM76ka5RJV7Ow5YzHve0xg9I2X87awBdsX3M2IRRsIoPztNhZhLR6OZasbiKIsygC/QEI83KjzfobGmbulSFhuKJ4L0kMjgjcf/EVPOZmdAqzq6QTOhPXGJtHFm1fXS3h9grEr5cUaNgrJNTgHPiPWFh/uT/lxRXbdXJsTfEsumhqrqiR2CbtiaayObDncCetA9fH1qJujB2RwnEV3brDLEJgON91bbzNv9jIUSJzG3HgFjBtzTN13Ypxaqpq00zLrThunxuJ7mFPe+mbfRHsrH93ssoHyTfZLMqXFgvIh2PdugOOSh4Qn5hQcNvzKLDmX8vYAxxEaLJbf2rCjmcZNw+ZclGPY8mcsrXqdg07wPlyMvXWNwhBGndKIDyWMrDCCGaZyJR1aigTOrITb4Hn5HrYRCc4A++xpZrs97mKuy/bI6Eqnywn0xLkLs4zdKWB4j8FiatOIp/JhVDxt39AEfQfmNxshwMDjtHkVm5PAHPqpbCIiZu75Jfc3U5DpaY6ZaHVFyQCrQVNFEbg0dTvdAw+ogptEEqVVXWOtNnONUMiJxLIYGcY7jIeepe32PnEgnYMzFbceOewU2sddQ99tBpRp/YJe7o1LE87BmvvOHuouNy94d/BPGBGFWwvbMGmEdusP58uxmdEzp05FO+aDRdjnIYq5JLvxY+/YFwxTDRO5Zpc7jkrwxekIv8c7uJbNU2sH+GYKGDVOU1EqVpxUL2bFXoB1oO47djMCzVlr7kwg4Cl4rw13RP0wIx1PmahlNKUzwx3Z+MLS5CZC2XSsCmFC9JMt5eNtXE3goshJsnNWSgAk6ZLZruHM4uVotwQcba0u4QDZR5QeooBHs2PLZHiY60rf5EwGRk+gD3BO56Pj9WbKDEfeJqOFoTrAutQOhie8FKqJNBDcE8rZBMqEvwsp31sQ5HKKK1H5rhkKfbvccxqZDcsrF+CLD4wwcDeYKqHzDtuQz8xRR2AKlRulsUa/GwKvCSx3E8oD1JPYVdv19YEpxr17MgqEMPdqdVpuqjUIUIMBJH1ZQR8Owdbl0oEnPcUDAm8pG430KJYSftnbHhmbal1kGTSVG3Pu1WX2z+siKnyswkjL4LER8pRBnWAZ8wXZItayEfDIS0R99fU6xLEZi7ENWAzHarW14khszjGTmNu3EGLpFYptPeDvTsJ3WOvEV7fKyIlOwPDrzNCvJw34kfosNkV7hJvNQBi6D3ic2eT6vENuvCUMZcouRncUOsSa8O27SaSkwJrW9nD1SaygzkQD0GqwV2XYgW89reXOThm3K++9xUfKWgRDsVyF3TQdHDpJYd3sbNlQ5atyP0cN7yN3KC9EIZhmqQjDzWLINmfwNtPvdocdM3NIR6Th8d6MB5aJj5B3PlVseEEGA3aBZJIQAWS516yYr9iBqRsM18Yz0fetRB+bzgKKGk0WRF9kTefzzD3YrrBb7H2FmNZm0Hl18aNq+5v2wYrqbxU7UIX6SJfCrGP1JkVjkkZYCi17Mt+GS0+r+cJd+90RoztndSF7lgR5HkdlNdupOtpHur0PbPvgyskqwZCGY2aKOqtv4z3WTrqfkdUgKg/wfI/67v7uKFm/72+5UZrZXt9dOAD66ZF23b6BBz
LPC4HuLDKtLk7p7rAz22Z5u49HO1/zgzDFR9pCObE9BYy2DaoxqNNMI6eqcXt4ZElgpXrQLe88UnIpUDfwVkJ5T3Ro0IFZJCwXQD8N/C8a4Fs/2K7vhtgkgOxNHiXQbM45TdJvhCgzu4HFAItnb+ecZOGqtkcGlcJ+URvsQCWqo2ILkKn9PgqAVuwvMGvRO6BTbYox/YSJR6GB0TbU2slEQ/2AnGcDOY9lgSPlfp6W+85xfTIme+wGN85ROilXHa1MG4B4NR9uWqW7APlEsoAVKsSwIaEHVpCpSxcaPXA/8tpZkRXqVgJN82kE4IOxUgJoo+2jec7HOhYTjhbRUuJGZ9MzVA5AkTzyLrnTSQNNZVPVYCgWIm24qMYQxGh/UWXKdFRU6dbjel3m06pQe7GXRlHijMORCW4DGDAZJSWBPKHTGb5h4HVyleITDk0JjT079zzxy5pqnQICUutRd3vuqfbCbbasM4ZlaPlSPHsdzY4ypSvjzQT0Wn38rKyi3ws5vuZ4SVMVFdIErB4ohyCx8z1AhCnB8oJvEmTDYm1cokeTvINh4qHokA5H8bpYcl9EUXe9Bvnsrlu3BUrW2DINTViuG1HVYb8PyIoQoINAOZSr+Y40QUHk8r1OVvFDUxL6kt4oSVaj+Dwe0TAbrhTQq3vW8kSmhQqRvSzKbbyzdC9gwKNkfCCvnFMkPAFTAqJdY0X1KLf3ZoPgBcF3nBo528WI5Hy5v2Y90gMHfQbUFu5cOlby3ihlmaKDQ3+0+0LjQeSROeGb+ZEXQWlv+245Nyc0dCMfCF9JwZAOyTt8NNZcxi63im+VvqCFMVh14ojNeLSZ5TKcsbZEd1ehlogM2zkodRcqPkyxQj5c8kO4daLoAHSMho5L0pgNQhIhFJ/y3vL39db526AwV0LE9vA6lQFa9i2PBBgaN6Kg+pe+MkbJZzJDLYbU82yqrs29qM4A/oA+Ho43CrfyhrRx4RH24jmVox6tHw5lLJKuIQYZAa12aXeDHhv+ZNZTEDJUzZrybjfjLHPa7+EYnrabjCmytFbVuelcq8id7pCoV6suANYgXJeQ6LChKnkHwDRVouHRJ8YOgsZ2HPWS6RDdJE30ToupFXjI3a42TzqPGPaQzfOpHpf0CBucDVCAfWpbHtvxXo+39qrO8kHcxUMcUlR41xiSCWlFtRX3DkGzcpW5HYZtb6AftFLbbc6Do6HZiGbKZcrq2S5yyi3FpcBV9NKty6Yt8Q04pVfSgoWLRxTJuersDPE2H5ShWH3XDv3QLnEgrZsHdohhUaG6Bg/dU4IaFreYpFTjRzfUOzsJfHSPabtTnApxc8RFKnDkMGL69byZAM1tuN0GrqhB2h2Mht1hkWnr7uZbNMXo7Uy9tzDhpOGXpLNYL81LoB+upM3LzIEdJkwPMeK8s+RW7MO6NUDUspeuZ3oDEtvT5s5ftXtxF5ekZLkgz5FLom4NNJ18S+VK0otoRhsH+Mi2gQGKK/vdbtNDO2iv6xsOudYbL3cRcBs28Xb7KjHkGh5uveMjOmKqLkAZsulgaTkExSLV/kkPViI5qDOIwyDzbscs6Lo5AlYQCnrnQhHgOb+6JYyA+dKkA+3zMHrjaZMLX3ZXi9VQBD6eEKVqbiPuYVAzp7ORFSMOuhcXwBXOB14cmglNwanN4OPqhn35uqEgLdcsiA/RLmx7jSmEW8/LmUtKDop1+7GpipEpNnS9Vw1X41CdvZ92wgKimGWcJH3XAlvM6np25oF0x9pqXzHdIgMmp0ObsLHI4K+HuPeqWYcknHJjJoopGWEdwiKqOkI9Ywzvy5w2F+fEZ7W4v/YFxmja3A3T4ZimVGJ0JpJqAPYTprdcllQaU6kXgyUVk/iWU6sMBqWXd4qQF2XpZuZhE/5r6E1hyBLVDucvA0D6ki50c5mwQirBPayxAB8162aqIWpQqMQFQ807WAJc4JLw+D7ZcJMSWmNmFJnRV0GIwFSlaYg1n/scwyneL8PkErhOaP
ZQVvXN8WGp+B0nngCNagB19nk5ofNpnBYXGVMLcVju5BIcq+Ow2DwFKUQyNuzx4GOi5vgOLgOcU7kXE+lHCSicfDUUU+OdE8Buhd+sSEExsurfGfR4gXbh2rkp6kSncD6gmWHRqFJZxIUA7vumkziDshB4aCeClLklz5cgpXLacJ1GOOJIbJJCN9FO509lQsKDMcPrfDo081ARInWugYaDBOyKbt7QqZ8P91rvKUwN1BG8DH+AZnosoIE6s/iIsrMJ5OUonhN60+b06GWiSu52O6qg23xQLUiCNHl1ZU0WNDFoQGpAgXbngLVVtzodaEodUh1dg1U+AgbV/QRqz/Gx36N9oLbncwhZWJGCCF9010tRYLEVIbzGpYBG30X6LIjYDKGaPnTx0UwmpgWvyYj6XoU6N3RG5MRqJ0dd+LUII0foTxow+waSyg9150kbF/N2JKZ3cgmmCuuh8eyEKKdXjl3qQuSUE3US2UAYDhDuMUIlOru7Ylw7ypuH1b+KmX1g0Y58gLQSjR9+hSzpswyvp46M/BlxbgBG1gDWILYcTirwu1tnTZnQWILAvyzGVFrHTiL2dlJeauDvCLW8GORkAjB1ILf+EvFxEJoJ6yox8XN+JLukoQFH0w/Hy7TddJFKulxcbOOgDR+BKEzub0C77EZBEMwhqB6eQBiXUdIuUt+ewIinXeHNwO8X+JA7+4Lpi0dmtz/gNHEva6GKDyBSrE0rNWEITXSEyGdiArd02AnLfE6cvvOvSQUAKRjJWqtAW3qOhr7YAsUo8oAgR7LlYsHzAp/YkLcy55zk7O+5dsSVZl8O18x2lXu2mF2f8bHgp4GjtcadbgJRuwDqJ/NBZ+fN1TSrqocFzNuGFjitkp/cWwQlN9Lz8H6FtOVElBdDodyF6ucqG1cLoBp9fmRPYjXhYPIR+ENSOuxJdsYbdWh9o6Wxw7WkjJuyxJcgw2NLoOtAl/dqakXBgdXgBcBlPxkHTpOSFqrxJHfJARKjjRTUsBJBIMwjr7PmrAi3w94dnVHTYxTZRoYnw7FCdQOmDMTOAzVA96Z+YHrE3Jl7Hd0MANsBou12bSeHRBnI+JAStHwPvf2MyjWAqwfl/ADG0Lix0wqfTotFXo9VqZdmQStMuckBcZE9HYNPcLYPTk1C7QzcjUUK49o0Y4jyGiaZdRthEDZQzJXvqtw4U31/qmwzLGZ0LyXrhobiyqVXf84IeDy7VX7EHZMUgaJj1dD1Y2lD5S53Lwy2uOxua07lTs6LD8aqM4A970GY0CpOPQyxePB3WgUL9m2c8VC7nyZDCZkg2pNFYg/VNoaOx2MRlJGl7CKeVqPWCJThdTTrOrCMSMzm1hkti1OvDp36XDDW4U6gCEeeU14I+yAW9H3Ksoy6JiYLXhPbb3aZsk/MSFiYDd2Huo/kc3VfxawXZpBynfIxraak0/NuZuLZcSqCaYpucbvWEtQYiOeGoIu7vVvcsirPIlWL+cm0FLVQUl7T7RG0cl2nh0bkAgykqLgbArHGRfZ9H+pzTu8Pbc9R+8Xp851yIAadPmbaXNlne6LuxTl2OxxYOFUn7WQGvqQWmr7V8qS8+a8+SPpA45Re3A0EHfYG5zHhcrSymLhowC5il+Cu30/GLAtcNVUH4aAJi8wOaYqLWErKCRjEUW8IuYMhYFEQzMKM+RDb5dZLmAnGAEjV0HXE2ECwBgK3PsDLPg0+YkvC5hNNHUfGvY52wg6Fr3dI7Y+x06OPMBpqoNQGJDmFVQ/luGt7ht8v6XLYYLNk3Fnep7L+iA4FmwOjaIJgHQC/DQDqLVr0OIhuLHnNOmVwwxyaU/sduVOY1HiEy5PQyBX7poml05l4W/QoqqPuHpNZL0htq5HLXPAekeYCPR9HNT26zCJhCJnq9J3ntN14kEjnnHv9kb7b60I40K02V9NkQxfbszmfnEVHL9kTVIrHOi08rJ1lCYCy7V/aOnNmHI3xAOWwyhG7/KglBDQpLUjn9FUWsMW1hS
4uOdGA9kDn8dcWOJGTpUPVvGLJqOcupasK4je73FxPJ2nn7Gjg9hgDvpmGXiljk4Id+Hpi5b15knxsneeHz3LtfV+PemykfOLYXy+IE8k33NKUKHUin/OX++qycHPxkwersuz0yApVTZVVRuQ81FvPoUK2d+eSja6gV9IF7fEjDtmXY8pECJaZRqPsfY+Elu1HOx2yqPIvkY0NlBZrG8EXEBWAAiDRvagr/Bg5PdGvvisZy8itDrUo0Pl+iSVDVzZf2Ux3On9+RFSuNUvtx108Plx8W88AQlgrBQxPEyx3inSuS1bp+rGbqomhQAhod9xwWozLMafzuTOW6Dmk4aaE9+XsW6vRHYvA2h7Z82rvOGd25YJdEQho7kK3cxjhaaHwp/m8IRzZQ4FBlGi3hnFtpUhfKLzyEeThOQ51ptw+tDQcoKIOudO42jAeWYgkzxtd5T5m4GJddCXkki4bUhB65vlurpr7TTaX2NmUNl33VXHQ24EFGDTW3E2xwQN1j5Sep/axAA2BppYVLB7JaiWxImfN63pT+PikywazhFeK23z7BMQucEycMSD6B3W9oKx/92TNPfchjx1J4aKctPZ0upkXvcyVfVZaPVVzAQiYw3UsupuEVWsIbZ3URkztKAsjNwMrcP0+6k4+2ToV1ugUi+gTvdHUjCWBGknFGAOQayt36H0VZIEyfQKdeVACxLACqcuLm8ybxtgcTnjoGgHjKNm9e+OxuQrnPTXsi0It70XuiccKRc8BcwouzKi4eE6dH55VslnaM70Gs58LWAiajX06UmVKD0hf0g5E5hx8RLnhTVNb3DCJqnjcdctmq3Tn1OPAfSHWxkWA32RFVuZ6NMzraiJ7K1aZM5d4XiQUI93aQ4CLrkkHIy4MuIXFU23ZlrDiZOLl8QYo7yghyvADjWjVDKKqokuehKyOgZEQnEEXWX/AoOQU+AZ8V4U0OFwCh78FC/BrbHPlbqZ99US3KLwqqIo75++OVxHDNwf53Itwu0nxuUtMnbVTOUK5VC87/mBaLgwQx8o/lAF4rjLOUZXB4+x5+gVhlUbakJ1pq0l19lWXWyOzRhDaQVtkpXk7rlLCTBlR9EXZEIV0vOaTiAGyH3kSyKSQMEO56rJ0GvDjZn83BcTgI2K8lT2g1dnaMmJJz7Yu5/6CkN16G5sAYx/IFPR+mgB4Rm4I3bA1pc7RUROdE/dIugzk6IplZLv+LmQMaHYTVj6ebo2jq7PjLSX7CM8jJ1CIBf59nzpC6mUR2ic1aNgrNWg/bCoi/g0Fvb9r0H7XoP2uQftfrEE7Zb3MupEsogeJ0WpdMMfNd2oSp1PF8X4o1tY4hCzrdXA+49gyyu5EDRXCxudZQ06du+qUJ1dBawai4CGaI3brHeucK5RvvSKkyrGXGdNgwoev7KoPS3ySyO4UDBjs2B4mTDWBd/M6da1SQDmTGVdsduskHc/2tY/qsJ7JKzBlgU3ccfiQtAPXGKGFPIRG9+HG7i6yIJnu7mBuqgdNSacJKsftNwBzO1fiIlEwJfuwT4Q7mgxVca1ELbQJEZMog/QaeUH7M6Ip0iyoLkhPR2R1JpxbSnhYssFa8KBSv2LYcV0kiFKjk5oZoVBUahg1StiZiW8vESqzvbNA/Bo5eNZ3127z3kbuAPJmRlqH51q+wRLSEzTDK3ClH9n+wCNwuUGcXVFLd2O1ehO6ONGMG2IR5uVq7wnIx3tg53pdY1eobtqw2bjmFjF8C4Vra/W4gKazTasnH81ZUYVwPiwb3zUmqt589q4koz6YK0a3soHgTUX0BT07audDd+hbIWNw7zzALjursDW6SbwjRzOKe7uKEAoVpIImqDiFTUdSr6hmTBLgHkxDLcuVb1jlc+lYMvrBp8mdVUmmqV8paoI093aPH7Ec4R746kW47kQNFdXkCOwtY7p0Vs0BlsM+i60Jq5tBRsGWtROD+IBasTrhzOzCm7HBj3njoXfhFsylwMcPQEA6nWCerMgQlf1OMe
VKwGESEUsBUW8r7mdjoq8pWEJjc7x19l7IJ7I5HPBGiQgKoHdlKKfJd4ZBQld5Mty7fs3PG1B4xCzmpI9O0t2+IzAIUFB2om/wdvuUOZiGzBCdFTG1NXLZxRJlKVhr830FxdZw9omzVY7sobdOaGcNSr1GLS1SMhiwgGiOZV1cNr2OsnA3CxPelSMT36CyYoOc6uMmBqaNv13j/fnQuBfT2IbgMHfYIRasKhKIJCXI1HSVGOi/g74LLD+8UzjL5NRgwV1PMxDIpi748XgX/Hm3hzFdp3a97JZlM29wkxo2d3YJNL9m6UHyTawPPFI5cvPtBKPn1pU5JM9qM8pJm9J9U24vTna97D2xM6DRPKbJDUo3sZ6zcT9eQQ0azDdIu4p0DUuKqii7YCi3bvBjJjcgmtWFqwMclQPV1qF0IHASMts5rwMjPRj33EzIE3UyNx9muccnRCpg1uqFNFXHEDWPj2zuEMNxCEKQBwSlJB9tBWQgdtHeoslN34ArWMGiu2reyYOhMqx1JxnbYBepze1oLTun0Wba5fsjnuXjxQCwIeBBJGc8gs9yYEmT2/VqIoxph/XJQZg1S5cbGt5dBXQwsvE0piMVquFdNfxl012Wn5TrcR/CROgZUSUKN5jI/SFHVLzZU8XlIm3SeiZbQse0WDqKvRnvyQEOWxTdaLRm27MpY3L3M1E296t+C02pVx5FKpIsGJFHzejeWsx8t8J4Ny7LGHAIIaxOeEblPYWCPBiIaLAiqNalWiwLmbqmMN6H+7ZH8Vm+1qe4vY36mCZYXlXpZodQjboclxiRSEc4H+WrxEqg1EVfpW2kQMhhNnHCGpL9lGHLPdlhaQoJ0xwzEXDET9VhEkZ3iahQP5RpBLhZvRjKvOeu1WZsyBEjKVNtDjkqVMoCoL1BoZ27VJuN8JrosulgHy8KmRlPNuZctOgG0FzJeTLvlw5643sQd/KPsJgcQrqFk4crijq9X/bw4ZweV2idGCfAnN5TLYggysxdxQt8tE/ShMGU1SSmo4TsHF+QQyL4Y4lplRTdVrJtSBvB1RxFp+jhlzNaMS9OhDaSm7WN0i70ribhW6yMIPKQWvbU417My3tJilR4WIodRlM2LFyTyaPER5w1UV1KX2+SNQuxEyPoIB7wE5CFGPDVYUQCPZNbk3Ajn4xmw+GvMzVfNo/bZh7+CEjSaBUfVMexO3uo4UGJo4aQzWIplF06lkOl1srGOqBLFEvVjp19NEEN37EkMoLjzTuOUmiYhk6CgsA3+76HSqKCOxCKQeKRjYcGm/peuHN78XCatZhxM5vwW5sUKwhhKnWGDQKUKsFMEzcqNwx+xEGWVeE3QeATPQxX3+6MCiha6IYtByQCAXxkzAVKiUw4mA5FieiqfZyLDgGViUzw0JsuZlL8YwgOVDljM4T4eJ6WK48HoUdTdEtKqlD65z4v53tjFZAH4m4qUKzBlPNB7qiCeUCHmoYOhB4MOwKKQelqvqkxagdarpcNQgSxhoGQr9QcGabrE2Qa99FIxZEIDQAeHAhQU3TNLdK9rMvMXFGhJ2CKdnRSgtfNiW89j46V4X4H12mxemf1xm03SBv4IHgRhHfv0Ad1YowTAceEHbQSPkwzrpj8oMRwOFOgvAAk2RqfaeEuEkKgS3ybQaputS3M9Q1soNRYwB7pNIa73T3RO+/OC9vlcAdOqerMi/jpVkbMyMrzWkENxrizUHh4/si48B6qjyNdCTPOd3160SBUdoh25ehYwnWMSnqzwoRY23ezLu6hSnOw+TYqIwLUwgEqBFBqbYI8Dazf9p4lNeTVxhqgMe07FfutLsB3JCV6PBT7EymBlzrfQb2oobBGcwH11WoPxlkOg8w7A94mUTWhG9SO9IKSza6h/E6UKrLe0YAJzoEj2dYjs3UrtbCAyW0UgJsP51gLLdo8yF7dERrXaKJ2LeiWuhbDBRnFzjE2xWKy7nzYRkVrDDwnqx0Wn5VLR4OiQIaT2uG+OllXaHESDx
tOwCKUx6OM7Dv3CKQydUYAK1IvBGHYLE0PM4Nunl1io1O3QWWyp1xKGnaako2FYizoMDxylxenXM/QZrwJKwAjB2CfZ4TIdG4HtvRpnZLduCTGZo907W2Sq1DrfTRzJxdzizKmqvsppHa7yHI2aKmnqreim4WKdzOCL6GHzcahkdh7HiCc51dFkgcnWW3KYeagczTg0o7tXCA89xJJbj4a+l0r6DcLsx1agYHUaVFZ23m7xIIqz729XwcG2nQxk+xGbI0FGtTrbu6abNH+tYy1gDU2bZwidb6v8cFUy4PKACGjegNKyRWESY6NabMzlx4EBuvwGqQ1+P1tulGWJZzrzVLbeQkdlJD2aRCZF1ESVAJVCJoe5LO8OBpkblgTv2pKxYcqJYy5h4ErZDjdgNtY496x0lEE16GuTZ0zNrCKD6sWjWH5bFPSyA83pJ1irvduZQ+YlDlGIKBberdrsg9mO0SvQ3QJfEfXmCtMi76HxRDIjt53QWR2QWLeb/RdDHxr084XCdEM6+x1hoKVp02dejPnXpcYgpqLd2Pxw26DYNTDvHAU51chopyHyhmoyUwrKe9vY406qo5O5L06oLeBoS4OFk/DRdzrsuKhfAp1MAVwmZdwsVVph5BQ6wpkMTd+LQWMp4STPzd0hd/z65HgQgbYv8uIUKNAk1hOhTegw1CQrxtKxa/6quITx536R4ryUeGQxignS3AbYLueyHWxe1SyYdKG09D04amB7lM6yh1N7cDbaXCfCK8j5J2X4ZfmjI3A/khrLh4viTj6akwh8Zr0YcxcAmfPzj3m9/odPyI9s1fiS4x3eLdZHblNXPHYYYWCaae4VjfvkDTUtRGvPS1k9iF3w87TT5cOu1O27/EMVmze5gH1TDm91u6gwQmDJ+fKbaj7SuZqKO9zm24sSANJEZZ8BK2AZShWTtlvFg9RE1uY1sF2lg3kVEIIU2myb7EbhALhk/fc8iidF3oXgjEJvu6RU73RcMPYxE1we7nqPNvBW1daIcix4uMhv5ciHnRue8gnxE3HiRQu7Y5KLXxzd7q1wuIeb9mzJVL3RLPOaDbOFNzCUz9E0FUAvB/xm6nWEFLYdF6TttLZWkkMQVVUwCh1DpyqcqKVO5d7gSxj5RHb3zHFOB4UNxMmzklL01rZNOSB5tuPMUrxqJDdDdTGxaOuB2G3o7i4jHqMofabr2mdTXTjZGaa9bPdmHUvh1MLl2fzni8E+vCX8mpGa5cuYrlMluIAw8IJBrU5gCk4RAdSnKV6mljCo+RY9FgxdVwi7uO776rB4jcyB8UyKJ4Yiv1QFpTmC2og+4t/TalFx6MG54OpFlsC1fXOy6VSlUtI8WfKe5T9kc4CHBxOZIQDSLBbTtoONRFERteWlabmIcm6eqwme6hwsg2irqNSjgezFsSZgNdx2jDZvCnN1SmKRi8XXq80U/LrNSNvlGIbXSB4fhdOo8cDTj+H+2EPTDfs2lUJ2ehxA/7wXhvVIB78ibsYe5dZ+2PHF7YA39DBrvY3jkQA+Zauz5m1IYSbV543z89AxmmgDkvMqzuQoNn31ZGsBbRMkSPeUbiOIJ0kYyCX07coko/FPKESgktJxLjAwDUqpescMrj3Q0ukTDOrjeAPx9tc3hJmYlx8RqBso8aBBCzLxY1DJpt7finP80JfFuGo+Vg4tLDviKVYVR58tl12qrsLbVEm8cjH8bMFDcTYK+PMxA6G9ntcCzGcgDG6GSo1QdVVbtndMi/Y7kjcRnfk9NQ0FypBy6qm2tS7IL5SK8zVp649MtIC5aBw2YY3l1t3Oe9hWtFGSH5f9xZeZImlHm+bP4fyLXRhFGD3LjZpY8dRFANHRa4SOdw0Ovb2SzaSRlgcDGva+cQ4pW1P7rN5IKVW2SmHrBfHS8fYXtMYet1NsCazE1oox+uRhq4XeNHSRy0QTz4QTHLuzwVXjsvd8ki7cXYlBdwJpmshCF/pxoSXg36mbIwEegVhLTFhORXFwwxH+2
U/d3Wj8VoelXhOX1q4PsM9qObuMAo5h3bnEya6ONPxRO+kGSCt3MVKGLVdqJsWek1cpUvCnhknA8G62TX1dMUyyiKukTaol8o8CrJ4mA/GZuiIh9iRupDKBs1d4XHKsfuo7kfxsEfLlZHmYiVyqNsretAjHsGrts8fCFhtH5M13BmBW8DId03AEpcwJ+pwQnAXkBrJIT585NV7KQZIhYkHtR4zZBp6bVRIh9+ILAwSfYJyzLb8qOnhcKSCKLqx5vVwfDQSC/CRhPGSOo/IBbj4x1sjypFfhiRXsIkfpIvj97bRnnnKiLQzQemxnQ/jxtEW3HehSait04AQhsol6HThVNJ+FOX0lXeKRTJzkHHzJemFODYPtz4+Iyip324xS9IbZAPmuUSOfH6mNYlEsaPD5m4j3/xLMXVUrRs+nLslpawLDg0uR8HdkUxm7n7KI5hB534t7Ef5N94EGRE7lw7Ek8VmGAlqdQv4/sg9UsT+hnAU3zODHSrzoim3IcsPkcbh7nHdh55p0GlCcZSeGgCn8jKGIZtcUPd682uWfuwPtXgVwj6pHvM5FN7TETDLlPfshzndP/KoHISgjmxFPswA2wtyWwoDysQ1BdRrN+NDwxsDBYqa5G6ckzaZz0Ovx9rS97qiKOnpqllIh7+VZUwDmUIkiM4lpRKAwaE1QBzCQceUjRdpcAyH8DYPo2oWMxoaCs7aYuz2JAjoaTeMilv14M43GOFvfZf4tohrjJsPoIJeCfK53ISj9wpXqMyC1eEORAqO43zwV4iI50w0dwUGyhPRKRmd6qq0d+eA3RW2ce20ZekMV0gqyEutX8Squ91QHwEBsA24O9SEaRbU9RFzy67SKYWZaFMnaLcgxHhAJnS/I3tUuYAgrABszzkQzlB+p62TfMzIOmu6rDvhUVC5V2IcHT0eUxHLsp4bLl1qs6pw5ei9hFez1YCoHSP11ngknTU3baX0dorXHSkxFi02vB8zdEmkthrtVj47oLaC88F0C+CVtGWAT6M/OI1kpZ4AIUK5D8dEWbeTi59EphNSRl6cMWoVnNFKWHRD1/pUXU4T6jP9giXh8JjgnOmTd/GH1u98I4CDoq27phMwlxBAgcZN5tnguO6uG/w4dmWzdGckusObmed5UEKYRskVMvbalIZmQe9vhn0DJutYxD1nbO6jxAJdYGVDBbl01Rn1hatufUnN8fXYmcOYNg20T6hY2kmKt5uzYQhm3tQl74zuOSe0Y/8y3Du88BgjHsdzlbbLnRZNPD4g+/ukH7tog+X7pfcScXlkPne46m1KHOuYKsBNJj+qkb0v1dOwroqtVfwd18+W6hOZ12XK/XCprjAqkGqcGwmHYqsvYKoJKdNAG3nuKWEQTevmQIKhB/zRC+o9H8IN39syZiU94m8++P2ccCSAQDnbCtc5iivQZ3s83i0dv0TbWzMdfDxX58eshsvqAvqt2e1RuqdPLKXqbDVl1BHZZGMam7NaHQijna6nnYKgxW28PQqRhgYDHgEjXAP6KgADPOcDacuVLduXSMqLS7CqRWnWdMy6+D1W3jIK27sY7eMRKMJMGNfoJLY4zabLakbeQ1PSuTPzyBXwUrKhqS54+AXbp4bvtMod51Nfq9qdVtOZb8pKfHi8noheHsxzcODNFd7OOe0FgW9IIo2zsbknvanm+9PSoufD8EhD06sgHJvbEeGw4+aq3Q7tbFl+NieyF7SJA7A6D8SpRjMfhh12UbXYEbexboeQhTuogWzaws+a6nesIi+IfoliyODzcgzU2wFm+oMmlZRAZFBI+etSmExM+4eqE9qh0rl7IytxRaNImzJ1CKftrsSPzMNB9/PD+YYthACrknktzxaqgYQzP+9vt5Fuy2IyN7hFtDM8LHDKH6V8f8W7Nbfli0vf23rfPNRkRdghMV7G3uH7mszyMr9HIRWchzJe9GbFZAIOcL4J0Z1EKxy8oRqH0m84fHLVntOltfI3z6nHLCpxbwUqkUGDVWwy20
SFgZix78G5gfN71C+CUGA3Izeb9qbrDhcZiRoQoKm71U7uZypU9qx+ylo0C1XfkV0bmoy9AhbyZHK62RRfZhnNKGpOENn5DMwmJHTxQBw3+14fQcQCcA0chZ4I0WdgUpw2iHMNptxYkqUlR7JI4ROXjwHKBuo6k/0xWJy7DQ0OYYPyr4zFrn7e3junn9yjl4x1zLjaiuI2CL1Ag8ksh9W1jKyGaGtWXLcggCVMY+oSH8VbjxbXYayMId5GfjsP8g0TUydz1d6AUpJxdXt6LgQs2QS15cLu5hHmwOBwMFQU6jjkwqpZqDPm1ibSldAKswXHMeoqLjLDrkBDAh/n1/MtEXHUjuxN2ek3rHT3N7dfdtomS7syiR0yakPs1BdsdtN2bRGP5mj4o9gLq1k2ESm8deS871Ud8cpegD+3HOLTpbH0Lhv9Pv7Lqoitx35ZxmV97fzba0tOPq/RClZEauIu294UrCv58Y36+y++sK7kYxnvDbk9rb+EvF+LCXqDoJ+uxgRBTwtkgfPPayw+L1D4faiGfFzT8goRyZ9KxE+XxtKHoASLmfy30PBHlBrhn5Ya/VSyPK+6/rlSo7iKdmAbV7BoWenf7xvNPqLGx6uAfsclPT+zoCf8brXPDwj8Nct8Pi9z+qUFPZ/F58sLej4x9hcX9PyAxPgrJH4+99Xrfj49QQd7ALznMJh8g0Af/DzvuvZc2vZyAdm3A/HUyHtm+qTdP2HoDUoT5PPC3jj6Ubtg5W/ywwejHz/m7TB+8pgHz74bo6/bKdDRri3i3Y7YBo74enUadH11NxUO+YOG/9h9ugwxoNSjTO6PH7YNw4drDOPv+PCrdwP4mn3j0M8qGugNTNPIf8ZRz5fUSXKP/1Oyvb5q7lcstvePaZ8vLij8I9XPM0P/y9QP9nI9+e+kflAEqJ8fpXJeBzS/LuvCn2HcP7+n5fzsBpT/MtYlXmKw78S6OPrTWZf4irWGP+DdpzX4P2KPv+Tiz+8k9BK6fxZW/yJ88SdMUh9jKJz6u5zwoiWYpn4u4b9ifeLfhH9PrhfOGva8hcO3E/7lXlck/nMJ/xX75fzobXjhFxGJ10IS8HOs57vvZPWqqfoa7/f3Tlav7GT1H6wm/jf326X+gg0/qxxg6EdtjPYqOz0z9P/crlTfQJ1njfhi3ffXtqRCXiHe99iP6nXU+hf7sUL/uo2Mvp1YBPIxrYhPaPVaPuB7iNnRhvuOOR3lC/5/B2uwOLLj3u0/+HurotfI+qF/9vrwPQVdPvTPPisS32+rosetG7r0lw8ueIqe/SUMe6EpnsTxr6DWZ68GifjH0/8uuPrsgP7vbGX3DSrkr3c7+1E64/Wtel/dre5pq97HfnWg/4///6V79n57rurFhnUw9gpefw1g/bBkFfkKXH9HxP2/eL/lb6cd/HGgAYM+pd1P3V2Z/AoH9NvCpR/ZyE8Twi9H+J35fgMhxEcmHEOwzxvx7eBlqPSdYf8TegOR78z5U4M09XcyB39J9i9GXJ/d2C9a9OdNxn6RAAsKvYgIwC/Y72vjKyj+cUPvNsT+QnjlW6HIS3QBP237+Ffv9S6X/7xTIP5d0cirqdBXBO0pE/r6Dp//9ZnQt6rlr1UhSIViH8fy0f+MfZdnJfvxHT8uT0q9gl7+Ee0JvaEp8mMHiCb/M+1JkB9rT5Qm/nnt+XpOFkJ+KfUJv4yh0H9TfWIvi5JewvDvpT5fqEMM/r7e2Wdt3n9vUO474EH02Zd/dos/DbehPwgOvk6Ur0hE/UpEwX8CTWDonyYK/c8R5QvZEOxFruILx49kCIx+6YY31ItbPsyffNDGVyRECMinYfJbEiIISxKPEuAvJkS+fU9V9IWefSW4/mpmBP9RrIUi/13y/ncyI99OJoJ+Q3/4g31ENYp+g1KfEu71rMib5xj990+MvKKqCaj5d0Q3/4bS/li0SPITCv2ouObr1HnF3XtHHfi/OST2PYj1bNE+Y2F/VBTsVWoRrxQNbvTB94
HfbfSaAb2+Ua7+nZTDCOKzqhH+VC/+VKhEvBqJPvzfRrqNmn0a9/5vWj5L4Quv7xXfA35ti/cfFoumvgKMfLdoCvzxoD6mFb1PC7/BsRfBYxrCvmv444sx4Wdo9osENbAX3AL/3ZgwDn2hoR89JeWVrPBTIBb9lN3+DYHYt3L1uUAsRMLIx0T5LjzzJ/laoz8hLPvdJ899qyJ5F5Yl8I/rUiDoPwjLbrfDxMdqCYP+1myYv1RLX55FQL2qlr4myUX/k/oLpV5EdfC/WTOMwegb9IPpCC8EByLf0Nh7fPRs036WcnsFyD4pt1dE4F+h3D4/325TbpvDTn0fE/jT9NerhUvlv8B5/N71FO+I8g94kq+T7iumn/3aAbXvQBUUfzG5GfvUL/xRbuHrVPkvyzX9mLTGC6K84qz/XKL8g7mm32mNP75jWuNFOA95JuRPyWu8zltfUdb2Kwn8r5DXeDed4CdlNV4n3Cu5zv/lrAbxMYme5yj/U+Xaz5Usr5Kn+G9Gpt+DWi8Ks1+zsD8VjNKvzp36ndZ4RTO+oNxrqvCngqPnNWK+JpHxPy93L0LTr01q+bmZDPoVB/6T6MvPCEDCb2Dy+fgpALmBtW8NQP4ljb64XA71ixW2Uy/qjdEXTXx1EPBFJQ/y4+J8rzPY7zADQEcvJtMgnxZ9/Fyl/TvM8ClRiH+aKL/DDP/OMAMGvVKF93PDDPTvMMMrdCKhN8iHYYaPsTWGw28I+lPK/exIA/070vCZ6DCGfkqinxppeCfJr9Lnv3pS8fcg1zNw/YyVpX+mlYWh37GGr401vABIry239VMR0rv5c18TbPiflzzyheQ91+n9Y8EGGPp1yp2wx4LsH87hf57T/7fKnXD0XdHlUxUmRv/xo8qdPus9fHESKv16XdQ/Vq75cjXT50jMN0c66BfqinzxLj840gFDv0osbfObIOoj7qaJb+fuv8+Kv9o0Z5T6mDPw54V5/9Ng2s9nsW9bd/cHTON/r/KQF3Xnf6u884P6048YFke+sC7atzDsJxWgX44GY/8ov75YOBpG/qZKRDH08w396CLPVzyjf/dSIk8S+rkqT5R8Xqz7xSogv/AuC9CrUzjLf0FW77vXeRIvitdemd/5U1PrMPSPL5z1nW3G1+r5f0x9Ix/LN/JyjfavV9/U5xv64XDjK9Zz+M0635V1XgQ3qb859wN94VV90tCPZp3XTP/vlRZftRgv4ABGYP+wxXhlbs4GW7ZeECUgT9Btn67g0yaV0Kdn33zVqc1J/VSZ/BpZi+9A1BdRb/iVeQWvhd6+x+Lkf0HTr/Acn9N32c0H2vAv8n1fTBOW4AvGD4vrY1g/yLnyj5/tkscTdvfmkRl8jPnTM9m075ut82Dn2e1fGFXQm2xD4wnIBXZvwu0BCB/5Pdi9FZwHW0c3aX3ffrs/SzCQf451CRA1j4CSVD7otheJ+/ufj3d5ugRGqDdNdX33rnW3Nf78nlGc+MODWV/kkt+9/ffgj4/LZelPE8SvWZnvsvHAX/DHt23s8t7U3zdL0n8DAvgmow+9n+1+eUIBf2ttv3dhho+DDOhzlOw7BBn+26bJwyjxhiI+2KzqY8AAMtof7mWFvNiO8asn0cPoG5iAsNcnoeIbRcn/b+9qVhgGYfAbidba7j563GmvsB4Gg7Ky92cWVNSkJXbW7u9YEJF+MX5+iYkXgI90uUxVA4WM9BS+vGRLe9Hh29RctWeht/26O/vSZ/7G2cw6R215qgkxqF6z+awSCQ4gwO/cj/qnfSbHncUcOD4qA3KVlEpUTsIAghvseLv2xm/8JEKgw7dCIsRFMaKQ1DS9IXZmjlsIzS1CanHgcg21QDqtgHgDAZN9auOAbHD6sa6YRx64iKd1t7tSoYa1/JUmVQE7Qnv5tNs084GUmWpnlDIpCH9dePC4l+W6JDF7nvDICdFF1Zk3OLlLVUcLrkzFt21pZJrWX6Q95xt7w7qdOiAGZmWTbZNTA5C5atp1Jhv4aW
r9H3wm/ezvcLeuThFZnFY0uWR4/TkOw8MfPqXcn4bLdDfsng==</diagram></mxfile>
2304.11436/paper_text/intro_method.md ADDED
@@ -0,0 +1,150 @@
+ # Introduction
+
+ Federated Learning (FL) [\[29\]](#page-9-0) is a distributed learning paradigm in which each party sends the gradients or parameters of its locally trained model to a centralized server that learns a global model from the aggregated gradients/parameters. While this process allows clients to hide their private datasets, a malicious server can still manage to reconstruct private data (visible only to the individual client) from the shared gradients/parameters, exposing serious privacy risks [\[31,](#page-9-1) [45,](#page-9-2) [51\]](#page-9-3).
+
+ One effective solution against such attacks is Federated Learning with Model Distillation (FedMD) [\[22\]](#page-8-0), where each party uses both its *private data* and a public dataset for local training and sends its predicted logits on the public dataset (termed *public knowledge*) to the server, which then performs aggregation and sends the aggregated logits back to each party for the next iteration. Therefore, in FedMD, the server can only access the predicted logits on the public dataset, thus preventing leakage from gradients or parameters while retaining data and model heterogeneity with low communication cost [\[27\]](#page-9-4). Prior work [\[4,](#page-8-1) [15,](#page-8-2) [19,](#page-8-3) [27,](#page-9-4) [30\]](#page-9-5) suggests that FedMD is a viable defense for the original FL framework, and many studies [\[4,](#page-8-1) [6,](#page-8-4) [15,](#page-8-2) [22,](#page-8-0) [25\]](#page-8-5) consider FedMD a suitable solution to real-life problems. This has catalyzed a broad application of FedMD-like schemes [\[5,](#page-8-6)[6\]](#page-8-4) in industrial scenarios [\[17,](#page-8-7) [19,](#page-8-3) [26,](#page-9-6) [30,](#page-9-5) [38,](#page-9-7) [48,](#page-9-8) [49\]](#page-9-9).
+
+ Despite its popularity, few studies have investigated the safety of sharing *public knowledge* in FedMD. Contrary to the common belief that FedMD is reliably safe, we find that it is actually vulnerable to serious privacy threats, where an adversary can reconstruct a party's *private data* from the shared *public knowledge* alone. Such a reconstruction attack is nontrivial: to guarantee that the adversary recovers *private data* strictly from *public knowledge*, the attack must satisfy two inherent principles: *Knowledge-decoupling* and *Gradient-free*. 'Gradient-free' means that the adversary can recover private data without access to gradients. 'Knowledge-decoupling' means that the attack 'decouples' private information from the knowledge learned on public data, and directly recovers private data. However, existing inversion attacks [\[12,](#page-8-8) [44,](#page-9-10) [51\]](#page-9-3) fail to meet the 'knowledge-decoupling' requirement, as they do not consider the case where the target model is trained on both private and public datasets. The recent TBI [\[44\]](#page-9-10) proposed a gradient-free method with an inversion network, but still relies on the availability of private data (as shown in Tab. [1\)](#page-1-0)).
+ In this paper, we propose a novel *Paired-Logits Inversion* (PLI) attack that is both gradient-free and fully decouples private data from public knowledge. We observe that a local model trained on *private data* will produce more confident predictions than a server model that has not seen the *private data*, thus creating a "confidence gap" between a client's logits and the server's logits. Motivated by this, we design a logit-based inversion neural network that exploits this confidence gap between the predicted logits from an auxiliary server model and those from the client model (*paired-logits*). Specifically, we first train an inversion neural network model based on the public dataset. <span id="page-1-2"></span>The input of the inversion network is the predicted logits of server-side and client-side models on the public data. The output is the original public data (e.g., raw image pixels in an image classification task). Then, through confidence gap optimization via paired-logits, we can learn an estimation of server and client logits for the target private data. To ensure the image quality of reconstructed data, we also propose a prior estimation algorithm to regulate the inversion network. Lastly, we feed those estimated logits to the trained inversion model to generate original private data. To the best of our knowledge, this is the first study to investigate the privacy weakness in FedMD-like schemes and to successfully recover private images from shared *public knowledge* with high accuracy.
+
+ <sup>\*</sup>The University of Tokyo, takahashi-hideaki567@g.ecc.u-tokyo.ac.jp
+
+ <sup>†</sup>Institute for AI Industry Research, Tsinghua University, jjliu@air.tsinghua.edu.cn
+
+ <sup>‡</sup>Corresponding Author, Institute for AI Industry Research, Tsinghua University, Shanghai Artificial Intelligence Laboratory, liuy03@air.tsinghua.edu.cn
+
+ We evaluate the proposed PLI attack on the facial recognition task, where privacy risks are imperative. Extensive experiments show that despite the fact that logit inversion attacks are more difficult to accomplish than gradient inversion attacks (logits inherently contain less information than gradients), the PLI attack successfully recovers original images in the private datasets across all tested benchmarks. Our contributions are three-fold: 1) we reveal a previously unknown privacy vulnerability of FedMD; 2) we design a novel paired-logits inversion attack that reconstructs private data using only the output logits of the public dataset; 3) we provide a thorough evaluation with quantitative and qualitative analysis to demonstrate successful PLI attacks on FedMD and its variants.
+
+ <span id="page-1-0"></span>Table 1. Comparison between PLI attack and existing inversion attacks. GF: *Gradient-free*. KD: *Knowledge-decoupling*.
+
+ | Method | Leak | GF | KD |
+ |---|---|---|---|
+ | MI-FACE [12] | logit/gradient | ✗ | ✗ |
+ | TBI [44] | logit | ✓ | ✗ |
+ | DLG [51], GS [13]<br>iDLG [47], CPL [42] | gradient | ✗ | ✗ |
+ | mGAN-AI [41],<br>Secret Revealer [46],<br>GAN Attack [14] | parameter | ✗ | ✗ |
+ | PLI (Ours) | logit | ✓ | ✓ |
32
+
33
+ # Method
34
+
+ In this study, we investigate the vulnerability of FedMD-like schemes and design successful attacks that can breach FedMD to recover private confidential data. As a case study, we choose the image classification task, specifically face recognition, as our focus scenario. This is because in real applications (e.g., financial services, social networks), the leakage of personal images poses severe privacy risks. In this section, we first provide a brief overview of the FedMD framework with notations. Then, we give a formal definition of the image classification task under the federated learning setting.
+
+ **FedMD** [22] is a federated learning setting where each of K clients has a small private dataset $D_k := \{(x_i^k, y_i^k)\}_{i=1}^{N_k}$ , which is used to collaboratively train a classification model of J classes. There is also a public dataset $D_p := \{(x_i^p, y_i^p)\}_{i=1}^{N_p}$ shared by all parties. Each client k trains a local model $f_k$ with parameters $W_k$ on either $D_k$ , $D_p$ or both, and sends its predicted logits $\mathbf{l}^k := \{\mathbf{l}_i^k\}_{i=1}^{N_p}$ on $D_p$ to the server, where $\mathbf{l}_i^k = f_k(W_k; x_i^p)$ . The server aggregates the logits received from all clients to form consensus logits $\mathbf{l}^p := \{\mathbf{l}_i^p\}_{i=1}^{N_p}$ . The consensus logits are then sent back to the clients for further local training (See the complete algorithm in Appendix B).
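To make the protocol concrete, one communication round can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the clients here are hypothetical callables standing in for trained local models $f_k$, and simple mean aggregation is assumed for the consensus step.

```python
import numpy as np

def fedmd_round(clients, x_public, aggregate=lambda L: np.mean(L, axis=0)):
    """One FedMD communication round.

    clients: list of callables f_k mapping public inputs to logits
             of shape (N_p, J).
    Returns (per_client_logits, consensus_logits); the consensus is
    what the server sends back for the next round of local training.
    """
    per_client = [f_k(x_public) for f_k in clients]   # l^k = f_k(W_k; x_i^p)
    consensus = aggregate(per_client)                 # l^p, aggregated on the server
    return per_client, consensus

# Toy example: 3 "clients" as fixed linear models, N_p=4 public samples, J=2 classes
rng = np.random.default_rng(0)
x_pub = rng.standard_normal((4, 5))
clients = [(lambda W: (lambda x: x @ W))(rng.standard_normal((5, 2)))
           for _ in range(3)]
logits, consensus = fedmd_round(clients, x_pub)
```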
+
+ Several schemes follow FedMD with slight variations. For example, FedGEMS [6] proposes a larger server model $f_0$ with parameters $W_0$ to further train on the consensus logits and transfer knowledge back to clients. DS-FL [16] uses an unlabeled public dataset and entropy reduction aggregation (see Appendix B for details). These frameworks communicate and transfer knowledge only through public logits, which is the targeted setting of our attack.
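As an illustration of how such a variant changes only the aggregation step, below is a sketch of entropy reduction aggregation in the spirit of DS-FL: the averaged client predictions are sharpened with a low-temperature softmax, reducing the entropy of the consensus. The function names and exact form here are our assumptions, not code from [16].

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def era_aggregate(client_probs, T=0.1):
    """Entropy Reduction Aggregation (sketch): average the clients'
    predicted distributions, then sharpen with temperature T < 1."""
    mean_pred = np.mean(client_probs, axis=0)          # (N_p, J)
    return softmax(np.log(mean_pred + 1e-12) / T, axis=-1)

# 2 clients, 1 public sample, 2 classes
probs = np.array([[[0.6, 0.4]], [[0.5, 0.5]]])
sharp = era_aggregate(probs, T=0.5)
```

With T=0.5 the averaged prediction [0.55, 0.45] is pushed further toward its argmax, which is the intended entropy-reduction effect.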
+
+ **Image Classification** The task can be defined as follows. Let $(x_i^k, y_i^k)$ denote image pixels and the class label ID, respectively, and let L denote the set of target labels $\bigcup_{k=1}^K \{y_i^k\}_{i=1}^{N_k}$ . We aim to use the public logits $l^k$ obtained for $D_p$ to reconstruct the private class representation $x_j$ for any target label $j \in L$ . We further assume that $D_p$ is made up of two disjoint subsets $D_0 \coloneqq \{(x_i^0, y_i^0)\}$ and $D_a \coloneqq \{(x_i^a, y_i^a)\}$ , where $D_0$ consists of public data of non-target labels, while $D_a$ contains images from a different domain for all target and non-target labels. For example, in face recognition tasks, the feature space of $D_a$ might include an individual's insensitive images, such as masked or blurred faces (see Fig. 1). We also assume the server is honest-but-curious, meaning it cannot observe or manipulate gradients, parameters, or architectures of local models.
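The roles of the label sets can be made concrete with a short sketch; all label values below are hypothetical:

```python
# Target labels L: union over clients of their private label sets
private_labels = {"client1": {0, 1}, "client2": {1, 2}}
L = set().union(*private_labels.values())   # target labels from private data

# Public data splits: D_0 holds non-target labels only, while D_a covers
# a different domain (e.g., masked faces) for all target/non-target labels
all_labels = set(range(5))
D0_labels = all_labels - L                  # non-target labels only
Da_labels = all_labels                      # any label, different domain
```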
44
+
45
+ <span id="page-1-1"></span>![](_page_1_Figure_10.jpeg)
46
+
47
+ Figure 1. Examples from three datasets containing private and public data. The public dataset consists of two subsets: $D_p = D_0 \cup D_a$ . $D_0$ and $D_k$ consist of images of non-target and target labels (Unmasked (LFW), Adult (LAG) and Clean (FaceScrub)), while $D_a$ contains images from a different domain (Masked (LFW), Young (LAG), Blurred (FaceScrub)).
48
+
49
+ In this section, we first describe how to train the inversion neural network for private data recovery (Sec. [3.1\)](#page-2-0), then explain how to estimate private logits by optimizing the confidence gap between the logits predictions of server and clients (Sec. [3.2\)](#page-2-1). To prevent too much deviation from real data, we also introduce a prior estimation to regulate the inversion network training (Sec. [3.3\)](#page-3-0). Lastly, we will explain how to recover private data using the trained inversion network, the estimated logits and the learned prior (Sec. 3.4).
50
+
51
+ Logit-based inversion is a model inversion attack that recovers the representations of the original training data by maximizing the output logits *w.r.t.* the targeted label class of the trained model. This inversion typically requires access to model parameters, which are not accessible in FedMD. To perform a gradient-free inversion attack, [\[44\]](#page-9-10) proposes a training-based inversion method (TBI) that learns a separate inversion model on an auxiliary dataset, taking in output logits and returning the original training data. Inspired by this, we insert an inversion attack on the server side of FedMD. However, we show that logit-based inversion is more challenging than gradient-based inversion, since logits inherently contain less information about the original data (see Appendix E). To tackle this challenge, we train a server-side model $f_0$ with parameters $W_0$ on the public dataset. The server-side logits are then computed as:
52
+
53
+ $$\mathbf{l}_i^0 = f_0(W_0; x_i^0) \tag{1}$$
54
+
55
+ Next, we train an inversion model, denoted $G_\theta$, using the client-server paired logits $\mathbf{l}^k$ and $\mathbf{l}^0$ on the public data subset $D_0$ only:
56
+
57
+ $$\begin{aligned} \min_{\theta} \sum_{i} ||G_{\theta}(p_{i,\tau}^{0}, p_{i,\tau}^{k}) - x_{i}^{0}||_{2} \\ p_{i,\tau}^{0} &= \operatorname{softmax}(\boldsymbol{l}_{i}^{0}, \tau), \quad p_{i,\tau}^{k} &= \operatorname{softmax}(\boldsymbol{l}_{i}^{k}, \tau) \end{aligned} \tag{2}$$
58
+
59
+ where $\operatorname{softmax}(\cdot, \tau)$ is the softmax function with temperature $\tau$. Note that the distribution gap between $\mathbf{l}^0$ and $\mathbf{l}^k$ is the key driver for successfully designing such a paired-logits inversion attack, as will be demonstrated in Sec [3.2.](#page-2-1)
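A minimal temperature-scaled softmax can make the role of $\tau$ concrete. The convention $\operatorname{softmax}(\mathbf{l}, \tau)_u = e^{l_u/\tau} / \sum_v e^{l_v/\tau}$ is an assumption (the excerpt does not spell out the formula):

```python
import numpy as np

def softmax_with_temperature(logits, tau=1.0):
    """Temperature-scaled softmax; tau > 1 flattens, tau < 1 sharpens."""
    z = np.asarray(logits, dtype=float) / tau
    z -= z.max()                 # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Sharper distribution at low temperature, same ranking of classes.
p = softmax_with_temperature([2.0, 1.0, 0.1], tau=0.5)
```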
60
+
61
+ To enhance the quality of reconstruction, we leverage the auxiliary domain features $D_a$ to obtain a prior estimation for each data sample $\bar{x}_i$ via a translation algorithm (detailed in Sec. [3.3\)](#page-3-0). The reconstruction objective is therefore summarized as:
62
+
63
+ <span id="page-2-2"></span>
64
+ $$\min_{\theta} \sum_{i} ||G_{\theta}(p_{i,\tau}^{0}, p_{i,\tau}^{k}) - x_{i}^{0}||_{2} + \gamma ||G_{\theta}(p_{i,\tau}^{0}, p_{i,\tau}^{k}) - \bar{x}_{i}||_{2} \tag{3}$$
66
+
67
+ ![](_page_2_Picture_10.jpeg)
68
+
69
+ Figure 2. Overview of PLI. After receiving the output logits $\mathbf{l}^k$ on the public data $D_p$ from the k-th client, the server applies softmax to the logits corresponding to $D_0$ of both the client and server models and forwards them to the inversion model $G_\theta^k$. The inversion model reconstructs the original input. We first optimize the inversion model with Eq. [3,](#page-2-2) utilizing the prior data synthesized from $D_a$ by the transformation model $A_\phi$. Then, the server recovers the private data of the target labels with the optimal logits and trains the inversion model with Eq. [10.](#page-4-0) The final reconstructed class representation is picked from the set of reconstructed data with Eq. [11.](#page-4-1)
70
+
71
+ The second term enforces the recovered data to be close to the prior estimation.
72
+
73
+ Since the server already has access to the public dataset and its corresponding output logits, it can train $G_\theta^k$ to minimize Eq. [3](#page-2-2) via backpropagation (line 6 in Algo [1\)](#page-3-1). The architecture of the inversion model typically consists of transposed convolutional layers [\[9,](#page-8-12)[44\]](#page-9-10). Notice that the inversion model is trained with public data only and does not observe any private data at training time. Once the inversion model is trained, it is able to reconstruct data in the public dataset.
74
+
75
+ However, the model cannot be used directly to reconstruct private data with sensitive labels yet, since it has never seen any true logits of the private data, the distribution of which is different from that of public data. Most existing inversion attacks [\[12](#page-8-8)[–14,](#page-8-10) [41,](#page-9-13) [42,](#page-9-12) [46,](#page-9-14) [47,](#page-9-11) [51\]](#page-9-3) require either gradients or parameters, thus not *gradient-free*. To reconstruct private data without gradients, we propose a new path for estimating the input logits of private data, by exploiting the *confidence gap* between server and clients, as below.
76
+
77
+ We observe that the server and client models exhibit a confidence gap on their predictions of the public data, manifested by their predicted logits (Fig. 3). Specifically, the client model is more confident on the private dataset, resulting in a higher entropy of the server logits. To exploit this, we propose a new metric for optimizing private logits, then analytically obtain the optimal solution, which is used as the input logits for the inversion model to generate private data.
+
+ <span id="page-3-2"></span>![](_page_3_Figure_0.jpeg)
+
+ Figure 3. Confidence gap between the server and the client under the FedMD setting on public and private data. This figure shows the normalized histogram of entropy on the public and local datasets and the estimated distribution. Lower entropy means that the model is more confident. The client consistently has higher confidence on the private dataset than the server, indicating a significant confidence gap.
+
+ **Algorithm 1** Paired-Logits Inversion (PLI)
+ **Input:** The number of communication rounds T, the number of clients K, the set of target labels L, the inversion models $G_{\theta}^{k}$ , the translation model $A_{\phi}$ , the global model $f_{0}$ , the softmax temperature $\tau$ , and the public dataset $D_{p}$ .
+ **Output:** Class representations of the target labels in L.
+ 1: Generate prior $\bar{x}_j$ for each $j \in L$ with $A_{\phi}$
+ 2: **for** $t = 1 \leftarrow T$ **do**
+ 3: &emsp; Receive $\{\boldsymbol{l}_i^k\}$ from clients $k = 1 \dots K$
+ 4: &emsp; **for** $k = 1 \leftarrow K$ **do**
+ 5: &emsp;&emsp; Train $G_{\theta}^k$ with Eq. 3 on $D_0$
+ 6: &emsp;&emsp; Train $G_{\theta}^k$ with Eq. 10 for each $j \in L$
+ 7: &emsp; Update the global logits by aggregating $\{\boldsymbol{l}_i^k\}$
+ 8: &emsp; Train $f_0$ and send the global logits to each client
+ 9: **for** $j \in L$ **do**
+ 10: &emsp; Reconstruct $\{\hat{x}_j^k \leftarrow G_{\theta}^k(\hat{p}_{j,\tau}^0, \hat{p}_{j,\tau}^k)\}_{k=1}^K$
+ 11: &emsp; $\hat{x}_j \leftarrow$ pick the best from $\{\hat{x}_j^k\}$ with Eq. 11
+ 12: **return** $\{\hat{x}_j\}_{j \in L}$
88
+
89
+ Specifically, we adopt the following metric to measure the quality of the reconstructed class representation $x_j^k$ for arbitrary target label j owned by the k-th client:
90
+
91
+ $$Q(x_j^k) := p_{\underline{j},\tau}^k + p_{\underline{j},\tau}^0 + \alpha H(p_{j,\tau}^0) \tag{4}$$
93
+
94
+ where $p_{\underline{j},\tau}^k$ and $p_{\underline{j},\tau}^0$ denote the j-th elements of $p_{j,\tau}^k$ and $p_{j,\tau}^0$ respectively. $H(\cdot)$ is the entropy function, and $\alpha$ is a weighting factor. The first and second terms grow bigger when the client and server are more confident that the recovered data belongs to class j, while the third term penalizes strong confidence of the server's prediction, which plays an important role in differentiating public and private feature spaces. With Eq. 4, finding the optimal input logits for our inversion attack against FedMD is equivalent to the following maximization problem:
97
+
98
+ <span id="page-3-4"></span>
99
+ $$\mathop{\arg\max}_{p_{j,\tau}^k,\, p_{j,\tau}^0} Q(x_j^k) \tag{5}$$
100
+
101
+ Recall that the private and public data domains for the target label j are different. Since the server-side model is not directly trained on the private dataset, we can assume that $f_0$ returns less confident outputs on private data than $f_k$ does. In this sense, Q reinforces the knowledge-decoupling principle: Q increases when the reconstructed data for j resembles the private data, and decreases when it is too close to the public data. For instance, Q takes a lower value when the reconstructed data is masked, even if it is still classified as label j.
102
+
103
+ **Analytical Solution** We can analytically solve Eq. 5 w.r.t. $p_{j,\tau}^k$ and $p_{j,\tau}^0$ to obtain the optimal values for target label j that maximize Q, as follows:
104
+
105
+ $$\hat{p}_{\underline{u},\tau}^{k} = \begin{cases} 1 & (u=j) \\ 0 & (u \neq j) \end{cases}, \quad \hat{p}_{\underline{u},\tau}^{0} = \begin{cases} \frac{\sqrt[\alpha]{e}}{J-1+\sqrt[\alpha]{e}} & (u=j) \\ \frac{1}{J-1+\sqrt[\alpha]{e}} & (u \neq j) \end{cases} \tag{6}$$
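The server-side part of this closed form can be sanity-checked numerically: maximizing $p_{\underline{j}} + \alpha H(p)$ over the probability simplex should be achieved by the stated solution. The sketch below is an independent check under that reading, not the authors' code.

```python
import numpy as np

def optimal_server_probs(J, j, alpha):
    # Closed-form maximizer of p[j] + alpha * H(p) over the J-simplex:
    # p[j] = e^(1/alpha) / (J - 1 + e^(1/alpha)), all other entries equal.
    top = np.e ** (1.0 / alpha)              # the alpha-th root of e
    p = np.full(J, 1.0 / (J - 1 + top))
    p[j] = top / (J - 1 + top)
    return p

def objective(p, j, alpha):
    entropy = -np.sum(p * np.log(p + 1e-12))  # small eps guards log(0)
    return p[j] + alpha * entropy

J, j, alpha = 10, 3, 2.0
p_star = optimal_server_probs(J, j, alpha)

# Random points on the simplex should never beat the closed-form solution.
rng = np.random.default_rng(1)
best_random = max(objective(rng.dirichlet(np.ones(J)), j, alpha)
                  for _ in range(2000))
```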
108
+
109
+ Detailed derivation can be found in Appendix A. Given an inversion model $G_{\theta}^{k}$ that takes paired logits and returns the original input x, we have the following equation:
110
+
111
+ $$\arg\max_{x_{j}^{k}} Q(x_{j}^{k}) = G_{\theta}^{k}(\hat{p}_{j,\tau}^{0}, \hat{p}_{j,\tau}^{k}) \tag{7}$$
113
+
114
+ The empirical impact of feature space gap between public and private data is examined in Appendix E.
115
+
116
+ To prevent the reconstructed image from being unrealistic, we design a prior estimation component to regulate the inversion network. A naive approach is to use the average of $D_0$ (public data from the same domain) as the prior data for any target label j:
117
+
118
+ <span id="page-3-5"></span>
119
+ $$\bar{x}_j = \frac{1}{|D_0|} \sum_{i \in D_0} x_i^0 \tag{8}$$
120
+
121
+ <span id="page-3-3"></span>Here the server uses the same prior for all the target labels.
122
+
123
+ <span id="page-4-4"></span>When the public dataset is labeled, such as for FedMD and FedGEMS, the adversary can generate a tuned prior for each label by converting $D_a$ with a state-of-the-art translation method such as an autoencoder [7, 24, 35] or a GAN [18, 20, 50]. Specifically, the server can estimate the prior data $\bar{x}_j$ for target label j with the translation model $A_\phi$ as follows:
126
+
127
+ <span id="page-4-2"></span>
128
+ $$\bar{x}_j = \frac{1}{|D_a|} \sum_{i \in D_a} A_\phi(x_i^a) \tag{9}$$
130
+
131
+ Once the inversion model is trained with Eq. 3, the attacker uses the optimal logits (obtained in Sec. 3.2) to estimate the private data with $G_{\theta}^{k}$. To make the final prediction more realistic, the attacker further fine-tunes the inversion model by restricting the distance between the reconstructed images and their prior data (obtained in Sec. 3.3), in the same way as in Eq. 3:
132
+
133
+ $$\min_{\theta} \sum_{j \in L} \gamma ||G_{\theta}^{k}(\hat{p}_{j,\tau}^{0}, \hat{p}_{j,\tau}^{k}) - \bar{x}_{j}||_{2} \tag{10}$$
135
+
136
+ <span id="page-4-0"></span>Finally, given a reconstructed image from each client, the attacker needs to determine which of the K images $\{\hat{x}_j^k\}_{k=1}^K$ is the best estimate for target label j. We design the attacker to pick the most distinct and cleanest image as the best-reconstructed data based on: 1) distinctness from the other clients' reconstructions, measured with the SSIM metric [40]; 2) data readability, measured by Total Variation (TV) [36], which is commonly used as a regularizer [13, 45]. Specifically, the attacker chooses the reconstructed private data $\hat{x}_j$ by:
137
+
138
+ $$\underset{\hat{x}_{j}^{k}}{\operatorname{arg\,min}} \sum_{k'=1, k' \neq k}^{K} \operatorname{SSIM}(\hat{x}_{j}^{k}, \hat{x}_{j}^{k'}) + \beta \operatorname{TV}(\hat{x}_{j}^{k}) \tag{11}$$
139
+
140
+ where $\hat{x}^k_j = G^k_{\theta}(\hat{p}^0_{j,\tau},\hat{p}^k_{j,\tau})$ . A lower SSIM indicates that the image is not similar to any other reconstructed private image, and a smaller TV means the image has better visual quality.
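A minimal sketch of this selection rule follows. It is illustrative only: a simplified single-window SSIM (not the full windowed metric of [40]) and a basic anisotropic TV are assumed.

```python
import numpy as np

def global_ssim(a, b, c1=1e-4, c2=9e-4):
    # Simplified single-window SSIM between two grayscale images in [0, 1].
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2))

def total_variation(img):
    # Anisotropic total variation: sum of absolute neighbor differences.
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def pick_best(recons, beta=0.1):
    # Eq. 11: choose the reconstruction with the lowest summed pairwise SSIM
    # (most distinct) plus a TV penalty (cleanest).
    scores = [sum(global_ssim(x, y) for m, y in enumerate(recons) if m != k)
              + beta * total_variation(x)
              for k, x in enumerate(recons)]
    return int(np.argmin(scores))

rng = np.random.default_rng(2)
reconstructions = [rng.random((8, 8)) for _ in range(4)]
best_k = pick_best(reconstructions)
```

With a balanced $\beta$, a noisy candidate is rejected by its large TV term even when its pairwise SSIM is low.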
141
+
142
+ <span id="page-4-3"></span>![](_page_4_Figure_8.jpeg)
143
+
144
+ Figure 4. SSIM distribution (left) and Reconstructed Images (right) for LAG.
145
+
146
+ The reasoning for choosing Eq. 11 as the criterion is demonstrated in Fig. 4, where the left plot shows the distribution of the first term of Eq. 11 for clients who own the private data for the target label (blue) and clients who do not (red). Fig. 4 also provides reconstructed examples from all clients, where k=3 is the ground-truth client, whose recovered image has the (almost) lowest SSIM. The figure also demonstrates the importance of $\beta$, the trade-off weight in Eq. 11: a highly noisy image (k=8) can yield an even lower SSIM, but it is penalized by its high TV score under a balanced weight ($\beta > 0.1$ in this case).
149
+
150
+ The complete algorithm is shown in Algo 1.
2305.05560/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2023-01-16T20:30:05.417Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" etag="5jx-7nbSnVcLVXfo7nnE" version="20.8.5" type="device"><diagram id="1EITDPjxAA1jL1cddy4x" name="Pagina-1">5VnbctowEP0az7QPMJZv4EduSdpJZ5jQS9I3gYVRaywiRIB+fVe2hPGFQC4kJHlIol3JK+vs0VnJMezOdHXO8WzyjQUkMiwzWBl217AsG1nwWzrWqQM1zdQRchooV+YY0H9EOfWwBQ3IPDdQMBYJOss7RyyOyUjkfJhztswPG7MoP+sMh2pGM3MMRjgipWG/aCAmqbdpNTL/BaHhRM+MPD/tmWI9WIWYT3DAlltz2T3D7nDGRNqarjokkthpXNIXOtvRu3kxTmJxyAOeiK//kC/fzcuv4c+aL75RvKzp9NzhaKFWbLidT/AzXwznRJBbaH6GAZ3uj4Fah1hrcARZwdTtiZhG4EDQnAvO/pIOixgHT8xiGNke0ygquHBEwxjMEbw8AX/7jnBBAfaW6pjSIJDTtJcTKshghkdyziVwDHycLeKAyHWZYJWBUNjImGS15VLAnBM2JYKvYYimqauyr1jqOcpeZjn3lGuylW5b+bBiWbiJnCUCGioXD8iLZuPevPTPPk5WGui1s4JKWelcvF/8N0DqXdEo4+9W4O8cDf8dagW7ZWtjJJbcG+9aspzm/uRUbY7jJccpJ6eIPomDlizKGaoBnk8SWFA+K9LfxwKQjhOPZdqbXOlCjEq4jlks1BkCeWADvHx9rToT40YadVeb3dV2Z3etrRUV12oG2d56CqzsIWnoZwTmIRF9wingSbjMP43DTfjgjEo870//nC34iBygQOlkBxR2EujzzQ467dnL2sdJhAW9y5+KqjikZugzCmvLpNwqSLlZoGG6cvXU9immGKhRoL1bCKTSUAyUUHqz7INY7pCbq+Bq0LWHv2ctC7Nw2mrUyhXAsLxIKO7l2O7dLpjuqM0TVrZggD9bJRTQ3dAK5d/e4EqHgjdLo6U95W0ESb3EQzhp5/bM4QrFCbwPHibxJCFnEq4EQLdtuN1Kit6/6Yu6tTmRq1mM7UNvlZ7VzLq8HTwL25DdyJGkVgzBxmOoFU/kRzUg7sdWwRcXOi0spyJ0dlHo3EcKnVes71a9EOrIUldx2HomqRv03qLU6Z39ZKkz68jWtFjnAj1V9ywvH/XlZM87Qdl7Tfk6LVVyUEGVnJKYHKpLpXuHf7QDWDWyjRNk2hsssPqCsL/AntZNokhlz3vkTcIpfvOzXvYmYR+tvF7SmGAOAxaCRlSs32Ct1Zv8Oa4VyD5Gra0hB+XCguMI1baSOlVfWj4gdXZ9M3swSSzfr/u+17Qc33dRw23miyUc2EzTadoucpqe6+xmyo7vZ2Bm/+ZJOZD9r8zu/Qc=</diagram></mxfile>
2305.05560/main_diagram/main_diagram.pdf ADDED
Binary file (10.7 kB). View file
 
2305.05560/paper_text/intro_method.md ADDED
@@ -0,0 +1,290 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Multi-objective sequential decision making is a complex process that involves trade-offs between multiple, often conflicting, objectives. As the preferences over these objectives are typically not known a priori, it is challenging to find a single optimal solution, and instead, a set of solutions that are considered optimal can be presented to the decision maker [@roijers2013survey]. To keep decision support tractable, it is necessary to reduce the size of the solution sets as much as possible. Therefore, defining appropriate solution sets that do not retain excess policies while guaranteeing that no concessions are made to optimality, as well as designing corresponding pruning algorithms is essential [@taboada2007practical].
4
+
5
+ A solution set that is often considered appropriate in both multi-objective decision making and multi-objective optimisation is the Pareto front [@roijers2013survey]. The Pareto front consists of the policies that lead to Pareto optimal expected payoffs and thus contains all policies which are optimal for decision makers interested in optimising the utility from these expected returns [@hayes2022practical]. However, it is known that the Pareto front does not necessarily contain all optimal policies for problems where the decision maker optimises for their expected utility instead [@hayes2022expected].
6
+
7
+ To address this limitation, we introduce a novel dominance criterion, called distributional dominance, which relates the multivariate return distributions of policies directly. Distributional dominance relies on first-order stochastic dominance, which is known to imply greater expected utility for univariate distributions [@fishburn1974convex; @bawa1985determination], and has also been explored for multivariate distributions [@denuit2013multivariate; @levy2016bivariate]. Based on distributional dominance, we propose the *distributional undominated set (DUS)* as a novel solution set and show that it contains all optimal policies for the class of multivariate risk-averse decision makers defined by Richard [@richard1975multivariate]. Furthermore, we show that it is a superset of the Pareto front and as a result is a suitable starting set which can be further pruned to smaller subsets for specific scenarios.
8
+
9
+ While the DUS contains no distributionally dominated policies, it may still contain policies which will never be chosen in the expected utility setting. Therefore, we introduce a second solution set, the *convex distributional undominated set (CDUS)*, which includes only those policies that are undominated by a mixture of policies in the DUS. We find that the CDUS is a subset of the DUS and contains all optimal policies for multivariate risk-averse decision makers. While in general the CDUS and the Pareto front do not coincide, both sets are shown to include the convex hull.
10
+
11
+ From a computational perspective, we contribute algorithms to prune a set of policies to its DUS or CDUS. As these pruning methods rely on the quality of the input set, we present an extension of the Pareto Q-learning algorithm [@vanmoffaert2014multiobjective] to learn return distributions and only discard those policies that are not in the DUS. We evaluate our approach on randomly generated MOMDPs of different sizes and compare the sizes of the resulting sets after pruning. As our goal is to use these sets in a decision support scenario, keeping their sizes reasonable and algorithms tractable both in terms of runtime and memory enables decision makers to efficiently select their preferred policy[^1].
12
+
13
+ Sequential decision making is often formalised using Markov Decision Processes (MDPs) which provide a mathematical framework for modelling settings in which an agent must choose an action at each time step based on the current state of the system. To address real-world situations where decision makers must consider multiple conflicting objectives, MDPs can be generalised to Multi-Objective Markov Decision Processes (MOMDPs) which allow for vectorial reward functions [@roijers2017multiobjective].
14
+
15
+ ::: definition
16
+ **Definition 1**. A multi-objective Markov decision process is a tuple $M = (\mathcal{S}, \mathcal{A}, T, \gamma, \mathbf{R})$, with $d \geq 1$ objectives, where:
17
+
18
+ - $\mathcal{S}$ is the state space;
19
+
20
+ - $\mathcal{A}$ is the set of actions;
21
+
22
+ - $T \colon \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \left[ 0, 1 \right]$ is the transition function;
23
+
24
+ - $\gamma \in [0, 1]$ is the discount factor;
25
+
26
+ - $\mathbf{R} \colon \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}^d$ is the vectorial reward function.
27
+ :::
28
+
29
+ In a MOMDP, a decision maker takes sequential actions by means of *policy* $\pi: \mathcal{S} \times \mathcal{A} \to [0, 1]$ which maps state-action pairs to a probability. We denote the set of all policies by $\Pi$.
30
+
31
+ We take a distributional approach [@bellemare2023distributional; @hayes2022decision] and consider the multivariate return distributions of these policies. The return $\mathop{\mathrm{\mathbf{Z}^{\pi}}}= \left(Z_1^{\pi}, \dotsc, Z_d^{\pi}\right)^\text{T}$ is a random vector where each $Z_i^{\pi}$ is the marginal distribution of the $i$'th objective such that, $$\begin{equation}
32
+ \mathop{\mathrm{\mathbb{E}}}\left[\mathop{\mathrm{\mathbf{Z}^{\pi}}}\right] = \mathop{\mathrm{\mathbb{E}}}\left[\sum_{t=0}^\infty \gamma^t \mathbf{r}_t \mid \pi, \mu_0\right] = \left(\mathop{\mathrm{\mathbb{E}}}\left[Z_1^{\pi} \right], \dotsc, \mathop{\mathrm{\mathbb{E}}}\left[Z_d^{\pi}\right] \right)^\text{T}.
33
+ \end{equation}$$ For notational simplicity, when considering the expected returns directly we will write this as $\mathop{\mathrm{\mathbf{V}^{\pi}}}= \left(V_1^{\pi}, \dotsc, V_d^{\pi}\right)^\text{T}$.
34
+
35
+ Multi-objective decision making presents additional complexity compared to traditional decision making, as it is not possible to completely order the return of different policies. Pareto dominance introduces a partial ordering by considering a vector dominant when it is greater or equal for all objectives and strictly greater for at least one objective. We say a policy Pareto dominates a second policy when the expected value of its return distribution is Pareto dominant.
36
+
37
+ ::: {#def:pareto-dominance .definition}
38
+ **Definition 2**. Let $\pi, \pi' \in \Pi$. Then $\pi$ Pareto dominates $\pi'$, denoted by $\mathop{\mathrm{\mathbf{V}^{\pi}}}\mathop{\mathrm{\succ_\text{p}}}\mathop{\mathrm{\mathbf{V}^{\pi'}}}$, when $\forall i, V_i^\pi \geq V_i^{\pi'} \land \exists i, V_i^\pi > V_i^{\pi'}.$
39
+ :::
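Definition 2 translates directly into a check on expected-return tuples (an illustrative sketch, not tied to any particular library):

```python
def pareto_dominates(v, w):
    """Definition 2: v >= w in every objective and > in at least one."""
    return (all(a >= b for a, b in zip(v, w))
            and any(a > b for a, b in zip(v, w)))

assert pareto_dominates((0.5, 0.5), (0.45, 0.45))
assert not pareto_dominates((1.0, 0.0), (0.0, 1.0))   # incomparable vectors
assert not pareto_dominates((0.5, 0.5), (0.5, 0.5))   # equality is not dominance
```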
40
+
41
+ When the expected return of $\pi$ is equal to that of $\pi'$ or Pareto dominates it, we denote this by $\mathop{\mathrm{\mathbf{V}^{\pi}}}\mathop{\mathrm{\succeq_\text{p}}}\mathop{\mathrm{\mathbf{V}^{\pi'}}}$.
42
+
43
+ First-order stochastic dominance (FSD) is a well-known dominance criterion from decision theory and economics, which relates return distributions directly [@levy2016stochastic; @denuit2013multivariate]. Let $F_\mathbf{X}(\mathbf{x}) = P(\mathbf{X} \mathop{\mathrm{\preceq_\text{p}}}\mathbf{x})$ be the cumulative distribution function (CDF) of a random vector $\mathbf{X}$, denoting the probability that the random vector takes on a value Pareto dominated by or equal to $\mathbf{x}$. Informally, we say that $\mathbf{X}$ first-order stochastically dominates another distribution $\mathbf{Y}$ when it always has a higher probability of obtaining Pareto dominant returns.
44
+
45
+ ::: {#def:fsd .definition}
46
+ **Definition 3**. A policy $\pi$ first-order stochastically dominates another policy $\pi'$, denoted by $\mathop{\mathrm{\mathbf{Z}^{\pi}}}\mathop{\mathrm{\succeq_\text{FSD}}}\mathop{\mathrm{\mathbf{Z}^{\pi'}}}$, when, $$\begin{equation*}
47
+ \forall \mathbf{v} \in \mathbb{R}^d: F_{\mathop{\mathrm{\mathbf{Z}^{\pi}}}}(\mathbf{v}) \leq F_{\mathop{\mathrm{\mathbf{Z}^{\pi'}}}}(\mathbf{v}).
48
+ \end{equation*}$$
49
+ :::
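For discrete bivariate return distributions given as probability mass grids over a shared support, Definition 3 can be checked by comparing joint CDFs pointwise. The finite-support representation below is an assumption for illustration:

```python
import numpy as np

def joint_cdf(pmf):
    # F(v) = P(Z <=_p v): cumulative sums along both objective axes.
    return pmf.cumsum(axis=0).cumsum(axis=1)

def fsd(pmf_x, pmf_y, tol=1e-12):
    """Definition 3 on a shared grid: F_X(v) <= F_Y(v) everywhere."""
    return bool(np.all(joint_cdf(pmf_x) <= joint_cdf(pmf_y) + tol))

# A point mass on the best outcome FSD-dominates the uniform distribution.
point = np.zeros((2, 2)); point[1, 1] = 1.0
uniform = np.full((2, 2), 0.25)
```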
50
+
51
+ # Method
52
+
53
+ We take a utility-based approach to multi-objective decision making [@roijers2013survey] and assume that for any decision maker a utility function $u: \mathbb{R}^d \to \mathbb{R}$ exists that represents their preferences over the objectives. We consider the class of strictly monotonically increasing utility functions, denoted by $\mathcal{U}$. Intuitively, such utility functions imply that any decision maker prefers more of each objective, given all else equal.
54
+
55
+ ::: {#def:strictly-increasing .definition}
56
+ **Definition 4**. A function $f: \mathbb{R}^d \to \mathbb{R}$ is called strictly monotonically increasing if, $$\begin{equation*}
57
+ \forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^d: \mathbf{x} \mathop{\mathrm{\succ_\text{p}}}\mathbf{y} \implies f(x) > f(y).
58
+ \end{equation*}$$
59
+ :::
60
+
61
+ In the utility-based approach, there is often a need to optimise for an entire class of decision makers or a decision maker for which we do not know the exact utility function. In this case, it is necessary to identify a set of policies that contain an optimal policy for all possible utility functions. A further complication arises from the fact that different optimality criteria exist depending on how the utility is derived [@roijers2013survey]. For scenarios where a decision maker's utility is derived from multiple executions of a policy, the scalarised expected returns (SER) criterion can be optimised, $$\begin{equation}
62
+ V_{u}^{\pi} = u\left(\mathbb{E} \left[ \sum\limits^\infty_{t=0} \gamma^t {\bf r}_t \:|\: \pi, \mu_0 \right]\right).
63
+ \label{eqn:ser}
64
+ \end{equation}$$ Alternatively, it is possible that the decision maker only executes their policy once and therefore aims to optimise their expected utility. In the utility-based approach, this is known as the expected scalarised returns (ESR) criterion, $$\begin{equation}
65
+ \label{eqn:esr}
66
+ V_{u}^{\pi} = \mathbb{E} \left[ u\left( \sum\limits^\infty_{t=0} \gamma^t {\bf r}_t \right) \:|\: \pi, \mu_0 \right].
67
+ \end{equation}$$ It is well-established that, in general, optimal policies under one criterion need not be optimal under the other criterion [@roijers2013survey; @vamplew2022impact].
68
+
69
+ One of the most common solution sets in the literature is the Pareto front (PF), formally defined in [5](#def:pareto-front){reference-type="ref+label" reference="def:pareto-front"} [@roijers2017multiobjective]. We stress that this solution set is presented in the context of the SER criterion as it is based on the expected returns of the policies.
70
+
71
+ ::: {#def:pareto-front .definition}
72
+ **Definition 5**. The Pareto front is the set of all policies that are not Pareto dominated: $$\begin{equation}
73
+ \mathop{\mathrm{\text{PF}(\Pi)}}= \left\{ \pi \in \Pi \mid \nexists \pi' \in \Pi, \mathop{\mathrm{\mathbf{V}^{\pi'}}}\mathop{\mathrm{\succ_\text{p}}}\mathop{\mathrm{\mathbf{V}^{\pi}}}\right\}.
74
+ \end{equation}$$
75
+ :::
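Pruning a finite set of expected-return vectors to the Pareto front follows directly from the definition (an illustrative sketch; no particular MOMDP solver is assumed):

```python
def pareto_front(values):
    # Keep exactly the vectors not Pareto dominated by any other (Definition 5).
    def dominates(v, w):
        return (all(a >= b for a, b in zip(v, w))
                and any(a > b for a, b in zip(v, w)))
    return [v for v in values if not any(dominates(w, v) for w in values)]

points = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5), (0.3, 0.3)]
front = pareto_front(points)   # (0.3, 0.3) is dominated by (0.5, 0.5)
```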
76
+
77
+ A second solution set that is often considered is the convex hull (CH) which contains all policies that are optimal under linear utility functions and is therefore applicable under both SER and ESR [@hayes2022practical]. Additionally, when stochastic policies are allowed, the convex hull can be used to construct all Pareto optimal policies [@vamplew2009constructing].
78
+
79
+ ::: {#def:convex-hull .definition}
80
+ **Definition 6**. The convex hull is the set of all policies that are not Pareto dominated by a convex combination of other policies, $$\begin{equation}
81
+ \mathop{\mathrm{\text{CH}(\Pi)}}= \left\{\pi \in \Pi \mid \nexists \lambda \in \Delta^{|\Pi|}: \sum_{i=1}^{|\Pi|} \lambda_i \mathbf{V}^{\pi_i} \mathop{\mathrm{\succ_\text{p}}}\mathop{\mathrm{\mathbf{V}^{\pi}}}\right\}.
82
+ \end{equation}$$
83
+ :::
84
+
85
+ We note that solution sets based on return distributions have also been considered, with for example the ESR set [@hayes2022expected]. In this work, we extend this line of research and provide additional theoretical and computational results.
86
+
87
+ While most of multi-objective decision making focuses on returning the Pareto front, we demonstrate that this does not cover the full range of optimal policies. Specifically, for decision makers optimising their expected utility, the best policy in the Pareto front may still be significantly worse than a Pareto dominated policy. To overcome this, we propose a novel dominance criterion and subsequently construct a solution set based on this criterion.
88
+
89
+ To understand why it is necessary to construct these novel solution sets, and in particular why a distributional approach is appropriate, it is helpful to consider a motivating example.
90
+
91
+ ::: {#exmp:utility .example}
92
+ **Example 1**. Imagine a hospital patient needing to decide on a treatment plan with their doctor. Their objectives are to maximise the efficacy of the treatment, denoted $v_1$, while also maximising their comfort (i.e. minimise the side-effects), denoted $v_2$. Unfortunately, these objectives are conflicting. In previous discussions with their doctor, the patient mentioned that they wish to strike a balance between the two. A fitting utility function is the product between the two objectives ([\[eq:utility-function\]](#eq:utility-function){reference-type="ref+label" reference="eq:utility-function"}) as it is maximised when values are closer together. $$\begin{equation}
93
+ \label{eq:utility-function}
94
+ u(v_1, v_2) = v_1 \cdot v_2
95
+ \end{equation}$$ The doctor then proposes the following two treatment plans. $$\begin{equation*}
96
+ \begin{split}
97
+ A & = \left\{P(v_1=1, v_2=0) = \frac{1}{2}, P(v_1=0, v_2=1) = \frac{1}{2}\right\}\\
98
+ B & = \left\{P(v_1=0.45, v_2=0.45) = 1\right\},\\
99
+ \end{split}
100
+ \end{equation*}$$ with $\mathop{\mathrm{\mathbb{E}}}[A] = (0.5, 0.5)$ and $\mathop{\mathrm{\mathbb{E}}}[B] = (0.45, 0.45)$.
101
+
102
+ When taking the standard approach and applying Pareto dominance, it is clear that the expected return of $A$ dominates that of $B$. In contrast, when considering the distributions on the basis of expected utility, $A$ has an expected utility of 0, while $B$ has an expected utility of 0.2025. As the patient will most likely follow the treatment plan only once, they aim to optimise their expected utility and thus prefer distribution $B$.
103
+ :::
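The numbers in the example can be verified directly; this is a worked check of Example 1 with $u(v_1, v_2) = v_1 \cdot v_2$:

```python
# Treatment plans as (outcome vector, probability) pairs, as in Example 1.
plan_a = [((1.0, 0.0), 0.5), ((0.0, 1.0), 0.5)]
plan_b = [((0.45, 0.45), 1.0)]

def expected_utility(plan, u):
    return sum(p * u(*outcome) for outcome, p in plan)

def expected_return(plan):
    return tuple(sum(p * v[i] for v, p in plan) for i in range(2))

u = lambda v1, v2: v1 * v2
# A Pareto dominates B in expectation, yet B has the higher expected utility.
```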
104
+
105
+ As this example shows, it is pertinent to consider exactly what the decision maker aims to optimise for: do they optimise for repeated execution of the same policy, or maximising the expected utility from one execution? In the former case, they may well decide based on the expected value of the distribution. In the latter case, however, taking the full distribution of returns into account is key to effective decision support.
106
+
107
+ To address the limitations of Pareto dominance, we introduce the *distributional dominance* criterion. This criterion states that a distribution dominates another when it is first-order stochastic dominant and at least one of the marginal distributions *strictly* first-order stochastic dominates the related marginal distribution of the second distribution.
108
+
109
+ ::: {#def:dist-dom .definition}
110
+ **Definition 7**. A policy $\pi$ distributionally dominates another policy $\pi'$, denoted by $\mathop{\mathrm{\mathbf{Z}^{\pi}}}\mathop{\mathrm{\succ_\text{d}}}\mathop{\mathrm{\mathbf{Z}^{\pi'}}}$, when, $$\begin{equation*}
111
+ \mathop{\mathrm{\mathbf{Z}^{\pi}}}\mathop{\mathrm{\succeq_\text{FSD}}}\mathop{\mathrm{\mathbf{Z}^{\pi'}}}\land \exists i \in [d]: Z_i^\pi \mathop{\mathrm{\succ_\text{FSD}}}Z_i^{\pi'}.
112
+ \end{equation*}$$
113
+ :::
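Extending the finite-support FSD check, distributional dominance adds a strict-FSD test on at least one marginal. The sketch below assumes the same discrete-grid representation as before:

```python
import numpy as np

def strict_fsd_1d(p, q, tol=1e-12):
    # Strict univariate FSD: F_p <= F_q everywhere and strictly somewhere.
    F, G = p.cumsum(), q.cumsum()
    return bool(np.all(F <= G + tol) and np.any(F < G - tol))

def dist_dominates(pmf_x, pmf_y, tol=1e-12):
    """Definition 7 for bivariate pmfs on a shared grid."""
    fsd_joint = bool(np.all(
        pmf_x.cumsum(axis=0).cumsum(axis=1)
        <= pmf_y.cumsum(axis=0).cumsum(axis=1) + tol))
    strict_marginal = (
        strict_fsd_1d(pmf_x.sum(axis=1), pmf_y.sum(axis=1))
        or strict_fsd_1d(pmf_x.sum(axis=0), pmf_y.sum(axis=0)))
    return fsd_joint and strict_marginal

point = np.zeros((2, 2)); point[1, 1] = 1.0
uniform = np.full((2, 2), 0.25)
```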
114
+
115
+ One can verify that distributional dominance is equivalent to strict first-order stochastic dominance in the case of random vectors when all variables are independent. In general, however, distributional dominance is a stronger condition than strict first-order stochastic dominance as the condition on the marginal distributions implies strict FSD but is not implied by it. Defining distributional dominance as such enables us to guarantee a strictly greater expected utility for a large class of decision makers and leads to the general solution set discussed in [4](#sec:solution-set){reference-type="ref+label" reference="sec:solution-set"}.
116
+
117
+ For the class of decision makers with utility functions in $\mathcal{U}$, we show that when a given random vector has *strictly* greater expected utility for all utility functions than a second random vector, this implies distributional dominance.
118
+
119
+ ::: {#th:u-implies-dd .theorem}
120
+ **Theorem 1**. *Let $\mathbf{X}$ and $\mathbf{Y}$ be d-dimensional random vectors. Then, $$\begin{equation*}
121
+ \forall u \in \mathcal{U}: \mathop{\mathrm{\mathbb{E}}}u(\mathbf{X}) > \mathop{\mathrm{\mathbb{E}}}u(\mathbf{Y}) \implies \mathbf{X} \mathop{\mathrm{\succ_\text{d}}}\mathbf{Y}.
122
+ \end{equation*}$$*
123
+ :::
124
+
125
+ ::: proofsketch
126
+ We first show an additional lemma stating that the condition implies first-order stochastic dominance. Therefore, the proof reduces to showing the condition on the marginals. It suffices to show that if $\mathbf{X}$ does not distributionally dominate $\mathbf{Y}$, it is always possible to construct a utility function for which $\mathop{\mathrm{\mathbb{E}}}u(\mathbf{Y})$ is at least as high as $\mathop{\mathrm{\mathbb{E}}}u(\mathbf{X})$.
127
+ :::
128
+
129
+ In practice, it is impossible to verify whether the expected utility of a given random vector is always strictly greater than that of a second random vector. On the other hand, we will demonstrate that it is computationally feasible to verify distributional dominance (see [4.2](#sec:computing-dus){reference-type="ref+label" reference="sec:computing-dus"}). We now show that distributional dominance implies a strictly greater expected utility for a subset of utility functions in $\mathcal{U}$. The condition we impose is referred to as "multivariate risk-aversion", which means that a decision maker in this class will, when confronted with a choice between two lotteries, always avoid the lottery containing the worst possible outcome [@richard1975multivariate]. Below, we present the theorem and proof for bivariate distributions. We note that for FSD this property has been shown to hold for $n$-dimensional random vectors as well [@scarsini1988dominance].
130
+
131
+ ::: {#th:dd-implies-u .theorem}
132
+ **Theorem 2**. *Let $\mathbf{X}$ and $\mathbf{Y}$ be two-dimensional random vectors. Then $\forall u \in \mathcal{U}$ with $\frac{\partial^2 u(x_1, x_2)}{\partial x_1 \partial x_2} \leq 0$, $$\begin{equation*}
133
+ \mathbf{X} \mathop{\mathrm{\succ_\text{d}}}\mathbf{Y} \implies \mathop{\mathrm{\mathbb{E}}}u(\mathbf{X}) > \mathop{\mathrm{\mathbb{E}}}u(\mathbf{Y}) .
134
+ \end{equation*}$$*
135
+ :::
136
+
137
+ ::: proofsketch
138
+ The proof utilises the fact that first-order stochastic dominance implies greater or equal expected utility [@hayes2022expected]. We subsequently show that the additional condition on the marginal distributions for distributional dominance implies strictly greater expected utility.
139
+ :::
140
+
141
+ We adopt distributional dominance to define the distributional undominated set (DUS). The DUS satisfies two important desiderata: it contains the Pareto front, i.e. the optimal set under SER, and it contains all optimal policies for multivariate risk-averse decision makers under ESR. The deferred proofs for the theoretical results can be found in the supplementary material.
142
+
143
+ As the name suggests, the distributional undominated set contains only those policies which are not pairwise distributionally dominated. We define this formally in [8](#def:du-set){reference-type="ref+label" reference="def:du-set"}.
144
+
145
+ ::: {#def:du-set .definition}
146
+ **Definition 8**. The distributional undominated set is the set of all policies that are not distributionally dominated: $$\begin{equation}
147
+ \mathop{\mathrm{\text{DUS}(\Pi)}}= \left\{ \pi \in \Pi \mid \nexists \pi' \in \Pi, \mathop{\mathrm{\mathbf{Z}^{\pi'}}}\mathop{\mathrm{\succ_\text{d}}}\mathop{\mathrm{\mathbf{Z}^{\pi}}}\right\}.
148
+ \end{equation}$$
149
+ :::
150
+
151
+ From this definition it is clear that all policies which are optimal for multivariate risk-averse decision makers are in the set. To show that the Pareto front is a subset as well, we first introduce [3](#lemma:dd-implies-pd){reference-type="ref+label" reference="lemma:dd-implies-pd"}, stating that distributional dominance implies Pareto dominance.
152
+
153
+ ::: {#lemma:dd-implies-pd .lemma}
154
+ **Lemma 3**. *For all policies $\pi, \pi' \in \Pi$, $$\begin{equation*}
155
+ \mathop{\mathrm{\mathbf{Z}^{\pi}}}\mathop{\mathrm{\succ_\text{d}}}\mathop{\mathrm{\mathbf{Z}^{\pi'}}}\implies \mathop{\mathrm{\mathbf{V}^{\pi}}}\mathop{\mathrm{\succ_\text{p}}}\mathop{\mathrm{\mathbf{V}^{\pi'}}}.
156
+ \end{equation*}$$*
157
+ :::
158
+
159
+ ::: proofsketch
160
+ The proof works by utilising a known link between the expected value of a random variable and its cumulative distribution function. Then, the conditions for distributional dominance imply that the expected value of each marginal distribution is greater or equal, and that the expected value of at least one marginal distribution is strictly greater.
161
+ :::
162
+
163
+ Leveraging [3](#lemma:dd-implies-pd){reference-type="ref+label" reference="lemma:dd-implies-pd"}, it is a straightforward corollary that the Pareto front is a subset of the DUS.
164
+
165
+ ::: {#co:pf-subset-duset .corollary}
166
+ **Corollary 1**. *For any family of policies $\Pi$, the Pareto front is a subset of the distributional undominated set, i.e., $$\begin{equation*}
167
+ \mathop{\mathrm{\text{PF}(\Pi)}}\subseteq \mathop{\mathrm{\text{DUS}(\Pi)}}.
168
+ \end{equation*}$$*
169
+ :::
170
+
171
+ We highlight that our dominance results and solution sets are not restricted to MOMDPs but apply to any stochastic multi-objective decision problem with vector-valued outcomes.
172
+
173
+ To deal with return distributions computationally, we project distributions to multivariate categorical distributions [@bellemare2023distributional; @hayes2022expected]. This ensures that finite memory is used, and, importantly, that computations can be performed efficiently. Concretely, to verify first-order stochastic dominance, we need only compare a finite number of points as the CDF is a multivariate step function with steps at $\mathbf{v}_1, \mathbf{v}_2, \dotsc, \mathbf{v}_n$. Formally, for the categorical distribution $\mathbf{X}$ the cumulative distribution at $\mathbf{x}$ is computed as follows,
174
+
175
+ $$\begin{equation}
176
+ F_{\mathbf{X}}(\mathbf{x}) = \sum_{\mathbf{v}_i \mathop{\mathrm{\preceq_\text{p}}}\mathbf{x}} p(\mathbf{v}_i).
177
+ \end{equation}$$
178
+
179
+ Additionally, discrete distributions enable straightforward computation of marginal distributions, thus having all ingredients to check distributional dominance (see [7](#def:dist-dom){reference-type="ref+label" reference="def:dist-dom"}). Then, starting from a given set of policies, the DUS can be computed using a modified version of the Pareto Prune (PPrune) algorithm [@roijers2017multiobjective] that checks for distributional dominance rather than Pareto dominance. We refer to the resulting pruning algorithm as *DPrune*.
180
+
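The modification to PPrune amounts to swapping in the distributional dominance test. A minimal sketch of such a pruning operator, with the dominance check passed in as a callable (`dprune` is an illustrative name, not the paper's implementation):

```python
def dprune(dists, dominates):
    # Distributional analogue of PPrune: keep exactly those distributions
    # that no other member of the set dominates under `dominates`.
    return [Z for i, Z in enumerate(dists)
            if not any(dominates(dists[j], Z)
                       for j in range(len(dists)) if j != i)]
```

Because the dominance relation is strict, duplicate distributions never eliminate each other and both copies are retained.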
181
+ As the DUS is a superset of the Pareto front and further contains optimal policies under ESR, we can intuitively assume that it might grow very large in size, thereby complicating its practical use in decision support systems. When considering SER, it is possible to reduce the set to the Pareto front by utilising existing pruning operators [@roijers2017multiobjective]. We contribute a similar approach for ESR and present both the resulting solution set as well as a pruning algorithm for this purpose.
182
+
183
+ For univariate distributions, it has been shown that a mixture distribution can be constructed that first-order stochastic dominates another distribution if and only if for any decision maker there exists a distribution in the mixture which is preferred over the dominated distribution [@fishburn1974convex; @bawa1985determination]. Mixture dominance has also been considered for multivariate distributions [@denuit2013multivariate].
184
+
185
+ Here, we show that convex distributional dominance implies greater expected utility for multivariate risk-averse decision makers when considering bivariate distributions.
186
+
187
+ ::: {#th:mixture-dom-implies-u .theorem}
188
+ **Theorem 4**. *Let $\{\mathbf{X}_1, \dotsc, \mathbf{X}_n\}$ and $\{\mathbf{Y}_1, \dotsc, \mathbf{Y}_n\}$ be sets of two-dimensional random vectors. Then, $$\begin{equation*}
189
+ \exists \lambda \in \Delta^n: \sum_{i=1}^n \lambda_i \mathbf{X}_i \mathop{\mathrm{\succ_\text{d}}}\sum_{i=1}^n \lambda_i \mathbf{Y}_i,
190
+ \end{equation*}$$ implies that $\forall u \in \mathcal{U}$ with $\frac{\partial^2 u(x_1, x_2)}{\partial x_1 \partial x_2} \leq 0$, $$\begin{equation*}
191
+ \exists i \in [n]: \mathop{\mathrm{\mathbb{E}}}u(\mathbf{X}_i) > \mathop{\mathrm{\mathbb{E}}}u(\mathbf{Y}_i).
192
+ \end{equation*}$$*
193
+ :::
194
+
195
+ ::: proofsketch
196
+ The proof follows from [2](#th:dd-implies-u){reference-type="ref+label" reference="th:dd-implies-u"} and linearity of expectation.
197
+ :::
198
+
199
+ Observe that in the special case where all random vectors $\mathbf{Y}_i$ are equal to a common $\mathbf{Y}$, mixture dominance implies that every decision maker in the class will prefer some random vector $\mathbf{X}_i$ over $\mathbf{Y}$.
200
+
201
+ We define a final solution set, called the convex distributional undominated set (CDUS), that contains only those policies which are undominated by a mixture of distributions. [4](#th:mixture-dom-implies-u){reference-type="ref+Label" reference="th:mixture-dom-implies-u"} guarantees that for all decision makers in the class there is an optimal policy contained in the set. We define the CDUS formally below. It follows from this definition that the CDUS is a subset of the DUS.
202
+
203
+ ::: {#def:convex-du-set .definition}
204
+ **Definition 9**. The CDUS is the set of all policies that are not distributionally dominated by a convex mixture: $$\begin{equation*}
205
+ \mathop{\mathrm{\text{CDUS}(\Pi)}}= \left\{ \pi \in \Pi \mid \nexists \lambda \in \Delta^{|\Pi|}: \sum_{i=1}^{|\Pi|} \lambda_i \mathbf{Z}^{\pi_i} \mathop{\mathrm{\succ_\text{d}}}\mathop{\mathrm{\mathbf{Z}^{\pi}}}\right\}.
206
+ \end{equation*}$$
207
+ :::
208
+
209
+ Given the myriad of solution sets in multi-objective decision making, it is useful to define a complete taxonomy between them. From [1](#co:pf-subset-duset){reference-type="ref+label" reference="co:pf-subset-duset"}, we know that the Pareto front is a subset of the DUS. Additionally, it follows from [9](#def:convex-du-set){reference-type="ref+label" reference="def:convex-du-set"} that the CDUS is also a subset of the DUS. Earlier work has shown that the convex hull is a subset of the Pareto front [@roijers2017multiobjective] and we show that this is also true for the CDUS.
210
+
211
+ ::: {#co:ch-subset-cesr .corollary}
212
+ **Corollary 2**. *For any family of policies $\Pi$, $$\begin{equation*}
213
+ \text{CH}(\Pi) \subseteq \mathop{\mathrm{\text{CDUS}(\Pi)}}.
214
+ \end{equation*}$$*
215
+ :::
216
+
217
+ The final missing piece of the puzzle is the relation between the CDUS and the Pareto front. Here, one can find counterexamples which disprove that the CDUS is either a subset or a superset of the Pareto front. The landscape of solution sets for multi-objective decision making can then be summarised as shown in [1](#fig:solution-sets){reference-type="ref+label" reference="fig:solution-sets"}.
218
+
219
+ <figure id="fig:solution-sets" data-latex-placement="h">
220
+ <embed src="solution-sets.pdf" />
221
+ <figcaption>A taxonomy of solution sets in multi-objective decision making.</figcaption>
222
+ </figure>
223
+
224
+ To prune a set of distributions to its CDUS, we must check for each distribution whether it is dominated by a mixture of the other distributions. Fortunately, this verification is feasible by restating the problem using linear programming. Concretely, we extend an algorithm that checks whether a univariate distribution is convex first-order stochastic dominated to our setting [@bawa1985determination]. We show the resulting linear program *CDPrune* in [\[alg:cdd\]](#alg:cdd){reference-type="ref+label" reference="alg:cdd"}.
225
+
226
+ For notational simplicity, we define the size of the set of distributions allowed in the mixture as $n$. Then the linear program takes in total $n+1$ distributions as input, where the final distribution is the distribution to check. As these distributions are discrete, the CDFs are multivariate step functions that step at a finite number of points. Let $D_i$ be the set of points at which the CDF of distribution $i$ steps. Then $D = \bigcup_{i=1}^{n+1} D_i$ is the union of all such points. We denote $h = |D|$.
227
+
228
+ :::: algorithm
229
+ ::: algorithmic
230
+ **Input:** a set of return distributions $\mathcal{Z}$ allowed in the mixture and a return distribution $\mathbf{Z}$ to check. **Output:** whether the distribution is convex dominated. $$\begin{align}
231
+ & \text{Maximise } \delta = \sum_{j=1}^h \sum_{k=1}^d l_{j,k} \label{eq:maximisation}\\
232
+ & \text{Subject to:} \nonumber\\
233
+ & \quad \sum_{i=1}^n \lambda_i F_{\mathcal{Z}_i}(\mathbf{v}_j) + s_{j} = F_{\mathbf{Z}}(\mathbf{v}_j) \quad j = 1, \dotsc, h \label{eq:joint-constraint}\\
234
+ & \quad \sum_{i=1}^n \lambda_i F_{\mathcal{Z}_{i,k}}(v_{j,k}) + l_{j,k} = F_{Z_k}(v_{j,k}) \nonumber \\
235
+ & \quad \qquad j = 1, \dotsc, h \quad k = 1, \dotsc, d \label{eq:marginal-constraint}\\
236
+ & \quad \sum_{i=1}^n \lambda_i = 1 \nonumber \\
237
+ & \quad \lambda_i \geq 0 \quad i = 1, \dotsc, n \nonumber\\
238
+ & \quad s_j \geq 0 \quad j = 1, \dotsc, h \quad \text{where } s_j \text{ is a slack variable} \label{eq:slack-variables}
239
+ \end{align}$$ Return $\textproc{True}$ if $\delta > 0$, else $\textproc{False}$.
240
+ :::
241
+ ::::
242
+
243
+ The linear program maximises $\delta$, which is the sum of slack variables that make up the difference between the CDFs of the marginal mixture distributions and the marginals of the distribution to check ([\[eq:maximisation\]](#eq:maximisation){reference-type="ref+label" reference="eq:maximisation"}). If this procedure leads to a $\delta$ greater than zero, this implies that the conditions for distributional dominance are met and the distribution is dominated by the mixture. Note that we may omit an additional constraint on the $l$ slack variables to be greater or equal to zero, as this is implied by the constraint on the $s$ slack variables ([\[eq:slack-variables\]](#eq:slack-variables){reference-type="ref+label" reference="eq:slack-variables"}).
244
+
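As a rough sketch of how this linear program can be set up with an off-the-shelf solver, the following uses `scipy.optimize.linprog`, taking precomputed CDF values at the $h$ check points as inputs. The function name and array layout are assumptions, and for simplicity the $l$ slack variables are also bounded below by zero, which is harmless since that constraint is implied anyway.

```python
import numpy as np
from scipy.optimize import linprog

def is_convex_dominated(F_mix, F_marg_mix, F_z, F_marg_z, tol=1e-9):
    """F_mix: (n, h) joint CDF values of the n mixture candidates at the
    h check points; F_marg_mix: (n, h, d) marginal CDF values; F_z: (h,)
    and F_marg_z: (h, d) are the CDFs of the distribution to check."""
    n, h = F_mix.shape
    d = F_marg_mix.shape[2]
    nvar = n + h + h * d                      # lambda, s and l variables
    c = np.zeros(nvar)
    c[n + h:] = -1.0                          # maximise delta = sum of l
    A_eq, b_eq = [], []
    for j in range(h):                        # joint CDF constraints
        row = np.zeros(nvar)
        row[:n] = F_mix[:, j]
        row[n + j] = 1.0
        A_eq.append(row)
        b_eq.append(F_z[j])
    for j in range(h):                        # marginal CDF constraints
        for k in range(d):
            row = np.zeros(nvar)
            row[:n] = F_marg_mix[:, j, k]
            row[n + h + j * d + k] = 1.0
            A_eq.append(row)
            b_eq.append(F_marg_z[j, k])
    row = np.zeros(nvar)                      # mixture weights sum to one
    row[:n] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nvar)
    return bool(res.success and -res.fun > tol)
```

With a single candidate in the mixture, this reduces to a pairwise distributional dominance test; infeasibility of the program (e.g. when the candidate mixture cannot FSD-dominate the checked distribution) yields `False`.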
245
+ When no exact formulation of the joint CDFs is available, we propose an alternative linear program that operates solely on the marginal distributions. In this case, it is necessary to change the first constraint in [\[eq:joint-constraint\]](#eq:joint-constraint){reference-type="ref+label" reference="eq:joint-constraint"} to $$\begin{equation}
246
+ \sum_{i=1}^n \lambda_i \prod_{k=1}^d F_{\mathcal{Z}_{i,k}}(v_{j,k}) + s_{j} = \prod_{k=1}^d F_{Z_k}(v_{j,k}),
247
+ \end{equation}$$ while the second constraint in [\[eq:marginal-constraint\]](#eq:marginal-constraint){reference-type="ref+label" reference="eq:marginal-constraint"} is removed altogether. By maximising the sum of $s$ slack variables, the resulting linear program essentially checks for strict first-order stochastic dominance between random vectors with independent variables. One can verify that this implies distributional dominance for independent variables and otherwise may serve as an approximation.
248
+
249
+ Our final contribution relates theory to practice by designing an algorithm able to learn the DUS in a given MOMDP. We evaluate this algorithm on different sizes of MOMDPs and compare the resulting sizes of the sets when pruned down to the subsets covered in the taxonomy in [1](#fig:solution-sets){reference-type="ref+label" reference="fig:solution-sets"}. All code is available at <https://github.com/wilrop/distributional-dominance>.
250
+
251
+ Pareto Q-learning (PQL) is a classical algorithm used in multi-objective reinforcement learning to learn the Pareto front [@vanmoffaert2014multiobjective]. We find that the general framework of PQL lends itself nicely to learning the DUS. Our algorithm, DIstributional Multi-Objective Q-learning (DIMOQ) is shown in [\[alg:dimoq\]](#alg:dimoq){reference-type="ref+label" reference="alg:dimoq"}.
252
+
253
+ :::: algorithm
254
+ ::: algorithmic
255
+ **Input:** the state space $\mathcal{S}$, action space $\mathcal{A}$ and discount factor $\gamma$. **Output:** the DUS.
+ Initialise all $Q(s, a)$ as empty sets and all $\mathbf{R}(s, a, s')$ as Dirac delta distributions. Estimate $T: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ from random walks.
+ For each episode: initialise the state $s$ and repeat until the episode ends: take an action $a \sim \pi(a|s)$; observe the next state $s' \in \mathcal{S}$ and reward $\mathbf{r} \in \mathbb{R}^d$; $ND(s, a, s') \gets \textproc{DPrune} \left(\bigcup_{a' \in \mathcal{A}} Q(s', a')\right)$; update the reward distribution $\mathbf{R}(s, a, s')$ with $\mathbf{r}$; $s \gets s'$.
+ Return $\textproc{DPrune}\left(\bigcup_{a \in \mathcal{A}}Q\left(0, a\right)\right)$.
256
+ :::
257
+ ::::
258
+
259
+ The algorithm first initialises the Q-sets containing undominated distributions to empty sets and reward distributions to Dirac delta distributions at zero. During training, the agent follows an $\epsilon$-greedy policy and learns the immediate reward distributions $\mathbf{R}(s, a, s')$ separately from the expected future reward distributions $ND(s, a, s')$. Learning the immediate reward distribution is done by recording the empirical distribution, while learning the future reward distribution is done using a modified version of the Q-update rule employed for PQL (see [\[eq:q-update\]](#eq:q-update){reference-type="ref+label" reference="eq:q-update"}). Note, however, that for DIMOQ the pruning operator for the distributions in the next state is *DPrune* rather than *PPrune*.
260
+
261
+ The Q-learning update in PQL is described for deterministic environments. As we deal with fundamentally stochastic environments, we propose an alternative formulation in [\[eq:q-update\]](#eq:q-update){reference-type="ref+label" reference="eq:q-update"}.
262
+
263
+ $$\begin{equation}
264
+ \label{eq:q-update}
265
+ Q(s,a) \gets \bigoplus_{s'}T(s'|s, a)\left[\mathbf{R}(s, a, s') + \gamma ND(s, a, s')\right]
266
+ \end{equation}$$
267
+
268
+ First, the term $\left[\mathbf{R}(s, a, s') + \gamma ND(s, a, s')\right]$ constructs a set of expected return distributions for the case where the state-action pair leads to $s'$. Next, the $\bigoplus_{s'}T(s'|s, a)$ operator combines these sets into mixture distributions over all next states $s'$, where each distribution is weighted according to its transition probability $T(s'|s, a)$.
269
+
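A simplified sketch of this update for discrete return distributions, assuming for illustration that the immediate reward for each transition is a deterministic vector (the full algorithm instead learns an empirical reward distribution); all function names here are illustrative:

```python
from itertools import product

def shift_scale(dist, r, gamma):
    # Distribution of r + gamma * Z for a deterministic reward vector r.
    out = {}
    for z, p in dist.items():
        key = tuple(ri + gamma * zi for ri, zi in zip(r, z))
        out[key] = out.get(key, 0.0) + p
    return out

def mix(weighted_dists):
    # Mixture distribution: weighted sum of component distributions.
    out = {}
    for w, dist in weighted_dists:
        for z, p in dist.items():
            out[z] = out.get(z, 0.0) + w * p
    return out

def q_update(next_states, T, R, ND, gamma):
    # One candidate Q-distribution per combination of a distribution
    # chosen from ND[s'] for every next state s'.
    new_q = []
    for choice in product(*(ND[s] for s in next_states)):
        new_q.append(mix([(T[s], shift_scale(choice[i], R[s], gamma))
                          for i, s in enumerate(next_states)]))
    return new_q
```

The Cartesian product over the per-state candidate sets is exactly what makes the Q-sets grow quickly, motivating the set-size limits discussed below.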
270
+ In a learning setting, the transition probabilities are not assumed to be given. As such, we perform a number of random walks before training to estimate these probabilities. During learning, we no longer update the transition function, as drift in the probabilities would create unnecessary distributions that will never be observed again.
271
+
272
+ The second adaptation necessary to learn the DUS rather than the Pareto front is the action scoring and selection mechanism. Even for PQL, this is complicated as it is not obvious what metric to use to determine the quality of a set of Q-values. Several set evaluation mechanisms have been proposed for this, such as the hypervolume metric [@guerreiro2022hypervolume] or a Chebyshev scalarisation function [@vanmoffaert2013scalarized]. We note that these approaches can be extended to DIMOQ as well by computing the expected value of each distribution first and then continuing with one of the aforementioned scoring metrics.
273
+
274
+ In addition to the classical scoring methods, we propose using a linear utility function as a baseline and scoring a set of distributions by its mean expected utility. As linear scalarisation can be done efficiently, this results in a performant scoring method. An additional advantage of this approach is that when more information about the shape of the utility function is known, the linear utility baseline can be substituted with a better approximation.
275
+
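A sketch of this linear-utility scoring for a Q-set of discrete distributions, with an assumed weight vector `w` (both function names are illustrative):

```python
def expected_value(dist):
    # Componentwise expected value of a discrete multivariate distribution.
    d = len(next(iter(dist)))
    return [sum(p * v[k] for v, p in dist.items()) for k in range(d)]

def linear_score(q_set, w):
    # Mean expected utility of a Q-set under the linear utility u(z) = w . z.
    return sum(sum(wi * vi for wi, vi in zip(w, expected_value(Z)))
               for Z in q_set) / len(q_set)
```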
276
+ Due to stochasticity in the environment and because the Q-update rule in [\[eq:q-update\]](#eq:q-update){reference-type="ref+label" reference="eq:q-update"} enumerates all possible combinations, the Q-sets in the algorithm can quickly explode in size. To constrain the size of the sets, we propose two mechanisms.
277
+
278
+ First, we limit the precision of the distributions that are learned. This approach was demonstrated to be successful in multi-objective dynamic programming as well [@mandow2022multiobjective]. Second, we set a fixed limit on the set size. Whenever this limit is crossed, we perform agglomerative clustering where the number of clusters equals the maximum set size. As input for the clustering, we compute the pairwise distances between all distributions. In experiments, we compute the Jensen-Shannon distance between the flattened distributions. Alternatively, one could use the cost of optimal transport between pairs of distributions.
279
+
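A sketch of this set-size reduction step using SciPy's hierarchical clustering with pairwise Jensen-Shannon distances over flattened distributions; `cluster_distributions` and the array layout are assumptions:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_distributions(flat_dists, max_size):
    # flat_dists: (m, b) array with one flattened probability vector per
    # distribution. Returns a cluster label (1..max_size) per distribution.
    m = len(flat_dists)
    if m <= max_size:
        return np.arange(1, m + 1)
    dmat = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            dmat[i, j] = dmat[j, i] = jensenshannon(flat_dists[i],
                                                    flat_dists[j])
    # Agglomerative (average-linkage) clustering on the precomputed
    # pairwise Jensen-Shannon distances.
    tree = linkage(squareform(dmat), method='average')
    return fcluster(tree, t=max_size, criterion='maxclust')
```

Each resulting cluster could then be represented by a single (e.g. averaged) distribution to bring the set back under the size limit.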
280
+ We evaluate DIMOQ ([\[alg:dimoq\]](#alg:dimoq){reference-type="ref+label" reference="alg:dimoq"}) and CDPrune ([\[alg:cdd\]](#alg:cdd){reference-type="ref+label" reference="alg:cdd"}) on randomly generated MOMDPs of different sizes shown in [\[tab:momdp-config\]](#tab:momdp-config){reference-type="ref+label" reference="tab:momdp-config"}. For each size category, we repeat the experiment with seeds one through five and perform $50,000$ random walks to estimate $T$ followed by $2,000$ training episodes. All experiments considered two objectives, used a discount factor of $1$ and limited the precision of distributions to three decimals. Finally, the experiments were run on a single core of an Intel Xeon Gold 6148 processor, with a maximum RAM requirement of 2GB.
281
+
282
+ We observe that the runtimes for DIMOQ shown in [\[tab:dimoq\]](#tab:dimoq){reference-type="ref+label" reference="tab:dimoq"} are heavily influenced by the size of the MOMDP. Additionally, there is a large variance in runtime across different seeds. We find that these differences cannot solely be attributed to having a more complex transition function, but are most likely due to the interplay between the transition function and the reward function. Specifically, if transitions result in a large number of undominated returns, each iteration needs to perform a large number of combinations. It is clear, however, that scaling becomes an issue for DIMOQ when going to larger action and state spaces. As such, we plan to investigate the use of function approximation to further extend DIMOQ to larger MOMDPs. Additionally, we note that MOMDPs modelled after real-world scenarios will likely contain more structure and are thus interesting to study for future work.
283
+
284
+ In [\[tab:pruning\]](#tab:pruning){reference-type="ref+label" reference="tab:pruning"} we show the average size of the DUS, as well as what percentage of the DUS belongs to the CDUS, Pareto front and convex hull on average. We observe a similar pattern, namely that larger MOMDPs lead to larger solution sets. Interestingly though, larger MOMDPs also allow for a greater percentage of policies to be pruned for the smaller solution sets, which is beneficial for their use in decision support.
285
+
286
+ We highlight that although the CDUS is often substantially smaller than the DUS, the Pareto front and convex hull are much smaller than either. Intuitively, this is because when both objectives are to be maximised, Pareto optimal policies can only occur on the upper right hand region of the objective space, while policies in the DUS and CDUS may still exist in the Pareto dominated part of the space. However, recall from [1](#exmp:utility){reference-type="ref+label" reference="exmp:utility"} that these policies may still be optimal under ESR. We visualise this in [2](#fig:visualisation){reference-type="ref+label" reference="fig:visualisation"} where the expected values for the final distributions from one representative experiment are plotted.
287
+
288
+ ![The resulting solution sets for a sample experiment. Policies in the dominated part of the objective space may still be optimal for certain decision makers and can thus not be excluded a priori.](visualisation.pdf){#fig:visualisation width="0.85\\columnwidth"}
289
+
290
+ Finally, we remark that while the CDUS cannot be guaranteed to be a superset of the Pareto front in general, in all experiments this was in fact the case. This is also apparent from the results in [2](#fig:visualisation){reference-type="ref+label" reference="fig:visualisation"}. An interesting direction for future work is to specify the exact conditions under which this relation is guaranteed to hold.
2306.00196/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2023-05-24T04:48:46.766Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36" etag="QWNHGuSHsOqkSiNn2AsF" version="20.5.3" type="google"><diagram id="U8GLhpOXRq4T1l7sYyj_" name="Page-1">7Vrdc6M2EP9reOlMMiA+bD/Gdtw+tNNM087dPXUUkI16AhGQv+6v7wokDAKfnYT4cpkkmQGtVqtl97fS7k4sd5bsfs1xFv/BI8IsZEc7y51bCDm2h+AhKfuKEniTirDKaaSYDoR7+o3olYq6phEpWoyCcyZo1iaGPE1JKFo0nOd822ZbctbeNcMrtaN9INyHmJEO2ycaiVhrZzfYfyN0FautPT2R4Jq5IhQxjvi2sZd7a7mznHNRvSW7GWHSeNoulaDFkdlasZyk4pwF2T82I4vHzadvxSamfy4fHv8bXykpG8zW6oOVsmKvLQBSwNgwmG5jKsh9hkM5swV/Ay0WCYORA6+4yCoPLOmOwKbTJWVsxhnPgZbyVIooRM6/1pYEG0yVAiQXZHf0y5zaXgA0whMi8j2wqAWeBovC2ERBbHtwGNJuiRu+qolYgWRViz7YEV6UKZ9gVvQezOq+ObO6p80KUZbJV5qUgd00pDQGhci+YXSVAk3wrEH9HT8QdscLKiiXsw9cCJ4AA5MTUxx+XeV8nUba8hZyl+UPsJSb3Wg/2X1OU/rMYyHkyXUjDYEWYZTa1xTOriVNI5Jfh7AjWkRYYHhIegFPHgr96tjIg0eW0wT03JCriIsrB42vs3R1yYDz28hwxl1oOOMuMjRtcGB4Z8RbGt3I+wBGIcNFQcM2OrqGEjhfEXFHwNhEkFxGKJVmLh0crvONdGy5NsJFXA8qQQ2YwF1hL+DrpmDdfP9ZCri2/UATvpQEP/A1Yb5Te1SjfXNUa6OI1WeSaGVeV4Yr4dYsv+ZUcHVd3nCp3xPsmpYThiUi2xdwj5/VDnecgoI1otCkjSgUoLaIgq/zkKhVzVvOEOTZXhua+tTSgpRXTUGADbxvsGWSoTiusBsYCjvIQHEl8YDp2qbPh7n/s8Hcb4LcOQHwc7H8ozDqBRMDWu4zMdrBjqHLQBj1xobCY/f7ek3a/IDyE99h8Lst/teJgWCoGGhAdzaT4H1hdJAdFRXqJ2r0RR31aniAvRzsGwPzVK8jyGlfEujUHfHWQ8g3jnnH9p8bQmMzA7k2RA0URL5tqIz81wf5qAfkARMyxeOlcge0B49rrieuirJ+huTSRn62KwGh5+FtVT09+efPZIEKea81mt5bo7kiq11A6WojvcaIMEgdRTumcgJ744eSoUyA14JX2lRFjMq5Q8ChhHknGU9oFMnFU+UP2MafWv78e1mtQSzT8zIc+0PZenniC+lF+8TrSXzr1kXcagscD5UXZb7jiyPlX+cDK+dgxTsHK+4lsTLpYMXpeOv5bYmuESUsFjihTFrgbxzzBEu+6gqdqvp67h1ofykjoL5LOip/lVjVKER2v88HcJ/rGu7ThUWr/dH1Hnot7+ljpeG+IbtK78x9nRzbDX6w+7qt1g/3HXWfj4ysy+1rPl7Sfee0dAesO0wzn1+HWG+8Bhg5ZvPQ6AqeWwME7glBRyqAoZJ054x29JMhUXdRhoREq3Hz0+HDmQyED1PQa+Ojrys9UG7OCI4gN37PKbgJ9wFulUDV7hoRI9S5VCaXTMidvobuQAhZcsb49gMjT8fI6czDuWiJ7/S1PGUFXlj+7S+6GH9v7vQHqgPM9p0/fjV3wvDw7xzVLXL4pxj39n8=</diagram></mxfile>
2306.00196/main_diagram/main_diagram.pdf ADDED
Binary file (22.4 kB).
 
2306.00196/paper_text/intro_method.md ADDED
@@ -0,0 +1,56 @@
1
+ # Introduction
2
+
3
+ The restless bandit (RB) problem is a dynamic decision-making problem that involves a number of Markov decision processes (MDPs) coupled by a constraint. Each MDP, referred to as an arm, has a binary action space, $\{\text{passive, active}\}$. At every decision epoch, the decision maker is constrained to select a fixed number of arms to activate, with the goal of maximizing the expected reward accrued. The RB problem finds applications across a spectrum of domains, including wireless communication [@AalLasTab_19_rb_wireless], congestion control [@AvrAyeDonJac_13_rb_cc], queueing models [@ArcBlaGla_09_rb_queue], crawling web content [@AvrBorViv_19_rb_crawling], machine maintenance [@GlaMitAns_05_rb_repair], clinical trials [@VilBowWas_15_rb_clinical], to name a few.
4
+
5
+ In this paper, we focus on infinite-horizon RBs with the average-reward criterion. Since the exact optimal policy is PSPACE-hard to compute [@PapTsi_99_pspace], it is of great theoretical and practical interest to focus on policies that approximately achieve the optimal value and compute such policies in an efficient matter. The *optimality gap* of a policy is defined as the difference between its average reward per arm and that of an optimal policy. In a typical asymptotic regime where the number of arms, $N$, grows large, we say that a policy is *asymptotically optimal* if its optimality gap is $o(1)$ as $N\to\infty$.
6
+
7
+ # Method
8
+
9
+ In this section, we present our framework, [`Follow-the-Virtual-Advice`]{.upright} ([`FTVA`]{.upright}). We first describe a *single-armed problem*, which involves an "average arm" from the original $N$-armed problem. We then use the optimal single-armed policy to construct the proposed policy [`FTVA`]{.upright}(${\bar{\pi}}^*$).
10
+
11
+ The single-armed problem involves the MDP $(\mathbb{S}, \mathbb{A}, P, r)$ associated with a single arm (say arm $1$ without loss of generality), where the budget is satisfied *on average*. Formally, consider the problem: $$\begin{align}
12
+ \label{eq:single-arm-formulation} \underset{\text{policy } {\bar{\pi}}}{\text{maximize}} & \quad V^{\bar{\pi}}_1 \triangleq \lim_{T\to\infty } \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\left[r(S_1^{\bar{\pi}}(t), A_1^{\bar{\pi}}(t))\right]\\
13
+ \text{subject to}
14
+ &\quad \lim_{T\to\infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\left[A_1^{\bar{\pi}}(t)\right] = \alpha. \label{eq:relaxed-constraint}
15
+ \end{align}$$ The constraint [\[eq:relaxed-constraint\]](#eq:relaxed-constraint){reference-type="eqref" reference="eq:relaxed-constraint"} stipulates that the *average* rate of applying the active action must equal $\alpha$. Various equivalent forms of this single-armed problem have been considered in prior work [@WebWei_90; @GasGauYan_20_whittles; @GasGauYan_22_exponential; @Ver_16_verloop].
16
+
17
+ By virtue of the unichain assumption, the single-armed problem can be equivalently rewritten as the following linear program, where each decision variable $y(s,a)$ represents the steady-state probability that the arm is in state $s$ taking action $a$: $$\begin{align}
18
+ \label{eq:lp-single} \tag{LP} \underset{\{y(s, a)\}_{s\in\mathbb{S},a\in\mathbb{A}}}{\text{maximize}} \mspace{12mu}&\sum_{s\in\mathbb{S},a\in\mathbb{A}} r(s, a) y(s, a) \\
19
+ \text{subject to}\mspace{25mu}
20
+ &\mspace{15mu}\sum_{s\in\mathbb{S}} y(s, 1) = \alpha \label{eq:expect-budget-constraint}\\
21
+ & \sum_{s'\in\mathbb{S}, a\in\mathbb{A}} y(s', a) P(s', a, s) = \sum_{a\in\mathbb{A}} y(s,a), \; \forall s\in\mathbb{S}\label{eq:flow-balance-equation}\\
22
+ &\mspace{3mu}\sum_{s\in\mathbb{S}, a\in\mathbb{A}} y(s,a) = 1,
23
+ \quad
24
+ y(s,a) \geq 0, \; \forall s\in\mathbb{S}, a\in\mathbb{A}. \label{eq:non-negative-constraint}
25
+ \end{align}$$ Here [\[eq:expect-budget-constraint\]](#eq:expect-budget-constraint){reference-type="eqref" reference="eq:expect-budget-constraint"} corresponds to the relaxed budget constraint, [\[eq:flow-balance-equation\]](#eq:flow-balance-equation){reference-type="eqref" reference="eq:flow-balance-equation"} is the flow balance equation, and [\[eq:flow-balance-equation\]](#eq:flow-balance-equation){reference-type="eqref" reference="eq:flow-balance-equation"}--[\[eq:non-negative-constraint\]](#eq:non-negative-constraint){reference-type="eqref" reference="eq:non-negative-constraint"} guarantee that $y(s,a)$'s are valid steady-state probabilities.
26
+
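As an illustration, [\[eq:lp-single\]](#eq:lp-single){reference-type="eqref" reference="eq:lp-single"} can be solved directly with an off-the-shelf LP solver. The sketch below assumes `P[s, a, s2]` holds the transition probabilities and `r[s, a]` the rewards; the function name is illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def solve_single_armed_lp(P, r, alpha):
    # P[s, a, s2]: transition probabilities; r[s, a]: rewards;
    # alpha: average activation rate. Returns (y, optimal value).
    S, A = r.shape
    nvar = S * A                               # y(s, a), flattened row-major
    c = -r.flatten()                           # linprog minimises
    A_eq, b_eq = [], []
    row = np.zeros(nvar)                       # budget: sum_s y(s, 1) = alpha
    row[1::A] = 1.0
    A_eq.append(row)
    b_eq.append(alpha)
    for s in range(S):                         # flow balance at every state
        row = np.zeros(nvar)
        for s2 in range(S):
            for a in range(A):
                row[s2 * A + a] += P[s2, a, s]
        for a in range(A):
            row[s * A + a] -= 1.0
        A_eq.append(row)
        b_eq.append(0.0)
    A_eq.append(np.ones(nvar))                 # probabilities sum to one
    b_eq.append(1.0)
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nvar)
    return res.x.reshape(S, A), -res.fun
```

For a toy two-state arm whose transitions do not depend on the action and whose reward is $1$ for the active action, the optimal value equals $\alpha$, since the objective reduces to the total active occupancy.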
27
+ By standard results for average reward MDPs [@Put_05], an optimal solution $\{y^*(s, a)\}_{s\in\mathbb{S},a\in\mathbb{A}}$ to [\[eq:lp-single\]](#eq:lp-single){reference-type="eqref" reference="eq:lp-single"} induces an optimal policy ${\bar{\pi}}^*$ for the single-armed problem via the following formula: $$\begin{equation}
+ \label{eq:single-arm-opt-def}
+ {\bar{\pi}}^*(a | s) =
+ \begin{cases}
+ y^*(s, a) / (y^*(s, 0) + y^*(s,1)), & \text{if } y^*(s, 0) + y^*(s,1) > 0, \\
+ 1/2, & \text{if } y^*(s, 0) + y^*(s,1) = 0.
+ \end{cases}
+ \quad \text{for $s\in\mathbb{S}$, $a\in\mathbb{A}$.}
+ \end{equation}$$ Let $V_1^\textup{rel}=V_1^{{\bar{\pi}}^*}$ be the optimal value of [\[eq:lp-single\]](#eq:lp-single){reference-type="eqref" reference="eq:lp-single"} and the single-armed problem.
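As a concrete illustration, the sketch below solves (LP) for a hypothetical two-state, two-action arm with SciPy's `linprog` and then recovers an optimal single-armed policy ${\bar{\pi}}^*$ via the formula above; the transition kernel `P`, reward table `r`, and budget `alpha` are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action arm; all numbers are made up.
n_s, n_a = 2, 2
alpha = 0.4
P = np.array([[[0.9, 0.1], [0.3, 0.7]],    # P[s, a, s']
              [[0.6, 0.4], [0.2, 0.8]]])
r = np.array([[0.0, 0.1],                  # r[s, a]
              [0.5, 1.0]])

# Flatten y(s, a) as y[s * n_a + a]; linprog minimizes, so negate the reward.
c = -r.reshape(-1)
A_eq, b_eq = [], []
# Budget constraint: sum_s y(s, 1) = alpha
row = np.zeros(n_s * n_a); row[1::n_a] = 1.0
A_eq.append(row); b_eq.append(alpha)
# Flow balance: sum_{s', a} y(s', a) P(s', a, s) = sum_a y(s, a), for all s
for s in range(n_s):
    row = P[:, :, s].reshape(-1).copy()
    row[s * n_a:(s + 1) * n_a] -= 1.0
    A_eq.append(row); b_eq.append(0.0)
# Normalization: sum_{s, a} y(s, a) = 1 (y >= 0 via the bounds)
A_eq.append(np.ones(n_s * n_a)); b_eq.append(1.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
y = res.x.reshape(n_s, n_a)
V_rel = -res.fun                           # optimal value V_1^rel
# Optimal single-armed policy per eq:single-arm-opt-def
pi_bar = np.array([y[s] / y[s].sum() if y[s].sum() > 1e-12 else [0.5, 0.5]
                   for s in range(n_s)])
```

Note that the flow-balance rows sum to the zero vector, so one of them is redundant given the normalization constraint; modern LP solvers handle this redundancy without issue.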
+
+ [\[eq:lp-single\]](#eq:lp-single){reference-type="eqref" reference="eq:lp-single"} can be viewed as a relaxation of the $N$-armed problem. To see this, take any $N$-armed policy $\pi$ and set $y(s,a)$ to be the fraction of arms in state $s$ taking action $a$ in steady state under $\pi$, i.e., $y(s,a)=\mathbb{E}\big[\frac{1}{N}\sum_{i=1}^N\mathds{1}_{\{S_i^{\pi}(\infty)=s,A_i^{\pi}(\infty)=a\}}\big]$. Whenever $\pi$ satisfies the budget constraint [\[eq:hard-budget-constraint\]](#eq:hard-budget-constraint){reference-type="eqref" reference="eq:hard-budget-constraint"}, $\{y(s,a)\}$ satisfies the relaxed constraint [\[eq:expect-budget-constraint\]](#eq:expect-budget-constraint){reference-type="eqref" reference="eq:expect-budget-constraint"} and the consistency constraints [\[eq:flow-balance-equation\]](#eq:flow-balance-equation){reference-type="eqref" reference="eq:flow-balance-equation"}--[\[eq:non-negative-constraint\]](#eq:non-negative-constraint){reference-type="eqref" reference="eq:non-negative-constraint"}. Therefore, the optimal value of [\[eq:lp-single\]](#eq:lp-single){reference-type="eqref" reference="eq:lp-single"} is an upper bound on the optimal value of the $N$-armed problem: $V^\textup{rel}_1 \ge V^*_N$.
+
+ We now present [`Follow-the-Virtual-Advice`]{.upright}, a simulation-based framework for the $N$-armed problem. [`FTVA`]{.upright} takes as input *any* single-armed policy ${\bar{\pi}}$ that satisfies the relaxed budget constraint [\[eq:relaxed-constraint\]](#eq:relaxed-constraint){reference-type="eqref" reference="eq:relaxed-constraint"} and converts it into an $N$-armed policy, denoted by [`FTVA`]{.upright}(${\bar{\pi}}$). Of particular interest is the case where ${\bar{\pi}}$ is an optimal single-armed policy, which leads to our result on the optimality gap. Below we introduce the general framework of [`FTVA`]{.upright} without imposing any restriction on the input policy ${\bar{\pi}}$.
+
+ :::: algorithm
+ **Input:** $N$-armed problem $(N, \mathbb{S}^N, \mathbb{A}^N, P, r, \alpha N)$, initial states $\bm{S}(0)$, single-armed policy ${\bar{\pi}}$\
+ **Initialize:** Virtual states $\widehat{\bm{S}}(0)$ are $N$ i.i.d. samples following the stationary distribution of ${\bar{\pi}}$
+
+ ::: algorithmic
+ For each time step $t = 0, 1, 2, \dots$:
+
+ 1. Independently sample $\widehat{A}_i(t) \gets {\bar{\pi}}(\cdot | \widehat{S}_i(t))$ for each arm $i\in[N]$
+ 2. If $\sum_{i=1}^N\widehat{A}_i(t) \geq \alpha N$, let $\mathcal{A}\gets$ a set of $\alpha N$ arms chosen from $\{i\colon \widehat{A}_i(t)=1\}$ (any tie-breaking); otherwise, let $\mathcal{B}\gets$ a set of $\alpha N - \sum_{i=1}^N\widehat{A}_i(t)$ arms chosen from $\{i\colon \widehat{A}_i(t)=0\}$ (any tie-breaking) and $\mathcal{A}\gets\{i\colon \widehat{A}_i(t)=1\}\cup \mathcal{B}$
+ 3. Apply $A_i(t)=1$ and observe $S_i(t+1)$ for each arm $i\in\mathcal{A}$; apply $A_i(t)=0$ and observe $S_i(t+1)$ for each arm $i\notin\mathcal{A}$
+ 4. For each arm $i$ with $A_i(t)=\widehat{A}_i(t)$, set $\widehat{S}_i(t+1) \gets S_i(t+1)$; for each arm $i$ with $A_i(t)\neq\widehat{A}_i(t)$, independently sample $\widehat{S}_i(t+1)$ from the distribution $P(\widehat{S}_i(t), \widehat{A}_i(t), \cdot)$
+ :::
+ ::::
+
+ The proposed policy [`FTVA`]{.upright}(${\bar{\pi}}$) has two main components:
+
+ - *Virtual single-armed processes.* Each arm $i$ simulates a *virtual* single-armed process, whose state is denoted as $\widehat{S}_i(t)$, with action $\widehat{A}_i(t)$ chosen according to ${\bar{\pi}}$. To make the distinction conspicuous, we sometimes refer to the state $S_i(t)$ and action $A_i(t)$ in the original $N$-armed problem as the *real* state/action. The virtual processes associated with different arms are independent.
+
+ - *Follow the virtual actions.* At each time step $t$, we choose the real actions $A_i(t)$'s to best match the virtual actions $\widehat{A}_i(t)$'s, to the extent allowed by the budget constraint $\sum_{i=1}^NA_i(t)=\alpha N$.
+
+ [`FTVA`]{.upright} is presented in detail in Algorithm [\[alg:main-alg\]](#alg:main-alg){reference-type="ref" reference="alg:main-alg"}. Note that we use an appropriate coupling in Algorithm [\[alg:main-alg\]](#alg:main-alg){reference-type="ref" reference="alg:main-alg"} to ensure that the virtual processes $(\widehat{S}_i(t),\widehat{A}_i(t))$'s are independent and each follows the Markov chain induced by the single-armed policy ${\bar{\pi}}$. [`FTVA`]{.upright} is designed to steer the real states to be close to the virtual states, thereby ensuring a small *conversion loss* $V_1^{{\bar{\pi}}}-V_N^{\textup{\texttt{FTVA}}({\bar{\pi}})}$. Here recall that $V_1^{{\bar{\pi}}}$ is the average reward achieved by the input policy ${\bar{\pi}}$ in the single-armed problem, and that $V_N^{\textup{\texttt{FTVA}}({\bar{\pi}})}$ is the average reward per arm achieved by policy [`FTVA`]{.upright}(${\bar{\pi}}$) in the $N$-armed problem.
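A minimal simulation sketch of one step of this scheme for a generic finite-state arm; the function name `ftva_step` and the toy kernel `P` and policy `pi_bar` below are our own illustrative stand-ins, with the matching rule and coupling following the description above.

```python
import numpy as np

def ftva_step(S_real, S_hat, pi_bar, P, alpha, rng):
    """One step of FTVA(pi_bar): follow the virtual actions within the budget."""
    N = len(S_real)
    budget = round(alpha * N)
    # Each arm's virtual single-armed process samples an action from pi_bar.
    A_hat = (rng.random(N) < pi_bar[S_hat, 1]).astype(int)
    ones, zeros = np.flatnonzero(A_hat == 1), np.flatnonzero(A_hat == 0)
    if len(ones) >= budget:          # too many virtual pulls: keep only budget of them
        chosen = rng.choice(ones, size=budget, replace=False)
    else:                            # too few: supplement with arms whose A_hat = 0
        extra = rng.choice(zeros, size=budget - len(ones), replace=False)
        chosen = np.concatenate([ones, extra])
    A_real = np.zeros(N, dtype=int)
    A_real[chosen] = 1
    n_states = P.shape[0]
    # Real transition under the real action.
    S_next = np.array([rng.choice(n_states, p=P[s, a])
                       for s, a in zip(S_real, A_real)])
    # Coupling: when the real action matches the virtual one, the virtual arm
    # copies the real transition; otherwise it simulates its own transition.
    S_hat_next = np.array([S_next[i] if A_real[i] == A_hat[i]
                           else rng.choice(n_states, p=P[S_hat[i], A_hat[i]])
                           for i in range(N)])
    return S_next, S_hat_next, A_real

rng = np.random.default_rng(0)
P = np.array([[[0.9, 0.1], [0.3, 0.7]],
              [[0.6, 0.4], [0.2, 0.8]]])
pi_bar = np.array([[0.5, 0.5], [0.4, 0.6]])
S_real = rng.integers(0, 2, size=10)
S_hat = rng.integers(0, 2, size=10)
for t in range(3):
    S_real, S_hat, A_real = ftva_step(S_real, S_hat, pi_bar, P, alpha=0.4, rng=rng)
```

Under this coupling each virtual process is a Markov chain driven by ${\bar{\pi}}$ alone, while the real actions always respect the hard budget $\sum_i A_i(t)=\alpha N$.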
2306.01364/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2306.01364/paper_text/intro_method.md ADDED
@@ -0,0 +1,95 @@
+ # Introduction
+
+ AI-powered image manipulation techniques, such as deepfakes, are constantly evolving thanks to the continuous advances in deep generative models, particularly generative adversarial networks (GANs) [@goodfellow2014generative]. The quality and fidelity of the generated images have reached a photorealistic level that is indistinguishable from real images to human eyes. Alongside these technical advances, society is raising significant concerns regarding the abuse of these techniques to create and spread misleading information, which may cause a trust crisis where "seeing is no longer believing." To tackle these issues, the research community has been dedicated to developing powerful forensics tools against malicious image manipulations. One crucial and promising direction is detecting GAN-generated fake images, considering the ubiquitous adoption of GANs in image manipulation tasks.
+
+ ![Instead of learning GAN-specific features directly from fake images which may lead to overfitting, our framework incorporates multi-view completion and classification to model diverse distributional discrepancies between real and fake images, which can generalize to unknown fake patterns. $\mathcal{R}$: Restorer; $\mathcal{C}$: Classifier.](intro.pdf){#fig:intro width="48%"}
+
+ Most detection methods typically train CNN classifiers to learn specific features to distinguish GAN-generated images from real ones [@hu2021exposing; @liu2020global; @marra2019gans; @dzanic2020fourier; @frank2020leveraging; @durall2020watch], which work satisfactorily on clean test samples from the same GANs used in training. However, their performance decreases dramatically when facing samples generated by unknown GANs or perturbed by noise, leading to limited practical reliability. One primary reason is that a deep CNN classifier may easily overfit unstable GAN-specific features of the training samples, particularly the low-level frequency-domain artifacts [@wang2020cnn; @he2021beyond; @jeong2022bihpf; @jeong2022frepgan]. Previous studies have shown that conspicuous artifacts exist in the spectra of GAN-generated images [@wang2020cnn; @frank2020leveraging]: despite being easily identified by classifiers, these artifact patterns are inconsistent, varying significantly among different GAN models or perturbations. As a result, a classifier that overfits a specific frequency pattern suffers from weak generalization and robustness when detecting other frequency patterns.
+
+ Based on the understanding of the overfitting issue, we are motivated to design an improved detection model with two requirements: (1) reduce the dependency on unstable low-level frequency features; and (2) learn a robust feature representation from other types of information, such as regional consistency, and color or textural details of images. Instead of directly learning detectable features from fake images, which potentially leads to the frequency overfitting problem, we propose a novel detection framework that incorporates multi-view image completion and cross-view classification learning processes, as sketched in Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}. The framework can learn a strong and stable feature representation from diverse frequency-independent, view-specific information, resulting in superior generalization and robustness when facing unknown GANs or perturbations.
+
+ In the multi-view completion process, multiple view-to-image completion models are learned *with real images only*, and then used to characterize diverse distributional discrepancies between real and fake images. In contrast to overfitting specific GAN patterns, the compact distributions of the image-missing characteristics modeled from real images are more likely to distinguish unknown, out-of-distribution fake images from real ones [@ruff2020deep]. In addition, the view-to-image completions can align the frequency patterns of different types of fake samples with that of real images, which helps reduce the classifier's frequency bias. Then, in the cross-view classification, the real and fake samples synthesized from each incomplete view are fed into an independent classifier. The multi-scale feature concatenation and low-pass residual-guided attention modules are devised to strengthen the intra-view feature representation. The independent classifiers are finally combined using an adaptive loss fusion strategy to enhance the learning from cross-view information. Our contributions are highlighted as follows:
+
+ - We propose a novel GAN-generated image detection framework using multi-view completion classification learning to build a robust feature representation for detecting unknown GANs and perturbations.
+
+ - We devise several novel modules and learning strategies that effectively benefit the framework's ability to capture and incorporate diverse view-specific features.
+
+ - We perform extensive evaluations which validate the significantly improved generalization and robustness of our framework in a wide range of settings varying in image resolutions, GAN types, and perturbation methods.
+
+ Image-domain detection extracts detectable traces from the pixel inputs. Earlier works tended to train a CNN to learn deep features in a data-driven manner [@marra2018detection; @tariq2018detecting], while more recent works prefer to craft specific features for higher detection accuracy, such as the co-occurrence matrices [@nataraj2019detecting; @barni2020cnn], saturation [@mccloskey2019detecting], specular highlights [@hu2021exposing] and texture cues [@liu2020global]. [@marra2019gans] and [@yu2019attributing] pointed out that a GAN will leave a unique fingerprint containing the source model information in the generated images. Some other works improve on the network design, where novel learning strategies or modules, such as incremental learning [@marra2019incremental], self-attention mechanism [@jeon2020fdftnet; @mi2020gan] and vision transformer [@wang2022m2tr], are adopted.
+
+ Frequency-domain detection relies on identifying the frequency discrepancy between GAN-generated images and real images [@dzanic2020fourier; @frank2020leveraging; @durall2020watch]. Frequency discrepancy can be easily captured in various spectral representations by a classifier. For example, [@frank2020leveraging] found that even a shallow CNN can achieve a high detection accuracy using the 2D Discrete Cosine Transform (DCT) coefficients as input data. [@qian2020thinking] proposed a dual-branch network that extracts the global and local DCT features. [@dzanic2020fourier] and [@durall2020watch] proposed to transform the 2D Fast Fourier Transform (FFT) magnitude into a 1D power profile as detectable features. [@liu2021spatial] found that more distinguishable features can be extracted from the phase spectrum than from the amplitude spectrum. Unfortunately, some recent studies also pointed out that frequency features are unstable and easily concealed [@dzanic2020fourier; @durall2020watch; @liu2022making; @huang2020fakepolisher; @jung2021spectral; @dong2022think]. Thus, detectors that rely heavily on frequency features are vulnerable and generalize poorly.
+
+ Generalized and robust detection of GAN-generated images is now in high demand. Most existing works involve a preprocessing operation to strengthen the representation of generalized and robust features. [@zhang2019detecting] pointed out that a detector can generalize between two GANs with similar spectral artifacts in their generated images, which in turn confirms that a detector is likely to overfit specific frequency patterns. [@wang2020cnn] explored the effects of different data augmentation strategies such as compression and blurring in improving a detector's generalization ability. [@jeong2022bihpf] proposed to preprocess the fake images with a bilateral high-pass filter, which amplifies the effect of the common frequency-level artifacts shared by different GANs. [@jeong2022frepgan] designed a frequency-level perturbation framework to erode the GAN-specific spectral artifacts in generated images before feeding them to a detector. [@he2021beyond] proposed to re-synthesize training images using a super-resolution model pre-trained with real images to help extract robust features and isolate fake images.
+
+ <figure id="fig:overview" data-latex-placement="htb">
+ <embed src="framework.pdf" />
+ <figcaption>The overview of our framework (white box). Several restorers first learn different distributions of real images via multi-view completion learning. Then for each view, a classifier captures the view-specific distributional discrepancy between real and fake images via intra-view learning. The low-pass residual-guided attention and multi-scale feature concatenation modules are devised to strengthen intra-view learning (orange box). All base classifiers are finally fused to perform inter-view learning for robust detection (yellow box).</figcaption>
+ </figure>
+
+ # Method
+
+ We design the **M**ulti-view **C**ompletion **C**lassification **L**earning (MCCL) to build a novel multi-view, frequency-independent feature representation for robust detection of GAN-generated images. As shown in Figure [2](#fig:overview){reference-type="ref" reference="fig:overview"}, the framework jointly trains a set of restorers and classifiers. The restorers are trained with real images only, and each learns to reconstruct the full image from one particular incomplete view. Then, both real and fake images are processed by each restorer through the same view-to-image pipeline. Since the recovery of missing information is governed by real images' characteristics only, the distributional difference between the reconstructed real and fake samples can be reflected in the restored information. Then, for each view, a classifier is trained based on the reconstructed samples to capture the view-specific distributional discrepancy. We combine the multi-scale features encoded by different decoding layers of each restorer with the restored image as the classifier's input. A low-pass residual-guided attention module is employed at the entry of the classifier to highlight the reconstruction difference between real and fake images. A self-adaptive loss fusion module is additionally designed to combine the decisions of multiple classifiers to facilitate inter-view learning.
+
+ Several independent encoder-decoder-based restorers $\boldsymbol{\mathcal{R}}=\{\mathcal{R}^v\}_{v=1}^N$ are trained with real images to recover the full image from different incomplete views. It is non-trivial to select the appropriate views for completion, which determines what types of frequency-irrelevant information we want to exploit. Since regional consistency, color, and texture have been proven to be distinguishable features for GAN-generated images [@liu2020global; @hu2021exposing], we empirically consider three completion tasks: Masked Image Modeling, Gray-to-RGB, and Edge-to-RGB, where the regional, color and textural details are previously missing and restored, respectively. The natural compact distributions of these types of information can be modeled during the completion. The significance of each view is also explored in the experiment.
+
+ Masked Image Modeling is an emerging approach for visual representation learning [@he2022masked], which masks a portion of an image and predicts the masked area, and can be leveraged to model the regional consistency of natural images. The masking strategy is that, given an image $X \in \mathbb{R}^{w \times h \times 3}$, we randomly mask $50\%$ non-overlapping patches with a patch size of $(\frac{w}{16}, \frac{h}{16})$. Gray-to-RGB aims to learn color information from real images. We first transform the RGB image into the gray-scale version and then predict the raw RGB pixel values from the gray-scale input. Edge-to-RGB aims to learn textural information from real images. We first extract the binary edge sketch from the RGB image using the Canny edge detector, and then predict the raw RGB pixel values from the edge input. Figure [3](#fig:views){reference-type="ref" reference="fig:views"} shows an example of different incomplete views.
+
+ ![Three incomplete views selected for completion learning.](views.pdf){#fig:views width="48%"}
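A lightweight sketch of how the three incomplete views could be produced with NumPy; the exact preprocessing in the paper differs (e.g., it uses the Canny detector, which we replace here with a simple gradient-magnitude threshold), so treat the helper names and thresholds as illustrative.

```python
import numpy as np

def masked_view(x, mask_ratio=0.5, grid=16, rng=None):
    """Zero out exactly mask_ratio of the non-overlapping (h/grid, w/grid) patches."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = x.shape
    ph, pw = h // grid, w // grid
    n = grid * grid
    keep = np.ones(n, dtype=np.uint8)
    keep[rng.permutation(n)[: int(mask_ratio * n)]] = 0
    mask = np.kron(keep.reshape(grid, grid), np.ones((ph, pw), np.uint8)).astype(bool)
    return x * mask[:, :, None]

def gray_view(x):
    """Standard luminance weights as a stand-in for the RGB-to-gray step."""
    return x @ np.array([0.299, 0.587, 0.114])

def edge_view(x, thresh=0.2):
    """Binary edge map via gradient magnitude (the paper uses Canny instead)."""
    g = gray_view(x)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    return ((gx + gy) > thresh).astype(float)

x = np.random.rand(64, 64, 3)              # a dummy "real image" in [0, 1]
views = [masked_view(x), gray_view(x), edge_view(x)]
```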
+
+ Given an image $X$ and an incomplete view $X^v$, the completion is formulated as $\Tilde{X}^v=\mathcal{R}^v(X^v)$. The training of $\mathcal{R}^v$ is supervised with a dual-domain constraint, which incorporates a pixel-level regression loss and a frequency loss:
+
+ $$\begin{equation}
+ \begin{aligned}
+ L_{pix}^v&=||X - \Tilde{X}^v||_1=||X - \mathcal{R}^v(X^v)||_1, \\
+ L_{fre}^v&=||\mathcal{F}(X) - \mathcal{F}(\Tilde{X}^v)||_2^2.
+ \end{aligned}
+ \end{equation}$$ where $\mathcal{F}(\cdot)$ denotes the 2D FFT. The frequency loss computes the element-wise Euclidean distance between the Fourier spectra of the original and restored images, which ensures that $\mathcal{R}^v$ captures the natural, correct frequency properties of real images, so as to facilitate the frequency alignment between real and fake images. The overall optimization objective can therefore be written as
+
+ $$\begin{equation}
+ \min L_{pix}^v + \lambda L_{fre}^v,
+ \label{eq-rec-loss}
+ \end{equation}$$ where $\lambda$ is the weight to balance different losses.
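The dual-domain constraint can be sketched as follows for a single grayscale image; we average rather than sum over elements and pick an arbitrary weight `lam`, so the normalization and weight are illustrative choices rather than the paper's exact implementation.

```python
import numpy as np

def dual_domain_loss(x, x_rec, lam=0.1):
    """Pixel-level L1 loss plus frequency-domain squared-distance loss."""
    l_pix = np.abs(x - x_rec).mean()                 # L_pix: pixel regression
    diff = np.fft.fft2(x) - np.fft.fft2(x_rec)
    l_fre = (np.abs(diff) ** 2).mean()               # L_fre: spectral distance
    return l_pix + lam * l_fre, l_pix, l_fre

x = np.random.rand(32, 32)                           # dummy real image
noisy = x + 0.01 * np.random.rand(32, 32)            # dummy imperfect restoration
total, l_pix, l_fre = dual_domain_loss(x, noisy)
```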
+
+ After training $\mathcal{R}^v$ with real images, both real and fake images are processed by $\mathcal{R}^v$ via the same image completion pipeline to enable the subsequent classification learning. To mine more robust and frequency-irrelevant features from each individual view's pathway, we propose the multi-scale feature concatenation and low-pass residual-guided attention modules, as shown in the orange box in Figure [2](#fig:overview){reference-type="ref" reference="fig:overview"}.
+
+ Since $\mathcal{R}^v$ is an encoder-decoder consisting of multiple layers, during the completion, the missing information of the original image is progressively recovered by the stacked decoding layers of $\mathcal{R}^v$. Thus, meaningful features for distinguishing real and fake images are embedded not only in the final output image, but also in the intermediate feature maps of the decoder. To this end, we build a feature pyramid to concatenate the intermediate features at different scales. For a decoder of $\mathcal{R}^v$ with a total of $S$ layers, let $f_s$ be the feature map of the $s$-th layer; the $s$-th feature of the concatenation is computed as: $$\begin{equation}
+ \resizebox{.89\linewidth}{!}{$
+ \displaystyle
+ z_s = \left\{\begin{aligned}
+ &\operatorname{Conv_3}\left( \operatorname{Concat}\left( \operatorname{Conv_1}(f_s), \operatorname{Up}(z_{s-1})\right) \right),&s\geq2 \\
+ &\operatorname{Conv_1}(f_s),&s=1
+ \end{aligned}
+ \right.
+ $}.
+ \end{equation}$$ where $\operatorname{Up}(\cdot)$ is an upsampling layer with a scaling factor of $2$ to align the scales between two feature maps; $\operatorname{Conv_1}(\cdot)$ is a $1\times1$ convolutional layer to reduce channel dimensions; $\operatorname{Conv_3}(\cdot)$ is a $3\times3$ convolutional layer to suppress the aliasing effect of upsampling; $\operatorname{Concat}(\cdot)$ indicates the concatenation of two tensors. Finally, the last layer of the feature pyramid $z_S$ is combined with the reconstructed image $\Tilde{X}$ to get the enhanced feature $F$ in the following way: $$\begin{equation}
+ F= \operatorname{Concat}( \operatorname{Conv_3}(\Tilde{X}), \operatorname{Conv_3}(z_S))
+ \label{eq4}
+ \end{equation}$$
+
+ The distinguishable features are contained in the restored regional, color, and textural information of the image. Thus, it is possible to leverage the reconstruction residual to provide spatial attention to improve intra-view learning. However, one challenge is that, since the original image $X$ is involved in computing the residual, both stable and unstable features in the original image potentially remain in the residual. As discussed earlier, unstable features that are detrimental to generalization and robustness should be avoided. Prior studies have found that these unstable features are low-level artifacts that mainly cluster in high-frequency components [@frank2020leveraging; @durall2020watch]. Thus, we propose only using the low-frequency residual to guide the classifier to focus on more stable features. Given an image $X$ and its reconstructed version $\Tilde{X}$, the low-frequency residual is: $$\begin{equation}
+ M = |\mathcal{H}(X) - \mathcal{H}(\Tilde{X})|,
+ \end{equation}$$ where $\mathcal{H}(\cdot)$ is the first-order low-pass Butterworth filter and $|\cdot|$ is the element-wise absolute value. An attention mechanism is then devised to exploit the low-frequency residual. A functional network is used to process $M$ to get the attention map, i.e., $\hat{M} = \mathcal{G}(M)$, where $\mathcal{G}(\cdot)$ consists of a $7\times7$ convolutional layer, an average pooling layer and a sigmoid function. The attention map is applied to the enhanced feature $F$ in Eq. [\[eq4\]](#eq4){reference-type="ref" reference="eq4"} to obtain the residual-guided feature: $$\begin{equation}
+ \hat{F} = \hat{M} \otimes \operatorname{Conv_3}(F),
+ \end{equation}$$ where $\otimes$ indicates the element-wise multiplication.
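A sketch of the low-frequency residual computation, assuming a frequency-domain first-order Butterworth low-pass filter; the cutoff `d0` and the image sizes are made-up illustrations.

```python
import numpy as np

def butterworth_lowpass(img, d0=30.0, order=1):
    """Apply H(u, v) = 1 / (1 + (D/d0)^(2*order)) in the Fourier domain."""
    h, w = img.shape
    u = np.fft.fftfreq(h)[:, None] * h          # signed frequency indices
    v = np.fft.fftfreq(w)[None, :] * w
    D = np.sqrt(u ** 2 + v ** 2)                # distance from the DC component
    H = 1.0 / (1.0 + (D / d0) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Low-frequency residual M between an image and its (dummy) reconstruction.
X = np.random.rand(64, 64)
X_rec = X + 0.05 * np.random.rand(64, 64)
M = np.abs(butterworth_lowpass(X) - butterworth_lowpass(X_rec))
```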
+
+ When the intra-view feature enhancement is ready, we can get a set of features $\{\hat{F}^v\}_{v=1}^N$ corresponding to different views. For each view, an independent neural network classifier $\mathcal{C}^v$ is trained on the feature $\hat{F}^v$. Since the features provide view-specific information, the classifiers will learn diverse representations and contribute differently facing the same data instance. To ensure the complementarity and interactivity across different views during training, we propose a self-adaptive cross-view loss fusion strategy.
+
+ The self-adaptive loss fusion strategy aims to combine the losses of different classifiers using adaptive weights, such that the importance of each view-specific representation can be estimated and respected in the final decision. The weights are learned and autonomously adjusted during training. Formally, given a view-specific feature instance $\hat{F}^v$ and the corresponding label $y$ ($y=0$ if the sample is a real image, otherwise $1$), let $p^v$ be the probability, predicted by $\mathcal{C}^v$, that the sample is fake. The training of $\mathcal{C}^v$ is supervised by minimizing the cross-entropy loss: $$\begin{equation}
+ \min L_{ce}^v := -[y\log(p^v) + (1-y)\log(1-p^v)].
+ \label{eq-cls-loss}
+ \end{equation}$$ The self-adaptive loss fusion strategy can be denoted as a minimization problem with respect to the weights $\boldsymbol{\beta}$: $$\begin{equation}
+ \min_{\boldsymbol{\beta}}\sum_{v=1}^{N}\beta_{v}^{\tau}L_{ce}^{v} \quad s.t. \quad \boldsymbol{\beta}^\top\boldsymbol{1}=\boldsymbol{1}, \beta_v\geq0,
+ \label{eq8}
+ \end{equation}$$ where $\tau>1$ is a power-exponent parameter that prevents the trivial solution in which all the weight concentrates on a single view. In the inference stage, the decision is made on the average predicted probability of fake ($p^v$) over all $N=3$ classifiers, i.e., $p_{fake}=\frac{1}{3}\sum_{v}p^v$, with a threshold of $0.5$.
+
+ The components of MCCL that require optimization include the parameters of $\{\mathcal{R}^v\}_{v=1}^N$, $\{\mathcal{C}^v\}_{v=1}^N$ and several building blocks for intra-view learning (for simplicity, the latter two are denoted together as $\{\mathcal{C}^v\}_{v=1}^N$), as well as the self-adaptive loss weights $\boldsymbol{\beta}$. The optimization alternates between the following two steps:
+
+ The completion and classification networks with respect to different views are updated independently in parallel. For the view $v$, $\mathcal{R}^v$ and $\mathcal{C}^v$ can be updated sequentially by optimizing the corresponding loss functions Eq. [\[eq-rec-loss\]](#eq-rec-loss){reference-type="ref" reference="eq-rec-loss"} and Eq. [\[eq-cls-loss\]](#eq-cls-loss){reference-type="ref" reference="eq-cls-loss"}. During the optimization, the loss weights $\boldsymbol{\beta}$ are fixed.
+
+ Next, we fix the parameters of $\{\mathcal{R}^v\}_{v=1}^N$ and $\{\mathcal{C}^v\}_{v=1}^N$, and update $\boldsymbol{\beta}$ by solving Eq. [\[eq8\]](#eq8){reference-type="ref" reference="eq8"}. To satisfy the constraints, the Lagrangian function of Eq. [\[eq8\]](#eq8){reference-type="ref" reference="eq8"} is $$\begin{equation}
+ \mathcal{L(\boldsymbol{\beta}, \zeta)} = \sum_{v=1}^{N}\beta_{v}^{\tau}L_{ce}^{v}-\zeta(\sum_{v=1}^N\beta_v-1)
+ \label{eq10}
+ \end{equation}$$ where $\zeta$ is the Lagrange multiplier. Setting the derivatives of Eq. [\[eq10\]](#eq10){reference-type="ref" reference="eq10"} with respect to $\beta_{v}$ and $\zeta$ to zero, the optimal solution of Eq. [\[eq8\]](#eq8){reference-type="ref" reference="eq8"} is: $$\begin{equation}
+ \beta_{v} = (L_{ce}^v)^\frac{1}{1-\tau}/{\sum_{n=1}^{N}(L_{ce}^n)^\frac{1}{1-\tau}}
+ \end{equation}$$
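The closed-form weights can be checked numerically; `fuse_weights` and the per-view losses below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def fuse_weights(losses, tau=2.0):
    """Closed-form minimizer of sum_v beta_v^tau * L_v subject to sum_v beta_v = 1."""
    L = np.asarray(losses, dtype=float)
    w = L ** (1.0 / (1.0 - tau))    # smaller loss -> larger weight (for tau > 1)
    return w / w.sum()

losses = [0.9, 0.3, 0.6]            # made-up per-view cross-entropy losses
beta = fuse_weights(losses)
```

With $\tau>1$ the objective is strictly convex in $\boldsymbol{\beta}$, so the minimizer is interior and every view keeps a nonzero weight.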
2306.02913/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ <mxfile host="Electron" modified="2022-09-25T15:06:51.135Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/13.9.9 Chrome/85.0.4183.121 Electron/10.1.5 Safari/537.36" etag="U7hV1hucNfMe0258vxQE" version="13.9.9" type="device"><diagram id="fzPrenOi0wzJYP_gDLPx" name="第 1 页">7V3ZctvGEv0aPRo1+/LoPasrtm8SK280CUuMKdJF0bacr78DiuAyAIHBEA0MFlUqZYBkU5o+ONPd08sVfX738Ho9+XL7+2oWL64Imj1c0RdXhGBGyFXyH5r9eLyj0hs36/ls96bDjffz/+LdTbS7+3U+i+9P3rhZrRab+ZfTm9PVchlPNyf3Juv16vvp2z6tFqff+mVyE2duvJ9OFtm7f89nm9vHu4RScXjhp3h+c5t+teDs8ZWPk+nnm/Xq63L3hcvVMn585W6Sytn9kfe3k9nq+9EX0pdX9Pl6tdo8/uvu4Xm8SNY1XbLHz7068+r+d17Hy43LB/58++Gnvz7+cr18uuZ3/8TP/31z8/IJ4TtNfZssvu5WY/frbn6ky2PkGE2Yi2ffb+eb+P2XyTR55bsBg7k3uf/yqJNP84fYfNezT/PF4vlqsVpvP00/qWk8nZr795v16nN89MpHxRlH+1fSpSfJnXSxkpezf+nuj/8Wrzfxg72kBqbx6i7erH+Yt+xe5akWdgiVbHf9/aBvurt1e6Tp9N5kh7CbveTDSpt/7Ba70sLTQS68wq0vPBvmwqvWF54PY+F1cFQjBrnwAVCNHObCt081ahALL0RwVKMHufDtU036Cw1t4VunGoEdFn45e5q4qeZqupjc388fl3Gy3uTddl9dI/fVPPl9X+BU4NH11i1NNLm9Mu/dOd4kfe/RtdHI+seHnZ62F9fJRcTTyxcPxy+++JFePcw3H9JvMP8++pS5OnwouUg/c4KJcyA7i5X71df1NC7koJ0CzJ94E29cLNJ4dhIfyKLvCF08B13pvXW8mGzm306jCnmQ233DH6u5+esO+yi36VxHXOrDz6nAx5XYyTh2/C2xAllisfUwPC5URtD2edgvwiWPiEuMIfhHBBy21Bm2ImjYKlILbDNWZbFYcBC7xGtGEHNnEKugQCwoDPfKSmLBQewS+xpBzJxBLIMCsc3EGoEwcYlYcBC7OPcjiKUriFOXKhAQ20xcE4gFs8SSpq1gl9BI8LA9OIoH3/D66tg1LHYUD87h9dXBbWzJUXS3uFlQjwi3oiAagziKJWKhHxjpEtIK/oEBB7HoqLEi8nk99fa4Rc/uIC4UqxtmfdmL8CA4iHVXjRWlIylqQm6OrMbhOobq6o0w06DgykkGYlJ6wjVHlmo6sizHoFy9QbmwIsvJXl4XXHNkNQ1XRlwScQ5w3SaTXgKe2SRWn3JxLaYq/vipCFaBQEBmAqjMT/+lgsCV75IMNCr/RGdWKEgRX+WXCQJXfrWY5ah8o6O6nvxSQeDKrxb52yk/1+IYLhz6wwXp949c4G0FaFSTFZARBK78avGgUfmJjmp68ksFgSu/WnRlVH528/Z+8ksFgSsfvODLUvfZPFwPNLhn6Eo7J53u1rmtDF2W/gI9X3jFglt48LqjIBbeRrxSrS88eN1REAtvIz6Aha8lNanIATwTckYndsL+1eNrIDuhNPzMUne7NPy83yIDsT2kjS9vr8PeE0XTtodnBKLgkCQflyjnKCRUXFLXDDlGw6oTsU3ZPe9VtontFPumI2PpAzbi0o8vwzquy/Cl91nNOYuqMVzWkrXjEMjNQ17xLt8iX7rjMmy+zGy/3nzZdAyBVQsg1WtfnktpCIs9qWtS7x7PgaBUZvqG+KLUDgQ0vqtXy8oZxq5OXfN0
93ZpoLj0j8DaFRiNs+cwAoHS7vyEdt5fa2ERVi31xTFNzz0ZrwecEFhkBKEIIUq4kIhpTPgp4ISOMKMIaWFUT5jy4wspUUQVxppiJIwcae1qNNJaM80pEVhj2jCXgBQh1uEkdCaoEpb5pbiOBD9bN+IdYikU23ggkIEUIfaAiV1TpINjYm3wJdD+B5/CVvpGYArFSt4wbHm1OOEI29BDhUqTSHNjGnCNNaHYMiAIjTjDDBGGjDcppC+IZYQIMvYw4YaCCbW5N+KIYCY4Q8bGEA1DGqQwsAeQdo4yssAMCKYjQrnmWCuqzI+Ftq25ionG2NjFKeArQ5qjSAqCKReESiosKwXrSAuphDHDBBG0aUjXEp3sIaSdbeLgIF3TQaNkLELywMTEontmYNsaEysQ2HariNChNFAHBU1hB7kMhqTwQ2emqWiOLHDudIlNXpIafFb1pQpNs6cd8guLFF4h4buuk5BSQeBadWl5PxStZtpE+mq1TBC4VqHLeDukVWPM1vSwlksC16tXhe5FdXod0nSPnl/oYtwuabWu091SQeBa9cpx7adWMc6ax56sXCoJWq9Okw2Gold7j/R+XMslgevVJQB6WTYGj9WM5XnkinykQuTnaXyON9Pbq5xky7M4cU/N0KocKc2mZjj15u+ZFjAKrkbRqbt839SA7Xrs1hOVnPqj90wNNie1X0UnwAcFhqeFDCcFoIZqYY7qcXGEdNKSAfSg56xqyo9t0l6Y5cc2IliDUduo8nbvbMul6cwlAZJD2h00CudUD+GaodS+++J9oIgxtiQ1HSwEn0IROB4rsKProXb77Oh90HTOhmoMjdDDJYrRWE+i8iXs6I7G7rCjd51ilh2bDu6Az27wZkfvKsZL0OmaZbHHcYjotDB1wUFfZmxx0+iEHsoQ+N6d8ooDHl0rudvH4wWhcHucTuNsSRzw2LOAh42U9oN/4MME+sMKAe9SdnKBPyuUdY0AZ4VqKYH9w6N7xCPY+Bs2axwpfKiUZafo1CLi8lAp65nAipFCEdaHUll7ZgaOGGmtVFZWS4LsH47deTVYHGtSpeDbs17LaDHcgm/Z7mlIPRGWE4uuGWoO2FSQIBXfJXKbNyOGfnDi3BAqXPo1zi5ImXeJ3MbrvOXAD1UqYDXYQxWMcQO13Qa5ARd3y3aPY9rHMXHFcRr6ChDHSDRQ0K3laUH36ZfYBd3Wl4BXxgZ7ihNaBCJoGNeUc2Gcu4CLuFs90jG//Ic9Ms3FdYK9iKeXLx6u0hb9ydWP9Ophvvlw9O+jT5mrw4eSi/1nQjhsd7ZSAn4sMCm0UmhNVgoptFJkQ1bK69m/b/6+fzb/8+XT6+n3279f35BfnmDwAox934EqLRHqPmjiVgHTvkt5AwdN+etOBrnu+2nLra07eI1FmOuu2l538KKKMNY9U77e9rqDl1EEue7t8wz42LMw1711ngGfehbEutvditrnGZegbf/WvX2ecQky9nDd2+aZ1L7qdr+4fdBiH5eoFrTAFYIWl/WmOw5IFDqyx/GIQks0kB52nNtMriMuMwfHldvZWWXVEltPAnBcgUCH4jrQT7Ecs8QVs2HN0rIxq0gtmM0Yk8VioRHsEqEZPIJz0toK7fJAECwoDOva6cLFYqERDJ3L3gsE55xjFHr0gSDY5mCNQDi4RCw0gqGz33uB4JwBUoW+YiAItjm4JgQLe8Icadjyhc50b9gzPDiD11fHvmDJcfbeG7y+OviJ7XiGzlZ2WFN9uF2RiEE8wxKx0E8LdEZ9Lxg+JwWpCzaKyGf01L2zEzbcEVwoVjfM99Cp9b1AsOqojaL0fgrExbDNkdU0VqFT63uBVedIMgkKq5xk8CWlJ1ZzZKmmI8jQ6fO9wKpz/C2sCHKyhdeF1RxZTWO14kTV6u3JaxxVFggCpNDRkQmnT1NvvfsKVRQLjIuKI0tHXCSD64sU6D3CoKJYaFxUOx0dcfHYXgCALyqKhcYF8cHFRcOH+oeUYTBItbO9kUHK
TAPvznEVxULjotqJ2YiLJAgDwhcVxULjAnoIZQ9xAcMXFcUC4wK8v1wHoizlwRMVFDAFLmGWjgxBzl1r7TV/s5+T3ro4AzlfqeOwzcM3dXCEar5Sx1mbRzX+XZyAnL/ZIa+g5TgBuWOPL0bQQcguabWDE5DPaJWMWu32BOQzeoUO+HVJr52cgHwm2aTXY0jCw44WJEKICSwRkhpZDf0UlVGSQW5cbE2F1r5NrJU0HjqnyuwHBOk0UWhfvMwjSbmmCiFh9NNwx6eKCeq9ZpLeeNi4YiJ1v7XaGxu9YnJxr7XaJx+7YiLu6GN39fkdY2Q99LHTwsVRq/3ysekYEeulj017Pf0kPOyMPjZ4gm+X0NAbH9svPbenWu2NjQ6eStshrfbJx/ZLhR197M49v2M8u4c+Nh/j2b30sfkY0e6nj93rAc/hYafEx8boxP3FfrDa+tgMYymYRFxpe/odSWbjkd07GNgw5jOIa3cY8+AQh5EqhJwVcVG+OxQ6iutgrNnpt5wEj/YzJhvDXLuzaYGnvoWKOYIFktyASzF7wqthIM7Ni5wYi5F4zt8sw9wJmcKNQz4DOeg+ON40d65Cq1+kh62vSnu9VDfRMjNbmm21iNNFCw9JA8GO8u3ZmZ3f2nCbTszIiJ1msYOSjU9yqhkz29Lpvre1tZhCQjGhNKWe5r2B1al9L7O7a3v2PYNuXz9CLgcNkjCBOMfJjOdCzFFfzCXA5krtkIVIUJhrN4pRj31//zneTG+7CEBplbL7TinPSDJQahhIY8pJs9gxRBIRQ01UEJGwhx02wJFOPEghBCbcsz2C4S5pONAQJGGaaImtCIjGEVKac8OdlBLVMOLaDU0MD3Eo2aqUkIhpiRRHlvHERaQFMnCjBgza+3TV7MkScyqMFGOiaXyKOCwixIhgZjtWktGmIRdsaKLXkEtJjmO7mzg+QgP3PvoVxyQnRA6TYsQY0cSQqWoYchU7qtYMuc3jTBScXlxfVZmWmf776FNnZ6IMMtiLrdEk/jkpskQQNEzFWNXbKHQGn3EsXHjxojnQVke/s9OePRr+uc+BVhai9h5iW3Og0wPIqkmGBU95/rOMLnuWa2zPWDqUIMViIO3ylL0Z+EYV1HE2hED41ODfksw+cpX5Pet7/v98++Gnvz7+cr18uuZ3/8TP/31z8/IJc8q/6QEBKEuZSLdMANWyjM/s9A7Jx9XswLK5eqAEwMMa9msTQCYYWBMBJGkCjwcpilApmz2hc8qr7MHz72BSNvv813JiUPT8Fzzh3hkYjbKB63zawGhD1lUKk8Fs01MlXBIlu08N0s6vaZ0aagkCeFJDoGQgpCsZhDXKVNqOp/+4iBJjBDpQMAw7Qdp1Lq37CV5Vpj0PFDiTQVhTYTNk4D+triQ0AW0ZuBzg9YAMzi1za2TgVdzYbzJIsVhKBoGFF203wX8wDCsRBG0ZgCSQgJxMBYnL0CxWhCKEKOHbDJV919eUAoWOMEuOEQVjkjDPai0pUUQVxppiJIwcKweG0WRQDdOcJkNrcLPJnAIkOwUkY7ON05muhWMU15HgR2OPii0nZ9YtFNu0d5ZGqofFwV2jVqWOM//sUwARIaopVYIIsS8lrEys2sBSoP3Pad6fbLgIUYKks3QflmG5pYWwTNL29umo/tErLSNEkCSIcEOaxC7HZRFHBDPBGTImQbMpMGm9yrBA6nwYW+5XBbbfM53kU2iOtaJKKbv3xNa0xERjbGxY7Fn5LXlyZksw5YKYB0NYRkUyaFFIJRg1bxK0YTyD1L91Hs/O9mtweK4pDUkyFiF54GByKpYwg9kGODg3DYmM82RPkZm/SoEZDsKulpPID5pC6Uie99EwggLjq39+/Zm/+x4v/3ga377/9ef7r+Kt27l3xVZjFhhiPOOxzMOp2TfoRBTBJBDlU57EiQ4/p3SyC+HsfzzjRBd9CTRMADrS9RAmuEiDitQDk0u+BBomADOM+wcTZnWDrYk/KoqFhoLXVJeqnYX7
B47Bc4hTlc7gOaTYWNCoAYuk5EugYQIwPaJ/MGF2KUg9/FFRLDQUyAiF6lZHTRxRUSw0FFzigRclazkgoSSN6yxQ3JO1aIsVHvnr7hLS6v66MxLauoNXNIax7nZyYoOZyvnrDl4uEsS62zzTZFJo/rqDZ+YHse42z7S/7tCNswoW95ITuHPKPauj46O13JVID2+ODzAKTZFAzD+aKfbwPFrL2B5w6V/56wrdUKszUMwrDSm0EgKBou0yeJ/yMrtrBVzRaD4XQDfa6gwU3VkxrCzFDCv6VimdtZSagiJ0M61iKHqneAOwojMUw2ZF7w4cGVaEq5HJhyIJlRWrtIIAACZ1BSYLCpjUbg3jy5GZ6EnT2zV0s/zObNcip91IoYkZKBS96whtKALWERbaSz2PY1C7WWnbcTsJOQe36Fnt7rnR4HPfJOSU3Z5iZuhJLBJygm8/MTOIrDjplSBZNSuu50gZ2cUrt3LYmBl6vpyqFqseMTOQ5DkFkEfZd1wMIZNOkUq4CCZY5oS90BFGSTKqA1OpiaZUSG4NMdrO6uBS4mTAkLYHvLufWSWzOpQgSCIhKEnDsSfDOti2N7ikCG5wVj7+2g3WXt6zqMv4Y6wQf9sBpJfjjyWzVCkSmiettrCy5vcmreIloghjqRWiHAp+vz37/M/m3ds3v/357dtfPz7jd98Ie5LdFV99XSx+PJmulst4uolnGTBu4gejhGe3m7sUCpPF/GaZgNBoOTYQeJYETufTyeLp7oW7+WyWfPzZOr6f/zf5uBWVhF6/JH/W9g/lz674i0TW183q/hiTx2jebcrHaEtvrZab97vfEe+uX03u5otkkf83v4vvzR/xJv5u/v9udTdZph/Z7eW4njCwttI3ddoD7QjNWADFgXPVmyWXd/PlzajTCqF9+ygmR6eNqjQb2X+9no+PaQWVcqvFQesqzQbeXz58MSu23Mwni1Gz/s19cwmY1KNac7lerTbH2/F68uX299UsTt7xfw==</diagram></mxfile>
2306.02913/paper_text/intro_method.md ADDED
@@ -0,0 +1,228 @@
1
+ # Introduction
2
+
3
+ Decentralized training harnesses the power of locally connected computing resources while preserving privacy (Warnat-Herresthal et al., 2021; Borzunov et al., 2022; Yuan et al., 2022; Beltrán et al., 2022; Lu & Sa, 2023). Decentralized stochastic gradient descent (D-SGD) is a popular decentralized training algorithm which enables simultaneous model training on a massive number of workers without the need for a central server (Xiao & Boyd, 2004; Lopes
4
+
5
+ Proceedings of the 40<sup>th</sup> International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
6
+
7
+ & Sayed, 2008; Nedic & Ozdaglar, 2009; Lian et al., 2017; Koloskova et al., 2020). In D-SGD, each worker communicates only with its directly connected neighbors; see a detailed background in Appendix A.2. This decentralization avoids the requirements of a costly central server with heavy communication burdens and potential privacy risks. The existing literature demonstrates that the massive models on edge can converge to a steady consensus model (Lu et al., 2011; Shi et al., 2015), with asymptotic linear speedup in convergence rate (Lian et al., 2017) similar to distributed centralized SGD (C-SGD) (Dean et al., 2012; Li et al., 2014). Therefore, D-SGD offers a promising distributed learning solution with significant advantages in communication efficiency (Ying et al., 2021b), privacy (Nedic, 2020) and scalability (Lian et al., 2017).
8
+
9
+ Despite these merits, it is regrettable that existing theories claim that decentralization invariably undermines generalization (Sun et al., 2021; Zhu et al., 2022; Deng et al., 2023), which contradicts the following unique phenomena in decentralized deep learning:
10
+
11
+ - D-SGD can outperform C-SGD in large-batch settings, achieving higher validation accuracy and a smaller validation-training accuracy gap, despite both being fine-tuned (Zhang et al., 2021);
12
+ - A non-negligible consensus distance (see Equation (3)) at middle phases of decentralized training can improve generalization over centralized training (Kong et al., 2021).
13
+
14
+ These unexplained phenomena indicate the existence of a non-negligible **Gap** between existing theories and deep learning experiments, which we attribute to the oversight of important characteristics of decentralized learning in the existing literature. Accordingly, the primary **Goal** of our study is to thoroughly investigate the underexamined characteristics of decentralized learning, in an effort to bridge the gap.
15
+
16
+ Directly analyzing the dynamics of diffusion-like decentralized learning systems, instead of relying on upper bounds, can be challenging. We therefore aim to establish a relationship between D-SGD and other centralized training algorithms. In recent years, there has been growing interest in techniques that improve the generalization of deep learning models. One of the most popular is sharpness-aware minimization (SAM) (Foret et al., 2021;
17
+
18
+ <sup>1</sup>College of Computer Science and Technology, Zhejiang University <sup>2</sup>JD Explore Academy, JD.com, Inc. <sup>3</sup>Artificial Intelligence and its Applications Institute, School of Informatics, University of Edinburgh <sup>4</sup>The University of Sydney. Correspondence to: Fengxiang He <F.He@ed.ac.uk>.
19
+
20
+ ![](_page_1_Figure_1.jpeg)
21
+
22
+ Figure 1. The validation accuracy comparison of C-SGD and D-SGD (ring topology) on CIFAR-10. The number of workers is set to 16, with a local batch size of 64 and 512 per worker, resulting in total batch sizes of 1024 and 8192, respectively. Validation accuracy comparisons of C-SGD and D-SGD with other topologies and on Tiny ImageNet are included in Figure B.1 and Figure B.2, respectively. The training accuracy is almost 100% everywhere. An exponential moving average is employed to smooth the validation accuracy curves. The training setting is included in Appendix B.
23
+
24
+ Kwon et al., 2021; Zhuang et al., 2022; Du et al., 2022; Kim et al., 2022), which is designed to explicitly minimize a sharpness-based perturbed loss; see the detailed background of SAM in Appendix A.3. Empirical studies have shown that SAM substantially improves the generalization of multiple architectures, including convolutional neural networks (Wu et al., 2020; Foret et al., 2021), vision transformers (Dosovitskiy et al., 2021) and large language models (Bahri et al., 2022). Average-direction SAM (Wen et al., 2023), a class of SAM variants in which sharpness is calculated as the (weighted) average within a small neighborhood around the current iterate, has been shown to generalize on par with vanilla SAM (Ujváry et al., 2022).
25
+
26
+ In this paper, we provide a completely new perspective for understanding decentralized learning by showing that
27
+
28
+ D-SGD and average-direction SAM are asymptotically equivalent.
29
+
30
+ Specifically, our contributions are summarized below.
31
+
32
+ - We prove that D-SGD asymptotically minimizes the loss function of an average-direction sharpness-aware minimization algorithm with zero additional computation (see Theorem 1), which directly connects decentralized training and centralized training. The asymptotic equivalence also implies a regularization-optimization trade-off in decentralized training. Our theory is applicable to arbitrary communication topologies (see Definition A.1) and general non-convex and non-β-smooth (see Definition A.5) objectives (e.g., deep neural network training).
33
+ - The equivalence further reveals the potential benefits of the decentralized training paradigm, which challenges the previously held belief that centralized training is optimal.
34
+ We demonstrate three advantages of training with decentralized models based on the equivalence: (1) there exists a surprising free uncertainty estimation mechanism in D-SGD, where the weight diversity matrix $\Xi(t)$ is learned to estimate $\Sigma_q$, the intractable covariance of the posterior; (2) D-SGD has a gradient smoothing effect, which improves training stability (see Corollary 2); and (3) the sharpness regularizer of D-SGD does not decrease as the total batch size increases (see Theorem 3), which justifies the superior generalizability of D-SGD over C-SGD, especially in large-batch settings where gradient variance remains low. Our empirical results also fully support our theory (see Figure 1 and Figure 3).
37
+
38
+ To the best of our knowledge, our work is the first to establish the equivalence of D-SGD and average-direction SAM, which constitutes a direct connection between decentralized training and centralized training algorithms. This breakthrough makes it easier to analyze the diffusion-like decentralized systems, whose exact dynamics were considered challenging to understand. The theory further sheds light on the potential benefits of the decentralized training paradigm. While our theory primarily focuses on vanilla D-SGD, it could be directly extended to general decentralized gradient-based algorithms. We anticipate the insights derived from our work will help bridge the decentralized learning and SAM communities, and pave the way for the development of fast and generalizable decentralized algorithms.
39
+
40
+ # Method
41
+
42
+ Basic notations. Suppose that $\mathcal{X} \subseteq \mathbb{R}^{d_x}$ and $\mathcal{Y} \subseteq \mathbb{R}$ are the input and output spaces, respectively. We denote the training set as $\mu = \{z_1, \ldots, z_N\}$, where $z_\zeta = (x_\zeta, y_\zeta)$, $\zeta = 1, \ldots, N$, are sampled independent and identically distributed (i.i.d.) from an unknown data distribution $\mathcal{D}$ defined on $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$. The goal of supervised learning is to learn a predictor (or hypothesis) $g(\mathbf{w}; \cdot)$, parameterized by $\mathbf{w} \in \mathbb{R}^d$ of an arbitrary finite dimension $d$, to approximate the mapping between the input variable $x \in \mathcal{X}$ and the output variable $y \in \mathcal{Y}$, based on the training set $\mu$. The function $c : \mathcal{Y} \times \mathcal{Y} \mapsto \mathbb{R}^+$ is defined to evaluate the prediction performance of hypothesis $g$. The loss of a hypothesis $g$ with respect to (w.r.t.) the example $z_\zeta = (x_\zeta, y_\zeta)$ is denoted by $L(\mathbf{w}; z_\zeta) = c(g(\mathbf{w}; x_\zeta), y_\zeta)$, which measures the effectiveness of the learned model $\mathbf{w}$. The empirical and population risks of $\mathbf{w}$ are then defined as follows:
45
+
46
+ $$\boldsymbol{L}_{\mathbf{w}}^{\mu} = \frac{1}{N} \sum_{\zeta=1}^{N} \boldsymbol{L}(\mathbf{w}; z_{\zeta}), \ \boldsymbol{L}_{\mathbf{w}} = \mathbb{E}_{z \sim D}[\boldsymbol{L}(\mathbf{w}; z)].$$
47
+
48
+ Distributed learning. Distributed learning trains a model $\mathbf{w}$ jointly using multiple workers (Shamir & Srebro, 2014). In this framework, the $j$-th worker ($j = 1, \ldots, m$) can access local training examples $\mu_j = \{z_{j,1}, \ldots, z_{j,|\mu_j|}\}$, drawn from the local empirical distribution $\mathcal{D}_j$. In this setup, the global empirical risk of $\mathbf{w}$ becomes
49
+
50
+ $$\boldsymbol{L}_{\mathbf{w}}^{\mu} = \frac{1}{m} \sum_{j=1}^{m} \boldsymbol{L}_{\mathbf{w}}^{\mu_{j}},$$
51
+
52
+ where $L_{\mathbf{w}}^{\mu_j} = \frac{1}{|\mu_j|} \sum_{\zeta=1}^{|\mu_j|} L(\mathbf{w}; z_{j,\zeta})$ denotes the local empirical risk on the $j$-th worker and $z_{j,\zeta} \in \mu_j$, where $\zeta = 1, \ldots, |\mu_j|$, represent the local training data.
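As a toy numerical check (our own illustration; the random data and squared loss below are assumptions, not the paper's setup), the global empirical risk defined above coincides with the risk computed over the pooled data whenever the local datasets have equal sizes:

```python
# Toy check: (1/m) * sum_j L_w^{mu_j} equals the risk over the pooled data
# when every worker holds the same number of examples.
import numpy as np

rng = np.random.default_rng(0)
m, n_local, d = 4, 32, 5                 # workers, examples per worker, dimension
w = rng.normal(size=d)
X = [rng.normal(size=(n_local, d)) for _ in range(m)]  # local inputs
y = [rng.normal(size=n_local) for _ in range(m)]       # local targets

def local_risk(w, Xj, yj):
    """Mean squared-error risk L_w^{mu_j} on one worker's local data."""
    return 0.5 * np.mean((Xj @ w - yj) ** 2)

global_risk = np.mean([local_risk(w, Xj, yj) for Xj, yj in zip(X, y)])
pooled_risk = local_risk(w, np.vstack(X), np.concatenate(y))
```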
53
+
54
+ Distributed centralized stochastic gradient descent (C-SGD).1 In C-SGD (Dean et al., 2012; Li et al., 2014), the
55
+
56
+ <sup>1</sup> "Centralized" refers to the fact that in C-SGD, there is a central server receiving local weights or gradients information (see Figure 2). C-SGD defined above is identical to the FedAvg algorithm (McMahan et al., 2017) under the condition that the local step is set as 1 and all local workers are selected by the server in each round (see Appendix A.1). To avoid misunderstandings, we include the term "distributed" in C-SGD to differentiate it from traditional single-worker SGD (Cauchy et al., 1847; Robbins, 1951).
57
+
58
+ de facto distributed training algorithm, there is only one centralized model $\mathbf{w}_{a}(t)$ . C-SGD updates the model by
59
+
60
+ $$\mathbf{w}_{a}(t+1) = \mathbf{w}_{a}(t) - \eta \cdot \frac{1}{m} \sum_{j=1}^{m} \overbrace{\nabla L^{\mu_{j}(t)} (\mathbf{w}_{a}(t))}^{\text{Local gradient computation}}, \quad (1)$$
61
+
62
+ where $\eta$ denotes the learning rate (step size), $\mu_j(t) = \{z_{j,1},\ldots,z_{j,|\mu_j(t)|}\}$ denotes the local training batch drawn independent and identically distributed (i.i.d.) from the local empirical data distribution $\mathcal{D}_j$ at the $t$-th iteration, and $\nabla L_{\mathbf{w}}^{\mu_j(t)} = \nabla L^{\mu_j(t)}(\mathbf{w}) = \frac{1}{|\mu_j(t)|} \sum_{\zeta(t)=1}^{|\mu_j(t)|} \nabla L(\mathbf{w};z_{j,\zeta(t)})$ stands for the local mini-batch gradient of $L$ w.r.t. the first argument $\mathbf{w}$. The total batch size of C-SGD at the $t$-th iteration is $|\mu(t)| = \sum_{j=1}^m |\mu_j(t)|$. Please refer to Appendix A.1 for more details of (distributed) centralized learning. In the next section, we will show that C-SGD equals single-worker SGD with a larger batch size.
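A minimal sketch of one C-SGD step, Equation (1), under a least-squares loss of our own choosing (an assumption for illustration): averaging the local mini-batch gradients reproduces a single-worker SGD step on the union of all local batches, provided the local batch sizes are equal.

```python
# One C-SGD iteration vs. one large-batch SGD iteration on pooled data.
import numpy as np

rng = np.random.default_rng(1)
m, b, d, eta = 8, 16, 4, 0.1             # workers, local batch, dim, step size
w = rng.normal(size=d)
X = [rng.normal(size=(b, d)) for _ in range(m)]
y = [rng.normal(size=b) for _ in range(m)]

def grad(w, Xj, yj):
    """Mini-batch gradient of the squared loss 0.5 * mean((Xw - y)^2)."""
    return Xj.T @ (Xj @ w - yj) / len(yj)

w_csgd = w - eta * np.mean([grad(w, Xj, yj) for Xj, yj in zip(X, y)], axis=0)
w_large_batch = w - eta * grad(w, np.vstack(X), np.concatenate(y))
```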
63
+
64
+ **Decentralized stochastic gradient descent (D-SGD).** In model decentralization scenarios, only peer-to-peer communication is allowed. The goal of D-SGD in the setup is to learn a consensus model $\mathbf{w}_a(t) = \frac{1}{m} \sum_{j=1}^m \mathbf{w}_j(t)$ on m workers through gossip communication, where $\mathbf{w}_j(t)$ stands for the d-dimensional local model on the j-th worker. We denote $\mathbf{P} = [\mathbf{P}_{j,k}] \in \mathbb{R}^{m \times m}$ as a doubly stochastic gossip matrix (see Definition A.1) that characterizes the underlying topology $\mathcal{G}$ . The vanilla Adapt-While-Communicate (AWC) version of the mini-batch D-SGD (Nedic & Ozdaglar, 2009; Lian et al., 2017) updates the model on the j-th worker by
65
+
66
+ $$\mathbf{w}_{j}(t+1) = \sum_{k=1}^{m} \mathbf{P}_{j,k} \mathbf{w}_{k}(t) - \eta \cdot \underbrace{\nabla \mathbf{L}^{\mu_{j}(t)} (\mathbf{w}_{j}(t))}_{\text{Local gradient computation}} . \quad (2)$$
67
+
68
+ For a more detailed background of decentralized learning, please kindly refer to Appendix A.2.
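The update (2) can be sketched in a few lines; the ring topology, toy quadratic local losses, and step size below are our own assumptions for illustration. Note that a doubly stochastic gossip matrix preserves the global average, the property the later decomposition relies on.

```python
# One D-SGD iteration, Eq. (2): gossip with neighbors, then a local step.
import numpy as np

rng = np.random.default_rng(2)
m, d, eta = 8, 4, 0.05
W = rng.normal(size=(m, d))              # row j holds the local model w_j(t)
targets = rng.normal(size=(m, d))        # local loss: 0.5 * ||w - t_j||^2

# Doubly stochastic gossip matrix of a ring: average with both neighbors.
P = np.zeros((m, m))
for j in range(m):
    for k in (j - 1, j, j + 1):
        P[j, k % m] = 1.0 / 3.0

local_grads = W - targets                # gradient of each local quadratic loss
W_next = P @ W - eta * local_grads       # one D-SGD step on every worker
```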
69
+
70
+ In this section, we establish the equivalence between D-SGD and average-direction SAM under general non-convex and non- $\beta$ -smooth objectives L. We also provide a proof sketch to offer a more intuitive understanding. The equivalence further showcases the potential superiority of learning with decentralized models. Specifically, we prove that the additional noise introduced by decentralization leads to a gradient smoothing effect, which could stabilize optimization. Additionally, we show that the sharpness regularizer in D-SGD does not decrease as the total batch size increases, which guarantees generalizability in large-batch settings.
71
+
72
+ In this subsection, we prove that D-SGD implicitly performs average-direction sharpness-aware minimization (SAM), and then discuss the implications in detail.
73
+
74
+ **Theorem 1** (D-SGD as SAM). Suppose $L \in C^4(\mathbb{R}^d)$ , i.e., L is four times continuously differentiable. The mean iterate of the global averaged model of D-SGD, defined by $\mathbf{w}_a(t) = \frac{1}{m} \sum_{j=1}^m \mathbf{w}_j(t)$ , is provided as follows:<sup>2</sup>
75
+
76
+ $$\begin{split} \mathbb{E}_{\mu(t)} \big[ \mathbf{w}_{a}(t+1) \big] = & \, \mathbf{w}_{a}(t) - \eta \underbrace{\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))} \big[ \nabla L_{\mathbf{w}_{a}(t)+\epsilon} \big]}_{asymptotic \ descent \ direction} \\ & + \underbrace{\mathcal{O} \big( \eta \, \mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))} \big\| \epsilon \big\|_{2}^{3} + \frac{\eta}{m} \sum_{j=1}^{m} \big\| \mathbf{w}_{j}(t) - \mathbf{w}_{a}(t) \big\|_{2}^{3} \big)}_{higher\text{-}order \ residual \ terms}, \end{split}$$
77
+
78
+ where $\mathbf{\Xi}(t) = \frac{1}{m} \sum_{j=1}^{m} (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t)) (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t))^{\top} \in \mathbb{R}^{d \times d}$ denotes the weight diversity matrix and $\nabla \mathbf{L}_{\mathbf{w}_{a}(t) + \epsilon}$ denotes the gradient value of $\mathbf{L}(\mathbf{w})$ at $\mathbf{w}_{a}(t) + \epsilon$ , i.e., $\nabla \mathbf{L}(\mathbf{w})|_{\mathbf{w} = \mathbf{w}_{a}(t) + \epsilon}$ .
79
+
80
+ The proof is deferred to Appendix C.2.
81
+
82
+ Asymptotic equivalence. According to Equation (5) and Proposition C.3, $\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))}[\nabla L_{\mathbf{w}_a(t)+\epsilon}]$ is of the order $\nabla L_{\mathbf{w}_a(t)} + \mathcal{O}(\frac{1}{m}\sum_{j=1}^m \|\mathbf{w}_j(t) - \mathbf{w}_a(t)\|_2^2)$ while the residuals are of the higher order $\mathcal{O}(\frac{1}{m}\sum_{j=1}^m \|\mathbf{w}_j(t) - \mathbf{w}_a(t)\|_2^3)$. Therefore, $\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))}[\nabla L_{\mathbf{w}_a(t)+\epsilon}]$ gradually dominates the optimization direction as the local models approach consensus (i.e., $\mathbf{w}_j(t) \rightarrow \mathbf{w}_a(t), \forall j$), and the descent direction of D-SGD asymptotically approaches $\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))}[\nabla L_{\mathbf{w}+\epsilon}]$. See Definition C.1 for details on the asymptotic equivalence.
83
+
84
+ **Sharpness regularization.** According to Theorem 1, D-SGD asymptotically optimizes $\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))}[\boldsymbol{L}_{\mathbf{w}+\epsilon}]$, an averaged perturbed loss in a "basin" around $\mathbf{w}$, rather than the original point loss. To further clarify, we can split the "true objective" of D-SGD near consensus into the original loss plus an average-direction sharpness term:
85
+
86
+ $$\mathbb{E}_{\mu(t)}[L_{\mathbf{w}}^{\text{D-SGD}}] \approx \underbrace{L_{\mathbf{w}}}_{original\ loss} + \underbrace{\mathbb{E}_{\epsilon \sim \mathcal{N}(0, \mathbf{\Xi}(t))}[L_{\mathbf{w}+\epsilon} - L_{\mathbf{w}}]}_{sharpness\text{-}aware\ regularizer}.$$
87
+
88
+ In D-SGD (see Equation (2)), the "virtual" global averaged model $\mathbf{w}_a(t) = \frac{1}{m} \sum_{j=1}^m \mathbf{w}_j(t)$ is primarily employed for theoretical analysis, since there is no central server to aggregate the information from local workers. However, analyzing $\mathbf{w}_a(t)$ remains practical as it characterizes the overall performance of D-SGD.
89
+
90
+ <sup>2</sup>Taking expectation over the super batch $\mu(t)$ means taking expectations over all local mini-batches $\mu_j(t)$ for all $j=1,\ldots,m$, which eliminates the randomness of all training data at the $t$-th iteration, represented by $z_{j,\zeta(t)}$ for all $\zeta(t)=1,\ldots,|\mu_j(t)|$ and $j=1,\ldots,m$.
91
+
92
+ The second term $\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))}[L_{\mathbf{w}+\epsilon} - L_{\mathbf{w}}]$ measures the weighted average sharpness at $\mathbf{w}$, which is a special form of the average-direction sharpness (Wen et al., 2023). Theorem 1 proves that D-SGD asymptotically minimizes the loss function of an average-direction SAM, which provides a direct connection between decentralized learning and centralized learning. As Theorem 1 only assumes $L$ to be four times continuously differentiable, the result is generally applicable to **non-convex non-$\beta$-smooth** problems, including deep neural network training.
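On a quadratic toy loss (our own construction, not the paper's experiments), the average-direction sharpness admits the closed form $\frac{1}{2}\operatorname{Tr}(H\,\Xi)$, so a Monte Carlo estimate of the perturbed-loss gap should match it:

```python
# For L(w) = 0.5 * w^T H w, E_{eps~N(0,Xi)}[L(w+eps) - L(w)] = 0.5 * Tr(H Xi).
import numpy as np

rng = np.random.default_rng(3)
d = 3
A = rng.normal(size=(d, d)); H = A @ A.T + np.eye(d)   # a PSD Hessian
B = rng.normal(size=(d, d)); Xi = B @ B.T / d          # weight diversity matrix
w = rng.normal(size=d)

def L(V):
    """Quadratic loss evaluated row-wise for a batch of parameter vectors."""
    return 0.5 * np.einsum("ni,ij,nj->n", V, H, V)

eps = rng.multivariate_normal(np.zeros(d), Xi, size=200_000)
mc_sharpness = np.mean(L(w + eps)) - L(w[None, :])[0]
closed_form = 0.5 * np.trace(H @ Xi)
```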
93
+
94
+ We note that the sharpness regularizer in D-SGD is directly controlled by $\Xi(t)$ , whose magnitude can be measured by the *consensus distance*, a key component characterizing the overall convergence of D-SGD (Kong et al., 2021),
95
+
96
+ $$\operatorname{Tr}\left(\mathbf{\Xi}(t)\right) = \frac{1}{m} \sum_{j=1}^{m} (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t))^{\top} (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t)). \tag{3}$$
98
+
99
+ Theorem 1 provides a theoretical explanation for the phenomena observed in (Kong et al., 2021): (1) Controlling the consensus distance below a threshold in the initial training phases makes the SAM-type term quickly dominate the residual terms, thus ensuring good optimization; (2) Sustaining a non-negligible consensus distance at the middle phases maintains the sharpness regularization effect and therefore improves generalization over centralized training.
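Equation (3) can be checked numerically; the random local models and fully connected gossip matrix below are our own toy choices. The consensus distance is exactly the trace of the weight diversity matrix $\Xi(t)$, and gossip averaging can only shrink it:

```python
# Consensus distance (1/m) * sum_j ||w_j - w_a||^2 equals Tr(Xi(t)).
import numpy as np

rng = np.random.default_rng(4)
m, d = 8, 4
W = rng.normal(size=(m, d))              # local models w_j(t)
w_a = W.mean(axis=0)                     # global averaged model
dev = W - w_a
Xi = dev.T @ dev / m                     # weight diversity matrix, d x d
consensus_distance = np.mean(np.sum(dev ** 2, axis=1))

P = np.full((m, m), 1.0 / m)             # fully connected gossip matrix
W_gossip = P @ W
dist_after = np.mean(np.sum((W_gossip - W_gossip.mean(axis=0)) ** 2, axis=1))
```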
100
+
101
+ The implicit regularization effect in D-SGD shares similar insights with interesting studies on local SGD and federated learning, which reveal that global coherence is not always optimal and that a certain degree of client drift can be benign (Gu et al., 2023; Chor et al., 2023). Specifically, Gu et al. (2023) proves that the dissimilarity between local models induces a gradient noise, which drives the iterate to drift faster towards flatter minima. Despite the shared characteristics, the consensus distance in decentralized learning is notably unique: its magnitude adjusts dynamically. Proposition C.2 shows that if the learning rate is smaller than a certain threshold, the consensus distance gradually decreases during training, indicating that the "searching space" of $\epsilon$ is relatively large in the initial training phase and then gradually shrinks. In addition, as shown in Proposition C.1, a small spectral gap (see Definition A.2) of the underlying communication topology (see Figure 2) leads to a larger consensus distance, i.e., a larger perturbation magnitude. According to Foret et al. (2021), a large perturbation radius ensures a lower generalization upper bound. However, the validation performance of D-SGD is not always satisfactory on large and sparse topologies with a small spectral gap (Kong et al., 2021), as there is a regularization-optimization trade-off in D-SGD (please refer to the discussion in Section 6).
102
+
103
+ Variational interpretation of D-SGD. In the variational formulation (Zellner, 1988), $\min_{\mathbf{w}} \mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))}[L_{\mathbf{w}+\epsilon}]$ is
104
+
105
+ referred to as the variational optimization (Rockafellar & Wets, 2009). Theorem 1 shows that D-SGD not only optimizes $\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))}[L_{\mathbf{w}+\epsilon}]$ with respect to $\mathbf{w}$, but also inherently estimates uncertainty for free: the weight diversity matrix $\Xi(t)$ (i.e., the empirical covariance matrix of $\mathbf{w}_j(t)$) implicitly estimates $\Sigma_q$, the intractable posterior covariance,
106
+
107
+ $$\boldsymbol{\Xi}(t) = \frac{1}{m} \sum_{j=1}^{m} (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t)) (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t))^{\top} \approx \Sigma_{q}.$$
108
+
109
+ Note that $\Xi(t)$ is "used" implicitly along with the update of local models without any additional computational budget. The free uncertainty estimation mechanism indicates the uniqueness of the noise from decentralization, which depends both on the local loss landscape and the posterior.
110
+
111
+ **Comparison of D-SGD and vanilla SAM.** The loss function of vanilla SAM (Foret et al., 2021) can be written in the following form:
112
+
113
+ $$\boldsymbol{L}^{\text{SAM}}(\mathbf{w}, \boldsymbol{\Sigma}) = \max_{\boldsymbol{\epsilon} \in \mathbb{R}^d | \boldsymbol{\epsilon}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{\epsilon} \leq d} [\boldsymbol{L}(\mathbf{w} + \boldsymbol{\epsilon}) - \boldsymbol{L}(\mathbf{w})] + \boldsymbol{L}(\mathbf{w}),$$
114
+
115
+ where the covariance matrix $\Sigma$ is set as $\frac{\rho^2}{d}I$ , with $\rho$ being the perturbation radius and I representing the identity matrix. Interestingly, $\rho$ in SAM plays a similar role as $\Xi(t)$ in D-SGD. However, in the SAM that D-SGD approximates, the covariance matrix $\Xi(t)$ is learned adaptively during training. Moreover, the iterate of D-SGD involves higher-order residuals, whereas vanilla SAM does not. The third difference is that vanilla SAM minimizes a worst-case sharpness $\max_{\epsilon^T \Sigma^{-1} \epsilon \le d} L(\mathbf{w} + \epsilon)$ while D-SGD implicitly minimizes an average-direction sharpness (or a Bayes loss). However, the loss of vanilla SAM always upper bounds the Bayes loss (Möllenhoff & Khan, 2023), and they are close to each other in high dimensions where samples from $\mathcal{N}(\mathbf{w}, \Sigma)$ concentrate around the ellipsoid $(\mathbf{w} - \epsilon)^{\top} \Sigma^{-1} (\mathbf{w} - \epsilon) = d$ (Vershynin, 2018). In addition, the sharpness regularization effect of D-SGD incurs zero additional computational overhead compared to SAM, which requires performing backpropagation twice at each iteration.
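On a quadratic toy loss (our own construction), both sharpness notions admit closed forms, which makes the ordering above concrete: the worst-case sharpness over the ellipsoid $\epsilon^\top \Sigma^{-1}\epsilon \le d$ with $\Sigma = \frac{\rho^2}{d}I$ (i.e., $\|\epsilon\| \le \rho$) is $\frac{1}{2}\rho^2\lambda_{\max}(H)$, while the average-direction sharpness is $\frac{1}{2}\operatorname{Tr}(H\Sigma)$, and the former always upper-bounds the latter.

```python
# Worst-case (vanilla SAM) vs. average-direction sharpness on 0.5 * w^T H w.
import numpy as np

rng = np.random.default_rng(5)
d, rho = 10, 0.1
A = rng.normal(size=(d, d))
H = A @ A.T                              # PSD Hessian of a quadratic loss
Sigma = (rho ** 2 / d) * np.eye(d)       # SAM's isotropic covariance

worst_case_sharpness = 0.5 * rho ** 2 * np.linalg.eigvalsh(H)[-1]
average_dir_sharpness = 0.5 * np.trace(H @ Sigma)
```

The gap between the two quantities is exactly the gap between $\lambda_{\max}(H)$ and the mean eigenvalue $\operatorname{Tr}(H)/d$, which narrows when the Hessian spectrum is flat.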
116
+
117
+ Comparison with related works. Initial efforts have viewed D-SGD as a centralized algorithm penalizing the weight norm $\|\mathbf{P}^{-\frac{1}{2}}\mathbf{W}\|_{\mathbf{I}-\mathbf{P}}^2$ , a quantity similar to the consensus distance, where $\mathbf{W} = [\mathbf{w}_1, \cdots, \mathbf{w}_m]^T \in \mathbb{R}^{m \times d}$ collects m local models (Yuan et al., 2021; Gurbuzbalaban et al., 2022). However, little effort has been made so far to analyze the "interplay" between weight diversity measures, such as the consensus distance, and the local geometry of the D-SGD iterate. Our work fills this gap by showing the flatness-seeking behavior of the Hessian-consensus dependent noise in D-SGD and then exhibiting the asymptotic equivalence between D-SGD and SAM.
118
+
119
+ To provide stronger intuition, we summarize the proof sketch of Theorem 1 and explain the motivation behind our proof idea. The full proof is deferred to Appendix C.
120
+
121
+ Directly analyzing the dynamics of the diffusion-like decentralized systems where information is gradually spread across the network is non-trivial. Instead, we focus on $\mathbf{w}_a(t) = \frac{1}{m} \sum_{j=1}^m \mathbf{w}_j(t)$ , the global averaged model of D-SGD, whose update can be written as follows,
122
+
123
+ $$\mathbf{w}_{a}(t+1) = \mathbf{w}_{a}(t) - \eta \left[ \nabla L_{\mathbf{w}_{a}(t)}^{\mu(t)} + \underbrace{\frac{1}{m} \sum_{j=1}^{m} \left( \nabla L_{\mathbf{w}_{j}(t)}^{\mu_{j}(t)} - \nabla L_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)} \right)}_{\text{gradient diversity among local workers}} \right], \quad (4)$$
124
+
125
+ as
126
+ $$\frac{1}{m}\sum_{j=1}^{m}\sum_{k=1}^{m}\mathbf{P}_{j,k}\mathbf{w}_{k}(t)=\frac{1}{m}\sum_{k=1}^{m}\mathbf{w}_{k}(t)=\mathbf{w}_{a}(t).$$
128
+
129
+ Equation (4) shows that decentralization introduces an additional noise, which characterizes the gradient diversity between the global averaged model $\mathbf{w}_{a}(t)$ and the local models $\mathbf{w}_{j}(t)$ for $j=1,\ldots,m$ , compared with its centralized counterpart.<sup>3</sup> Therefore, we note that
130
+
131
+ analyzing the gradient diversity is the major challenge of decentralized (gradient-based) learning.
132
+
133
+ One can deduce directly from Equation (4) that distributed centralized SGD, whose gradient diversity remains zero, equals the standard single-worker mini-batch SGD with an equivalently large batch size.
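The decomposition (4) is an algebraic identity, so it can be verified on a toy problem; the quartic local losses and uniform gossip matrix below are our own choices, picked so that the gradient diversity is nonzero:

```python
# Averaging the D-SGD update (2) over workers recovers Eq. (4):
# w_a(t+1) = w_a(t) - eta * (avg gradient at w_a + gradient diversity).
import numpy as np

rng = np.random.default_rng(6)
m, d, eta = 6, 4, 0.05
W = rng.normal(size=(m, d))
targets = rng.normal(size=(m, d))
P = np.full((m, m), 1.0 / m)             # a doubly stochastic gossip matrix

def g(v, j):
    """Gradient of the (non-quadratic) local loss 0.25 * sum((v - t_j)^4)."""
    return (v - targets[j]) ** 3

W_next = P @ W - eta * np.stack([g(W[j], j) for j in range(m)])
w_a = W.mean(axis=0)
avg_grad_at_wa = np.mean([g(w_a, j) for j in range(m)], axis=0)
diversity = np.mean([g(W[j], j) - g(w_a, j) for j in range(m)], axis=0)
rhs = w_a - eta * (avg_grad_at_wa + diversity)
```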
134
+
135
+ Insight. We also note that the gradient diversity equals zero on quadratic objectives L (see Proposition C.4). Therefore, the quadratic approximation of the loss function L (Zhu et al., 2019b; Ibayashi & Imaizumi, 2021; Liu et al., 2021; 2022c) might be insufficient to characterize how decentralization impacts the training dynamics of D-SGD, especially on neural network loss landscapes, where the quadratic approximation may not be accurate even around minima (Ma et al., 2022). To better understand the dynamics of D-SGD on complex landscapes, it is crucial to consider higher-order geometric information of the objective L. In the following, we therefore approximate the gradient diversity using a Taylor expansion, instead of analyzing it directly on the non-convex non-$\beta$-smooth loss L, which would be highly non-trivial.
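The quadratic case can be checked directly; the shared Hessian and random per-worker targets below are our own toy construction under the assumption (matching Proposition C.4's setting as we read it) that all local losses share one quadratic form. The diversity then collapses to $H$ times the mean deviation from consensus, which is identically zero:

```python
# On shared quadratics 0.5 * (v - t_j)^T H (v - t_j), the gradient diversity
# is H @ mean_j(w_j - w_a) = H @ 0 = 0.
import numpy as np

rng = np.random.default_rng(7)
m, d = 6, 4
A = rng.normal(size=(d, d))
H = A @ A.T                              # shared Hessian of the local losses
W = rng.normal(size=(m, d))
targets = rng.normal(size=(m, d))
w_a = W.mean(axis=0)

def g(v, j):
    """Gradient of the local quadratic loss on worker j."""
    return H @ (v - targets[j])

diversity = np.mean([g(W[j], j) - g(w_a, j) for j in range(m)], axis=0)
```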
136
+
137
+ Since $L \in C^4(\mathbb{R}^d)$ , we can perform a second-order Taylor expansion on the gradient diversity around $\mathbf{w}_a(t)$ :
138
+
139
+ $$\frac{1}{m} \sum_{j=1}^{m} (\nabla \boldsymbol{L}_{\mathbf{w}_{j}(t)}^{\mu_{j}(t)} - \nabla \boldsymbol{L}_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)}) = \frac{1}{m} \sum_{j=1}^{m} \boldsymbol{H}_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)} \cdot (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t))$$
140
+ $$+ \frac{1}{2m} \sum_{j=1}^{m} \boldsymbol{T}_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)} \otimes [(\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t))(\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t))^{\top}],$$
141
+
142
+ plus residual terms $\mathcal{O}(\frac{1}{m}\sum_{j=1}^{m}\|\mathbf{w}_{j}(t)-\mathbf{w}_{a}(t)\|_{2}^{3})$. Here $H_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)} \triangleq \frac{1}{|\mu_{j}(t)|}\sum_{\zeta(t)=1}^{|\mu_{j}(t)|}H(\mathbf{w}_{a}(t);z_{j,\zeta(t)})$ denotes the empirical Hessian matrix evaluated at $\mathbf{w}_{a}(t)$ and $T_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)} \triangleq \frac{1}{|\mu_{j}(t)|}\sum_{\zeta(t)=1}^{|\mu_{j}(t)|}T(\mathbf{w}_{a}(t);z_{j,\zeta(t)})$ stands for the empirical third-order partial derivative tensor at $\mathbf{w}_{a}(t)$, where $\mu_{j}(t)$ and $z_{j,\zeta(t)}$ follow the notation in Equation (1).
143
+
144
+ As $\mathbf{w}_a(t)$ and the local models $\mathbf{w}_j(t)$ $(j=1,\ldots,m)$ are only correlated with the super batches before the $t$-th iteration (see Equation (2)), taking expectation over $\mu(t)$ provides
145
+
146
+ $$\mathbb{E}_{\mu(t)} \left[ \frac{1}{m} \sum_{j=1}^{m} (\nabla L_{\mathbf{w}_{j}(t)}^{\mu_{j}(t)} - \nabla L_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)}) \right]$$
147
+
148
+ $$= H_{\mathbf{w}_{a}(t)} \cdot \underbrace{\frac{1}{m} \sum_{j=1}^{m} (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t))}_{=0}$$
149
+
150
+ $$+ \frac{1}{2} T_{\mathbf{w}_{a}(t)} \otimes \left[ \frac{1}{m} \sum_{j=1}^{m} (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t)) (\mathbf{w}_{j}(t) - \mathbf{w}_{a}(t))^{\top} \right],$$
151
+
152
+ plus residual terms $\mathcal{O}(\frac{1}{m}\sum_{j=1}^{m}\|\mathbf{w}_{j}(t)-\mathbf{w}_{a}(t)\|_{2}^{3})$ , where $H_{\mathbf{w}_{a}(t)}=\mathbb{E}_{\mu_{j}(t)}[H_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)}]$ and $T_{\mathbf{w}_{a}(t)}=\mathbb{E}_{\mu_{j}(t)}[T_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)}]$ .
153
+
154
+ Then the *i*-th entry of the expected gradient diversity can be written in the following form:
155
+
156
+ $$\mathbb{E}_{\mu(t)} \left[ \frac{1}{m} \sum_{j=1}^{m} \left( \partial_{i} \boldsymbol{L}_{\mathbf{w}_{j}(t)}^{\mu_{j}(t)} - \partial_{i} \boldsymbol{L}_{\mathbf{w}_{a}(t)}^{\mu_{j}(t)} \right) \right]$$
157
+
158
+ $$= \frac{1}{2} \underbrace{\sum_{l,s} \partial_{ils}^{3} \boldsymbol{L}_{\mathbf{w}_{a}(t)} \frac{1}{m} \sum_{j=1}^{m} \left( \mathbf{w}_{j}(t) - \mathbf{w}_{a}(t) \right)_{l} \left( \mathbf{w}_{j}(t) - \mathbf{w}_{a}(t) \right)_{s}}_{= \partial_{i} \sum_{l,s} \partial_{ls}^{2} \boldsymbol{L}_{\mathbf{w}} \left( \frac{1}{m} \sum_{j=1}^{m} \left( \mathbf{w}_{j}(t) - \mathbf{w}_{a}(t) \right)_{l} \left( \mathbf{w}_{j}(t) - \mathbf{w}_{a}(t) \right)_{s} \right) \big|_{\mathbf{w} = \mathbf{w}_{a}(t)}}$$
159
+
160
+ $$+ \mathcal{O}\left( \frac{1}{m} \sum_{j=1}^{m} \left\| \mathbf{w}_{j}(t) - \mathbf{w}_{a}(t) \right\|_{2}^{3} \right), \tag{5}$$
161
+
162
+ where $(\mathbf{w}_j(t) - \mathbf{w}_a(t))_l$ denotes the $l$-th entry of the vector $\mathbf{w}_j(t) - \mathbf{w}_a(t)$. The equality in the brace is due to Clairaut's theorem (Rudin et al., 1976). Details are deferred to Appendix C.2. The right-hand side (RHS) of this equality, before evaluating at $\mathbf{w} = \mathbf{w}_a(t)$, is the $i$-th partial derivative of
163
+
164
+ $$\operatorname{Tr}(\nabla^{2} \boldsymbol{L}_{\mathbf{w}} \boldsymbol{\Xi}(t))$$
165
+
166
+ <sup>3</sup>We note that the concept of gradient diversity is distinct from that in (Yin et al., 2018), as the latter quantifies the variation of the gradients of one single model on different data points. The gradient diversity in our paper shares similarities with the gradient bias of local workers in the federated learning (FL) literature (Wang et al., 2020; Reddi et al., 2021; Wang et al., 2022).
167
+
168
+ $$= \operatorname{Tr}(\nabla^{2} \boldsymbol{L}_{\mathbf{w}} \mathbb{E}_{\epsilon \sim \mathcal{N}(0, \boldsymbol{\Xi}(t))} [\epsilon \epsilon^{T}])$$
169
+
170
+ $$= \mathbb{E}_{\epsilon \sim \mathcal{N}(0, \boldsymbol{\Xi}(t))} [\epsilon^{T} \nabla^{2} \boldsymbol{L}_{\mathbf{w}} \epsilon]$$
171
+
172
+ $$= \mathbb{E}_{\epsilon \sim \mathcal{N}(0, \boldsymbol{\Xi}(t))} \big[ \underbrace{2(\boldsymbol{L}_{\mathbf{w}+\epsilon} - \boldsymbol{L}_{\mathbf{w}})}_{\text{average-direction sharpness at } \mathbf{w}} + \mathcal{O}(\|\epsilon\|_{2}^{3}) \big]. \tag{6}$$
173
+
174
+ The proof is completed by combining Equation (4) and Equation (6).
175
+
176
+ The proof sketch above outlines the high-level intuition of the flatness-seeking behavior of D-SGD.
177
+
178
+ **High-level intuition**: Model decentralization introduces gradient diversity among local models (see Equation (4)), which induces a unique Hessian-consensus dependent noise. This noise directs the optimization trajectory of D-SGD towards regions with lower average-direction sharpness $\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\Xi(t))}[L_{\mathbf{w}+\epsilon}-L_{\mathbf{w}}].$
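To make the objects in this intuition concrete, the following sketch performs one D-SGD step on a ring topology and forms the consensus covariance $\Xi(t) = \frac{1}{m}\sum_j (\mathbf{w}_j - \mathbf{w}_a)(\mathbf{w}_j - \mathbf{w}_a)^T$. The quadratic local losses and step size are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# One D-SGD update: w_i <- sum_j P_ij w_j - eta * g_i, on a ring topology.
rng = np.random.default_rng(1)
m, d, eta = 8, 4, 0.1

P = np.zeros((m, m))               # doubly stochastic ring gossip matrix
for i in range(m):
    P[i, i] = P[i, (i - 1) % m] = P[i, (i + 1) % m] = 1.0 / 3.0

W = rng.normal(size=(m, d))        # local models, one row per worker
targets = rng.normal(size=(m, d))  # each worker's local minimizer (toy)
grads = W - targets                # gradient of 0.5 * ||w - target||^2

W = P @ W - eta * grads            # D-SGD: gossip averaging + local SGD step
w_avg = W.mean(axis=0)             # averaged model w_a(t)

dev = W - w_avg                    # consensus deviations w_j(t) - w_a(t)
Xi = dev.T @ dev / m               # consensus covariance Xi(t)
print(np.trace(Xi), np.allclose(Xi, Xi.T))  # Xi(t) is symmetric PSD
```

With heterogeneous local gradients the deviations stay nonzero, so $\Xi(t)$ keeps injecting the Hessian-consensus dependent noise described above.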
179
+
180
+ Previous literature has shown the gradient stabilization effect of isotropic Gaussian noise injection (Bisla et al., 2022; Liu et al., 2022b). According to Theorem 1, decentralization can be interpreted as the injection of Gaussian noise into the gradient. A natural question is whether the noise introduced by decentralization, which is not necessarily isotropic, also smooths the gradient. The following corollary answers this question.
181
+
182
+ Corollary 2 (Gradient smoothing effect). Suppose the vanilla loss function $L_{\mathbf{w}}$ is $\alpha$-Lipschitz continuous and the gradient $\nabla L_{\mathbf{w}}$ is $\beta$-Lipschitz continuous. Then the gradient $\nabla L_{\mathbf{w}+\epsilon}$, where $\epsilon \sim \mathcal{N}(0,\Xi(t))$, is $\min\left\{\frac{\sqrt{2}\alpha}{\sigma_{\min}},\beta\right\}$-Lipschitz continuous, where $\sigma_{\min}=\lambda_{\min}(\Xi(t))$ denotes the smallest eigenvalue of $\Xi(t)$.
183
+
184
+ Corollary 2 implies that if the noise magnitude satisfies $\sigma_{\min} \geq \frac{\sqrt{2}\alpha}{\beta}$, then the noise $\epsilon$ reduces the Lipschitz constant of the gradient, thereby smoothing it. The gradient smoothing effect exhibited by D-SGD aligns with two empirical observations in large-batch settings: (1) the training curves of D-SGD are notably more stable than those of C-SGD, and (2) D-SGD can converge with larger learning rates, as reported in (Zhang et al., 2021). The proof is deferred to Appendix C.3. Further research directions include dynamical stability analysis (Kim et al., 2023; Wu & Su, 2023) of D-SGD.
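Gradient smoothing can be illustrated on the toy loss $L(w) = \alpha|w|$, whose gradient $\alpha\,\mathrm{sign}(w)$ is not Lipschitz at 0 but becomes Lipschitz after Gaussian smoothing. The closed-form smoothed gradient below is specific to this one-dimensional example:

```python
import math

# For L(w) = alpha * |w| and eps ~ N(0, sigma^2), the smoothed gradient is
# E[alpha * sign(w + eps)] = alpha * erf(w / (sigma * sqrt(2))), whose
# Lipschitz constant alpha * sqrt(2/pi) / sigma is below Corollary 2's bound.
alpha, sigma = 1.0, 0.5

def smoothed_grad(w):
    return alpha * math.erf(w / (sigma * math.sqrt(2)))

# Estimate the Lipschitz constant of the smoothed gradient on a grid.
step = 1e-3
grid = [i * step for i in range(-4000, 4000)]
slopes = [abs(smoothed_grad(b) - smoothed_grad(a)) / step
          for a, b in zip(grid[:-1], grid[1:])]
lip_est = max(slopes)
bound = math.sqrt(2) * alpha / sigma   # sqrt(2)*alpha/sigma_min with sigma_min = sigma
print(lip_est, bound)                  # lip_est ≈ 1.60 < bound ≈ 2.83
```

The original gradient has no finite Lipschitz constant here, so the noise strictly smooths it, consistent with the corollary.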
185
+
186
+ In practice, data and model decentralization ordinarily imply large total batch sizes, as a massive number of workers are involved in many practical scenarios. Large-batch training can enhance the utilization of supercomputing facilities and further speed up the entire training process. Thus, studying the large-batch setting is crucial for fully understanding the application of D-SGD.
189
+
190
+ Despite its importance, theoretical understanding of the generalization of large-batch decentralized training remains an open problem.<sup>4</sup> This subsection examines the implicit regularization of D-SGD with respect to (w.r.t.) the total batch size $|\mu|$, and compares it to C-SGD. To investigate the impact of $|\mu|$, we analyze the effect of the gradient variance, in addition to the gradient expectation studied in Subsection 4.1.
191
+
192
+ **Theorem 3.** Let $B = |\mu|$ denote the total batch size. With probability greater than $1 - \mathcal{O}(\frac{B}{(N-B)\eta^2})$, D-SGD implicitly minimizes the following objective function:<sup>5</sup>
193
+
194
+ $$\begin{split} \boldsymbol{L}_{\mathbf{w}}^{D\text{-SGD}} = & \boldsymbol{L}_{\mathbf{w}}^{\mu} + \underbrace{\operatorname{Tr}(\boldsymbol{H}_{\mathbf{w}}^{\mu}\boldsymbol{\Xi}(t)) + \frac{\eta}{4}\operatorname{Tr}((\boldsymbol{H}_{\mathbf{w}}^{\mu})^{2}\boldsymbol{\Xi}(t))}_{\text{batch size independent sharpness regularizer}} \\ + & \kappa \cdot \frac{1}{N} \sum_{j=1}^{N} \left[ \|\nabla \boldsymbol{L}_{\mathbf{w}}^{j} - \nabla \boldsymbol{L}_{\mathbf{w}}^{\mu}\|_{2}^{2} + \operatorname{Tr}((\boldsymbol{H}_{\mathbf{w}}^{j} - \boldsymbol{H}_{\mathbf{w}}^{\mu})^{2}\boldsymbol{\Xi}(t)) \right] \end{split}$$
195
+
196
+ $$+\frac{\eta}{4}\|\nabla \boldsymbol{L}_{\mathbf{w}}^{\mu}\|_{2}^{2}+\mathcal{R}^{A}+\mathcal{O}(\eta^{2}),$$
197
+
198
+ where $\kappa = \frac{\eta}{B} \cdot \frac{N-B}{N-1}$, and $\mathcal{R}^A$ absorbs all higher-order residuals. The empirical gradient $\nabla \boldsymbol{L}^{\mu}_{\mathbf{w}}$ on the super-batch $\mu$ is the average of the one-sample gradients $\nabla \boldsymbol{L}_{\mathbf{w}}^{j}$ ($j = 1,\ldots,m$). Similarly, the empirical Hessian $\boldsymbol{H}_{\mathbf{w}}^{\mu}$ is the average of $\mathbf{H}_{\mathbf{w}}^{j} = \mathbf{H}(\mathbf{w}; z_{j})$ ($j = 1, \ldots, m$).
199
+
200
+ A corresponding implicit regularization of C-SGD (and SGD) is established in Lemma C.6, which demonstrates that C-SGD has an implicit gradient variance reduction mechanism that improves generalization (see Figure C.1). However, as the total batch size B approaches the total training sample size N, the regularization term diminishes rapidly, even when the learning rate scales with the total batch size, since the factor $\frac{N-B}{N-1}$ in $\kappa$ converges to 0.
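The decay of the batch-size-dependent factor in $\kappa$ can be seen directly; $N$ and $s$ below are hypothetical illustrative values, while the $\operatorname{Tr}(\boldsymbol{H}\boldsymbol{\Xi})$ terms of Theorem 3 carry no such factor:

```python
# kappa = (eta/B) * (N-B)/(N-1) under the linear scaling rule eta/B = s.
# As B -> N the factor (N-B)/(N-1) vanishes, killing C-SGD's regularizer.
N, s = 50_000, 1e-4
kappas = {B: s * (N - B) / (N - 1) for B in (128, 1024, 8192, 32768, 49999)}
for B, kappa in kappas.items():
    print(B, kappa)  # kappa shrinks monotonically toward 0 as B grows
```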
201
+
202
+ On the contrary, Theorem 3 proves that the sharpness regularization terms in D-SGD do not decrease as the total batch size increases, unlike in C-SGD, which theoretically justifies the potential superior generalizability of D-SGD in large-batch settings. The underlying intuition is that decentralization introduces additional noise, which compensates for the noise required for D-SGD to generalize well in large-batch scenarios. The proof is included in Appendix C.4.
203
+
204
+ <sup>4</sup>Please refer to Appendix A.4 for the detailed discussion on the generalization of large-batch training.
205
+
206
+ <sup>5</sup>If we apply the linear scaling rule (LSR) when increasing the total batch size B (i.e., $\frac{\eta}{B}=s$ where s is a constant), the probability becomes $1-\mathcal{O}(\frac{1}{(N-B)Bs^2})\approx 1$ and thus remains informative.
207
+
208
+ ![](_page_7_Figure_1.jpeg)
209
+
210
+ Figure 3. Minima 3D visualization of ResNet-18 trained on CIFAR-10 using C-SGD and D-SGD (ring topology).
211
+
212
+ This section empirically validates our theory. We introduce the experimental setup and then study how decentralization impacts minima flatness via local landscape visualization.
213
+
214
+ **Dataset and architecture.** Decentralized learning is simulated in a dataset-centric setup by uniformly partitioning data among multiple workers (GPUs) to accelerate the training process. Vanilla D-SGD with various commonly used topologies (see Figure 2) and C-SGD are employed to train image classifiers on CIFAR-10 (Krizhevsky et al., 2009) and Tiny ImageNet (Le & Yang, 2015) with AlexNet (Krizhevsky et al., 2017), ResNet-18 (He et al., 2016b) and DenseNet-121 (Huang et al., 2017). Detailed implementation settings are included in Appendix B.<sup>6</sup>
215
+
216
+ On CIFAR-10, we use a deterministic topology. On Tiny ImageNet, we use a deterministic topology with random neighbor shuffling, which can increase the effective spectral gap of the underlying communication matrix (Zhang et al., 2020). We adjust the effective spectral gap by introducing random shuffling to accommodate the "optimal temperature" of models on different datasets. As demonstrated in Figure 1, Figure B.1 and Figure B.2, D-SGD could outperform C-SGD in terms of generalizability in large-batch settings.<sup>7</sup> We also note that the gap in generalizability between D-SGD and C-SGD in a large-batch scenario is larger on the CIFAR-10 dataset, which we attribute to the smaller $\kappa$ value (see Theorem 3). To further support our claim that D-SGD favors flatter minima than C-SGD in large-batch scenarios, we plot the minima learned by both algorithms using the loss landscape visualization tool of Li et al. (2018). The resulting plots are shown in Figure 3, along with additional plots in Appendix B.
217
+
218
+ These figures demonstrate that D-SGD could learn flatter minima than C-SGD in large-batch settings, and this difference in flatness becomes larger as the total batch size increases. These observations are consistent with the claims made in Theorem 1 and Theorem 3 that D-SGD favors flatter minima than its centralized counterpart, especially in large-batch scenarios. Future work includes visualizing the whole optimization trajectories of D-SGD.
219
+
220
+ <sup>7</sup>This is due to the fact that the training accuracy of D-SGD is almost surely 100% in all settings, making validation accuracy a reliable measure of generalizability.
221
+
222
+ Note that it is not a rigorous claim that decentralization will always improve generalization in large-batch settings. The experiments reveal the generalization potential of decentralized learning.
223
+
224
+ <sup>6</sup>In our experiments, the ImageNet pre-trained models are used as initializations to achieve better final validation performance. The conclusions still hold for training from scratch.
2306.14672/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-05-18T23:56:18.896Z" agent="5.0 (X11)" etag="6CaHYhQBgAlqfmqcU8St" version="16.6.7" type="google"><diagram id="mIY-lCPwdl4jeloxe-e-" name="Page-1">5VhNc5swEP01PqZjEPjjmNhuemhmMuPptDmqsAYlMssIOYb8+kpIGAOO7baxnbQXm33Srti3q12JHpks81tB0/gOQ+A9tx/mPTLtua5D/L7600hhkCHxDBAJFtpJNTBnL2BBqxetWAhZY6JE5JKlTTDAJIFANjAqBK6b0xbIm6umNIIOMA8o76LfWShjg47cYY1/ARbF1crOYGxGlrSabD3JYhriegsisx6ZCERpnpb5BLgmr+LF6H1+ZXTzYgISeYzCk/NY5HeQeG56Ox2Tb96smF5ZNzJZVA5DqPy3IgoZY4QJ5bMavRG4SkLQVvtKqud8RUwV6CjwEaQsbDDpSqKCYrnkdhRyJn9o9U9D34oPW0PT3JouhaISEikKo+VX4sP2WK1WSpWecVB79SpvFspwJQLYQ1aVf1REIPfMI5voqm0BuAT1PkpPAKeSPTffg9r8jDbz6hCqBxvF34ioM7hsSN1/MqTuRUNq7D5TvrIrTZ4jBeBC/6xkoNbRFccU31bwm6Fdx0zCPKUlKWtVspthtAuBkJDv57bLhVVwR7be2YLvelZe1+XTqbB4q3RWdfLN6XP30ZeqMn21ZpkmcB7TlIP2BRYL1Uyy90fmuEvmaAeX/qm4JPu4rPkrxy9PHxkeps8/J33jD0Wf5x3eymelrzoUfsSt3CaTODvqItnBJjkZm3v7SiowhSRjUvuQBSjeT4vx+4fzcnjWvOzu6w5JkITX+j6ipIDTLGNBk5cmifV56kMdpywPB49TTn93gI8sLEcfu+wK98iUJ/VWHLfyZ9BKDOOn1dq+UrUM+e0C6bQMGSI6hsok27j9Fyebbj38P/POOfYcf9nE89vnkdEfJt7AbRlqN4lTJ163ddzrfiHKbkE6SahqvWymHeUsSnROqliDUIDuCCyg/NoOLFkYmrspZOyF/ixN6bRJtVOlm/5Nz59qW+o6mpmbqTadSYFPMEGOyu40wURbWTDOW9AbdCLi+M04uMd1otPddbqXna3AeP9NYLxB/1yBUWL9Ec9ssPpTKJn9Ag==</diagram></mxfile>
2306.14672/main_diagram/main_diagram.pdf ADDED
Binary file (9.92 kB). View file
 
2306.14672/paper_text/intro_method.md ADDED
@@ -0,0 +1,175 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Recent years have seen an increase in the demand for transparency on machine learning-based decisions. In safety-sensitive settings particularly, practitioners need to understand how a model reasons in order to ensure its safe deployment in the future. In many scenarios, their attention focuses on assessing the importance of a specific predictor variable, such as a treatment in a clinical model, or ethnicity with regard to model fairness. Ultimately, as humans naturally take a *causal* approach to model explainability, users may want to understand how the treatment<sup>1</sup> causally impacts the outcome, i.e. through what mechanisms, and whether this matches their prior assumptions on the causal relationships in the data. Here, we focus on the following question: in the presence of a general black-box ML model, how can we compute feature attributions according to the causal beliefs encapsulated by a posited directed acyclic graph (DAG)? We provide a framework for locally explaining a treatment's effect in such settings, with the following goals: *reliability*, *safety*, *interpretability* and *high resolution*.
5
+
6
+ Proceedings of the 40<sup>th</sup> International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
8
+
9
+ **Reliability, safety, interpretability in XAI** An XAI model should aim at generating explanations that are *reliable*, *safe* and *interpretable*. Reliability, also known as being "true to the model", implies that the explanations do reveal the functional dependence of the model and are robust to distributional shifts. Safety relates to the ability to protect the framework from hazard, in particular attempts to fool models with deceptive data, also known as adversarial attacks. As defined in (Miller, 2019), interpretability is the degree to which a human can understand the cause of a decision.
10
+
11
+ **High resolution for safety-critical XAI** In addition to the XAI goals cited above, resolution is often necessary when explaining a model to ensure its fairness. Following (Chiappa, 2019), we illustrate our point using a simple causal structure inspired by the Berkeley admissions dataset. Consider a predictive model for college entry with three features: sex, exam results and department. The sensitive attribute, sex, potentially impacts the predicted admission through both fair and unfair causal pathways. Sex may indirectly and fairly impact admission due to some individuals applying to more competitive departments. However, there may also be an effect through an unfair direct path, representing prejudice on the part of the admissions officer. This motivates *path-specific* measures of feature importance, instead of an overall single score that groups all paths together.
12
+
13
+ <sup>\*</sup>Equal contribution <sup>1</sup>Department of Statistics, University of Oxford, Oxford, UK <sup>2</sup>Department of Statistical Science, University College London, London, UK <sup>3</sup>The Alan Turing Institute, London, UK. Correspondence to: Lucile Ter-Minassian < lucile.ter-minassian@stats.ox.ac.uk>, Oscar Clivio < oscar.clivio@stats.ox.ac.uk>.
14
+
15
+ <sup>1</sup>We use the example of "treatment" in clinical models as illustrative of a central binary predictor variable.
16
+
17
+ We introduce a method for explaining the local effect of a binary treatment under an assumed causal graph. We assume that (i) the treatment is an ancestor of the outcome in the directed acyclic graph (DAG) and that (ii) the DAG is compatible with the data, i.e. that it respects all conditional independences that could be found by running conditional independence tests in the data. The posited DAG may come from prior domain knowledge, or indeed be learnt from data and represents the user's beliefs. Our aim is not to understand the model's "internal DAG", and thus we do not assume that the posited DAG corresponds to the DAG of the underlying model.
18
+
19
+ We show how augmenting the predictive model with such a causal DAG supports a novel targeted extension of SHAP values, allowing for the decomposition of the black-box treatment effect into interpretable path-wise Shapley effects. We provide stand-alone theoretical results for decomposing the original on-manifold Shapley value (i.e. Shapley with a conditional reference distribution) of a treatment feature into path-wise local causal effects. We claim that our method achieves the four goals presented above: reliability, safety, interpretability and high resolution. Our contributions are as follows:
20
+
21
+ - We introduce Path-Wise Shapley (PWSHAP) effects, an extension of on-manifold Shapley values for locally explaining treatment effects under a causal DAG. Robustness to adversarial attacks (and thus safety) is guaranteed by the adoption of a conditional reference distribution. Reliability is ensured by the acknowledgment of the causal structure. As such, PWSHAP reconciles both safety and reliability.
27
+ - We show how our method can be used as a non-parametric alternative to mediation and bias analysis. We further show how PWSHAP can be used for fairness studies when the causal graph involves a mixture of fair/unfair paths, and under randomised treatment to assess effect modification (also referred to as moderation). We further show that Causal Shapley (Heskes et al., 2020), the closest method to ours, does not acknowledge moderation.
28
+ - We establish error bounds (i) from the outcome model to the Shapley values and PWSHAP effects (ii) from the Shapley values and treatment model (referred to as propensity score) to the PWSHAP effects.
29
+
30
+ To the best of our knowledge, we are the first to interrogate the link between the Shapley feature importance of a treatment and the standard notion of treatment effect in the causal inference literature, defined as the expected difference between potential outcomes under the two treatments.
31
+
32
+ Shapley values are a local feature attribution method. They quantify the importances of the features $\{1, \ldots, m\}$ of a complex machine learning model $f: \mathbb{R}^m \to \mathbb{R}^l$ at an instance $x \in \mathbb{R}^m$, given only black-box access to the model. The local prediction f(x) is formulated as a sum of individual feature contributions: $f(x) = \phi_0^f(x) + \sum_{j=1}^m \phi_j^f(x)$, where $\phi_j^f(x)$ is the contribution of feature j to f(x) and $\phi_0^f(x) = \mathbb{E}[f(X)]$ is the averaged prediction, with the expectation over the observed data distribution. The Shapley value of a feature j captures the change in model outcome between the prediction when the feature value $x_j$ is included and when it is removed from the input. This change is computed from the difference in the value function v when setting feature j equal to the instance feature value $x_j$, averaged over all possible coalitions S of features excluding feature j. If a feature is included in the coalition, its value is set to the observed instance value. To model feature removal, the value function takes the expectation of the black-box algorithm at observation x over the non-included features $\overline{S}$ using a reference distribution $r(X \mid x_S)$, such that $v_f(S, x) = \mathbb{E}_{r(X \mid x_S)}[f(x_S, X_{\overline{S}})]$ for $\overline{S} := \{1, \dots, m\} \setminus S$, with the operation $(x_S, x_{\overline{S}})$ denoting the concatenation of its two arguments. Binomial weights $|S|!(m-|S|-1)!/m!$ account for all possible orderings. The Shapley value of feature j is thus:
33
+
34
+ $$\phi_j^f(x) = \sum_{i=0}^{m-1} \frac{1}{m\binom{m-1}{i}} \sum_{\substack{S \not \ni j \\ |S|=i}} [v_f(S \cup \{j\}, x) - v_f(S, x)],$$
35
+
36
+ i.e. $\phi_j^f(x) = \mathbb{E}_{p(S|j\notin S)}[\phi_{j,S}^f(x)]$ where $\phi_{j,S}^f(x) := v_f(S \cup \{j\},x) - v_f(S,x)$ and $\forall j,\ p(S\mid j\notin S) = 1/m\binom{m-1}{|S|}$ . Shapley values have become a gold standard amongst explanation models due to their desirable properties (model agnostic, additive) and axioms (*Symmetry*, *Efficiency*, *Linearity* and *Dummy*—see Supplement D.1 for details). However, the method has not been adopted in critical settings due to the considerable limitations of both possible reference distributions (Janzing et al., 2020; Chen et al., 2020; Sundararajan & Najmi, 2020).
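The definition above can be computed exactly by enumerating coalitions on a small example; the sketch below uses an empirical conditional reference distribution, and the model and data are illustrative assumptions:

```python
import itertools, math
import numpy as np

# Exact on-manifold Shapley values by coalition enumeration.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3)).astype(float)   # three binary features
f = lambda Z: Z[..., 0] + 2 * Z[..., 1] * Z[..., 2]   # black-box to explain

def v(S, x):
    """On-manifold value function: E[f(X) | X_S = x_S], empirically."""
    cols = sorted(S)
    if not cols:
        return f(X).mean()
    mask = np.all(X[:, cols] == x[cols], axis=1)
    return f(X[mask]).mean()

def shapley(j, x, m=3):
    others = [k for k in range(m) if k != j]
    total = 0.0
    for i in range(m):
        for S in itertools.combinations(others, i):
            weight = 1.0 / (m * math.comb(m - 1, i))  # 1 / (m * C(m-1, |S|))
            total += weight * (v(set(S) | {j}, x) - v(set(S), x))
    return total

x = np.array([1.0, 1.0, 0.0])
phis = [shapley(j, x) for j in range(3)]
# Efficiency axiom: the attributions sum to f(x) - E[f(X)].
print(phis, sum(phis), f(x) - f(X).mean())
```

The efficiency check holds exactly for any value function of this form, since the weighted sum telescopes between the full and empty coalitions.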
37
+
38
+ **Limitations of Shapley values** On the one hand, *on-manifold* Shapley values (Aas et al., 2021) use a conditional reference distribution, conditioning on $x_S$ to better account for correlations between features: $r(X \mid x_S) := p(X \mid X_S = x_S)$. Sampling from a conditional distribution forces the model to be evaluated on plausible instances that lie on the data manifold. This improves the adversarial robustness, and thus the safety, of the method (Slack et al., 2020). However, on-manifold Shapley values have been shown to be unreliable, as they can generate misleading explanations (Janzing et al., 2020; Sundararajan & Najmi, 2020). On the other hand, *off-manifold* Shapley values use
39
+
40
+ a marginal reference distribution, that is $r(X \mid x_S) := p(X)$ (Lundberg & Lee, 2017). The resulting explanations better reveal the functional dependence, also known as being "true to the model" (Chen et al., 2020). However, sampling from the marginal distribution breaks the dependence between features. Consequently, off-manifold Shapley values are vulnerable to adversarial attacks and thus deemed unsafe (Slack et al., 2020). Note that adversarial robustness is key for fairness studies. If an unfair model undergoes an adversarial attack, it may "counterbalance" its potential prejudice on real-world data by forming predictions favourable to disadvantaged groups on implausible inputs. Since only the marginal distribution is used, the resulting Shapley value of a sensitive attribute might look fair even though the model would predict unfairly on real-world data (Slack et al., 2020). Ultimately, Shapley values cannot provide explanations that are both reliable and safe, which may hinder their adoption in safety-critical settings (see further details on Shapley values in Supplement A).
41
+
42
+ Shapley values also have limited interpretability. The attribution of a target feature j is the result of model evaluations averaged over all coalitions excluding j. The goal of this procedure is to acknowledge all the correlations amongst features. However, if some features are assumed to be independent, this averaging becomes redundant: averaging over coalitions with and without independent features may generate redundancies and unbalance the resulting attribution. Moreover, the interpretation of on-manifold and off-manifold Shapley values is agnostic to the assumed *causal structure*, if any. When a specific treatment is of interest, the causal interpretation of its Shapley values should be done in light of the relative roles of the other features: confounder, moderator or mediator; see Supplement B for definitions of these notions. Interpreting Shapley values causally would be a case of the "Table 2 fallacy" (Westreich & Greenland, 2013), where all coefficients of a model are misleadingly interpreted as adjusted causal effects. Thus we claim that under a posited DAG, the Shapley value of a feature should be computed according to the assumed statistical dependencies, i.e. the edges in the DAG, and interpreted in light of its causal links with other variables, i.e. the *directions* of the arrows in the DAG.
43
+
44
+ In PWSHAP, we use a conditional reference distribution to ensure the robustness to adversarial attacks and safety of our method. Meanwhile, we are able to generate feature attributions that are both reliable and interpretable, thanks to the tailored causal interpretations of the effects we compute.
45
+
46
+ The intuition behind the introduced method is two-fold. First, we decompose the Shapley value as a weighted sum of quantities that can be interpreted causally as treatment effects along coalitions. Second, by only considering relevant coalitions, we are able to deduce quantities that can be interpreted causally along paths. Since the treatment T is of special interest, we separate it from the other variables, which we call covariates C, such that X = (C, T).
49
+
50
+ Let C denote covariates, T a binary treatment and Y an outcome of interest. We assume that $Y = f^*(C,T) + \epsilon$ , with $\mathbb{E}[\epsilon|C,T] = 0$ . Our black-box f is an arbitrary function of X = (C,T) which aims at predicting $f^*$ . We aim at explaining the specific effect of the treatment variable T on the predictions made by the black-box f for an individual with values c of covariates C. To do so, we first decompose the Shapley value of T into a weighted sum of "Shapley effects" which are inspired by conditional average treatment effects, commonly used in the causal literature. We refer to a coalition S excluding treatment T as a subset of covariates and note the value function as $v_f(S \cup \{T\}, c_S, t)$ when it is taken over the coalition $S \cup \{T\}$ and $v_f(S, c_S)$ when taken over S. Notations are summarised in Section C, with a running example to illustrate them all in Supplement I.
51
+
52
+ PWSHAP relies on two assumptions: (i) the treatment of interest is a causal ancestor of the outcome (no anti-causal learning) and (ii) the DAG is compatible with the observed data i.e. all conditional independence constraints implied by graphical d-separation relations hold in the data. The user-supplied DAG thus only encodes the conditional dependences and is not assumed to be identical to the underlying model behavior. The "direction" of the arrows in the DAG is only used for causally *interpreting* the PWSHAP values.
53
+
54
+ First, we notice a connection between value functions $v_f$ of the black-box f and conditional average treatment effects using coalition-wise Shapley values.
55
+
56
+ **Definition 3.1** (Coalition-wise Shapley effect). We define the coalition-wise Shapley effect<sup>2</sup> of T on Y along the covariates $C_S$ indexed by the subset of covariates S as:
57
+
58
+ $$\Psi_{T \to Y|C_S}^f(c_S) = v_f(S \cup \{T\}, c_S, 1) - v_f(S \cup \{T\}, c_S, 0)$$
59
+
60
+ The coalition-wise Shapley effect can be understood as a generalisation of conditional average treatment effects. Indeed, for the true model $f^*$, the RHS is equal to $\mathbb{E}[Y|C_S=c_S,T=1]-\mathbb{E}[Y|C_S=c_S,T=0]$. Under the typical causal treatment effect identification assumptions, i.e. no interference, consistency, and conditional exchangeability given C (Imbens & Rubin, 2010), this is the conditional average treatment effect (CATE) (Rubin, 2005) (definition in Supplement B) when S is the complete coalition, i.e. containing all covariates. In addition, $\Psi^f_{T \to Y \mid \emptyset}$ is the "base" treatment effect, i.e. a population-wide estimate of the treatment effect. Its exact causal interpretation depends on the structure of the DAG, but in some cases it equates to the Average Treatment Effect (ATE) as defined by (Rubin, 2005) (definition in Supplement B). The coalition-specific Shapley effect can be linked to the original Shapley values as follows.
61
+
62
+ <sup>2</sup>Note that our Shapley effects are orthogonal to those introduced by (Iooss & Prieur, 2017) for numerical models.
65
+
66
+ **Property 3.1** (Decomposing Shapley values into Shapley effects). The *coalition-wise Shapley value* $\phi_{T,S}^f(c,t)$ is equal to a weighted estimate of a local treatment effect,
67
+
68
+ $$\phi_{T,S}^{f}(c,t) = w_{S}^{*}(c_{S},t) \cdot \Psi_{T \to Y|C_{S}}^{f}(c_{S}), \tag{1}$$
69
+
70
+ where $w_S^*(c,t)$ denotes what we call the "propensity weights", defined by $w_S^*(c,t) = t - p(T=1|C_S=c_S)$. This name reflects the fact that these weights are related to whether the sample is an outlier or not.
71
+
72
+ The proof can be found in Supplement J.1. Property 3.1 shows that each coalition-specific term in the original onmanifold Shapley value is equal to the product of two terms. The first is a weight that depends on the propensity score. The second is a measure of the treatment effect, namely the coalition-specific Shapley effect. As a result, the overall Shapley value $\phi_T^f(c,t)$ can be decomposed as
73
+
74
+ $$\phi_T^f(c,t) = \mathbb{E}_{p(S|T \notin S)}[w_S^*(c_S,t) \cdot \Psi_{T \to Y|C_S}^f(c_S)].$$
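Property 3.1 can be checked numerically for a single coalition on a toy discrete example; the data-generating process and model below are illustrative assumptions:

```python
import numpy as np

# Check: phi_{T,S}(c,t) = w*_S(c,t) * Psi_{T->Y|C_S}(c_S), with all
# quantities estimated empirically from the same sample.
rng = np.random.default_rng(0)
n = 20_000
C = rng.integers(0, 2, size=n).astype(float)      # single binary covariate
pT = 0.3 + 0.4 * C                                # propensity p(T=1 | C)
T = (rng.random(n) < pT).astype(float)
f = lambda c, t: 1.0 + 2.0 * t + c + 0.5 * c * t  # black-box model

c_val, t_val = 1.0, 1.0                           # instance to explain; S = {C}

sel = C == c_val
v_S = f(C[sel], T[sel]).mean()                    # v_f(S, c_S): T marginalized
v_S1 = f(c_val, 1.0)                              # v_f(S u {T}, c_S, 1)
v_S0 = f(c_val, 0.0)                              # v_f(S u {T}, c_S, 0)

phi_TS = (v_S1 if t_val == 1.0 else v_S0) - v_S   # coalition-wise Shapley value
psi = v_S1 - v_S0                                 # coalition-wise Shapley effect
w_star = t_val - T[sel].mean()                    # t - p_hat(T=1 | C_S = c_S)
print(phi_TS, w_star * psi)                       # the two coincide
```

The equality is exact here (up to floating point), because the empirical value function for the coalition without T is exactly the propensity-weighted mixture of the two fixed-T value functions.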
75
+
76
+ Although we have connected Shapley values to coalition-wise Shapley effects, the latter still only apply to *coalitions* and not specific *paths*. However, the coalition-wise Shapley effect $\Psi^f_{T \to Y \mid C_S}(c_S)$ can be understood as the causal flow from T to Y through a set of covariates S. Thereby, we define the causal flow along the (undirected) path from T to Y through $C_i$ as the difference between the causal flow through all covariates and the causal flow through all covariates but $C_i$. See Supplement G for a generalisation to longer paths.
77
+
78
+ **Definition 3.2** (Path-wise Shapley effect). Let $S^*$ be the coalition with all covariates. We refer to the following quantity as the path-wise Shapley effect of T on Y along the path from T to Y through $C_i$ :
79
+
80
+ $$\Psi^f_{C_i}(c) = \Psi^f_{T \to Y|C_{S^*}}(c) - \Psi^f_{T \to Y|C_{S^* \backslash \{i\}}}(c_{S^* \backslash \{i\}}).$$
81
+
82
+ For instance, in the fairness example from Section 1, the path-wise Shapley effect of sex on admission (Adm) mediated by the chosen department (Dpt), $\Psi^f_{Sex \to Dpt \to Adm}$, is $\Psi^f_{Sex \to Adm|Dpt,Exam} - \Psi^f_{Sex \to Adm|Exam}$.
83
+
84
+ Path-wise Shapley effects thus quantify the change in model outcome when specifying the feature values along a specific path, compared to when all features are specified but the ones on the path of interest. As such, PWSHAP measures the effect of the treatment on the outcome through a causal pathway. Ultimately, conditioning on all other features reinforces the locality of our result. It can also be seen as a contribution to the shift from a global estimated "base" treatment effect to an individual estimated treatment effect. Note, however, that PWSHAP violates the efficiency property, i.e. the path-wise effects do not sum up to an interpretable quantity like the original Shapley feature attributions do. Moreover, Property 3.2 shows that integrating PWSHAP effects can help isolate covariates that are conditionally independent of the treatment given the other covariates (see Supplement J.2 for the proof). As shown in Section 6, however, the actual causal meaning of this conditional independence depends on the posited DAG of the data.
85
+
86
+ **Property 3.2 (Integration of the PWSHAP effects).** Let $C_i$ be a covariate such that $C_i \perp T | C_{-i}$ where $C_{-i} := C_{S^* \setminus \{i\}}$ . Then for any function f and any value $c_{-i}$ of $C_{-i}$ ,
87
+
88
+ $$\mathbb{E}[\Psi_{C_i}^f(C_i, c_{-i}) | C_{-i} = c_{-i}] = 0$$
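Property 3.2 can be illustrated numerically: when $C_1 \perp T \mid C_2$, the PWSHAP effect through $C_1$ averages to zero over $p(C_1 \mid C_2)$. The model and distributions below are illustrative assumptions:

```python
import numpy as np

# Toy check of Property 3.2 with two binary covariates where C1 and T
# both depend only on C2, so C1 is conditionally independent of T given C2.
rng = np.random.default_rng(0)
n = 400_000
C2 = rng.integers(0, 2, size=n).astype(float)
C1 = (rng.random(n) < 0.3 + 0.4 * C2).astype(float)  # C1 depends on C2 only
T = (rng.random(n) < 0.2 + 0.5 * C2).astype(float)   # T depends on C2 only
f = lambda c1, c2, t: t * (1.0 + c1) + c2            # black-box to explain

def psi_C1(c1, c2):
    """Path-wise Shapley effect through C1 at instance (c1, c2)."""
    full = f(c1, c2, 1.0) - f(c1, c2, 0.0)           # full-coalition effect
    s1 = (C2 == c2) & (T == 1.0)
    s0 = (C2 == c2) & (T == 0.0)
    minus = f(C1[s1], c2, 1.0).mean() - f(C1[s0], c2, 0.0).mean()
    return full - minus

c2 = 1.0
sel = C2 == c2
p1 = C1[sel].mean()                                  # p_hat(C1 = 1 | C2 = c2)
integral = (1 - p1) * psi_C1(0.0, c2) + p1 * psi_C1(1.0, c2)
print(integral)                                      # ~ 0, up to sampling error
```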
89
+
90
+ Using Property 3.1, we can express the coalition-wise Shapley effects $\Psi^f_{T \to Y|C_S}$ from the coalition-wise Shapley values $\phi^f_{T,S}(c,t)$ as $\Psi^f_{T \to Y|C_S}(c_S) = \phi^f_{T,S}(c,t)/w^*_S(c,t)$. Therefore, the path-wise Shapley effects $\Psi^f_{C_i}$ are computed as:
91
+
92
+ $$\Psi^f_{C_i}(c) = \frac{\phi^f_{T,S^*}(c,t)}{w^*_{S^*}(c,t)} - \frac{\phi^f_{T,S^*\backslash\{i\}}(c,t)}{w^*_{S^*\backslash\{i\}}(c,t)}.$$
93
+
94
+ In practice, path-wise Shapley effects are computed by replacing the true propensity weights with weights that use an estimate of the propensity score. For this, we further need to assume positivity holds. The path-wise Shapley effect of T on Y through $C_i$ is thus estimated in three steps: (i) computing the coalition-wise Shapley values for $S^*$ the entire set of covariates and $S^* \setminus \{i\}$ ; (ii) dividing each of these terms by an estimate of their corresponding propensity weight; (iii) taking the difference between the two resulting quantities (also known as coalition-specific Shapley effects) to isolate the effect along the path through $C_i$ . Note that division by weights requires overlap, that is $\forall c, 0 < p(T=1|C=c) < 1$ .
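The three steps above can be sketched on a toy example with two binary covariates; the black-box $f$ and data-generating process are illustrative assumptions:

```python
import numpy as np

# PWSHAP estimate of the effect of T through C1, in three steps.
rng = np.random.default_rng(0)
n = 200_000
C1 = rng.integers(0, 2, size=n).astype(float)
C2 = rng.integers(0, 2, size=n).astype(float)
pT = 0.2 + 0.3 * C1 + 0.2 * C2                    # propensity p(T=1 | C1, C2)
T = (rng.random(n) < pT).astype(float)
f = lambda c1, c2, t: t + c1 * t + 0.5 * c2       # black-box to explain

c1, c2, t = 1.0, 0.0, 1.0                         # instance to explain

# Step (i): coalition-wise Shapley values for S* = {C1, C2} and S* \ {C1}.
sel_full = (C1 == c1) & (C2 == c2)
phi_full = f(c1, c2, t) - f(c1, c2, T[sel_full]).mean()
sel_c2t = (C2 == c2) & (T == t)
sel_c2 = C2 == c2
phi_minus = f(C1[sel_c2t], c2, t).mean() - f(C1[sel_c2], c2, T[sel_c2]).mean()

# Step (ii): divide by estimated propensity weights w*_S = t - p_hat(T=1|c_S).
w_full = t - T[sel_full].mean()
w_minus = t - T[sel_c2].mean()

# Step (iii): difference of coalition-wise Shapley effects isolates the path.
psi_C1 = phi_full / w_full - phi_minus / w_minus
print(psi_C1)  # close to the closed-form value 2 - 12/7 = 2/7 for this toy model
```

Overlap holds here by construction (the propensity stays in (0.2, 0.7)), so both weights are bounded away from zero.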
95
+
96
+ # Method
97
+
98
+ We compare our method to two baseline explanation methods. Our first baseline is Causal Shapley (CS) (Heskes et al., 2020), another method aiming to explain a model under an assumed causal DAG. Like PWSHAP, Causal Shapley splits Shapley attributions, although the split is binary (direct/indirect effect). In Causal Shapley, for the indirect effect of a feature *j*, the distribution of the 'out-of-coalition' features changes due to the do-operator (see Suppl. A and D.2 for further details). Our second baseline is on-manifold Shapley, a natural choice given that PWSHAP augments the original method. Section D.3 details other graph-based Shapley methods (Wang et al., 2021; Singal et al., 2021), which are not appropriate baselines here due to structural differences.
99
+
100
+ **Higher model fidelity, lower reliance on causal assumptions than Causal Shapley** We claim that PWSHAP has higher model fidelity and relies less on the assumed causal structure than Causal Shapley. As the direct/indirect effect split is based on do-calculus in Causal Shapley, the computation of the attributions depends on the assumed DAG (both the edges and their directions). In contrast, PWSHAP computations only depend on the hypothesised feature dependencies, i.e. the edges in the DAG. Only the causal interpretation of PWSHAP depends on the direction of the edges. We view the fact that our approach is agnostic to the choice of a (compatible) DAG as a strength, as it allows different experts to explain the black-box model output according to their own causal beliefs about the data or phenomenon being studied (see D.5 for a detailed discussion on this). Ultimately, by applying do-calculus, Causal Shapley computes feature attributions according to preconceptions of how the model should reason, and as such is "forcing" explanations to fit to a presumed causal structure. To further illustrate the limitations of relying on the causal assumptions and show that PWSHAP has higher fidelity to the model, let us consider a black-box with a single covariate C and a treatment T. If we wrongly assume C to be a confounder instead of a mediator, the indirect effect of treatment, i.e. the mediation of treatment through C, would be null according
101
+
102
+ to Causal Shapley (see Property D.1 in Supplement D.2). By contrast, only the causal interpretation of the PWSHAP effect through $\mathcal{C}$ would be incorrect, but its value would remain unaltered.
103
+
104
+ **Increased resolution, better interpretability** Compared to both Causal and on-manifold Shapley, PWSHAP has higher resolution, as it is path-specific instead of feature-specific, and improved interpretability. In Causal Shapley and on-manifold Shapley values, feature attributions result from taking an average over coalitions, whereas PWSHAP only considers the coalitions used to compute effects. Ultimately, evidence has shown that on-manifold Shapley values and Causal Shapley values can generate misleading interpretations (Sundararajan & Najmi, 2020). In on-manifold Shapley values, the attribution of a feature that does not appear in the algebraic formulation of the model can be non-zero, depending on how the data is distributed. This is induced by both the conditional reference distribution and the average taken over multiple coalitions. By providing an exact interpretation for the computed quantities, PWSHAP overcomes this unreliability issue. If a PWSHAP effect $\Psi_{C_i}^f$ is null, it means that specifying the covariate $C_i = c_i$ has had no impact on the treatment effect compared to marginalising it, according to our black-box (see Lemma 6.2 and Property 6.1). Meanwhile, PWSHAP remains robust to adversarial attacks, as it samples from a conditional reference distribution (Slack et al., 2020). PWSHAP thus reconciles safety and reliability. However, a limitation of PWSHAP compared to both baselines is that it violates the efficiency property (see Suppl. D).
+
+ We now show how to obtain error bounds for quantities like path-wise Shapley effects from error bounds on other quantities like the outcome model, according to Figure 1. To the best of our knowledge, these are the first results regarding error bounds for on-manifold Shapley values. In the following, $(\hat{f}_N)$ denotes a sequence of estimators of $f^*$. The proofs can be found in Supplements J.3 and J.4.
+
- 1. Convergence of the coalition-specific Shapley terms: $\forall c,t,N, \qquad |\phi_{T,S}^{\hat{f}_N}(c,t) - \phi_{T,S}^{f^*}(c,t)| \leq 2e_N^{\mathrm{outcome}},$ which implies the convergence of the Shapley value of the estimated model to that of the true model.
- 2. Convergence of the path-wise Shapley effects: $\forall i,c,N, \qquad |\Psi_{C_i}^{\hat{f}_N}(c) - \Psi_{C_i}^{f^*}(c)| \leq 4e_N^{\mathrm{outcome}}.$
+
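Item 1 above rests only on a sup-norm guarantee for the outcome model. A minimal numerical sketch (hypothetical models and a Monte Carlo coalition term, not the paper's implementation) illustrates why the constant is 2: a coalition-specific Shapley term is a difference of two conditional expectations of $f$, and each expectation can move by at most $e_N^{\text{outcome}}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcome models: f_star plays the true black-box f*, and f_hat
# is an estimate whose sup-norm error is at most e_outcome (since |tanh| <= 1).
e_outcome = 0.05

def f_star(t, c):
    return t * (1.0 + c) + 0.5 * c

def f_hat(t, c):
    return f_star(t, c) + e_outcome * np.tanh(3.0 * (t - c))

# Mimic a coalition-specific term phi_{T,S}(c, t) with S = {}: the contrast
# between E[f(t, C)] and E[f(0, C)], estimated by Monte Carlo.
C = rng.uniform(0.0, 1.0, 100_000)

def phi(f, t):
    return np.mean(f(t, C)) - np.mean(f(0.0, C))

# Each of the two expectations moves by at most e_outcome, hence the factor 2.
gap = abs(phi(f_hat, 1.0) - phi(f_star, 1.0))
```

Here `gap` stays below $2e_N^{\text{outcome}} = 0.1$ by construction, whatever reference distribution is used for $C$.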
Property 5.2 (Convergence of estimated coalition-specific Shapley values and propensity score implies convergence of estimated PWSHAP effects). Assuming that
+
+ ![](_page_5_Figure_1.jpeg)
+
+ Figure 1: DAGs for Building Blocks (Up) and Error Bound (Down, Cvg=Convergence, SV=Shapley value)
+
(1) the arbitrary propensity score model $\pi^N$ and the true propensity score model $\pi^*$ satisfy $\epsilon$-strong overlap,
+
+ (2)
120
+ $$\forall c, N | \pi^N(c) - \pi^*(c) | \leq e_N^{\text{propensity}}$$
121
+
122
+ (3)
123
+ $$\forall S \text{ s.t. } T \notin S, c, N, |\hat{\phi}_{T,S}^{N,\hat{f}_N}(c,t) - \phi_{T,S}^{f^*}(c,t)| \le e_N^{\text{Shap}},$$
124
+
with $w_S^N(c,t) = t - \mathbb{E}_{p(C_{\bar{S}}|C_S = c_S)}[\pi^N(c_S, C_{\bar{S}})]$, we show the convergence of the estimated PWSHAP effects to the true PWSHAP effects: $\forall i, c, t, N$,
+
$$|\hat{\Psi}_{C_i}^{N,\hat{f}_N}(c) - \Psi_{C_i}^{f^*}(c)| \leq \frac{4e_N^{\mathsf{Shap}}}{\epsilon} + \frac{4\|f^*\|_\infty \cdot e_N^{\mathsf{propensity}}}{\epsilon^2}.$$
+
PWSHAP effects are interpreted by revisiting the causal inference concepts of confounding, moderation and mediation at a local scale. As our method rests on theoretical grounds, we first provide objective evidence using explicit equations.
+
+ Under DAG (2) of Figure 1, PWSHAP effects are causally interpreted as follows:
+
+ $$\begin{split} \phi_T^{f^*} &= w_{12}^*/3 \cdot \text{CATE}(c_1, c_2) + w_1^*/6 \cdot \text{ "CATE"}_{C_1}(c_1) \\ &+ w_2^*/6 \cdot \text{ "CATE"}_{C_2}(c_2) + w^*/3 \cdot \text{Diff. in means} \end{split}$$
+
where "CATE"$_{C_S}(c_S) = \mathbb{E}[Y|T=1,C_S=c_S] - \mathbb{E}[Y|T=0,C_S=c_S]$. We refer to these terms as "CATE"s, in an abuse of notation, but note that they are not true causal conditional average treatment effects, as they include confounded paths. The term "Diff. in means" stands for $\mathbb{E}[Y|T=1] - \mathbb{E}[Y|T=0]$. To isolate the spurious effect of the confounders, we further assume that the two confounders are not effect moderators.
+
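The warning that these "CATE"s pick up confounded paths can be seen on a two-line simulation (hypothetical data-generating process, not taken from the paper): the true treatment effect is 1 everywhere, yet the contrast that marginalises the confounder out is biased:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical DGP: C2 confounds treatment and outcome; true CATE = 1 everywhere.
C2 = rng.uniform(0.0, 1.0, n)
T = rng.uniform(0.0, 1.0, n) < 0.2 + 0.6 * C2   # P(T = 1 | C2) = 0.2 + 0.6*C2
Y = T.astype(float) + C2                        # Y = T + C2

# "Diff. in means", i.e. the "CATE" with the confounder marginalised out:
naive = Y[T].mean() - Y[~T].mean()
bias = naive - 1.0   # = E[C2|T=1] - E[C2|T=0] = 0.6 - 0.4 = 0.2 here
```

The gap `bias` is exactly the contribution of the confounded path $T \leftarrow C_2 \rightarrow Y$ to the naive contrast.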
**Definition 6.1** (Local confounding effect). In this example, we call the PWSHAP effect of $C_2$ the "local confounding effect of $C_2$" and denote it $\Psi^f_{T \leftarrow C_2 \rightarrow Y}$. In other words, $\Psi^f_{T \leftarrow C_2 \rightarrow Y} := \Psi^f_{C_2}$.
+
Notably, for the true model $f^*$, $\Psi^{f^*}_{T \leftarrow C_2 \rightarrow Y}(c_1, c_2) = \text{CATE}(c_1, c_2) - \text{"CATE"}_{C_1}(c_1)$. This quantity has been referred to as the bias due to unmeasured confounding (assuming we observe $C_1$ but not $C_2$) in a segment of the sensitivity analysis literature (Veitch and Zaveri, 2020). Therefore, our measure of confounding effect is a local equivalent of this bias. Indeed, integrating out this difference over $C_1, C_2$ yields ATE $- \ \mathbb{E}[\mathbb{E}[Y|T=1,C_1] - \mathbb{E}[Y|T=0,C_1]]$, where ATE is the Average Treatment Effect. However, if a covariate is both a confounder and an effect modifier, its path-wise attribution will cover both phenomena and the two effects will be indiscernible. Ultimately, PWSHAP contrasts with sensitivity analysis methods, which are meant for quantifying unobserved confounding, whereas our method measures the impact of an observed confounder. Further details on such techniques can be found in Supplement D.6.
+
+ **Lemma 6.2** (Integration of the local confounding effect, true model). Let $C_1, C_2$ be two pre-treatment covariates such that ignorability given $C_1, C_2$ holds, i.e. $\forall t, Y(t) \perp T | C_1, C_2$ . If, additionally, $C_2$ is not a confounder, i.e. $C_1$ alone guarantees ignorability or $\forall t, Y(t) \perp T | C_1$ , then the integral of the local confounding effect of $f^*$ w.r.t. $C_2$ on the joint distribution of covariates is null:
+
+ $$\mathbb{E}[\Psi_{T\leftarrow C_2\to Y}^{f^*}(C_1,C_2)]=0.$$
+
The proof can be found in Supplement J.5. For a variable that is not actually a confounder, the integration of the local confounding effect thus yields zero. This can be generalised to any number of confounding pre-treatment covariates by grouping all of them in $C_1$. For any black-box $f$, a stricter condition yields the same result as a corollary of Proposition 3.2. We give that result and an example of local bias analysis in Supplement E.2. Further, if the local confounding effect is zero for all individuals in the training set, then we can hypothesise that the model did not learn to predict through the confounding path $T \leftarrow C_2 \rightarrow Y$.
+
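Lemma 6.2 can be illustrated with a short Monte Carlo sketch (assumed true model and distributions, for illustration only): with $C_1, C_2$ independent and $T$ depending on $C_1$ alone, the local confounding effect of $C_2$ integrates to zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical true model; C1, C2 independent, and T depends on C1 only,
# so C2 is NOT a confounder (ignorability holds given C1 alone).
def f_star(t, c1, c2):
    return t * (1.0 + c1 + c2) + c1 * c2

C1 = rng.uniform(0.0, 1.0, n)
C2 = rng.uniform(0.0, 1.0, n)

# CATE(c1, c2) under f_star:
cate = f_star(1.0, C1, C2) - f_star(0.0, C1, C2)   # = 1 + c1 + c2
# "CATE"_{C1}(c1): C2 marginalised out; since C2 ~ U(0,1) independently of
# (T, C1), this equals 1 + c1 + E[C2] = 1 + c1 + 1/2 in closed form.
cate_c1 = 1.0 + C1 + 0.5

# Local confounding effect of C2, and its integral over (C1, C2):
psi_c2 = cate - cate_c1        # = c2 - 1/2, non-zero locally
mean_effect = psi_c2.mean()    # ≈ 0, as Lemma 6.2 predicts
```

Locally the effect is non-zero ($c_2 - 1/2$), but its population average vanishes because $C_2$ does not open a confounding path.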
+ Here, "moderation" refers to an effect modification as in (Boruvka et al., 2018). In the setting represented in Figure 1, causal graph (1), where treatment is assumed to be unconfounded, we interpret the PWSHAP decomposition as follows:
+
+ $$\begin{split} \phi_T^{f^*} &= 1/3 \cdot w_{12}^* \cdot \text{CATE}(c_1, c_2) + 1/6 \cdot w_1^* \cdot \text{CATE}_{C_1}(c_1) \\ &+ 1/6 \cdot w_2^* \cdot \text{CATE}_{C_2}(c_2) + 1/3 \cdot w^* \cdot \text{ATE} \end{split}$$
+
+ **Definition 6.3** (Local moderating effect). In this example, we call the PWSHAP effect of $C_2$ the "local moderating effect of $C_2$ " and denote it $\Psi^f_{C_2:T\to Y}$ . In other words, $\Psi^f_{C_2:T\to Y}:=\Psi^f_{C_2}$ .
+
PWSHAP assesses the local effect modification induced by $C_2$ by "unspecifying" this feature. A null local moderating effect would mean that $C_2$ did not act as a moderator for this specific subject, according to our fitted black-box. Unlike previous methods (Imai and Ratkovic, 2013; Athey and Imbens, 2016; Wang and Rudin, 2017), our PWSHAP approach to moderation analysis does not involve subgroup finding, a technique known to be underpowered (Holmes and Watson, 2018), and is nonparametric (see Supplement D.6 for a review of moderation analysis). Ultimately, we show that in the presence of pre-treatment moderators, Causal Shapley compounds the main effect of treatment and its effect via moderation into a single "direct" effect, whereas PWSHAP explanations are able to distinguish the added treatment effect due to moderation from the main effect. We compare PWSHAP with Causal Shapley on an example as shown in DAG (1) of Figure 1, assuming $Y = \beta T + \gamma_1 C_1 + \gamma_2 C_2 + \alpha_1 T C_1 + \alpha_2 T C_2 + \epsilon$ with $\mathbb{E}[\epsilon|T,C_1,C_2]=0$ and where $C_1, C_2$ are two independent moderators with $C_1, C_2 \sim \text{Uniform}(0, 1)$. Treatment is randomised: $T \sim \text{Bernoulli}(p)$. Details about the following are given in Supplement E.1. PWSHAP yields:
+
+ $$\begin{split} &\Psi^{f^*}_{T \to Y|C_1,C_2} = \beta + \alpha_1 c_1 + \alpha_2 c_2 \\ &\Psi^{f^*}_{C_1} := \Psi^{f^*}_{C_1:T \to Y} = \alpha_1 (c_1 - 1/2) \\ &\Psi^{f^*}_{T \to Y|\emptyset} = \beta + \alpha_1/2 + \alpha_2/2 \\ &\Psi^{f^*}_{C_2} := \Psi^{f^*}_{C_2:T \to Y} = \alpha_2 (c_2 - 1/2) \end{split}$$
+
where $\mathbb{E}[C_1] = \mathbb{E}[C_2] = 1/2$. PWSHAP effects through $C_1$ and $C_2$ are null if $C_1 = C_2 = 1/2$. The PWSHAP approach thus matches the default behaviour of local explanation methods: paths through effect moderators are given zero attribution if the moderator value is equal to the population average. This highlights the true locality of our method. Furthermore, one can check that the moderating effects integrate to 0. Again, this is coherent with the overall definition of randomised treatment in causal inference. By contrast, moderation by $C_1$ and $C_2$ is overlooked in Causal Shapley, as $\phi_{T, \text{indirect}}^{f^*, \text{CS}} = (t-p)\{\beta + \frac{\alpha_1}{2} \cdot (c_1 + \frac{1}{2}) + \frac{\alpha_2}{2} \cdot (c_2 + \frac{1}{2})\}$, which does not reflect the local behaviour of the model.
+
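The closed forms above are straightforward to verify numerically; this sketch (hypothetical coefficients, Monte Carlo marginalisation over $\text{Uniform}(0,1)$ references) recovers the local moderating effect $\Psi^{f^*}_{C_1:T\to Y} = \alpha_1(c_1 - 1/2)$ as the change in the treatment effect caused by specifying $C_1$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Hypothetical coefficients for Y = beta*T + g1*C1 + g2*C2 + a1*T*C1 + a2*T*C2.
beta, g1, g2, a1, a2 = 1.0, 0.3, -0.2, 0.8, 0.5

def f_star(t, c1, c2):
    return beta * t + g1 * c1 + g2 * c2 + a1 * t * c1 + a2 * t * c2

c1, c2 = 0.9, 0.1              # the local instance being explained
C1 = rng.uniform(0.0, 1.0, n)  # reference draws for marginalisation
C2 = rng.uniform(0.0, 1.0, n)

# Treatment effects with moderators specified vs marginalised:
psi_c1_c2 = f_star(1.0, c1, c2) - f_star(0.0, c1, c2)           # beta + a1*c1 + a2*c2
psi_c1_only = np.mean(f_star(1.0, c1, C2) - f_star(0.0, c1, C2))  # beta + a1*c1 + a2/2
psi_empty = np.mean(f_star(1.0, C1, C2) - f_star(0.0, C1, C2))    # beta + a1/2 + a2/2

# Local moderating effect of C1: the difference made by specifying C1 = c1.
psi_mod_c1 = psi_c1_only - psi_empty   # ≈ a1 * (c1 - 1/2)
```

As expected, the effect vanishes when $c_1$ equals the population average $1/2$.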
+ Under DAG (3) of Figure 1, i.e. with unconfounded treatment and two mediators only depending on it, the causal interpretation of the PWSHAP approach to mediation is:
+
+ $$\phi_T^{f^*} = \frac{1}{3} \cdot w_{12}^* \text{CDE}_{C_1, C_2}(c_1, c_2) + \frac{1}{6} \cdot w_2^* \text{CDE}_{C_2}(c_2) + \frac{1}{6} \cdot w_1^* \text{CDE}_{C_1}(c_1) + \frac{1}{3} \cdot w^* \text{ATE}$$
+
where CDE refers to the *Controlled Direct Effect* (definition in Supplement B), with $\mathrm{CDE}_{C_S}(c_S) = \mathbb{E}[Y|T=1, C_S=c_S] - \mathbb{E}[Y|T=0, C_S=c_S]$. We claim that the difference in CDEs is able to isolate the local effect of a given mediator and has a causal interpretation, as outlined by the local mediating effect introduced below.
+
**Definition 6.4** (Local mediating effect). Here, we call the PWSHAP effect of $C_2$ the "local mediating effect of $C_2$" and denote it $\Psi^f_{T \rightarrow C_2 \rightarrow Y}$. In other words, $\Psi^f_{T \rightarrow C_2 \rightarrow Y} := \Psi^f_{C_2}$.
+
+ **Property 6.1 (Ancestors of outcome).** Let $M_1, M_2$ be two post-treatment and pre-outcome variables. Assuming that variables C include all confounders of the relationships between T, Y and $(M_1, M_2)$ and that $M_2 \perp T, M_1 | C$ , then for any value c of C and $m_1$ of $M_1$ ,
+
+ $$\mathbb{E}[\Psi_{T \to M_2 \to Y}(c, m_1, M_2) \mid C = c] = 0.$$
+
In other words, if $M_2$ is not mediating the effect of T on Y because $M_2 \perp \!\!\! \perp T|C$, and $M_2$ is independent of $M_1$ conditionally on $T, C$, then integrating the local mediating effect of $M_2$ yields 0, which is coherent with our intuition. The proof can be found in Suppl. J.6. See Suppl. D.6 for further comparisons with the traditional Natural Effects approach (definition in Supplement B) and with Causal Shapley.
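As a closing sketch of the mechanics of Property 6.1 (hypothetical black-box; the local mediating effect is written, by analogy with the CDE decomposition above, as the CDE specifying $M_2$ minus the CDE marginalising it), the zero-integration is immediate when $M_2$ is drawn from its law given $C = c$, free of $T$ and $M_1$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Hypothetical black-box over (T, C, M1, M2); any functional form works here.
def f_star(t, c, m1, m2):
    return t + c * m2 + np.sin(m1 + m2) + 2.0 * t * m2

c, m1 = 0.7, 0.3
M2 = rng.uniform(0.0, 1.0, n)   # law of M2 given C = c (free of T and M1)

# CDE specifying M2 = m2, for each draw of m2:
cde_full = f_star(1.0, c, m1, M2) - f_star(0.0, c, m1, M2)
# CDE with M2 marginalised out under p(M2 | C = c):
cde_marg = cde_full.mean()

# Local mediating effect of M2, averaged over M2 | C = c:
psi_med = cde_full - cde_marg
mean_effect = psi_med.mean()    # 0 up to floating point
```

Each individual effect $\Psi_{T \to M_2 \to Y}(c, m_1, m_2)$ can be far from zero, yet their conditional average vanishes because marginalising and specifying $M_2$ use the same distribution.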
2309.16585/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2023-09-25T07:46:15.109Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/20.3.0 Chrome/104.0.5112.114 Electron/20.1.3 Safari/537.36" etag="KXhh1EI4sUJPNNs4KB5G" version="20.3.0" type="device"><diagram id="teQ25MRjxOG5yTV5MwtR" name="第 1 页">7LzX0qZIki36NH05bagPcYnWWnOzDa0+tObpN2RVzVRVV033OdZtffacnZb5/xABAXi4+1rLA/IvMN2d/ByPlTpk+fcvEJCdf4GZv0AQCID48+ttuX5qwRDwp4ZyrrOfD/qvBru+81/O/Ll1q7N8+c2B6zB813r8bWM69H2err9pi+d5OH57WDF8f3vVMS5/viLwXw12Gn/zvznMr7O1+qkVh7D/ahfyuqx+uTKIEj/1dPEvB/88xFLF2XD86low+xeYnodh/WmrO+n8+xrvF7v8dEPcn/T+543Neb/+IydMjDCUsKQ6pXEK/+vw7gQa/uPnUfb4u/38wD/f7Hr9YoFnlMfYzw51VPWa22Ocvj3HM99PW7V232cPfDbnYY3Xeuif3f+AgadhHOp+/WH/D/X8Bf4KPFajgb8i6PvrP/99nqvQv+7H8T/ph5/bpcFfdX5+3Qn/OBn4s8GRH6P+9YP8ST/608WJP+0nftMB0f/x66sTP10dhP/k7J/7//bqz1+Y+tu5/GVi8nnNz181/Ty3fD50+TpfzyE/935+9t+f4wz9eff4L6cFf3Hx6lcOi//cFv8cJ+V/DvxfrvRs/OxN/w88C/rXeBb8+UPH+pu5AX9le/Anr4P+zKug35+M/7oX+ckt/uTkD/R3vAr9yS/+rBv/nUdjv/aZn4b+s2ggfh8N4D/Tn/4zIV6/855feRTxBw71+Vc5FPI3/jMPW5/l7zlvthnmtRrKoY+/yjCMP3tOk6/r9TOqxNs6/Nav8rNeg/f0Nzn8tBv+2H2m86dd5vx59B871692jHyunwfL51/a+uchfxrs8Ymf93892rv/X8P92Lt+vff7AZd1HtqcHr7D25LlRbx91x8HZ+QLbE9jP/T5Ty1c/Rrzx3l/OvfLsM1p/vfBYI3nMl//fmjnWZn/t540598nbPffIuk/3S8+f5toUID/X+BffkmBv/KYx/fX37rAb638s0GLx5q/a4q/dfnmn/Sx6jtF1BtJ9cMTyJ87ujrLvn+Wy37rqf+E6MR+F5zA3wYn+gfBCf+rghP940mA/kdPwu8xF0T+zbOA/V829z+EzcHwb13r307n8P9L5/6PpnMf4LcO9W+nc78I/P9/4Tb8e8j4dwM3+AcVgP/5yP35XXr9tyM3+AcE6vfWX6p4fDfTbf5e1Byn7cvS/36y/cV2v0u9IPD55xgThNC/ftDf2BOH/ooif4tYf2BRBPkrSPyrjPoPgNb/V40KIdBfkd/6KIj/FYT+IaPCxF9/edR/vlGJP0oY8//wvA0C8O99HPv8e1MG9IcAOv8Pz9x/U5YC/82ZG/oDAP0XJ5n/wP5ZmRt/5BnxG3siT+r4W8f+oyQDov+Zof75RkX+vlHLxz7jP26E/1wiipNfRgD+W+MgEPZX4PnzgaEPChAI/tvoh3/b+beJGSf+ioEETsAIjuEggHz+1oS/P+RflbKhv2/N/zfiDfrjssD/FW9/R7z99y7/D0cv8Vf8QxAIiiEwAXweP/qNg/5J768cFPuDoMb+VWkS/IOIfnUG/H8SWv2TJg5Efjs3vyyJ/zxz/8DU/asQLr1smRj8fQCxCag75sQ19B9ZI/4F4Orux3I6FS/jT4vxRX2+9qR+7mCyeI3/ApM/7ULc2JdPrNQepVsHIPPlQD5/NNutWLd8tlzz+cHYN
Bm+7dcwtwtJUl3VflnTs5BehzI0iZutRNkoL9K3KsdV1IoXS27ZviObvQOCA+ba0c4gbiJqnPLtvNhqoWNcj79AlDcIU0aER9/ZZ4/1xujt4P3M94Y2iywgiTxIYm9h/DOuqoGw/5ziJB3wzgj16bOYmVMBRYv5Us48lYBlbHl8aCojTgQBDZwkcZ4zledfvzu5KYbKns4rfVCxOoczYD8d080s+O3SIRmxVC0whrjMB0j3lOo2tJJcgYYmNumHqx1f2CU9p1BAAKnZyo7aezuGI2okblerX+AMP34ddjEsgB2iKRHweJi009qfw44AUmqS56aZiZRd42YiO6zUdtOnL3tug7x41qzm4EYoE81sJBqtTgVIYVRpEaMknqAYmpqRUdUPAUtnkl9Uq/Fh2M5vGtX5HXMt41Dbo4RLWQCLPivuCKfxIWgSDCNnk7s+F1QsZTO2NIXS9UL5fHB5YFkk89wYAh+rj6NSltz3vMc5YRbKgLrapXgJn43KH5ynyOHgd4pzIwDyRNHKwrgxTdgXRfKqNlppkqQgYmETzM7kct3S7fnORLJOxhWXpCBlD+jCXARgLYSbe1hOuAu53wlwtCR4hv+WvLiInfc04Qs0Wvl3W8VSCAkiMEXQS0dCIrRetmkgZtg22zsZfq6wfoil35Zzciof/QaQL1q1zYrIh3Q9qPV3kCRPL0TDULz9CQmbB2PDjG27JDTZcC3F0lOCcEa4STXlRl7M2FTYVuQVb+IHKH4immuzxUukDIMyouVEPnmikVuVc5Bn2qmcUAN9Ph8/5JV8R2pRnVa60wfmKHmvc9aAToZbsXeOUa/P7aNo4S31StZjs7Gh9aAYSfnKChhsLCtHM6xZ6uNSD7kZE360zq8D7+bM+VeQE7OxKhQ1LxWWWV+sSwqhSnrMKyN5Wl3Sm7ZNARTjQ4J0jgakResrLZJYsJkgGbVat2rg5sFeeGr31x1REgWOtK747XlEyv64wRbGTwTWtTktpq9QKd0jlvTpnt4V/8HUOPBxpMiNAg9N9a4LcsOThc14Dmj50uBinWx49vP8IMMsFY+aYTfW8hmnontWCTSHgDt1oRECS9g9PL+tidTuME/GuJSYDkkT9XG+SSlmhJ2/zoa9PuHF0vpe+Y2SYyiv5WTnoRKwMFKyT+8KQqzKhvn2Cm9gUvoaxA1njlUCnapYaYfDDe1gkZxqNj2jzYLGzohmKhzscTCdSR8vfIYX2kOuAZK/DUsIKkrNEBrH4iK2c2qTmxs1hUpNmMulvvJQIsaX50fVrCLWEUXzk8amtEJLZ0oV/bkCBOHDKN84ytenctYSLwzNRTnwGTW8Ev72z+Ws27eJm+XCtCdBKvt+wQ8z9iYs15Gf//Dpk2gjUcHsomxmWHOMvHMHVXoecDhwAzLBEUIPMbRGStXBoSbAb2l/BeEuCaADYWywZ1Zf79Yto6FnZTYSWySsnywHBL7S2HxhDsAtx5v6DBg2841X5JvIvFzbjQXF46zUHGrKnBI7VXRlyFJd1Oll+tzjfTaJcX5bjOKXV8znLOyixe8CPZ6wk8g1Ifz4mWsCqy6VBGpG1M4nOdDiJkPXt38ih6pTJpXuz4beE2m3/qFu3+UL7xGGiXvMsl47btRkgLaL6OP3uTrFEFZFL6R+2AbXCf6V6pnOTwUbxEhW9TuY9Fr1teoBgWWzmRfc9DxMFzJvey6mUDK2PPftAftlcXmCXue1hep3Ad9rxjfG6KTPixDBgo1Qkk9odoBhGozrLbWMh25VsilwYoM6ZNpDMih1QXkPUnrV4y8iXHLogTAOjpGzLV17xhfmM8LdEejNNVg62Har0xh+s9fL8FyTe68ZPjFkmh4k6tlnhkzIh3WuFy2RVnUgJYfTFSHPz6GKxWgKZ8/4m7EhJ2KzzqfIUBANozCsRfRR5lRvXAQbseGJbWw5pyI3ZZINIlmI32zq51Aq4ts+qZ2Psp5Kv+wX7pPkxthGlLDVVb6lH2xJ32cTfD+gR
HKuLh8dClxjc3asjfoiCJBD2A8my6UyEPcMXTPpzgyL/+QzTv88hn0YGucIz2/eFfO0r4pEvvulfRo2FQt19EYCnT3u3bCxcx2JNSB83pbcJZMCBaJ8z0ME4mE/1HSYcirKyijhrks2OoUcvQeF+URpKM8lXXrAfLSbRzdd0LnhN0L6NJk1qwOh2pCb04vJTE9SX7uqschVLTl1UWcOD1vhiPsu5J2HXmHD7dQm0XqhoF/x66ru69IqRx3JRoGt22ZRO59mttVldjoOrLcsG9jO6BEVgngYF/gC2tBO5lbjm6HcewHvEQDuG/5uE2xvTxritCNCkI83I+TO/exjieqGEx0yK7h4YquaBLK6YWkxijGWxGAIT0jbIKxDldrLrIyAqQIdKKJy4R1GcwV/4a/z9So2Ip8Ezn4HJi7crPtObWTWE+BTuVhPhnN9etg7NjCof2CpUKQJxqrF1F67kA9+hlJWZhWnxLAS+OR07ht9vi4mLrp5RCUBhZL5RBUbs8EEyD1r0xzE82jSYBvVpkdpqxTFa2k33HBY8N3OVbzIDNGXqrmcOIHJddkVdFIwUHwfZqjtjT0o3c2GHTMRg5jvYnws62n0gaK1ap42y+ZjJZCiGntZqJSg6fq3Lb9R2TUs5ZwJ1kTR1WwbmAtojTLn54rpVOlm8Av6bvnVvjJSZtLaka/JA6AIejCW8PbAMgSBJzy5apbAHmbLyVAKh+TJLhOGSx+zDi+G5T7fnCHHdE/N6DmGengPd5SZ29wMcfUlrILPZN2d69Lrp0U9OYGOW0mjQXXvl+/9cPx91PrgYUkoqZS6CqzF6GNrVnCWh88BGqAOGtduIJToBQCfl1g1APRNUqFPsP54pELuLdRio9CebXKHXxPrmSGJN2k4VnFzq9P58eyIbmp9GwckKSqHHq0DOUqTyyLoJDFeCTJAcQuuYQM+9bS8JugMPD131yY3msNQIDGrw0t1T3H/yBQSJjmAFcOuVN7bP11k7Sr0UNqUuTgjgOOEw1Mv6BsyTHJ6Z0EWXrfR3SLaw6R1fZiXWpF2pko53JJjNcOqoR+8J1PgwpCPkSrgBNNMJoR5hy1y/erSmewKvfgasiVT4u3qenYisVQlF+hZDGYv9qTE3Rw5LAFY9LA+iovVQXMbAKZkASMq4GZufXFabcFltUyal0VfRe43OSHItBgfSTOxJP/AXriryu5c1hvO2EJuWcPigMqPoWhgg9iEETa/4OmuNihmSV57Ct9eZnIy+jQc8MtggKZjQ/OYP0Bg5XxIMpxTW25ZS/ebI7y1iGOcl29VZBKlBqEl2uTDDosXkWGFUQT9vlZu+TzHLsaDcXR31c927F2tsJJrRRsri34Z2C8ANShHtapV7eoR/JxwHkeP7QCKz1A+eE6XYboZMasu5IBHWHorfB8org9dTSE0gR5qNRlXnKuagY2U6hgXScaS0FkTi0F45QDIbHhTOqM2yrnSDUF3c+HoYbwbDJNGzy2dDCKf9AJUrC4+/PTjoDuDKplZkzf3AI4VFg2SQB9jNWAAFkiitVoVFsOQSb7WCnfMSoCtSpkUYZoT9pB9PNrK+PHNfheKDH2fOaNiFgHVqAQGNcjfeXZ11rnykw8w15zKHsc5Kfoc6HxWm1pgK0FMGhbFs4I4SE1SDqGjrf7KNhcRz1r58J97rOxxd9isrAxWN7VavKjHvhTK508CoHRVus18LT80/+N6KlnPs8wP9UiXkDrc+z5JJVBS8X6alC26bwAAmzIddpagaSZtBvlEGnSljClGx64KCEOMwV7i59KaWrTMFikKZMua2dQPs/BFLXs80RZsH76P4xaiv5zODMWFFiy9ug4V8HsIpPmxor/rufCqYknVZ9SFg6IowLNrZ1rGKowE+2qZCjeWIlL3juOqlbrn/tFHTTWQnWmCgK6QatMPAmC28qHT6E+IexSlAQUp45rALLiMfZgfX2/IzoDvhEnCORqBl/QoljxSdmDClLm6cro9buJWa
z3ZEgfD9F2iixqnAr+nqdFnIXWQ+WR+4C1oBJ+pPbC3eLYbEuuCrCCa0Ns55P1YZ/F8qaLETGjDbVj8ZkTuBzunTdE6Lc2WVCiD3oaiDiBzJQ2u3WGSj5B2N17OIht3ZdDuI2ovhlQf9MGnXHnC2N5QwWWPSauMWz3yNyk3k2XBIkq1l8sMg1ySmM65ov55CN+izrNOBN6C0iy70ovJBVeYQJFUSQfTzyKNWjzzCaKYSiMAeRKf1FJsSodyFLfmAX8ZnH/v0XnB8kmDCkk9cuXJKENRUuKMxaZrJ0F16FBzvhLwepQJLU7rgx0sW2vTqfZ+azEpU1yj02sHNHn0osug8Y2Lx1sdTHvRzhiJTUdO48RXFmMvPCl9MA4quBl1/1HjAZ7lS9TN5CIuG/XRODgSzAV5uA4MnUFZ1BY98pl4170LfF3eLqqaf1Sp+eKa0GxIIV2G/p3tN+9tdTN6IhS2wjAjqTcUaceKr6M7HXJbTAaXeb7ByaPyCm0sZxZr4lk27x/h8t4qDpP71MIX2rlYKUs5c68+wIM70Re6+Fz3OhyaE88v06pWnsyWYRM/QB6GrMFDPC4/CofJDFAT/aZsLLvMu64z0qMlzw0SM/N59MgN11qnRxu7teY5N2qnh1OGzbEMNUpgTKWW3AneldMZpPT9TgUIU/sx9bXR10WEKa1dMV3ixOjLs8iRDkcELSt5gjRaPKtuWnmKzVVqUK0D9jEgwSyYLPg2rUvA8OadGFpXrK/FVEtZqdZXtmLuPNbDmUbrI8kmTotckV75hzisZAMlAoWiJxnwARBtuAlX1Wb3b1xVumXcSswc6uC8VCqE57Xm2AdOaYoVVbP3nucXze0mF+oIQdefnEdjDOaYov2H37qmgkmiPpWM4cz19USniLkEv/ggj7lvID+SA5Fd52KbfsoFq5Jk9zPhudDjfusLouQ8kAJdvjB+RisTEljhMlRMckjkjYJ1vJgEHSg0hhCdDVe65pF50i8OsSYrtfLZatsif7m2quVpbJQ0Ka2WtKrvbZshhgdP/lUWVuRPVikXnSKhSjY06qaAUHNuNqeYNWhgy8Df7Hv1TLKUZkl6QS1a5OmEMAuVAsvYHE9XHNYJ3Rs7SdpeT4Z19cxqUr6SOkKydjcsQihhlvzGBOOhDhXNMW/VhNOl16SJsF5B7MN5CWfUy7U1iogriPeNkcYcMkCLFfCZhYPUyE2OQZsY/VyUr7RzRqfu2QpTrV2WM61FwEAKpGx9aRFjIyFeeTqbdi6AXuJIh5VLZxV48p5TzUy+vtFg5cvFI3MNWaTUENp5E9J0SBZdhU0Edgf7xpQBuPGNszaxqKF9NasFT8D1qLUCz8VX4Mfxwz+yZDC0uNqWD0WfhmPjYeXpUMgW5cBWZTho4faCXPHFGcBpeipx7O8XaiY6uV6+uGp+5aR6A7BnpUVm0oOImlYMH2vnj3g3AFKk4geBURUhybCq0RuQ3h66Oci3qJFsp94SdUKm+1ZWJbVDhDIelfyMffPxZD6kFhRBlzdNphwwnBjh6CuT8MsSqKpE3AKFjYaFh3jo39IKViuQZ2HWPRZdYdYUTl1SJxbTwF3HkxkHAI6Lw1C5g0pztSkrHrYfRl3Lo32BaFNtskFUb1mHojO1YNCtcgHp8t+y4s2mZKhE21JrzOjVgIqEhBM3mxORux0OFpV+SLZ3o6WacJkzK4GMLfVLZA2sY1bLP6kh4vgXg3XhTYxQzISnNpirA+pkaJbqWlwZyYkOFSy7MMrKZxPinBsfOMBxkw1/MMNwSn2PhkEGD4s4O5Tz9Mb6Zq2Azuyq9EjSrtgwBwKh3/Jjdlg6ROCE24VPbNof72u5dZYwjc6y8UcBpDmv6BGIC/J44WV+5TDUJ2/t8UyeHQYFS15CPUhS99HgxMiDfc/stEgyY6zxoYfbhrIZ7kEkkW7tXCqyLHthceIB7a441eIt0uo78pDUDuYXY0XeCDg+Qc0Ggm1yIDdYDKGq7fcr2M+ks
08UPZyDYcUKK6HlBT6kJUNSNONxr5ptPASIYdOwHhgezE338VmEP5uZWR5hCqE5ChAG06SO7AeeeXU85T95ontUDMq01Nl9RYRwHh9FHFSFyMMhChvuVreGo5B8eHv9PD/HFSVrvhWk8fJENtQeN0i6Qv3S77xxtIKmlDtZ3Sow2nlqYaI36xL1ulgcuPYMrWGOzpm8IteWXen3Hc8QeAtEDtzkLW9vhbCA3mLUxaqOx9ED2DWYsVFUmS0Cmd3ZrknS0uvM7NE8CRQT5A5D3SDbhIFM6hbY+JDWH4mHKUJQRXkUdp1kK0DrpFWQV86W3IoqwJAsHi7A9/yDwGp3/Jgn0wzGPqPzkE5bmLNJGA9MRayM7J2hubnDo4MQ5lHxx3NY3Nv3BeQiWSY/ctzHoA0GKjSyMnQ4FD52AUhVk1MiHvVk/eTRQfQmPExqBYtg2X2D0ASRT9R5PWKusisVj+yLnC2AdJLsyPSi2awoIcE/hdVgBioRai7qcmIm5Xp6ax/5UCapCK7VqF4OZz4YzKFnUz2E1Ej4Ojd1HutF2RJhxnSis2+T2Wviz3nXhayyYkG1Zhot/B33rCQis07O+hswRxHx37ei4Jy6RlHhurCjHHXolQQgrQPPzbnWVvdV3jz6ysrMEzw+OVa9RaICNY7NbYrB4dlBgbEn8ZxmhYYmF4nEQwj5wJSOR7CLQRCb9UC44dfWBcGp8m/X0/FOcaKCCXJqNuUtXsLw7XQmjCBkeNd7LIwM64KckV3jYlFBVAuqrFMvJnK3hskSjqT+qqfe2TzsIU9wi7Zz43r01igFeWTnAg3iZFH1R2vEiAHg0h4BKTB3SSJOAVlVQZ6MpT2l4kMIjorlB3At+RQJCzFpmNQrQuYZqXTf7CIuUnvp/OZM10XZdQckpidksYG9yx+e+vK5dUmi9Uims6SrSe1+1NuRYp7y42iqaJhO+5EpvOBPvijBl1+XPRksRQhkYxYYdiludZV9pyfuOX8crWf+CTM3wXe5sUUAWF6PzskUCuXa0LHg3DXOL7F4JMjI7uaaXxkq1y54Q7FdJ753Qn3oAjzVznKsynlOUseMA5OeboP+5tPFsRjiOMsHp7R+HbCLqCBQ/tqWdpCJ6ZZVg9MtsCtncbCne6xl68LW16a3DXLl6W5kRPoWhyIdWvMoeJzB/TQYl9SXo0M8iPBd65M/9FIAwpPW+mx2iHXYId9lli6EOL9cWfg+b9rL8ir7vDobOp2PhHwyAKlkhmWfHMhQcb+mwqATnyqszpZ3680oLAQgZQ4DFaZ9tKFAu74hJXPflWzKpYZ7oCMZcVk0fi6xyvMGdcXTWpaaBnwRP1qVuIZHZiXww4GZAUKD5342vs/s+i0dJlzWKVyKhorfPMwwaudoppO3uHkk3xoLaXEfnwdyB1pC7XS8ZyVtAFsChzOikVaxRmmOR7FTJEdy6mUFLGb15JIiNrcv66/iPOxqdjKkpa58cvc7Efdcwvg0qyGUn28S4qGZvAKVMaVHYQskeYea8lE13CxqMTdHtvZ0v8czzTioyDzR0rbPtuRIN/88kpLKVdhuykMRk95BHCVA8ryFz1r3eC4uaMmEJfZlutYZ1HzMnEkPVLtg0OL60AYnjmTUSpqajHv9icXGMb6O5+onayIpsuPTKeQf2OMpVMBZKdjyM0ezLsmHWaVHznu8wXx1IV3qmwPQmlVBHSaq5E1+IORh4lCftcPCwodjyKoKgRT++kd0FW63N2a/hpoVLFt/f6YHHW/miglVh73hGg8AgDFWlbUbc7q85LWvq1idrV0kox9MbZs8DNdFLJpUWeeRtLi3kooHxluIwXktHGF05hyRD/OWrcmBimK6jO0REzMuKQz0k7P1Tn4E60uXEHBb485YF9KcgMaJADhAkvgYN3G0+UkfpGScKA7Z/EEJ+OuzaBfIo0NcItXEkUtkNyxQ80IAqIAphboD+po6WjX3Yjxh0Q4q1ZrbSJM/UXHCPGmzySO9Qmgcy
Zq1g7eIsNFRj/xTBd0vOVOzRzHJziwMmEPCeRTCvFrtWJ2WnHdTteRHV58M696znXNac6a1z3dJ+gkUlBN4ptRGwSwvAqFmyQdWpwx5A90fOtBSK9KrAH64BlV2WFNVUVaSGVlxsPcuYU+a5Sk5QszM6NrrEnv292WyjzDhufxH6YJCZIqtzHq/0dVoye/cYR2Hw3S66CcEpQzrQXTIraKBoYRaktyhRYOGyEkZySRaP3KnedI4sjHnFqFfm0Nn8l56emi4l2ZJItlOaj6zSIat9rpaVjxiPIN+ybarrrZE+khCBIDbCW/73Auqc7JItzQtIupexupgGltXuvpdA6Jm7eRC7C6Ik2858EXrTuB8BVouCqlZHA1X6ACFVrMpXlQNUD7qjWm1ECXjNEk+dDp5R10T1pPqAlUqY4nqVD+q7U1sWNzGuXtNlWLkq7F76ydkWyNNQvHowccbgBhQiYNPwKszTpsHFsUvb84Pguxt79RyxNKgUSeBZ9RXtz/aPjzMSplW+qtg+clN/Shy1tl4rKR/gKOohpcpjp6QE1zLzeyBG6gXSu/MmGL7LiIByD6VZvOp3iUu1zoG+pW5FL3BJAoKHdM90zC/3Dqvhc+BGl9MWL50zGOuFfsyuy+URrqqxQBJ8pYSZt1kpCffikVcbPU1tk2YLaVV21TeQquQnK4BxnZnQXb8We/qDSD0etdwOGkkS60aA9JnTajajO5AqsOTfaTGSrju9r3IzKhdw+ztolH44xbrC56Sp32gSAxKu+1/WpWiPgUYvF3sHfiJBggdWBsN2Awr8FrNg2TE/nqrKbniunbprgLHF6HYkF85PTdcuR/okjx8JiqGwAduhVYRWcuBkTsyq817NIIO7LQGqWIeDfgMCLdWwaM5xy+y1byLgl2ecE3ftYCsjy7iJIyqaZ/TZz/m9+UFYAiZFst4qn0f6SQ2TMKpN8zC4l3v8msRzuKcWkYIwAPwFRCV+WDse+InPMZZ5MmsSxAQAxGSLFkqI5kYb1LkW+RbVlB5ukh7SEq/9N7rXtJ98cBbcFkWoFWKgqKCqhtMz7Z8h3Sa7yv4oHfivj1JUFkyvvRJx+mCHLdnYl5eLVChawbc9MFN5OjdJaHoHCfzbQkVyzTKUNVuYEmzh6irkBe6rUnRlSwt4SiBNqhQoNlF01J4a05rLXl+0BNoKSjPe/7wmLsLuoRF1U1021ATzgJ1scIVq/nhelj/8LmlwWKV57Og7jU5HHGywmllMJO5yph6tvv2fEuSJ7bcRNGgAImDliuzYG4n4cJ6AX3I40XyJnuiHEN1BLIFjj+5G3+QlqNaflKSppP5ajt9CPPJi687Gmj0gURJZw+m2hucmlQmRgeqnlKC4IRHnL2VdDbRZ/1LIJ8SasNMcYbPWYRHM/Cebp/1KMzCI5TirNJTgzcOTbg+E3GO7xtS8nWCoeP4z2YJCNZFq7o0ZW5w2Og3ExFdSpNBXqcJoMJINM9GsI8bb5OVZSXuoUUPHVjoWQAMjRFoh9NU7351tNmRiLPU8cEolNjoQp4DesTuVOtzonoHc7yzwoiuE65PrGyPpbIuAVCw3SG3D1qY0MpfghQx7ZfnyEe6ekMXyYPDdn2FkMphNjvu8hzOhX2l2FjUWHWC+XkIy7w9OGUtO2VQ721agkL4hedQgKBv8VDUnW1tzm0DvIN0Vu5DO1gz/yXZ9qFQcUpG1BAl7jgxXycyATeseAvtV5mKI4j37PxjKLUoMDBbDnQ4malv99dqnxJKx0NsxRDvShjia29FZFHQbCKz0G1ufkA0oFudyq7uTrAjPqS39B4WCkJQC4lAi5K92Fi6a6XXaKKuoC16vaznH6sJe6WLAscroiUyBevoPYuzlBMclCo2pBh95YfDsyVEsrYnBpiAvul5qo8gWOfMPwpjz5WpZJwqZTeeIdICTexHV570cdLekMpD9VGZUMteDSienhiqZii9Y1DTpVc49ig91XuoEogxFmMfcYhVYFQ0Y
b+/87b/tARQkuqTFhO4a2SfRpLKBs4kwMJ3YUNeByoyOrnnjN4DI06g6NEPVIRROZ2dp5kX3zVTRf+ieN6JAHmQsnHojXyGPBlS3Opjs0JbDUxVJOszE1xfU0RCr05Hoe0JSECQMeBCJcA2adxkwPgQlDfLVB5HVYK/ld3MnXtLxuTIJMy6FDxnd6CGIab8MnsdnqaX7rXHpkWiLyN6PMgmfjsIGUMJQLycJ2E1VQu1wyiFYzf7ONNZXSLewpstUBaI04Dr1GZ20Br+Y+mfm1W908qe4OEx0D8hSW5k+j0s3Q8Pg5X59w2FgIcBvn5SgTfHyPu6C/uCHZ7JeJjXF3SmJteQeP39buoOgKryQKU+Vz7dTg7g0Ai5CKEUp0H0sdq66OBAplA5NwM2koBTaRMS9x8WxZmfbqTkb62A5PEFkrtKqeyZXYrAmbKoy49bfrixK2VG7iF7XpeZ56ubwZ9JUKFGtB76e4oemIqLz8uNsBT5p9Ku2W7sRo7k4+4aAufl5wEDkTsvw8vJWF2XbU9lzZAX29mZvRd3jRI7lZTnkkGChtXKT3JELLUAhVJag/q6z7tWdFIkKVnRg9tHLFQZKAg475wkXcq3rNnsiPQ06DfkRIG82CHJBcwp9dVDM5PJR3S/6+ah1I4GFqiJuwaPDqH57bvn1IXfDdOkpJjqhzcK18NgJ4rVNO0YZU+L+yyAuCxGRFMRJFI0twNiBWTbIo4Fl0AWHX4HWlVaaHWwGqoN0RtyX0DtS/h9e0jdUiAig4xoizmwgPyLVpmZa2RKGK/UbnsuMgPFvSkrGDnKwTrGTTZ1JTFe8LTa6KlhMmikqGb95ThO49jXtz9BAMA8bEABZ4gyqxKf+WRGMzQ9hrnSqfRbLGIZxeDP1A+AWlE0xK5Wq/dV+30ZEDHeytansa44ZEZ5hThjWkhifCUT2NjV3Oi0a9K8rbqM2UnCh3TohIFE1lkNktNGYxzewq2M0nieo4BFLOZ91sjKn+JIfiiviSUeFBrtlrjHYlZAF0kwcGVJ0bN/8AcE+iw/Otun4ml7Aaqgf2RCjqT9+dazW1sK0rBNxx2mt8lsOhXW/WPGSDoN2ebuXIXFw64yN+1iUcsDe2zgdLFonZc1Php0VSolVw26a9epxietuj76OxGgHFey6mm02jN1VJ582MEmzIAX8zKR8C3T8nSFDuebU1WKY4OWwmYrQnhxC3ovH6JQrE3p+rG8hoaZPjc80+r8oZ3f3QKTQbRsBjVt4sRO/UThKARBwujOuZzrC105xMl53ZNXnyBvhNy6B/3C2R22b82WGZq7MQx7Bcm54jEOJM4Q8VxavC4Sd5dxILdzZsJg6zAXcWjs1UjRmL4fjKjbMKX0EHh/FQnf7AJzdS9XWmPRwId/UY05EKoF0SoBkpOOZpWvt/yYlaNBe6HxJnSHft30efLcFev4FUVLdjfXyc6uVIfVbEFnaCaP7hY90bN5AEd5W9de0hqz00GWTxYNQtjF+aGBrfNL2Tzdl4IjArd8ghdsbZwBjftpGCVjSx47XXQmreZpU99L/lxjKRYU+yCW9l3BuiMXSjSBc5Maw5GqdWOxRkavkL0y8wFlrtBjEnAc46D7R4mfTIzdKAuXQneX7ZatsAbWn3CtqSl4ZMvCQQw+cTOu2xgQwQRN0tyHYu+M2bKHV3wmH1lksPkY3s2keluB6GpVaES1ViJoyfBAkAwv4izEdTAR44MuzyPLerGjF4xMmaWDXyuyP43/ZPYn9MiqUlnfgid1Tnu2e18vVZAOp98a7wNjiK++/mkaTKKYTeQp8JdLnIKnla1C8qR/RGVGagS9mSchlj1Kb6hipzml6zSQM6vuo1AgmlZHAbiqiVfTgbG0PZMV2O/KdWid9aWabWkBBkpi06A94hHraxkofVG19ghyZFbbEo+ErxvrpamO0RiTKitXBvm4zod71w9xFGlc9J4QGR8+RVqoXU9Rer35gueRtJBF08EGDT7alDfHk0234
hH13wh5FLjLSCyoQU2cKHBEIkUcC/tDdiHkiZQeH+4RCg7E6uR9tuzzCyjLBpwG5X0jYv0OomYn36nd3Lf25NjNaUYnTWzy56UY4sGefkl0aM8liFFn963A5BagZ+v0BDOKPPQCKrsFfrbz9xOSrfOk/crUKs6W1rLmkUVzSLFyDHeZsj1O+pntVGaE2VXynVRbKQcRX46yek33GPJ9b5xTX1L9cJ3mUuddpxdPEhPpqpHBHE0W1ojGkQ2Y/4zWAEpLKqf6gnQKinTIjBvOnTbg/mAa7L9vPFLdnJacdMacwOegxQHf6oFezj2K1lyLCnfngz4njP5SCC/lgo9FUyUaxsmzJUk7+s3YZsXH7QYQWdU4JL3QrTx4+rBBn9Vsh9nt2grLacYP3OTLvu9t784LFFDW68bmL/rBzfHZHaxVp4qrwFDKDCW5kF0UliSjxx8j48lOPIWeQ7hq6xOrv00eYLxdB4gbG2n4ETQHtNCIMs8U69pXmud0PXKOkAR9taUg/ikso2684sMiM9QiYxFVwQf4hF0h9DTJ8LR02nRqHAZPTTkfZYuOBhWvWINrJUjY6I+IkWp51VAwvQ6dO9vHG+XK7GwYUDmTBrygat3ZChY6XbuqrD83wIehkafxkQJTqcoHUa0iFj3X4/UyQv1beeaktNhpVnmLIddtKLzIOmaIVbR3beclFMarH7f6Hr3BoEXSNR37Uf7eh/JDfb0KVTsgskOOnY8iWWf58s6aFQZYO5pEQyCpde4Vk+vDHPL2j9UJC9OJR1RRbhq+hXAtvL9I7ti7qPLvwq0X9jRBZxGFs05bE0HdgQ81EjscgBXk4czu1d293q6mGFcXquh1VhTON10XzD2tDhJM9tP7UmjVOS59JCJogNYsYfLemgBF9wEeQb5jOes7z9x8xj6movl2rne2aMdZfgA6fPLR4PByi4iODfWNg+DeviUsdFKgW2yy7MPXp1FlEBNTGIkZ/HF5Vb57NkIyMAdzyW/zC48z5HkWU1EC7g2S3YrD18NbVirH3ntuK2SuRuiFKIiW+0to6l7JsofwSP05P/PeH1Nk+pP8vjmHsUX6WTVzVCLGB0Oz/IoShLmbvDiHbUUiffJSj/cridyPMIkW4zimlnPrOl3i4+ybDzGnwq7u6bhY5YPlsFB8TnL6bkfsJqio1zyob5gsFXZcffcO97OtjWqVbhrAzkagEaUxMewAAVBlX2bHTj6I/GBS4Zl+bQs0ySk83L/8/U0ulpkJQivHAcs3FM/26bLhjthdV/ty/1V/jSBKdEMpY8rn6FuiIiWVgg9sVCCtTUnlrbgUhU2wcB4T3G2HYmmtic5CocisVcYeHegnwKGXm07cEc66GVqZkKN/IETA5leFrHVmsXFlC6yYlOqQrdVREtWPBWTL+DziaKWruPqAW6FlViYrh6RTPl1e4InuNRs+QlUccYDM9ViIvEYxJSbVNN0LS96D1A1n7HQyow+v3VeoAN7SCrHjBmmHPjyw4Q/OqlTYU/PMRiErCeiP8365SMmnU4DpptYOt/PbOhpwnFh2WWkx28jqmJoKubqW/8gPdktHwKv2qRrM2gSOlVkuaxDCmAiBrBq+lEuyQCZ/hs2l+E60ndLofryb2GmY+EjAQZeHzl8eEsWG3Jda4UfMMulbGcu54K0CikTZKb02sMwJrQ+t2AJmkLVzqo3i4SAiOtD0nmmmA1uHC4z7xgkx0b91UoOJacStEFpU8l3IyfAtPPWa5X6vldiyckH8+ShnmJvgMaPrByZY4YQZkgPTSaUqKcvQd32aQbpU28CWBOZEYcgik0pZouBMqiZpGOyHvTwHKbYZsxwD2o9FPldZc779jTnXd+2S5zBvd/z3ZYc951Jc/wAGs7HAhrCdF9OPeh5Az2urYGcSHbqIyQRvkmUFIW44r+beD0ycTmwxKWyOo98HvsukPR/j14FlFYnuzNsXHgpih5Be7i+JIdmUhCVZjhLYsWkqtFyjMx14D3R68
idJv1D0OT05WUZwammEdUNF5Yb4qPmJG0EM7gs5n4GGFJCd43Aq/GZPwx0O8yJihxwNamJo3V6dAS/NOzP1ia/qFwrBoMsiTSgPMKovO40OguzrUbkpD6bpSCCWuuukqst+8+Z9Sw8G/eqoGE8sMGbrmtA9H+wSJDhPzMIOj6kZ8VmzKlBAQo0G1dgMxqp4XYcApreO/H6cJZe83FEVl0YQN6BQ/yHTNdIvhbdyHTHh4xTKkRa+E5cHA99UVTEzeMRx7CpWDZMncdk3amkVJ06xHXYrGYlHgJyaLl4WHk6cNJICIr2wClv2EuDd6XeyBwuq5DUw++zzfT+wkbadfuxW1lPwKE6AQKuhHE/Q48kPztAYEsqJyYYiGoQS5ytMJvmKJ5nMcHHQPuSHc41n15jrVxfBJH1gOlJsPoNZRLmOvNO5bHC/n0+P23Vi2p6oYNv7Zi+ZjV1jWQb9LpwT2VWhyD06HJOKFoJ9+3Y5P1dB2vS3icN4/fwgPtgjg/A1jL4UlzBvmSROqi8mFrN6ZvGHgMCIu4ccZRDmBdBpx6TMUqOMRtlBBaj0Pu+siKHAO/f0CDzfNMQA/MQTqypIycrd11Q85lpZ1iR8+9gonMkjj6ahHEoaTuGuCRsP7dO1Kc2zjDgfyMW+dX8pTITuyJhU1uF+ujGe51UxkO6WJAeGaYf7UBLUDsGdsLZVuzQycfEv1fHcENHDo6DHpEZSCTbL8hBEv2OtI8so3z7fD0rnQ7K9/npJgVQ10JSWOIaSCmucqKw8qEAVaVSh7RykNoRjgszcsS05opFTdBJkCcmq4kkCJUTyHyCYq9gyH9Lgl2uczdg6C7RY+VABjoFyPWwjECsXrTrFHcNdpNGks9G+IjHzpzfP6vo+xJCiN4o1Yi0oeb8IMbd637utTB6m6JWhfGvlGbtJ183UBO3z1sstC5MXVTiAnog7dt0QYUz6oSMeSRa9ZTCWimJHfMtU8Vp9fsTXS3rp9524Rgdvf0XQ4LBW+uHkj9Vz0wMDiMqm9v3alAnn3Xso4y4bRE8trpBqcEtJvTCtbHkpWZl52tz5gGhFR2dCujqGGct8Eb9sAWYGHxa8egAHQXSNN/D3jt5afdXCxySF4gMSwoA1X4R5VxCYGLVfvUSzbspU6mjlyJntdHNXNfxkFBEWENFKXXCNmWCgAGepVCd8l/Gg7knGQgCVi6CahdF8iZ0ibwn59te4V6iQMQqrkuWVPFLaPJUXNSjVvUxYBguw3lVy4JAxIWt8vT/thGWLW1Cpv78l2jk/grgPN2or181/tDlZ+0waJt1s+heZvRIloliqC7NMik+Appcog8WFU8lruLQHh5oylVuXHKCVYehBDdpFicqgjKX6csZ4YHGbaqN8JH8sZGHC/uiDRUg+D9hOeome9JKl1v3SLW5v309qXrVXBMtLMoVaWx/w45/7oNiSGtGRsf1xNXseMj/EKnrvR9/RwxdeltKVH9AR0+bj2+6GHQ4rHs2VugWIPupsdF2yXgQA9kgXjV7eI9o/CrxiHUHIMZrb1rSFhJxwPtHmdGsJRgevc6gJF0tOdiAzIm/XFvc8oJqmix3AvPYt3CG3J+xTvqZH/gRIrU6WBNnxeYJTdWkmbtOoeT2xOlrQyTKmHJm3MJmhNyJGhh9kVaMpL3Fky4Qdi1tvsHbT6AMOcdpWlbhw6tQcRBbr+6HUuGZk+En1+byTXfEZ9KC5HtdmIU9Jr0QLT6pO8uMzLCkinkhXojdMl44V6wglPElhxABpu5c9SiN/5qQBlLOWgh6TVHfotJpqm/CuJ1tEIiZlrbZwciXQU+i7s6c+xPX6ozqQM5TepvqxQimH5Cw0QexbXFmUULaAVtnryF0t0NCtbpsWODs7ZvBdbUqeW9KmTld5NfBHv7WDIoEV7WUMw6FOCKtVVH54aAFdWxebbhnNuDCp5krycaF2zPJpq4pjQUfQpi86k4wr5lyc6co+lS2ls5Yob2Ii9FktYpwzAuYmL
LqgIsJsKNtn2EGdooQEEeoKdGoL4Yd96SjflCxyp5OUGlmoggw6kcw6/0hcGNfKZiArrTTs8ZCRt1b9YgEHCUp7hnQwVF7DxC21PLQzCL0faivqidMML3co1Zbr5Oxqy2b5YF3Ws8MWH7piAyFKkqd2uaDushYBGCTV5OoCUACe1x6p6ZAtpopJJvdndXYxN26SeFRhvpLm2dhO2tRvbPz43p5y3JFEKvjaPPJIKmGC95+ZwPx+O7xJoExmDcKkg4t4ZScdmEW7dFzGCT0dLQMmgitbr4g+ncGovAR/s1MaiV1Kp1bdLQllrt7Qq/vopFXMijSnsRlIiiM5uibUl1DiOE9Swbkdgenz1UzrlhX7uxFUHwBJznBRilKhEv2scnYR6F4N6e6ANG+j32AV+ZfBDIrJ95iV542mc2NLejeVb95B0myYBZfPuzgCYGV30TmZvPUK4evzNIFSePCI525L39f5PxbjDXnc5B+ixxHmlSa6uxMbhDQ8csvtRs7DoFCyToSsUvq7lEYXch1TQudDjSGHcpSGsWmUKZUSzpwxgYZwyF6kHXugWe+lKCFxYwfYSJ8q6gYmpdOtGD2QeQilxfBIk5lxmcnTqnPBW3SmtONgMqgqYqSUz8qGA9s/0nhcbtlvANJnIuBTvQRVf1csG1r7xqQuFtnAVpXzfk0yu/RLCHRf/7K1Jvr7WvVg9oUkCbBOMmaYhdTeUz+R5a+UPVn4OcsURRRbq7hMtXHD7rUEPNOhbgm0kdIfMPaElDsjlxJLjMMf1rGvcEoudmF67//DkCqpvGcXw25AXvrVrPoKRB5bRpalJ0R0xbKBnmUWMjeBHqlZRfkUgsyHZ8F2ZAtrO8+JUwVPkmbF+CGJX9rRAVx/qwKCfq2bujAju3bCRcg06Sa3GJhis745mJkP0byOhiNzLM6HC4NF/tKdWv/xEQTlyk9q75U58D1edIWA9+bXQMeBoATDl7XTndD8sgrmdbysX17kNf43TVexJLmSBH9JDEcxM+smZsbS16/Us+8whzHrrq6UIjzcgzLyAo2iKWoWGG1COhDDr/EYGsrR2JfDisjLwsCN5BnApn7VIckdF/a0HGI/a7Tkq2G7PLREz/mmEXKUQASbejQzXrFCz+lMdjF+tmOdycGiCFZX9rYsxtLX3V3iAWAM30PqzugTXnrQCxysjVfjeeGodIx2ESRjzYLqi2xURwpDncQIAq+yXpkrfioMBluHdN+J6OY/hPYMKrFaEasP5J5tTt93tsxCwaJCFqIQXU4rQ79nKYBsbFbDcHkaPK6DLW+rwFAYCd99hl6GnDdEkl/BUcx7D7RwauDV9mY6n7iItLdeGmDtlniuX7Nx3PtIbZtnr3EW7yKHG5lnccgBFdy5pCgcqtxNVre15ucsOenyk6sKOH81noJNRH2YhYLieNND8tnn7Y5b9yB33ZdSqt08sl8LSM9Epj2v9IkEWhck6ZPNMqfsK88UwnTDWs9UTLS6mWXtx7RWOz6t3E4UossNRPLXBWq79BjYJVIU1Ux/+WE5oEf25oNOieANo3fVGU/zjHIS1mzRLibfnqOGRPwlztnf1NhC/LzsageIM2s4Zp5JuhI/cG+fl1pPYYQ8zv5oVChW26il2XTUaSHls+RrgLS4KLvruRYl9d7AE+0Py2rQSyd2Wj91SgYqwkvtGDqNqo+VbrD5cdJXfHfOtbOOoAl7rYTL+KXojJsxlI3nMmq8X8gtiBHvolb7JnissKsy9nVHnJtf0TN4MkS4FCMa03Xor7RB60RB/1a1lH0Vs4KwtrDgMRAp809Dy0rNiyL5nYki3ZeHUPrDgt/n7k3Fege7Kctlyd4kHahaWbOQJ74e29esQQwNNhubEeR4qnwN96/mpQV0opWvovYYXTKAXNBrvR8NHSXf2b1NmW82c5kYmnEjR3M5/oyPMqWIxsdBZ8sAnVfNsj3NUEFL6+WKdKIm8WfVaHkK9LzHCZE7UN4hjdXVh1+nh/Q1o
VdjpcnatcqXI0Wr0PF99lKtV/nWDJsttSK7DQ0kWIOH4ZFsVdw1K6ev8uBJBL/mGCf3jXZM3Nb3+yisqs/MpY+3jQfo4PL767xZXV0DsFdnr38J04+KaBTRd8Mzdrn9t6nAd+fIlzXy1cogBZqR/LUv0Z//0sE06/sCCcjSefR0AxTjCljLt8kLyAhvi1c0TVNI+3QuzWdYfc09N9jWH7CLYLZM31yN26jjpNmWVCldspq8mGBGj1sJydvMq0wi8ULwMDEkAedKRi5RjReKuwrLn7JLwGsEHLVHhtUhMa09kAxUkbY+jikqPQsniaLAB7YUlDmimHzx9becB2MFPw8buIWeoWPsIPPgJEDDl5EWF3IRVOY/ZAcoXMGFy+ZWT8vFdngx07oazJYNHagAQRz9RL6qCYVJ/1JLIUDFAwVxvJ23tY+dHFMLBxRJ8wJwIUo4mfBzvXvSLZQmC5GU6AvNi286YCQFDBaEvxKI8cxidhgac/5+7PzbJf56KVLQRLtLrK5p4bxznfcOuNF0+VLiWO8hOUMk6PVqJMsZWQwHiZVjRSa77K5Tkp7n7gpoKkH9mybX6sUXiYDGwsKd5rva4wK7HPiCue0329UpfeXTp66YilbRalWYZDmIC+NzX1GAQbRQTAO2TdJfHUrf1Gz3iYj33SZOoxHHSS22GtVVVQ+9T4jbTIGo6ij9QdvJ0imsob6d0CftDRS+9J5ocTdQlT3XtwfYfSgxAJPjPEqp0s6JKA/N93m1fvEPqgZ02kn+vloKNZFI9Dx0aLIbWOxIqhaAB2PMCKv41ag80gftmEuswNC6DNlF58YRiPM0R/Vq6NoQ6xHDvnjuQ3jU8FP633wL2lT2WSrNMVA3ITd+AwOf+zGyS6cczkc86xfSYd1B/6WZoPbrzjDRzIrUGCyklfqtem+HWQAAr1tCWkDtHTbaJh9VDF2Op8X/wWpQjVYmrsbRtnsk3G9YU5ZGmErBu5ZvkBuFETWV8p64Hoc2XrWDwt2lyEQ1BKoDRwSV0BQUPxWk3gz4okxEKCSMs8JF7ae5XpklcuZ8VMlXWpvqw8x+WJx8rAZxhDPVbIQ4H63PylqQZNjInCpqVfqqmsTT9nKG/5y/p06dCQYJVWO6Zcr+d2t7ETP3sNyaAfkpIvC8dIzixsZveGWXwa8HagoTYN4FtklvJVNbG7QSF4gYKhGEABDRWbW9dZy/VDcwrXXHfmt7FmYZC0VwcvX9RH+qIPCvpg/nsIkcQsDx6LSxFDZyvz1bJQjcSUqHyO0V3zAvIXXQG2HR2QLJAeNXyCbps5toFqa+7p0RmLMuRSmTfvKzs3ieC8W2XEKpUK6Ae3ZLB2BB94B6666tKDt0W/0P2DW0iNeXkb1Kg7kWRmZWSrKZ2iw5q+X53mkW1C++dr42V+nFmK3zNR1NtyA5FtCNe+WwzrJE79a/JCm8oyMnnCKL/RNFRXdvPU4xcTc5OxYu9ML2YTOcRHCxTa14jXPlnGi75sD9Ab/9tYe6Nd3JOEbDw9ysE8S5mcBfDNsS/VzQPQFk4MEsRCHgVh0PJAOs38g/Ykrvl7LNyZDEw/oxBCenA880jx6hhjpaHbByRvcbN3wu7H66vj6QIPLBZ4RZRG2+pMxyM8lj8qlk28C2EHNjwczlegYB4fXibqwSG16ugFrKe2ouhbDbNotE6H5p+9dJoGLaGuLNxfAjzF7t9xjxlz5QVdB6w/UXDL9sy+8ZQHaW67bpKXaGReZlYvrPF5rcGu5HZDtS2psjkq9F+5AR8AglMQ671P04MdzeqedfCYS/bun8+vdGwrTJuf7+Bnd0jD8vfVh/fPx58nQLQjZfSkcQIpQogSBPHcKDC0ehIOswdzbOF+4gKLuUG399v1E7vMJKTncnaDYwbAE03PLqish1+IZBD062FeUXSVcPYaS0nCvMLczmGQKWObQ7+xvPbE4/cZfMCJR1qccb+KPRCDdHtuXo9EZscPUJtdFAbuGTQrADDvUmY
lFJ4KcLsJaqTRiclHRHuj5lavN9UPF7hP+OZGfqk40Ve/yNasY9rEssr+y9QobVD/NQB+L1yllOjdzbo6oMVu19Zdpp6ryclKZu+SBMK5rEWLySglBGtC9/FncfedUuYQc+I2YUuDUxhv+YmvG6hbOh0efIpYOVDcyvlaMDxOP0SZ1ujpm+jj5k1MXYXZVWeL3SLnINsK+jKfh78KmUTEu/tTGjrBhjdXwmmeJit5Qai4DAKAMpuk/XegGHyzK6zVbsTivrdP0h4QOGIwQhKV5N5flPfeaioZizgZmAA5kgYFtTmfhduIC7/yJuOH9qKDS1G5dOVNW+bjz7JlA/UM7Na4+UsSGvsCW8lfUcrHR8XDSnnGk2Hub7N6aGZo3tx6o9p8hoK4oc4uEZKC8m0esRBCIC1vs+8zdD0PXlCjkLFgKm6pPeF8K8YguCV4Qk8P1G+8io7dAQqHe+TKBJ3sK989VUjZfkHEZ9TaUdFQnQUSplNSXwhQZvjSTir3WSBhOS7CpoJNLrr+9GwD4p13/lESumYfH1VXrmGQJGpdQVdxmGrpIgD2Q5UPwKm9gTrMWbSMcZUFtH3Gcwhx319WDCXrIwi3VaFuM6GgqblBHQoPCjsx/UCEhVetFIa0kUxFbSDs2eHhzoRz15qp3H3F+FCW7h73tsXrNQCu4FHcv5TrIfBcl+Hk5pHqNdKODpr1PJWYNXXgjuqYx1RIrQhIhoJpnTGLXInmteL29Qmk+9VysfDkPWAyyaiIEB2tpkLnkwwOUKhH6Z1xToFIdFOjRL50VhjFl717oFz7iOg1S/9g7RtqY3Hnaxrhn2EpXgib51ZUf4l/xN2qJI21c3ZAn6AIXXnC1oYz/yvhTPf6mkTRPbGh8q5xuyWcgRGuQJ19p4fXkKnPJmF0d+oxKU+0rVTKrY+RU+slCmBk/ZrZJoliuwDsGp4m+04LaZzbhyvSX/CrkW/iCwD6edB96O9KonNqGmDXdz/RAVMK3Nlw14g03Us9bEAllgIWszh5tJqyV0l+nXCk7+dnBUqJUQesQE9mV+ZXPLW/0BNEGkPgg5tT/I4DHpa41J1CNrSAnzA2qgzRQ0+P4bsityFGDrY+D0Xw+MbxDOTf/Eo67jjF9mOWqEdu19lRapnNxayc18y4FtdHUe4z/W4VG04OEVCaTp/TjZ6B6eSgI5Azi/XTxNY+dQyCdzs+S4uFqaNaDXXAB6pBbiuOLsRyis9bymPoJxFGVLpptnq7GwfX7TxtoVMDB+CE1N+bSyAaELX27M/B7Q+zIZC0AJlOTwhrsUZBT5vXNBC9FYfjOehpNKwIvgCINkc7lvnsJMsyeTPM65IFhX8fE5VLxo32aZuTY91bE52RRFZFyUmK2gQ3B20SynWSHNNH+V6tpnbZrTBSdZtBNrqwjpNGg3LVQBQE8eHWf3gUpxPjmPAeOVGmUNDSgPAr+Z1VFpla9Idr1ilPCqxezLU+csece5ioFacXmSX1Eda4hUj7UHGQ/Gume/pk9OS2xcJP12QllaI/eLsWgBkIUBQQgg0v0ok3L4Ch4n4o/bKDF6lMDUVCU9ZV8CJVpSFSU3VXaDKOrP8RNUm2r+LCMYor+9Vtix2AqQ2pkj0x5i8lTxEjJbHzHKaa4GeXyCv6Vt6Syn8a9NuoAYHlK9ipkmQApl9gbuOippixrsqT6wSHsmCYOMYFGSGQUCez/BwLwg5DJB4JcH3LrFIstZT8Ki2Sm98gqJoK0Cmle20ZW/cTpNaafAEK8f3MwOEoh2x2rQ7mMDuUCUS0UdBXhSH/4wL6+3VVKywkphyypDKSbXLYarlcQM5W/8ozjEUoCN/o4zCGFFldB5TpYVUPAPduoPToz2sCT2z7b5QBJuj+qLenXK9vIpQfzaUF93EtSw+r3sEIh8V2Onallm+uWrFU03nNpqHNAHHMOwdjFJkI/jQfGJqemuRdcuc1cQUzjQT3Jl5P2B4SuMCGxWvnK8UGw/OpH/hpf4bRnrr
PjdwUg5HRthJcw1nDTRDXT9ISFjgoYY/Bz+JwhM67LBCjKke3W+4k54G8Q3usADFMXX3apUTsOKClllVqtW1Fk2f2Z9DBf4rRsF2prWkv5+i02O5uqSzK9RVg6Hkxi0pGmd0T5cvRGpQU71ANeXSscAN0xKhToky3OYOSVWn5Wqatb/q9J1xyYCrE90xcoSH2FcJ1O6LCVS7GsSH2y3QGpplZ3P0FLiRDA5qpYlvjPrsZKhk1tQbegvXyKk7XWnaDxkqsbIrbK94YpFs0kHYhD2DjzfvbTgeCnC5z7l123ZM0onZCnecspW2zaeUYCqfzHTZThVVZGgDved1BdxjKyZXz2Ns20QiyBxQY7Cc2onDg19Chn1iwbXr7lwJGk+FvrteFSIgNltuZEW9Y7UJJ0tgQGubJm4haug9LcIYKACnzgT9Po7+F5UHO1ay0Y9AiZf0T502rQzt52lIT2dZ+254Q4mNlHllX8Nwkz9Rk0s3X1pJq/upPwmzPxWNkmAKKbd4lHdNYu1orTisi8X0CH016hC/5iwYjVtZp/37S1enwK65cVositaSqlWjVQV/b5wfrSHGhu+VsCvE20kPrMs7zOhPHU6WrQBaa7QV2UWOTdxPJkQnlWvQzzJJlvmaNNvDLxH7QKTKameqtjDiAu32NqpGKXm1EOTHGZvg4QllleLeA9utfO2I2H7wYAFCiIDebZP0KuByTQXe/rOGT4jOZhr8p6G1lZcqKb6x2s+ApNQMuD2/QpvESXllC7q0lXZK+UIOgXN3bqlNOa4T18IEpqFqlNYZaXet31QSqEuLzXYODH2+KY/QxaxX4dyi9Cqga8tlO/KoDy2ELpTYACerSQjkIJlp8t9WA0RHZngcmKWRWBJNX/JOb8QmP9Ii7SbU27xv/mmGEUdOWX9WO1g9ZQ3/pgxbSrFPmG/cOGytLc9uoTsotpbe+GJyWTKibO4rtSQJvNXhw37ZWi3naRGb0Fme69t7kLY72agTVfBTGmVwrjzqLQUPgV+arJShYpqWhSQj8QruXZmvxeoZWsZ/gpt7MBGX+899WXpEA+oDoWPTuZvywsbkGVBoodsKjVsEdm2IeqH9dY8ps81HDCVOSQFkPWirJbRe8nNmyNdzNrXwMR3K0R2h1rM7aYDcbBT5ZwzpyFh3O2cJw2PvlMvyL6Yb8D7ZSC78ehKKYIZj7s07e2SsWipmqK3zBGk2FMp2Ech06fYbC06Vy8WRlG9XKirmwMjMSvHWYaFOLB8JA/ZgbEDylaqdBEXnU0FO08u5XDWSlYtZKG3bOWMHLi7lQBIQCpLdl9JPJnMLpIDHfMQAQEvBB2TCVt13kvxGTN152VfAYtNv9moR3DvqP62kOCzi5cil9jnU44TBhHoI327T3kdvnTuVWLNOCXtSOF0d/7uZywWZ3Oun/txjyyKqy+fSnNRxr9KxDD69CP3vX7KWmwihy9VBdNvf5nt7GUVjvDqQqfSvMj/lr7zxi8k7UBgau7TDo5SMBSmJ98s5Nl3nWa4SNgLcv5tliLD5gVbLtvqmYaC2iox7kuHEESkLJTPBSYAgkKHSssV6wy1+dxM0/dlntzpY/Y3gLLkqXZsJbNZKv5yyih4TXDEpCZdmGmlY18L23Scrq2LlRtZpFP3UBIAv2Rb6pjXcQ6HqYAPmtGK8FH8wx4Z8/pmse7vv6+W0w56s7CXfvSc2/zBDV33gPrqPliyMXKU5LrhJndFjA+clbiazttSDMmMyjq6n4EUQprgo5hgtVv8mB/y49v418BHMjhrVoD17fT0SXP+/nu5YbGpaF5Byjpq/aLq0f7jVvYQwm+5q/maqqjjUR8Fcm71YvfHI8u/hfF7Fl1Ku/Q20Oegy7L0iTtZMg9uoXoGRojBZo1W77mivlPqZUSHBk3meJ9sUn0E+vJIvIbta4FNtKS/u094FwoQEPGZlr9FejKoaJq8aJG+sR+6yCiMwnhvvUdT5i/iq+Qah/wLmn6wdOdCvmQOP
eHkbm5276DmTE8SVUpUAtODwoj2KwBroGthTJiEy7IX9DpCVehk+3tClWB780MT2tOXo3XWh+QjUGVoUFkH8hXfQgyQ7uHLVMxy9SaI8QvIm3Iphvao/kp68hglNINUs67JBOPlRtsOQ9lKVVfXfkaYz2WBGgL5Fl27FAemn5OM4/uI4pcJP737Mp/w184BK0oqJ6Wmpht6hYn/zH0lwrKzBh2fnjPQBpPzfzXJksZ8Exq2XY3kXC8Pvia0LL+NfsLkIXqTSwalbUFhk0mwGAeTSDAwD8HfXn5pKi/JWrGaC8/PDKdlPjofpChuliR7dlHGy1wWCLztvsYAyeFLp9k6WG5vlgb0YWPKeAQediO+HZMmk0aq8Adeol5d80LD5yJRVpbXKUw4g+1szzsLwr0M6bLbHbstYAb1mwNLQTUg5vyq0lRARZnzfvyHvvy3NFEEeXIW21+mhguLbKLfuSexsH9VzM/wMFg1CIvlHcVAYkhYky/oDStpxTh7xIYXTK/BOFqRfVFaNylFlXKnUghj5hVBTUZ1kCODF/uiHZhoOa7aQcTtVossLS1XoSPkHKprkZw9NMAY48GXI5jGpl+r3ciS1CUN6flP7zU8SBpM4FuS8PVB5I7BaNNIMxoNNEPPW3c/M6FQJJ5l3Xky+RKlD18rhZQySXZ0GpbC55d0aBBSY8kCyiSyZsQ00laBiS+VCjwpA9Q7E5pqHJQsN0ldQhQjspuezWnUBd0R0r+ad229LGjnhgCM8lUhmC7YIrEJA8vX0HsQqJbhCPNVJPKCnVx/9npScSzVGtwMlJPS8Dk1rIPRXwWpHMABLLD2RPnoCRJ/lc9d8iePXIZBsej4iqtiwCRMyqQ6i2E3FWIFWsv+ObTWeZJHSkRpKhSQHxq4XgRKBlUBFzw2Bp72dItfG6Vzfr1oh3VGKmzq5fLbcMX96VPtrw8O7Wv0pp0ix8wwKJSz2UTbac1npq/EJeRGPaTitcdIwljKsU3LF80O+3BZqzDWeP4Uj9wtORc0xm3TwZzvV6bZV9Yq2ys1xXb0mo1kMwpvR9/GQBIJKp3ykJEoGj6oCyXK2ZfT1UtvSndt4NEr3yqrvnPdh1/+Rr4BFVvlAYtMui0+bMZ4uYVN7+dJOCD88KE/4yvy8cm4+v01Ksbqnd0UMUd8uE3UourElfJFMuFhAi8wYcqY1nWx2uub/aOB8bK5zjNDw7dkH6MmT6k6h3MNTseMyGuZ68pFTbgvdobzo9qeK2X0ItkRQSobBO6RnsAd/ihTnd0gDajYg62ubzDft/+2cTdqtr3Bw9O5Da+/wqd8lExHKY6VoTNlDGaSrOcj/wo48p/HKeogSlngU0yGlsLUJCtCwkrFkh9fAvwnaflFZVWD4dzLmUyJKRt0t47shK6yktEvI+Lf3slVMLSfM4CRS0Z7hd2Ozo94XlbQ2F5KnXqnSZBIczz084Ho1j7Vo21xHtOOc8+DAhFflzD3Vw7wbAzmB0NgCGGDYK0gKZ4TLh+wvTr6yo0PF9OLwGx5RN966XKKs6sU3cE0CowHA9GNuVOjTmdzAVVwu2/7e+iLHRF74usv/pZ8Yo0D58SABdWXKSPX86WXJoRqYLvUbIb5V7ESzasqeZP4ge2njOUV/4o6vqs/EWIoR8Xq9EfcIe2jJ51VNeMV8zSgbViPoMeIkoHKWTf52pki2JDtZZVjuQ3/6NlthKGKMB2HUIkmuYoOuJnaUk9JTaPY+VCChkdQ4T5dv0F6rwEOD+aOwXUtDovc0py4zmGutTgA7nA0xaP8K6Rhdy5gf2meApnc95CZCI3KrKPm6+VVmQwwZndxhrg8iCfPD1JNiAS+8EMBll0dgpO0uFSOh/CZAh/VIllzE1Nxv+obuVdEhonrbsOAdS6alLEwoqBqzYxV7Lp2CG6oqL9ilk3mmwE0TuWffLXltZFq5DkQEMPr4nuoPAEf2xrl84H+5lrGhZ5y2tVxOD+tM/bfFtNWExacwNSV3b3Nz
oADXPMEBWrVNK79sOZi6z+6oFJkJp9yXxXTUn5dP0xI2copoKC4/sL2pGqK2SPky1eJLwqRrckTbRjUUYniUPfyXm/qGhXQbVuU4SEv59Ep2LDGACQfQM9rAaOup7Q1s8yqDeLymVj6rqCi+YGfWAaHhyQnV7oxUNv9smv03QJI3JiroprFLdcXx7psdhSANUXVFTMvUPZskiRU8rgmEtCCkIYA09t9DWnxJSdw6loRR/5SNs/R/m+bFFBLJjAHQIIeg58vRGIhvUk/WUTzIOZY0TACJYwExk/ThZvuE/rkH1+z0uk1d3RDuIloeivn5pX2jBgH34MuIYouFYKeYgL8RNJWveLmOKlsH/7jK21IYTB+qPlsESHfTLNppBSOzSgB6GR7aB2Eti9rQ27MWIVbvF85I9dDPylmzzdefdXZmcQkgoXi5sY376/w31jsSnFBlOxbq0CoFsFF+0IeH4mGfxLgfVl1gVYO7QJLgfZNENLtPAhDIWm3lb/mboStmLkL6EF/13dEVRNeOj9yBNUE8d+aQZELsYpamTuYFLXnbUu9uSFckEdcxhXp8vGg0dCvve7lfGRPO9xyAesDwUmTo2PKLeIvFpBHbgr8ee6T8Bk2w0W1+aw+Thsg4rIIFg6Yw3QtsWixpA0Nez2VzvxgYL6KMjoZbcEi0TzoEh91T/WhFD40JUZXOw437sWssFQmPFm/2tuUA0WaJehFW7PEWE1+skTrHpn4TYxEybwC7F4WgbhNZ/WkMh5rWch7ZiHXCkBXzSbQ3LDwCVIMsVvSmS0qzbB3vivW+Sy7viuo6HTZGObLYUSdJZE/cqRiPloWbmaiX90p3UwIbUuBsWAAuUZG4ZmaSwJeZktREh7hU713CSSfuaQywQXjtMDlTG2D8sAhoiNTbl442MZ3OqGOZKY3Xyf9V8cBq6i8Nst2Ec6OeUosZQhW0if1neAXAUZEj3HkRL0nuCFjFjHUeCpOm3z1O05rNVY+PJQ5A6+XdwP1IJPpgud2EF3uRqzkb3wBS7cG7BcWoRJzaDA5hT9yg2Y1fY61sZ9RfITPU0Pm+fcM4kQ1SQMp2roZz7JCfq6ujg2pclqFC4XvEYD5NB8ZzYUKVF55NlQwJ79+n9LulztaSmCQpIVguUCYpLZTR7fUl54fvyDLEKaAg7qRObtzRZl9Cdb7LK8PBlL1uEMkUKCNrOGPstpBwF28cJ6m2FINYMI1UtYUSU4UQewfXhfZE78nMRTCom9g6xX5q/it1NKTLA3seDrvLN7iQlVLwN1PcO/XhEC4xRumOcfeMsSTtW/+Jj2z/WlRZ1eAdaw5weOmH7gGDI2Wa0nqDaOcVKFsMiFVPhMGQQhGd6MxjWJTjgEyEhpWUm/dYrUhv/Brf6yn8OVO/MWKPEilF6k2M9Hbd0g8LH95QCEK1GnRXzU5ukK2Vr8nQw7YFUB2umS5PyzSstPVbHOYSheJI5VI0V0DUsJYahdp/nezl5D+hs4tiA1VIAdOLpbD0u1q8wj+2WR9s2ndbz+V/hk4DKB+1NYziJLreQsS7EsSKQE1zwVDvvmy6eBeVMVCZzHCztz56gH8+4xtu/A3J1euHRb3XwIHhyyu84AlAsJsG3/MTbtWS06Kn3PThChWMkGqXUyZ4gW46q/ZTpvJM39ucO09gbKbc1+nMHlQmhukFw0k5+ETaXyfWSQQWAigGVCmZteaPQTm8I6VNOcJlbSW9HjIuwStd5Qb8o74eSfoilF6mM94l1U9avPXMFCiR+gSq8dubYaFk9WudkT2HvMKtxc3BQZpQUpC3G9VBV4S7w8QzPMpDqYoPCuTu0hzSnavDk7IcXlNdL0LV0GpWFFPCkiLoDrh8dR3IXjT2NyuoVX4a48rLWT7fVORcMpRSuWZ5355CFYeWp8CP5t1KC4Tf01jcqrFiQH6xHrBVFFwQVCTlO236YUOfK/dB3Gf0ArLn50ehR6y4w35qyB/KRhQJq8RCt7YlFWDcp0bhXILh
IOdJW87NNbymFSrrNDLOrqUM9Lv65SCqKGDs7EAHfCQ0Puh4KOe4OUWwVfPLycxXZXmeh9iVFHSvHME3BjnjuYUOb2mSKZrTnWSuTqthVwOYIGGSQiRswCms0zzql7eN7+FjxTuZtY1UDs2wCj+ekehQ3kmFAFLBh7dBGYrRKHQovIKTaHJ/QbaIAcnZMn6pBdVSpxGwINfcvH1FcWAtLAZOc6RXouTVt2s0RQ7zza0eA+Q7DY/K6uqnRrMH/AICjbGYR1o6f3MYdBssuKVxKPEbt0o7oDfWMyQy/vlPamTWpopOQYRJpBMIQt24s0z+JFc5Fbbq8aPmfkzhf1Lyd9iJyUbe/DeIiPZrtW931QNw5bZvpFXSncVBF8kCkQtRffHxhAMS3DIY0Xyk/wlRTvm/QGU43IOvukb3KVWMuw8IYFCEifaaYHej76ka3IhsXosdZuU0TZycm+YR7+VXyuMHcR6AtXObL8HswnPEfyhYlpmpimr2kUybxUqroGtCodDhXw6/P5irdk1g+PKosNoIuPeJN67RlnpYfxywIKFkl4YZLClz2w5t2zj/vV7VnLYAJsRDLnhpVzvAXsu+XUSIKG9n2uHf0sRDMkIXJRo5L+UxM367biUtntliQh31vQ65/51CNCASw1kSm9pq6fuYncR5E0zNqhxeku73pYh/kCTLyjGTFTZUm06RiLT5CGKZfAAk/U8ZMbGl/q2QT2oy+vJBBNHMmPV4Y7D4rUP4i0cw8Uxp5bUYhzLItP/qwJIKmAsTxgcKY/o6XxK5FbggBErxCOsI6DYBQbI/cKva+CZjVZSrQivdv6l23zny3O5HuUdls1z+XRSAwOZPfOCxWZoaCChP1a9WjwPgoD62SDhc1SgtDw4WYNiu/snjtaYLBEEVc2NYhC81/CgdZqLjanoFIotk8ThM62/tbvtX+snwfpwYWWTPlTGQmMKizVA8CXcn2LbfNgTGy3BWAvVjdLfeQJDv0Keq8kYJYvKKKa2+fG6sj4Wm0ch8e+aRrspq+aZY8NAGuSlvUfOxiW/DDD0vbeX6/Jup78UI5ZPDu5ens6b9C9UavF9KaADENOq7UTeRWV88kgrHE1+7BRnhzu3UTiv6AqclBMr0r7WF+wEnVygHJjibE/C5tAZwuesTqa68qy3C+evVFQb8Kn6Z3s5JUzn1VNWvtNa++jfouaBygDbQgw1taATp+T89csls34v0WWkK63ccJu/u34cfokIC7ha+yV8SmzdzaGDEPBFMukoKkWQLI4bqUB4GhDYGLseRYhRR7/qfprW2t3X55cY4Ig9XXlYVyuY9ddeygraS3nltrs4GA+v+YAIFQKG+AiTr7hIr1dPzxr5WTXVsLAK1kiy00TCy3vYvKpLKyA3tRaVqWmUNIWPsJu9yTalwDyG+dwQKAdYxfNcPFQnGV9pe3ZjRqQKI6Wn1ZRbZr0+jM7jD5cuqjhi64CFT3QlIMWz3NTUP0qzjDKdNW5YXX3/tvhWCi1RQA0Qo4u9RDOS3d93PERoeSAM3OCGMgV26izOBnaS3eLV0mSzHpnHu/GfCmcHNoutl1nW3vAb2ThuWvpINtNGQkNSYAtr09Tgzv2VTAuhaU6TxjKlbEIapbjDauInIJnhKkX2/X6gdC/bK8BfgsinyBgxTfVVU5JDf+YphN2F07VJkTA3m2xygkxGm0DXGPOjfDTLLlxvo3+2XjbGPmzM9qhcaqsn8sT5QR9f/wCfXkjZ12HoJh7T/6SGy5qPrpsV991vkfrE5hY/1qViG2YLWkp21ngQZta/sn15vAf6mYsE50m0hUZ6nTHNeJrc4zK/E2xUlUaSVq+ZpJ7ZZdtAbVn5WJr4xWPNtPIScy1WnTKOu7LPbSGQPr4SSj9Wkg3eQrZcuE1kF9TZkMtxQIlCkrRKS8ChSV3SmBnVQodeqAhObwgqhnbwGSaTEbuabpuJxOiJGyj2yyhV3Col8kkEoRJR9TBPeCupWB48a5WGZtNF/
c0vDiQ0E444CMKAKXw2+POyeTMiC9ZPSg/Na2Ay3Sc6MmJaau4zx/SsIhGPenqAJMG2c+inMnokNBh0oRSQMJMTjdfO+L/fV+VZYOiAUoKy1eBRV1eqOAVafzGRpeaT7/NHcIwZhe9cu3/vK8t0GmlKytD8MIraZ6ES9FpTe9+6I08qRo6iUBz/apgfvXGO675Qhf8yRFTV7C+2kTTWWpfns3ARCQrVCAZutrXtNvKBoPjDbvEQXV+3OM/UCuVR1SpM0VPZQvV1J2zGrnmqvX5ZRkgBMPpgVY0CxaCIXt4nraf3jN9WV/6WLDvDcICR2gh3fmqZfvruVdvc/WG/9rGrT7ZY2DFknE1laJ0JLGdzKyWoq1+9ImpJPSroilyqMA1I14+t81SP1EwjZn/XSNiOM1FI70mbfhllFlNSq3Q2jnlBLMIcMb24K7H35Vqurcv3wJJu+UIRQT90tjvH4g0B6Ihsl/ds+DTNJZo3Ia2tbLtZjPsEO20VyINrJ9J1fZJ23p6JjjO19UvFwTz2g2J1t0kZoljzltG+6Nm6qBsJozuT2kjRyQUchgEKTMfKpbiO2MKInhKJjik+KrB0t9kDN5tYFAMuJiHR9RGhlf4JU6vxpcqCsSH3sUyyI9ahvzd45BnmIYv8KuubDDsKzWMeFbIZBsco2h0BI+Zji4FWAMwrrJIauiMvGLLX5rtChoVmvKCDzF+WPYxGqZV6LNuofDu0dG82pTryNwVOj/WhSFzko1d4jVVhSMjZ4QgvPcIEd9fexQuXcVHJRXT/aBUybg3ofbIRl7hfyPTc45yGANrBwuF0GWGVQcEik7GTeJrsiEfjWlvN4ikcci3C3iEqvMibuHq3uhci2RCsl73kK+bXiHtSFjfJPBRuAhvXG8tyjOxJfqLaJXOeNNprATEZTz/bgB2ZFei+nVrL5c7wKN3zwkRxs0Hnet3c6KoeQec0FxNQ49s1gWG/amUDGWaGF8k3o7eAtufQRlPU3wT7iItX+zCHqFr2hGpah3gvqaAMPMMZ1cl9WXFN/L2v+Fu3YJVArd+Ur57OvrQCszypcy0XKh9HzZk7IMJBnlaa5Y0s8pLuQDxqD3kDRB04d7KLnd+hdTV/e7a8L4ubIcijtgHXk7TO0x7tt4BhaPjfdPFKSyvFqz4S6NSWJ/V4ivxR3pBEbeuh/SJ5rko2SbO4zvQLFk3lJqnODrvOlV/0iI5QyEQbAdi+LbqJUoDDcSIexAakgt/w8fU02alP6T6C3UR+JJjwzXGCz+fPej9yYLw31yODVRKRfMXNrmO3CTW5ToQso+SNC0ItuVkdSqC2K7Eg3rb4hldDZv0wCWTgRIPkm5eH337D775o2/Kb1THH9gT2F+o0hSsUBe0oZ/FsrzEb9pm5p5k5PMLlcfShFOk7y+7pro5bDiKmMkVKsC3TGg04+FOP9N3i1cfIaCKCIBKvvZb7qh27GKkWO6W0cHc/jzx+mcWWiaWvgzOiULEl5NhRE013eodIZjGPt5zWhm5LkYLtPPWl1uicO0LfHpNMPio8/O7v6jo40SBD+D3f1kbbgBRQxadLrjbiJaJTqRLucaaMWLJKMtIro/e2iHjAl0JZm/qQjhDgWkZ28c7+Gyr/1EHMTxE0MvUYfOA4vy+kN5cPz4EwfF/ThMB+f1BfXkHyg8i4SY/m8MZalcmqFsB+cIDbp0HSoFuzW9YTJhwgS7pcXZZ7WQ6a5h+bsWy7OozejuREWYxjzpiER95Hhb3PC1dBeNoHlLXZBtt35k4m9JzNp7A0/+9mIoErbUdLZmEr8XwLaUvwHZ/AY3/ZDZ8U2WK2wasM00JGMWfR0CVJKgsRLeGHuRLLFcUT3hosrgLjjNaRXS7/sk7OGoAUprXIHvAftsH1sL1kiqYUIXgySKNcp+WdNu9/dMcHzd7j3D0w/NJRXS9kEqzP1nkcLQ7RhY4XtJcALXR+cwd0Uv1e8mGx3sJwPB3N1MMI0ZKF86MDtF3g17inb
9Sk/y6/aRfcrmTmJaLkk6wSJ80/fpoUe5JbDNJel7JrN6CRmUJYPrvntJCmivoug1Ng8Vt1dd905p8UsE+Gjo4ZUv20H4F5X7x7/RsADCkcjbShi50gKOTOkv23C6KDHTozA5F4dm+0W4AMU+e0tFROfmEw0Y8KO2eE4vqsXoCCkryXo/qj8Hq+ATYCJEFrjNDTCtXzQ9FpYvq7x9WLpcH+fkj6HQuY3EseXTN/dypXD/9Mif6xik5CqOr3bFFrH7dihkasYi01pb8dPFRpIJrL22HYuiiYTLMONS1N5zQXBB7lVHALe9UZ5vQgHphiqCA7WBi/pX/WLdRF1JG6exSp5JLwEHryNqlwasngyxVyZa5wOhC6WxA7hJUkU0o6y9TDYuBooKjg2JwK2oCRIdGm7BPMP457g6dUVT+OL6iSUm9/qkr9GyzHCa4dExb2/I7/debflpNF969A8CjFZ6QDw5AVdSteGSVRsn5ugUPRt0CeboXBQqjR2bqf77QaWM6Pf1P4Ct4p7EoIS357u/nMGB2fi5aI7PhCrb3mCtIlYXhqbeeq7Z3ewwzb6H/TrVQ5kI6+YHAPdjKdGYiSEp5pgcHYXZAGtgwULmJ3Lnpu3epJiVnJjIAmmfpeh/31kgJUbS8z4PVP7IJ+M9fNzQG2PxWWQih/+2WLawwUE3hVv/eNGi/LSxHoehZ+zu1XM+uipdT9WNuv6a4QpGVkSKxnu3TTCdfQK/3TXpxM1+fxx5zaWvgQJg8tVW9yY7LUWgZ/n9ihJ9oCGVqnTGwqcnv/zOZ10i4RN3kLpfMXNXl2rwYmzXbfwOn1Prf8PVrxXSYFjRjCAdWHWFRc8XLnR719iFRdHb0i28B8Y5NVg8+a4VcjxN+5ovtqwLqccd8GsQMRkTyMqJ/2OMzI3sVQ7GkIDFRiShXDMFxeqA7bGp0lyITbqnf1M7wvcoFpf9Baue7UwC5IkE1Yi9nKK1c/nQZWOc9mdwim8bcXwZPSLF93p5YznZG9SeX5tbNmgkY1EeOlxBbq7/qz7blUXxH7PXqQOQqHrZ28IxKRMYxeloprPC2d1Dcgu4WW0KZfn2iF6VPE+0knqSasd6mq1r+Y+Q0Vd24KrRiC7DsyUget7pSAZTxfK8jwBo0WYCDh5dnyp/N++eJHpcVbfSLyPerVJKdbXsrvv56OfUVqAkXzZRwU/1g9lqE3PUXR3SZikGm6V9r3LAYIDO1/eC2OUJOxOPbLv2coTBhxpiFt60dwicI2CqkZXLJAAxndvdp/NCNavaE1MNZHMnJlQY9d4gP/oEhwoWjgghGrqaXVprkSbplklnGG3CzN3R+hR6gjuc0uguCcsq64puAWGH8EJ1vR83fpLFxcc+68AvuLIxAAxRUuzPK9uCelatqrVQ+g2uBW+laCV3mz72r44t7wCg02TtHiEY08wJ5iN9f6gFTQ0j8V0TjrXndNZpxReuTCDvaV0qoiU2gDdTET13kBNt0V7yyEzfmCchqL0v9A4VkirrjHJm0E+BnwKmxTO9gRGwG9iaiJGotyqWsGFT2rPGSMxXiPxlgStCgM8yRID1LUotzwqT1/XjMrCEqVCbh/zqkJKPkKqkfhKtEUMv0rW8v8w905q8l1hVLyRaP8mLqF9sZEKbpuAYjyHRK8v+qv1u97+lNoHo5YIo3MQVkFuZH6CjMPjrkgKvdBLkINx4sVzZrZX1HCZudbwhvZH0ekhvb2peaH4/jfpHarQgn+ub2/WErgiAmqzAwodNS4GksWqd3lYuerSBuuE2xkW1dBRMXbESABaeAz6xpDWjvSz2F/pADI7DzxRfCITE0RoH/+r5FzJAut6GvalkmoQFzRB6EaF4sdWSjyZfTmXu+XVDug25hp2N/dksYfDfIlQP1JvjnEF4K+emB6dcHxK0Uip333/IYyG8feSargETVN4YGTHFlAIA6klAH5qSmd/GbLY9eSRm2aQZRZuNiVAINTnL8R+08jNlkWNY1KO6Ekr
r00EZO1oJ82zhZWMS8IcX0JO9n7jU70Ie8mqWTddFHJK2zT9DL6M/Cj65E8sePVYRKsBx74F/TTLX+O42RJ3hmbfn06qfkaugQQ9C5xkn30CFVNscFNmF86GXimtNmiWqqlvsHD6WfOV/KbyOSrn2z4Flmvn8KhaUpnumy7csYFSHR7knPfsqq7FcoGhbo436u7v+JXTj+6ablF5PPSALgRQtelIVftplvaQCvapyMOlVCJ95mKxmvOIMZMWbNzX98IXoTRy7t/hs+Ihv1DJ2iMYarH7K06akfLNj/oPLtvx2mHw92ylHvOJF05UYmgegg0pWsdb4F60sSuoMGkdYmv4PTjTy/7Os7tW1PMTR0YAuxCwcoIWS56akZ4ActdJgLaMAgEjy8l81pAUWMCedZIBAl/pjtNO5Cttc201WgH/TzywX3YqgVYjmhgkqWobTjtU+do1fWHZP/4PfevxrESmo8EDvlWleV/G8bYwkpTapLP6ie8dFbtQFkv9fur+9LECHlO+8sqme8C7RI9bqf23xyh2hzuB1PiX8p462cFuk28rH5vvGy/gPA35oajwNUaEp7axt83DCGlpLgg1Uz7gn9GPyh3SrkS/tyqp/2tI+cYX2HwmurkcXCxBfW5bsAADL234Y0290HXkSvoy/DtUyD4Q+IR6O+SYh7jpA4wo5EXVPuV9nTWPOh1smiQMrYtyFHRBjJAeYZP319yhe3yqHzxVnIe0UQ+6daQbBQ1/z/1A0LUNF1BguFTnebHrFenMTkrEaVhJTSMKdRA8HxNB1+i5fv3lcHqxhbEaYQN5fy6P9nMTcu5IKntr2ewCnsadVtanb6aOuSO09lrZIztW81d0tn8bU3z1gLk5PrYDmUTzjr571OI3PLF7qhO/QeWtKAAaqT2QFvymTqwD/mK99bnCCxZBEI6A9WjEcgBB9s71+mEo3Aq3kACcxCMjVhU5P65d8nwFbxHnEtbg6LFUQUoODgxSy9UyXbZBKUuSY7KJqfx5N63+adl1+vWr91+lbmkJq8ph1/isIoU0AiZel+w0xlfE74Hef0CIDXtMFdTjN80Sf/X/INHzH/xDP7e2nnsf/sSP9xlvMLiRHPAbEiI+F3247rj3hjIWHX3dR3ENSdW318O67/rp//H1HstO8oEWaNPM5cnAm8u8d4bAXd4J7wVT/9Tu7+ZOB3R3ZL2RkCRlWutzKwsEEUxYvMGMSCW7bZXALQbTFKMO4vwX+zc7X8laFm1g6d79ZGYOX8bIwWV/qOCRaXO+Gz4qsB/T6lkLV/SaxUPUSaem6GZ3F+SLKnsa5RuvUbXFk+J6doYYhIqtqBCQSM564A6aovtbYeZ04LPv2TodGfkL1OP7gnx+CVLzM1dlE5PwrFFhAfz2+9hk/rkQjq0MZcnmx/uoP+8DVhxABmi8DmWPAQ364Rx9P2OCI3H7mPU834czkHXUeRuh3mg12vHwBNXZEk7HiIhF0bYc0rfYlEfZodsndx3a1I0r8SqdxaMeeae3xclwNXu174IXjbufcT86ERWhOs4zBURpxfFM3/Xbe79eCCrlaWR857/18+ciH0ylgmk2h1galNhBcv9Xi/V7Ta8fF5SVufHElieH/8ff6mo06J7n/fn6jWLPRLEm6VzbYDIpPevjImNYK35H2kXoAQUgqSpqO0bNHwS3WeZxb4wa98u8c1Gl2quNk7k9G9QMoV2ltz7kHXM+2fh7wH4Xv7yOC2NZFFfK9EOcrR8SxowWX32X0ujC3TJRPr9KP93jmJdOzcF/JTBEL3TEPEumuBvXgPsZSAJJZLPVTGb4X6wGh3TDKkrprQWDBS+FXSRlbZm+F98tUHh/e9Fzj+bgJAMaBamdwgMGr4PkTT2vuyLUTzfEFMd0qv6a0BAXXfgkmJ8Q5BSFShc8ZvOh5f1i55Rbiy9IrtLapD6vUMEI/euYqkyfCqrYbbpvLyaJjEfVNGKmLsAH/a3qTdVOZ6Rdv/siNxkh8v5E7kV0
SeX7Hsjf7tTogXGuq8ypTswd6939EzyhsiGsp01yFq9KuhOWv/2TnuF6/CA2ymziy1Lai8YwYIk7L3+Mwf1HsLfaCEyhrJANrNz+PKqLD33lQp6Af8rm5+U6YTNI/KzDOBEUbV9rYKlHsSKKS9IghDBt9jBDdDH9izgxQEBvQQan3A6qe/gODLLsnnbGS1KXWLd/lb/vBwFlK//bMyuw19KnXPKGxAD/nhBaLkazsWK8j8o/z/oiwGvqUBnue7lDT5BEPjfR3O6luP+//sIFf4H5YZbKqdXX4ESLOi/A1AY+nfI7997BPrv/dUWe/Pf74D1LeCzpmzr5r+vRf73wHT790H9f98N7PffGd8Xw82V3+//XsDfawRqi3/H5D9Po6fPOcHkArUDf1Mm8f8h/91W+j3Kf7/274Pt1UP/fbA16QxetkNav/+z6TaXObiwqr3L97vZ/37AF+me/g/K/HuLiPP4ummuDVnLvSBNqicwrKYXNEJQv68C5/2H9zgmBp//prXfGIaNsOArOKGLjRZSEPgC6YAuCH+ltWA56S2d3zPsODHVuEG1ATDi+BMFuLJov77YNEQg7pRMk+/nrqYIwdZcw7Lu0Dm1+M3iN93Xls4qm07dB2zIJtphi6uBe4D9GbqRhzNQ8Bop+HX8FU2UaXgeYMbKfPH5GSb2e37hEnh4iC/KXsC0uQOlD/1ZH/CWT2vWlgBHqbkJqbG469J9tfrOStFgHoVXFMmLEyT8SvaSTnItXSFIxu5nsey1/M69KCseO0P6CFHgzlwlfD8f3WZwr4FBpPhXpG0Yw3Bq1zNn+KB0ulsM5CBeXgP02lVcojuInvyrsFKXAUjUEOVBIJtf6CwicZObITDBxCE35wIS4gjGdJDECkW7yTvLc3J2fWKzsRAcZSQs7SS8qYhn6IXKrggYTFJ/qoUPxrjIPtvlMz4YCzj/67PPd+Rqvnm+cLXV3dxzKsG1G4cYVY20TUW+PtMepXwD9JzV3ncaP2/SJk7W/qmVn/y3rJwFMMEYsXVy1nql0+rkLXnRXAOiuJPjeG1jcXaXZRWdjofsDI5YWok17GSlOC3ZDCH7S1DWMLcq5GwWKvnUQ17CKEXSXU4Y/c01zUja8Izts/7AqYGY5rTXmdSbsNlIZGepZ0PMsKTJSjryf6HHT/uerTp37kv3rytWHCRWDCf4cpYUs+1xtRm8CZPomcNHDdLdf6Z8He7G8H5J0zBu92FaFtqL9qg/s1sXQoi1af1hFWf7zJCZHCaoZNAwbIFA1YZ4oCTz/gwFpUhE1DhhpHhsoQ2/nbcV3M0/Un5FCt9j709LATXO++Z3HlRg7J32nGou/XRQHcuLj5y8AusLKulyCSS/0qeGC4LPmq2n0kCRXYIcVtc/P+MZLyKgE2Fpnuf+Deqs76ljQqinOiHME6gBYlU4mZ2mJAnldP9Qygu02zjYcSheY3wdACYruhkl45NnYwF/tB/5I0KK0bCbCjkjV+FFfna6GfvVhZfH5cLPFvZMpWLnRpb4s77YMZ67BN9DIYe0nLLJfbia/NEJx39PVXSs+zB5rTX1wiWNZLix2IeiAXa10f4WhWdjFjHYRQ7QlgNFIe/fnhe8r/jXY5hUpKd7h74M8AD7ZXnqoz+ZQH/rbYJOjCsKyYIE98pYCRsq80u2G2mNjW5zE0x0lXr712+uSUXjgHEfAnac96aja1jA+IyIOV30cAmI//KTchdqUDlyVIacqiaY3MKKiKitiuWM3p7CZ9Ar2v9uArb49nxVSvzjfyH7Tacat7+SnJhMkwi+oszBZk4CNH4yRQPl0Vl6RJuw+22NyyEWFVkUx84WXfSKVdO9WjZyjsRqG6BthYjlXdJbvh8O5ZGfidbtk/1SIAmJkHx15Bv05GH5FuxeOD0KxHHVQsk+1esJPypcUw8OAaV6nmAGLLrQaNPszh+WvDYcekHQgTusGSiNo8dTJ8fOHknP4Qx22UA3b8ifYnMhvOoFDm3Xe9/pE
dCWKOXTKxkFwaS6dL4uTEKlxMkied64LW3Nfdg7yJu9ZhiRwkmMl5ztBMpgvwPTs58A1l0hbJ8ziZLXNsQrgrJgZMbBxYiDlH+KDQ9+gBXoXv+5jOO7fdEzQUn1lAQD3maL2WRv1nI7+BVuVwOPM13sh+Mx9du+Ty8pZWXUQ7vVB8pszu2DlhmnfaUJQ1WvI3e6lmZALoq7IMNJAC/RO+LDWQBU5lhXO8Gd4fNIxN9M2haKiXdxS/XDMhj2yFVzrq5U2khRCKdr5pkgh+5xMo7M/N2m8fCK/EXB9rDiStbNYMlZMTZSX34DG/5mIdchyH1VX0+1jP63GMGNGLk738Z2DX9nrDYmqJ/NydVm9aZMy0bh+gke4z+tWJuM0RzUyluSwmOxVVvKtSoNr1ZVQvFHxEufOKvLtqKuddk4OMvxo3oIe/urDsAy12E6zHaxoiuSp47OIfWPHI+/Ry/8oM979n0/qcqq9ZqYIp3dTgEBk5K6zvWz8Iz4R5Bn05yHnx3Ejj8HbVRbTWuHrKNe8629KuHiG4NYeWp4ojLPHvBIj0opnBpM4EqkgtX6l85j/ta3tPVM9lcZ6/hZ0PqrUlAELU0lVMaewH9WorVUuzjVMlkc9NNqTclEleImRe/qY8up3fmhWPA5ZrQtkoOW+9WJyuyVSmAr7TO48DF9gRrhiddsawurgwkCjs8KecOrRasnLENs1RyjgBPZV/okr6x1+e+vMmaBmBvdYB1NX28s+txOg3+9CizfY2lPRw3T2kRNTaxl+JEPd9errZ0Ts1T6RMz2VBDrVyJ3p6FYedjytbb2XAEd+5I4tgL90kFvKcD+EBLXFGcVC2seJmjuJVn7uZywSFK2shhU2Q3mflei6LzcChSXOqjxvojYkOMLS9YaBS5gnLVVihEXtFW1+jB8fEQKo5Zwgo8HpV/DzMlSVuOHZ8gwVf8YT0Y3UeU8v1b8WXaplODS3Llww9uoab5ITmynkCvvP7IS/xWkILHq8PYgxO0XlkSvDababDZ61OVNVuBBVhhBjrM7tKiUP35UAXvKCxnc1XG8QNDYQmWgkOyGMQH7+vPoeR8MK5DCJsq7f3iDkZpgwF/AHT4sFouyaaniS9Q2bL45dRV1+tmU20ftdOYLvGJjiNg5SklXcsT3WdRWzb2aQtsRBoOjreq+cNSqwNJQtIpJkvGUL+yXTWUqavdif9SHeckHyavtfODvQSMGjoU9S5DnYaxHHbnGbDSSTd3negjj6nfJEv40tydT1ZPQ0a8ClLF3yJ57fdvzwc7fiqYCtbYr/jlQu+8X0BQjtEKLH8UsK+g1R5hDQJGz2LWPgixC6MQM0ZJ3xh8GRUyyVGh9Kim577t/vdZ4KfRa/H4BMls1e8riy+q6ZpCpTWzQPpJfE4A+kw/iIakowEv/WeEvJZFUhTGULw0lII1XqWrrXQwZP+FRXy958T3A8x8QmTRRf5kCYmWEVncewEWva2W9uouEnxxoF87bxsLoJMvsfD/hYoFeyJYqf9MXxGOCM2HK8KNpJCmyVLCP3KgsebDGB1+E0bHmkhuVxX7Fh1XZ+W+Tu/ZzYX1p2MwWMar7ktc/RZqCC8KhBwqXD6PkudGwSl2K7QsZNO83JY/GbE6UwJXcNS0/zBEBLlSQMn9vYCU+IHla08nCGiDt17IiVaCmGZ/zdexK91YlQdDjfYkZOcZUQfuL2WjYosfKCWazAC7i+29eIMrpSMpSqX6awD+H31QHajtYk8bBg4qoxo2mteivDWOysQ3Xch34X4PKUAkDZnfKOm6q/XXZ5TpUpYZ5/tDbKP+qJDd80VibZSRSJqWuvh3va40gxjCn9iKKq8mdqQWJgHQd8jj6zTlPKwxcZ4O+8LCvui3zvKMsuHHVYejTlohp40cpEslWpV2eO7C3oWQHJDf22VDJeUz+m+cJblRkAwoXuXgu1GhJPC5Y1nEkHpLgcnrfbcQE1d2FRRun+x3y2KhBX15R3KMGmq6nXsWY7
JLRm2VJos0oF7zE+LEvJ9b+mt7rfzlj9okcIKBgc2ZSo8Tl6+FIx9PWulbbL/Q120HRZqDESPiqbZKkB3DWObJu5M8Nklx1WuRPWDzarefmIhWWib9Lw4cI9b36Y74lARmSD0gnGchTnmNmy3TJKVw45JEqiiCkwCNR/EpLOA8M9iLxJbDfB4WLTP/OpBNOH9Hfho/hYFMz66v9AdIND6tNcs1qOTkQLX2/xHccNGS+TOvwJMf8al5MKGlDLY2I/Uy/VvgTBDPgjvVk1LWv35KlSjHWfI0fmItNshWR9Jm4Jy3tiMTKbvszMhY+WOhyjQ9mlp/P8WLO92iPDR9Vz02gwX2nrtOcjkki5V+AW+ESupBnZj47jyJk6q/QSJHMCZfYQRKmR9XhgJOMqG+Bt5AVnqapScw/Z9jSX8KKiI/wfk3oXRnHCMHrLo6RQUUZU+TVFq3XOX8OYUULWeJEqi1T6ngqXvLQy4L+wlMpTU9BVeAtF1Dc1epdUzGsVlib9ML+AoSVmNvnhkdE7CUQ23/dWutr3lXhVGpdHzaUdiSmxb9/cdE2x/G3PCpJLxp9NQgsfP2WaVw7ogx2Trp0dWYweGiIiHEb4amAdRvVPM03ENrhcPyak5h8ujb95Hugq54+9/XyC+gJ7QvtV0XESZ5krIJDhCCc9sGtqDMEEy1ZHCdTAn9+8F5ZN69mLh5Mc3MnbZNartP6Kwu5bXJsXw2YsvhK03MvS8LkbMn4dn9ZaMK3RJxng3ZnWtM0EvynMPdhKcjVMWESTPcts8Fc4VaF/dTQkpmbb+JbztBTk5DLF9DKTPXq5peHfE+XGf2cm6KkzaAwUowT3bRrDb1WutvNPyc0VvGhcNgCglzx8LcZPC8tgRN/DMn5JhdSbiiSNe3SKP5+j+H6stPI+3BO/QPrvC/dAdYSdXDRNfxxyOwLdHb6twzMTFlC+WvMAN5FUlevMPPy7ZBQ1kZtZFl0WuZHJqVWzPq476BE4AGq9m7N1klMJNOEHMM1wdMFVTvyK5GCaeE/4yMYNpH+7S/Bag4VgwM9ymROLev24ftUmDLGlUt5PxUhfi+LzfQ9tZ0bPpC/WmaWpBaEIUJCZ4yVJoPNeSa9pe31BNEJ9n5stP/mzdHsO4oAKpFkUbbVTs0EXdxxi4M1HP0j+FgsFacdfzwPNnbNQFyNyqfgH2RZzDX5mmgA1BRnZ5lAPuxekddN2MbfkomxFJSAvRtU3zEhQ8NXyxcJtoser5K/zz/qupGN7qnAXl+6OmCswkpyJGQlaUw+Zsi4TIxC7N2O1ufSVJ3lZjcGaHODgQILzLfQ4mXxZw5LQmVfrn5fTxPloEa2OEAlQPUdmOmoYKFxTJjFGSPZ7+ibUBoyHnnuZ+cGAI9xGX0NKF95aqbZ1a/5XR9sYRhDyir/r4l4f3UZ1b9oO2AYdJ8/xafHnGg4ubOhCtZD5R27yrrcmkUEQU7FQ0JkBPud2FKr00oW1BRoeRHu/L6rFI0vlJ5WG0wxt8QYJxujYGqktcpDxXUL9WAEswlPi+vL1iKlGZQr8SRSvigpU8TDn3T7OXDJ8HfdkPaZ5+2IIGWaE4hRyt5Y+lpEhtk77a7VNs07/PMGEdJDr0SffML4axQtT72vH/UPQ8nZGre4jpOucxZjuPHsNmlboVp5+GiCUMb+Kk6/4bq7Zqkl+aKvyQkM6/Yakj/YnpPmpsEW2Kpqsz6c4tNYyGkbu4cPAMU29LUxEF70U3Fge77+OV1WJ8pwdHnSaBZnZFD7NC3Uf+jjPX19c4pFPcMyuoMpY5PaUyVnBD5nqedRl/DPspRf0rZx8WrA0cAPM67YacyqP8X1zzQZlpGpyBt0tocYQSkT6K6gj4x+KZ5rV8lWrhlFlb99aDO9q+p0vMEsaF60h2t0p7aTHR80hhOO+WyPvGWj6bRyOrljqT/R6yDc7oQRGtBWFH7dCnh69Gnw5vR74uw5rxPKqythZzfRG/Wh9Mrp7rXXBhInoNuP+
ZVnTZvE16mwS/6BBkl8xQ2l8mLxt1IT5YmF+nKgad9DZVSWrxXNR0fkTC5LtTNj8PCQLOfFetEEElNkSaVfbuhMRuj53kUS1ikVLN4o2+PS/HWi92Zhrf1TAiqO1oxJcRBFHmaKekV3ddtzmIRx03a6zyCWIrdc/Ysg+ulavaCXDHWTCvOoquWmphVURTgP03upqvSwYdiZU6oottz2gi5UL7LdMPpdIYpaRb+5iIsCnvKGvvm+6rPptW223DWJ8aWloHUXN3aOYQN4EtcXY7CaJp4PBSrvzhqX80B+WAmT+KZ+D9oVdaNBpV4dpyajvHThHTRQbeLzJVewr4KFHthtF8J0xJ061W/xia2iHhWs01f/m2UlDaMkOxtTEutwdwg7DjvL/kT+yJ/7kMsOetQNCgvSI5dVT+VglRCE3IOEB2ZE7cno+40R6DC3M1KZl5gckyv5Y4nEF2LfawQL4ZTlFHYMq1ERJFPI4v1ehVls7bhWqXIXYSRF4ZwIhpfUA9tYgU+iTt0YLwFhYN2poIcn6o/18KZDnHh6cwLDGcTK8rluur8koWRZYiaxze5qjqWghHQJp2lUS2Wy2m1lTIa6xvZoo+gcRV4WHfS/k1ZDXA85UYpWdAVRu++4ovSl1pNQr/skNdj7HAcEu2dWdoMnRbAzaYAUgPV5/7SBJJJtHE651kmBQJc9R8fy95KRTRiF3KgePaj98RFThYa3Iax9LJk5Mt7OEUl0EMCuUZj6FTvb+xDh46ul44FrP+oWWJ0D+WfKoQXYIY+IGIrsSsfxOpHI6k621pcpny+iNgC6bdALU/xLOGLBWY1NHGrB63WDu5YVjl4Y8r5yHq65xf3EsS1i7it2OkqNgXClOPI6c7OPGmfKoPpApDzEDsXwoO5jfp08G1JN6bnzHuvOUskv4XqWyBHQ9jvGvbc50AWGlVIE/aCSrvAyHV+BhPa3YoA1V/0Uxqtz25DWLY50kQzrBNR8i9zGnnuIWAKPfQtrtW+P0ci0l0oJZ2Al9gomYt2Lepyldtg6+Aipdf8QLGkuowOY85lbZ8hFxGa/MZeetTtPrMgBq5w1gXDUBknFiBzMJ44VROsgCWl93lbbSjf4WFPUH/5EML8YrnPoFLaBpv8gWhHpiaMz0chvRW+Dza/bxOGQn2q+sy1w/HW7pEh65Y1PjeOyASgNkytJ9/7Cp3GFFmWVOZH7nvev2QJFdJMHj8bUtlw6ngSnyfde/vrd9tGNKojDis9FsO+t2Jpi4T3F9oqUMOeD4HdgPK7aNyYkDJvKfG783uksJ44AM0BjWV1fvc69r0xtV4S2LlnCFqC9NIl80cNNILWkG7xKf17HmOTyy/iFMpQ0u7l8nTiZ/gkP4JY/mPKOCCFsLVey3Sbc4XPR8mZ67GCvuvz8GMy67gbzt8vjsOeP7DVf7ytVVvil9Nzt60fSBVr95DconInNFSmc4EJyMavFNP4CyAnnnw1/WKeZuxJbv9cLzSjv4c9x5dq8Wt0aLd6D8zucjEOBcdfrsQDvLl24Jbq/fVxX7AUNWbyjVnKNc8S6HF8xZxUUru3LmM3YkBsM4nPyOTXW5D7Z3q1fOyyETsAAwjfvUSo6kfJrrLbuDxsGCe3D9r5CpxbfqeLODWhLbjO6egbpZjhj86XwaxowQhXsdCaO+NJToMy8MMFcE5bIegcUkftrHf+KU0y9Jr+LoHq7UExReG73Fz8DxmaUjnmm6YFfBuPVqSOyu0LfW8dQWkA87hOgRbwIbZfSJtAbugmBGNGFZZlDhddc7V+DQv56liT0kDF3aZ8Zt+7m4x9nM6tqjGQU9xMEABOzWduM3Q40LQJZ++EY2DIeucfmrilqXkVWxwVbDp9T7g4jl7za++WOd/sYq7bdzIXD7fNhjckrrCtdfuRdyZ8AoAj1t2oU3H6mlSHGYmGgtL1XhX/0X28K5NfOGtEiihDGw4KOuPrKLzYOsOAZXjDwjd40MInjrIrqfxoPgRjLo2FYR
OgEIrigVDmv6INqbUA57fWyv7xkbpE+fEjplJZVlrU8lNLnx2JHWgs2RHCV2+Pfehlr3ghLCVQDmel5GElg//Y5gv7KLxpzKfuO61Pdsi9bRkcQDJCiitjRJ6+LMOicnhBFoREO1woTikM9f69G7TgjnCodoEdiGNp7Niepttb1QqOu+uV5BO0q1euuKieQxabg66/ErnihJmQSYuIlxrz6/LVExQrPXDtY7XiMHSe/q67FWC/99vkMUPB54DKrp6x0FuS54DWVsMRPVlNAGHdlqlgFLn2wZzKIaAX1HzsI9hGvJEvqFX2hq8KspCunc+SWeBH3oHQbI06IPBWl4cOvKnKvVFhkbX535XEY1VkvLKZov74ky3BWsF+S1/bRrt5lkXfEFRcv8VQP81QjaIHP+Njoi4/FSour3BJhCM0QrfpX0phJUfsARUn5sCn5eT35X4vZ9++CzBeJ0sfmhvECI50heqfPx3x+yvwldYWUjDQRJPAIcXUK9CGriOK61sQ8sLghSF+MLef7r1h3Gk+xCCF2Cu6H3dhItJTwoxHcA9GFTyuqI6UaxUbhxwDKlbB8vQBteOPOhbLs5bbuGnZz93Vbaa1gEbYuvYlqjmeETVKISwKByOXw2M2iTa5JYwKMrebIEHuENesfDHTYjaVjCSAglv1rmivyvRasLr2T9Gbyya4PS8MYftcOTa8T3MCYg1mVu0D4ueRXrszmGWN7S02wkVjXyJE7FNW6EOMwY7copc5F+fX85mrXxlgdinfXJdQnZO0yIQh59WIDY5JBO6RNSSLDJlL9Asmgod2l7kV1qXIT08CRq/NR8rE94YtLdhEdfB/drjTHUy16GvSiST+cRYUwWmMiDBxFebQttu31Xt9KHksgixTmux0EtDCwhWjeYw7xFia3g/StGES1Kc4viMATDurACQQ9ZH2Jx3DlG4eDbl2rgPph8vCYnNDjEur4MKa8vQJ+vtHc/uT9DOcrLrOJy8NYQvgozvcBYsDlp39+lHPXh0bnoqBj3PGqR9yWQLrsZUYEX/GEE/QJxgW58TCeVeK8kVQk9BfjHyKLs7p7YZpE/qTynuNlO3Hl+Ht9990HnitI4t21xY6hHxzfH5tLGF3+FsUBAjCwSRQmcumcWC3ipHZ1qsPXwQuSPCMwmJyHjgsa12Wf7G/pM5znT9/7oW7p53vU98fFWm6qbILu9hNwWMQt8JlznG5ibPj5WtJBdZNehT7g5VFXEf2rHxLooiw6NqS6ynG6x6ZR+QYQp22B+EmMzWFZ/jjo3o+rbnOGHiDVCd86uTutYa/2uisG/CXQQKOr+MtUWpXWofmLNoNP+ixdiPlSS0zoXs+HVKAQZjowCdLQbb1c570qSYH4K+TDIeZvvwVRYBFS4JItdbAuYpgZfA1W+TDA+EiCGyhSnJB2hv3pKPLKxNtxDuq+OzRLyAqZ6eGptPaDpoICMbsIGB9jz7+E8S0xn+6tCDk/dIk6WviqzMi/Al6Mxa9Ar2oHDmYJAMh4vPTe75J7/JV3KarRkLUs78cZhivSL4cBB+Cm0DKCTvVGZdfGu/qGgf6rBYEOZRiEooh+qK9G0gw8wIrkO1O1FcJ/3CEWS07CeCdMZVvj8wsZja2nzJfUY5np8P4PfoD925IA/mOsJI091m1ussiLYWG96a8stQ2WePYQqkt2rY/Hh68EIBfUFFkh9WaOrw0CxscW9QFVPCVMxhVyfj9W5lt6XX8QIf5VYQyw9MYAVBEj1dQqQzFEdjao0f5FymSeRUApjq5ACzH5Eo9lngwVsAT2I3XDjzLD32cMPLCOgK1INH2U+L6HSeaVIiopF5HEjWJS12t6wQ23Zc2IJdthnRKpWZMfqlS4/qiciRarHFyu3eiOeo/PVnhoi6crrqw8fgr1ouVa3osGk2GDg+7FEv8XkMKqENvPkqFdo5NmzOB4Sq7bVFE5nkULiK7zxsZ1YXTIshwdxo8UD+XAQ4/P8AlgRNMmErWt24m+W
NBrYewki8yozm8URbNKkZqfDQn+iDeFoEN78p+IjritQE/mI2j3C+MUisweR3l94iRS3CX8DExTGW5I4hr+2ty/QCf315yoE0fKAN26xBtcfvaRnA3WxAT6EwkIYXl/0XXw1H8f/O4HF8QZ+LZBXkYbLTOmNXd1BQynjPgjC9w5xHUjc4Ss26hUGSHRoDXptbJgN+c2vWw/JmgxDP+MVfFAd9SeOC3E+XXusi2Qpmjsc+YJ7XaXJq2YfhvIeuMEi7FxYzYBlAVcx+IVuhthZNLWhprf4m+JP3mjaD/EWE/LQ2lwMZcBCsonIqdhP3L3GpirPxu6KJd6I+B2PJ6jPu2jQsEL5KiZUcfMp98z6ooXLnbGEqITZas8w8DFPi8OY8actUmRMkKzlIw1yZUH8YbD731gVKau1ejZZK8/cGSHmeOwA96RB/ktpnfEAoemT0eBpSLsY3ftQTut2OqiQgGyRT2pTG8a7CnrLfo4zFC1KBZaVug8esk5sQ0Ct65GvP0uDXqfQD0fshdbFwHRDp8Bh/Ls4lOrnBTypt1IUom5nSTRkMSowQsJqX4T2+S0PwjvUhC7K4hf1ljbRLQwG/iiahWLoYaCQLCWIjdc0+HDPDaUqnETpoeCFROvpc6ReYIZ8dd5O/mk3afxIwlh9m0oBpS4QINk1vkl0NO7ph1lCNNpgmPHv+KjbW00D24hlTpB6yiXvVqt9uuZJBhHLvSgjJZCYR08I+uJTloMVozMdr/jzU9FtmH6pR1Q1Elf3gBJhl81cueLynrESLcXOEnD5o2EILKz9KtCAlagGDPNBa+iS7ibYwziEg3s95HJdzAWjp2TXqCxMcVews+kBrB8AZujOuuyv+rD1Ae3EirNmoCpgRgIeRmuO5GkuVlXp3IUb2hBRXhsObWMcYXY/LpGnCYyUbamb3FNcEQjzMtG8lhLqXvUuIUnmUklP10kWOVYxzigYwvzxYskMi5uOcwvYZRwgqpspU0lZA0YKLklpwhZgJoFsfFz4RnpldJyVHKIK8dpU4/shQTwGZNSFFn1Cer6lBd2xRAavm056xeXMSVlDd7TxSujpnYSlTGZV2SNfOwUc+uKLK/nJUTEEyXpkyV+iJokIzqoXBEytTpH9WP4Jufiuqqpn2VD0yvqNIXDPz5b6UybWE04/OchOlmSB9lzTlAAVls79c4Oaog+hYPrTivbCp0xjU+HkOp3KSfEkcjokFDv49d4WIQTjYr/EfCXlLmXoxByAvwUt4Xo46tdEv/Ibmy0GyGeIYY74eUuWfnElppN3CpuR3luhvmrGJ7iZu4U+e2zccPCx8kZIyUnkgPyoO3X0f0mv86m6kpL9NE0G7GCwHVZpNTJvfDXGkQv+u7eeiNgLT1gUCUbuGtLQFWmjz7XYV6wTj/S9jlB+DUDJiysZnAot4tX9NcGJvvRSVPOTuywbXRX6b6YexV0Wuz6wUROlNybjFzcE2OJX5D3WWWwo/dKwwZjjugHc3iys/hfezo14zIUhpfxeDtddsLk7bXeeBODJCiC6h55MzzI68rf5weAavCyxey1jpM0qlUh2MiHo9eExG26RIom4UfB9Vdnocp2olciSN45GMi2VmlLrSREG0jwc7kcFbzJZMF+KiJ8DCmmFcxiaqPXaaQ4hyJ6brn6pzt2wSiQqyiSuPA+8itaz6w254ERMiM6xK6xGLXP1XENiqdi8zCqzQ703xB13vdh0WJ6qG5A5mffx79FKNSXHaP4PgYC+SsuALxLyJj1FY2Yee0EpgRPnmxNpTHy9y6efnKZeBsXpsQD1dkqzddGeTC/eRphDvEZpcbAPgrX+sISYyuxFKjduZ5ASbvwjadJ4mKnEZv5UOKuvsU+SsSgVIhFHGA9vb1JUmJtAHvHiQ17G7+2/fTO9Vtf+WmHD7u4Cc/tCr2dFMf61eKy48VbCk6U8pUj3CskiwlieWczvx5aZrLPi/xZ4rVDhdhtMj+WMcj75J4YT+gSBFZyExpC8
WrEb6qEIv7Z6wtjETVqU8Z5nVb70apTs3X+zgxRTgd0gAX5sTBe8KAQyr3h5xa1xQR//f4vjhNLqy+5b6UEv3V3zD0IqzCsfVWwSWn9wJ+Kw6y1SaXJ8+KvPf7FAVyLVpvKSjj+ohmiAdKPoQJAHIUKBqnENLtCQ58+defy0/Yh2/X+a28TKaX6HUCuTndJo60eKedjkVPokJO9VkpvJ2fQ7fEyJw0l1WnrRCwS1POpdMMJ4HyrMyduBpHLSYAFA4ILot9ea5TEfgiuSBHqpg4EKbr5kdHaw0g2OZ7g+qfQic7RIuuEChvkL44hnprG5sTOmw99efNMM+aSCUEnq4cKoitqOQR7TsLffRwA8Qiy4LdEOuXX1yYpXPMsSMPDzIUWBEmZmLF70MxHM2Fizl/9XzTrmdBdy1jKrTr7wDmdl1UT2wZKr5O/7ndCaE0FIQTBr3LJcZ2Rqm3JGJUq1HDcR7UronXTSDZn2I7pvS84UkWGsDx8CmNXfaDvVlTaWqJjeP+Kr6g3b14AqQB/lWpJkzWGdKiUCpiGbeyqoGr9+2ntV3MUv11gTDyxJo+zlbRShACVl1Pg+w/mZXuLmlvPok8Oh+jA9zm514ZjRed+1NTp4HtB931YqCWJILd8OEaPccknIqkk2qRmy/155W1+K1ZKDgL2elE7pejfomzOl/5RmuvR188jlbBXdMO4GIJ36VoRfNp2rAAsy1cAY2/G78TG+zEJ2pjOqEYG0T4AQnFHH7v6Wu/UXHE2JEnLuTU5AYUoZ+RUEvkvJ3eWv4p04pKd6/MYAtFjXJ4/NgbytXcia39pouoLrFZFR/Sb/e2O8OpSkMStzUIuM4HsEInuaxQP8whjSo0Kt6kHSuPAXMKxhaDZr+G1BZrhmQdsZyOeE20O9PPjJ6IyY6Nm9hfG1pw/hab1K8Wj2k4zfZikKXXAf74Ba/ap8sokIL75UPK3PHCuobPjnWpxZd8r/lrr37zyjSyP0ynq4Hl8abCT2D5gBPYdrw0Vw/437ACK2mWNO2z9UpiAuXxRoJ8He4p8r0RYuq+/WOiJxWvyO6tBRXvx41WyW1qaf002+YisAOM94VxttrzUC2QbjwyqzbWNGP+CDA0MLVPb7WPHWouEeKbq+AfAOe3CvuX48DbxcXrSD9yNEJm156bYt5mOqLQpzk+CFrMqw/2IpvyvNTN3BRHZhY2MuE59v26LAVABf8V8ky/RvZ6/Wlzza+15h6CU5MI8lOf24ynxSzYbdDmTmKyaIjm+8Eu5bD4rd8PetmQ2gg+Ykn7PXKe6KWPuTqi/t3Km5cvWXv6TEoa+3s5zH9+1PQP5kq5f7wgUY/KPFZK1B4KzLN8bOkhmB5SFbGaN/LD5si9eofGF11zYQioLSdimAU3CqXeOIZtgUtoNam3n26C4PCtBCoSMPA2rlxqMqBAQsoNF7SZIjXPWMqfDQdyuJN2ox43Ywsmof4w0PufsRiT7vYTlRrvmQgyxeL6QLBhWmwVPCeTYIouCDEJs/VC+Xmg1T8eJ4XJP3Ux5dFgRZfHT2UlyBlZg86ETr+v1k5iba7IsNrLv8QPfYkRT3ODBsLgX0sQdNBX6OkpF7DfdxOdY+IjuQTceRaiTcDPhR4zqGN12OvDa0TkrBgrAGB064rzw6iZnK/zFVYPUMynwQDd5RNvamaSdCPh4HnIDzPGOIh9akHndNhgHALpM5a+FdSo8Lle3WYfvNLIsRD2VFwXo8SLShFo8imX0rxS6X06m/m0g5ULLKQ6BcB/pHBpHfkcx/8qh18vIiBLMNJYHhtFSe2GNwCR4M3HcjEQa9BTLtJJGxXZBzRT7t0XwnolsDHwDKxM8kMFQt3aUIPhxdnMWtrmvjWJZZzT+HJ19qslkJJLGiyfmBHY9Yt3JwMrOQ/GZu40hQEtOKliWiPiXio/Dge0wL4PahPwKgWQeYI81zt4krEr8EscDcNUwT0RXL639fGM86J4HXA3rcL0iz6RjuksC3UZsN
NFkr0UzSFzZxOIHM9yRbOyv5f1ebHK9uUsmI77sV33a22cKawq1J7WmZRmtzrvxid/JCHVICHxjXde+0SxVoQj/Ih7XrKnxXG08XoziYzQxyjyG9BSmn0H3ESkFrwtul7aDwkr3zGfQnyt7CSOLZRmdZe9TXMBm8KGyHMIULrVviYO7vIjHHCc6OGfmGsxF0J9irz+M8VI014SfCYJR3TjRzR0N5tmTexGK+mV+Nmoqcq391PeeRrlivcYrPx/ameExy5Puos9yMrvWqhnWYXCJfac9SZM/DMi+cqJMt1YFt7ahbHKbb4y5LwUx2Nvt+5bude2UdPGjTQITNzct+LeGCrgu/oyWHWLpiZCznzPH52ae/VsrOZ7fBgHNG0aL5gEMIEAbLw4QXuxVtbXPD8SozXKI9Zdbjxu1GIaoKTHCKRIgPs/f3oHarhsrMX4/ettQvdEwmf4IYdk0pNyihZFc6+Nz3/bIumkq8YSzGJ6jJwADGCKhUJm9fw/BLkGJlBydi0uixIEzyCCaf3HoE4I3fh13MPWugWHbFmEWQ+9gvqfBamtRnNaTWRQb06D0uiXk4wXL7gqL4/Au5laSbf/mhE7OW8TtqX0deAnfOBfmPBWbDVwcfY834rZPbpdn3yCkNfKlDVJafrxRCxEasRa6ciHmPozNkU/NKMLPe66RZX6kgKUpM0XlkDg1N+tswZ6cmjkRb+oQVrxSdbtry2vkk087g+RxlycL2sYd0J6zLcDYC/ikh1x1v44fBKBKisxAWlJGMXohr6xnrRPo35++xUEZ3oQe0RdW5K3CzyckMUArBDR5Opn4UVbo5pI6oEruV80YZC+f13ctQvktezsWm1NIKChvOgIh2MLRh/tz14T8t/I2XrlTcFxE81zAdLUah/EyQqR7PijZ/3x05G/3jh1c4/bR0oB/RyWwiD7QYvEHo50t6BPe9erNBoI4OxUkKhLZ/bVLOy+tD3DR+X6+t45dIHN7R/u31DrXoZ0wBiJoCUi5g1DHgQkx5geJ1d4ZeRVBK5lI+BLtzld7liP/tjnT2K6IO+e4dCYeE1atSOH8ifAClc2Bn6/2dzuXaGLAcOEYva5ig76o3FPqN5Qs9caPARut18uONc3zrb9CnBudpqx6qiO2+Ll2ksZxckzNXl23vFxypPOzOMHESLtAlBCUs4kemnDOj5xOUOeR/9XU4o5nGHCF8Yo624wBB9ikSoKZvLSOKXxickVPRAIL+py1IE8xLd/RD1bKkF5s8m9f+57ti2DADxK96orTSo7LFwvD5kiFv0Mv/mSsO1jCMnwZnK+mEBgGufNiQkw6VCbpki8jnIwkoufXT7f15/vRHlITqK9hF3+NQkneG4PXWrnRB6kylq3uv6a5+JbHfjrZ6PQFpBENvTrjf6Ua2hjvxvz+gV3GynaOi5PN0EebVGj4+yPkyesQHnN+DIKkiF1m5DfWAq2TcDBVPV5op5/Ns+bryEKlwE5SEoJIfZ0WY8QtFt3Oh7ZCbCcp6qYY0CVCe7h4me/78wyfAps4mM23m/ZaBlz84upijxiXdtaMKJcF4y2bKB9XJwvNhmZ5I0MUFMhYa1YXU8bWtccNIr7uZSQoLEf7J8WPdKeslZ4u2XoJfmCI7tJkjSU66ouqoAgJubZBvhesi6Rtb5S4aBh8FcHYsXyX+IfeI5ydc9f98s5SaF4+c5bETIZ1kTQMmNOHmDIgpSDjsrVZdxLB5VZF0jdqmUVL3Ep2YAtQfmtEP6/crxHVIKbjVVLIdMs5n4fBV+pZd2ZGxrTqgG2Cl6gGz+vdUF5hek2IyENN5dCFTG5qK0hznMgdpH8KIfKy86mNwXxdysPan/xaPHBRaNklgyZxKopVedQ00v0ivvV5UaljWjlCtrpfGqZ4iQN2VKHoiBq+SFalx0zrgPpvHokvlWODo3G1UNs5u8Iq8m9nXZfs41JkuzwhmxCuUPf74X7TLRKyTzR5pnjAPeltuyef5x5kJWAfrWCd3
dT71eV2WVLHKQUKRSoq4iW5Xq/khudtVFU93DP9pZ7UO+CgmJ7JVbYrLTwRYvvb+LG/a5OduFkKsSItWx6v9bw0bj4c37tbjvnrjYrmMslrbSxIz7hKBEa+8heam1KiPfZAgn+oyIefVBbvcbLITpsvgg9Ysw44luJabj7wPhmFH0KPgvJt6nzYHZCfyamGVyiL3kwRaSuauwA/hERr8a8nQeTXxIUSfwKPW0TuZUTsboPl02yuoNg7+F+addko9NQzvh9eMPXm+YgDVYgNK7gVTNTeTEwJ1ei/17WPMqjCXn8rzwmwntOWJN0QYX0xc2W97qBKAMRnJUSiBrCwp7siFdveW0KLqPo9OmiIuTJGh7ufz0wke2NL8qkdfPg8nCH8nOluED2v8oLumKT5RbyEm6Qmz5X7IUwslmcnREbiryW7yFJOr/2thb18OktdiCTAjeM2uLmKl5V5gIdyOr1fhviSi+RL+wMhvv3gUktFCfsPyfyNuiduwgKU+is0nim3m+uD0SUYZpX4+MRu02VlZ7eEj3zrQv5g/eul/WGSzHuu+FZoMW3PJ6LXn4zn/dq9CWCmyeYSeDVCVfgiXZTFfpF2Ev/AJturUEBUWqCPUneZP/6K20Yr6oRrLHnOXtDNSObYc91UQ/RQaYpxYvrMTSb2JEF+FpYsFo30pSuXRPnrtPQgEKM72wmXEQZYJIM965cWXbYKlTHM7Hufnui6RJvXs302GJq2WTlPcLoeHGik1dPslqKUQWKJZyMQ5MX/9k7Viv3eIFRuV8YRw1iR0QiMO4TYJkpxt+sFL2uFiwRSm5cuORAfpDcAjqnG7tZaHUX+Ddy3T7yOa43eiZm6xrsm7AJ1WL1VCH0e2j4J8jjNj2oKeKvgNk7NA2Gw1X55K/gmWLCSolul81iFFs0PCyiJ1wYfexE43DfXTdGR2z7a+xbiL/yMbA7WDtJuKyVyUqhDg8jrqxuBvHGoOExUr4WhZpq2dVtV92ZY9mC/+t/OGOJVNk4m0I0VZgH1Paejuz+vN97MTvTbpq8XqEWnRuMJ+AC1Ta1QwGB7DikeiZ0hPyslc4oT21+YWRz4MDCoUc6m/2iuT3Ye3xfRamge9zIddCVz/q9EnD+wWw+w0Z7rn75eYro22S0TMGxwNEUZbjRXycAt2FPfchbuUHR2g6IwMksTq7Befqv9XuJec1vv29HHG8LzGLOJ1ZMiyGDCSPDi5QwSv4ES/3gMLwtd+R48I6I4VCN5kkcCOinAlknYNfULuV9fkBUilQjm3qtSmu3QKRghwbrX+mgKVWj1YQNi5OKP8o5YxGi/zQu/nW3nChV/scP4+outFcVy4d29XeKdEvNVPgl8yChgS0QkgkgSTcMs+lt/bgN8oZgcotRvgD40h0VViQNVH03yQ+yMpGoiHIteQ0s9g6nZ4CTJceLXCCpJV1Jc5bo/WG3hkHYUZCvxe6FYMiRs/9stFvh74Jy/9b0JXCdCHzXEH60a7UtuGeHVKoGwh81eoqGsGfo+LJYXtHZPDztBBgxF/e2OtWoFc83AW1P+nJqM6prL5KtfD+L09i8X+5PUdYwJBoMdVgwG8kLvXNVP2cfr0wQ2BnwUgh9rx6wuKBQZNUit5vpaswPW8p6AdgtPzyIly6fGEOtxnGLQjPxupet12zHUdOA2svxsFmWEJRX7f819x5KsWrDd12iqwJshHgpvC5go8FB4b75e7L73PSn0RorQQKNzuqobs03mWpm5c03NLvaQMF6HnxGn8FKiDQWRoi+vzJ1VpifhlI2sgHvhsnKalRuYiP093eqRs72zVf8vPS56fxlRnZMX+lk54ePXY6fi8nnyz2AVwjUeLwaB7Q6SxX7g2DCJjwza36cXGOvFJJI0g6bsl6sdLSXRvgergLQpm3QBP2HxMbkfQz7XLyYGMwLIuXbqUh8Gr5+KkJ8rbTvdV2G4lzWyek1BWEhoskwi6y77uw2E0L9zkb14IRkEUGLGpnjvpSqc0
rVaRpD983Q/LD5bjlfcJ7Owq4SBFVcReoKrR8usQxUj7ul2pr+Z0tK1YoonHKy6IJpIlrMXx+lRRlAnft4iBcD0+QZfNxMd2Kr5svG/QCATW8xrsXVuTOOb7nmQkQErWVbuJ/PBLUH4KWZRd+PlaCFtgej/DXsEsvJXG+ZqMzYOsBq/Hv+Qz8ge7Pn3JZ4OGv+OTi+/u5jLjScCOdJGocRyd7lM7AWhmRsXXzE15W9O2V0ztp+ALauLGUfx3Rdf5PdX4O3/QhBWc5CFRczJuSpeJW5gMavenPHBbMAZG7R36b9wOvI7v43d2/bTcDHjcPuEO3VadnImmTkL1UUZPb/zbDp2x5axxWqyP7M2y6z4eZlBWl283Oc/Xpqp+4DN7SzfAf3wiqGmEn7LT5lhLMyzCy9ubhiS/g5jT0kLYCBsklNr2gEU/zM3qBcFizv7UQWQ6ksQLZpm60Z2j+aRfVNnHKav0G1mXlfzIhkT5Dm0qDiNKikZfex2CGQ92BoKQucrgR1PYV9hNUcuDN1OmJyG65/7A2I/8Oyki1ZFFXLWxxfHIGeJfgRmRZacF2P5k4yPy65ReQ5+0WLKAqO7hA6+dZe+0uaNDM7H5ytR+ZV/vjyl/ktHdyX/QkAcEwy4n/z0+WYcyFUuLBn9WofDMwlzD9BT2s9u7sv8tiBhQfwEFX16wcCiOXoykJSLIUt8x1jMdwPCDI8Is7NAt1bl4tpIihYAqcG6XLcuwnkhEUCewiblgX/WnNk06/XAOiol3dLi70jZeTVx1ecW8XV0JSqs91V3zCL+5SLj0iOoXqP+mqK85hxsq0bllFZoBsT9xFv8qX8q5ftRS13FiQPSvt+wB25/W+Clc2K9zoeoa2WfS2RJdR6PyPJPQfO0bqzhhQK1yW0A6HRVdWtGcacEpRuk3NvTJndbmz3CQ2+outqacL+OYHJN+IwqTRN4j5JHHaGQGTgulqzEihfYVbFeuvi+1K4Jx55dRdjhTy/xyr0LfmdqERF94G9mqPNqacuhrubymP3pw/OX4WgPh/toF+lLekKSh/qvyYyggCb9OSd1PkAQHKWOT+7N0Exs/ABVdX4NMg9ZeWGgH27JmAteNgy0algyByuy8ODzyGhIct/KZbtRsAMxroeJrxu4ew3sg61vDdx5jUceelqp6kpT90NFHmUPTbdFn7m1yEJTCMVB3iHCR7yw+r8TfcHxEdhamKdeaYcSVZ7owDpRWvUYWpEmVib7IpaXJbRup58pqdgLDMxtZLlFQbkD7fbmkIYfXY0kt5meMSrCldX/5JlBVKcuoutu7Y6vaCMDN1S4ODB2/ficlAbSGIazottpmQg/wii8GgNmGYrZwPfGQmAWaSLsZoN2Xfdnp6FkwWGjAUNz3k2NQqgg0vCQbGJHD2iANpzS8MZvnTZjhmSmNxy1lsqF4Vli72++L2VrMS4LkDPRZ6L6YHjRdoSbl/oUHAL5aIKZnH+SbVJih5r4OWyH6djm5QEIKJav9XOehhJxVzyZ7vNEGYS5YR/bT82ULq9+aTNhUsm89vrDhpa8RFE58NGY/jXoL0arR5Z8hJQx+m1Ql/+IjASyNbpqwYm+qn+NyCCQg0CHDxzah/apvg9Cw3LT2O63aSgpyOEJHVnesLlv0w9sdFYosrx+lfD7I+Lqo2Q+aiZk7/LamIBlqObKbEd8/35y97X5WX7Bxgzk8QUmL1whYMNHIVDRzoWxNqOo8rRPNz8iqbluHOFa5n6fnkplrCzgXdB9x66JkA4W0DCuVtH9SpxpPY4Zihw9/YZgEyLSLfcA1hN/J9gx9YBHnNLTF22zGjIxL1cSl4s/6p67QXqQ/TV35CV4G6z3mv4o9vp0qigyehfoYzgjxig1lz1FhcrgUkhG1zD9AQqWOKylrHQzsrxFYRtrHEOCUFc1ObEBlNV5WiWcX7aXDucM+KW7YSsSGvKemvGhAXkERxdCK4ljRrKQ6nADHmJ0DBt1e8A6hxzMjqFWBV1lu
0R4e/VKdTcxLO/5xjioKs+xUmeOXddgxtSKyszOXAxfy+DPnY7y7ID08tG1C6hdLOw7JUz1GoyqbL17EFQCe+xSVmFULOrkE77UOFLdWKsklYQYVjLJhJyjsZI+szG4qiBSMvJ1eGJb56dqRJmPipzhFTqTE52Zc7ZA4s01xz9Jb8dS/eAGXmj966/9ET5RHGMSTFVKSkx6MHSRwb08np5t4kQ2i4z+sBlIO6IysoX1tEXG74AxgqFS+kRnHUYDJBcPNPSR9eFZ6K/QB1pAjCUGRPgTh697E0SollRNUtlxDdRwLH4DRhTzV9Mk3l6EX488WYJmItzm9gDDuyRGaUopzhMgs85guDKgc6lG/NcC9TdsENrnqexfbj7m3rA04g95CDkz8+2HspmfKdKOF9btCHXbbhI7RoYTo875YMCNRbmMRn/HC2fGCN7Y914Wwm+VdM+6+tD1CczTskTrR2R1VQmiZAWUEzEwmAClwZ9jfdTfqV3jfuhcEMpUi6ww2/rXN/DWLM54a2jrJYyg7JGMKHvM02raatsIiPUV8845FXOunlLqpkTRMNSIdmD5j+5ZOqT8pCzPU+uFyMTOkI3vnvAZy3lSuvGaf1URuQ8KImowhXIPGP7wCWNE6Dbyx27JwR2Fn8OwJ7kdty7d5/JHf/hyOLPmtObGlEHfDCBbzkBRQGhbXBy+tD6kEl3MJEqQHl8pg+Apa7jRGVR6hWxyVLaQhxVP89JDkEpkXCluylO/ZFlXnc0i8qu3Ptg7Y7cSe/6daY9yeyudHczA35A3FbFEvJbwSUzJwyO/VDClUhyOgqw/fwEO9Ims9XwIYhgjJCZzQDOoZpgpKR+jugOSIUY1LqqR/Pgz7MD8aAzfcbW9/zVy+mBqEF3VbqgxWK7IUIW/gFOWOL8ca3SbQpPi31BGg+jZEnMpcl3pgI3G49fETzRd6i+8GctX8G1qmpjCjKWInjzJUK76ArkAPkpqftOFxcbQFVAIocuJcHIHWZr2r+J1Xqubus5YLkRU1wXlGaXBgvUrOWRfO0YTmNLkShsV8+B3+Ip6cOrL0XZuGzQk47OqeGxUvn1BOMffJLy714ddhPOSZbfscjUb0GCXubhjFv5y1qftn3TNZzvKei8zLuZcmiPaNpTt5BlUi72RjSqq6TQNfqm+L/B61K/BGV4rtRcIFk/p6x/VHaluAaBxj9+AHKFogwVhPdDUT/Uo26zSQiCDr0GFq5eCnP6O8FS4tdSN27i3UEdwo1y6dKck0GOVVb9aNsJA5RbN72pRW4eq0RxLvwiCMNI5prQ5KqnNnS+UpEEp7stJX4oV5U2VUMVrHr58tvC4YNrgakh4Wjwr+NQ0bzzX55at0WalKoqGUUvUVWyb0XX2uYkXy92xTH/bQqnbOxH3ZQMLSmPrDqAEd2MRYVz02+SN8NujP4tMXGIQJEf7ofWZpcyFekU+IlKkPJf8WTMm5VkFL7YF0bEEeKFDxa4Fsrov6jpSNNeZzGZWBCUeY93byV3ffColZc4o0R+L/pvKNlw1+fjup+gd9zAmq49liVUaasYx0L72qTm7fwcYguDgo6SXJAtFiGcuRWvLNTg4VJVJdNyuZ2PrKrOJ6BRm7ZO67G1g7zK3H22flq4ppsNEhXbQvTljvpQ7wtiJ6ajWWOTZzaTrYVlEahRf6k7Ky0+H3SzRRfTZwRr67AzCE6+LsuyvbbcdwY84NDQP9jyCTli68Hz5Dnq9oZuEcDPrlRsuZ8g8I4dxrbWWcSWjn6A9yNKgZK42/8KdNxm7Hd3hzVmF76zO8TrJ+6g8Pj+1Bfv6b7GAkUM6UnjDT1CWtkYf+i9Cy5bDijPzx1kK4MA9LEG/HLQyo9MKo4GLMhVgm1bc0F+vK/HiDa+Dh8A1w1GqmM9lBjcDerSycDAEx1YvXzbCnKcWldA07NiTPtMk8MoZ5GcgWTGNpTJ9rkareLi2/ylXmFQWCFeWIOtJHkLmcNZ6qSQozQEP9Jfh8uUEP
YDugJZQsU6UT5tEWvWrRWkSpuuKnC/JLHfVq0OGZLowHx+7GsIs7IlH3ZcFlrFU0MEUGtTAIecl6gQaCZ9GQ5zT5zHlHWjBElcwPbJPJKF987uO2tivEg/coDnB7DgeMRaODur6gwKrkFYniRZnJTdSrTc82FUX0v11Y7xJ38pX/648hREQCafae9CyAnNcSNeFmib6k9ll8ZMxFrWmAbR9EblkofzpsUf0io2HopF7Nx0QExaVd8SPClGZKI2RRvs7+QFmQi8+9cNNPFQav3x7dPFyNNAL332Cly6ThKybuClhIr7ghwtw8mVx+9AmxRahzWiQX4MJ5D3WgCtbtqnQ7Y9cKUW5tcrNgPBgiqnbzvfcVgPhrHynciysXqK2Mk7dmdKLPULca3fkS4nY1NHsOPXjvRnUZ4sZzvMyYZKbfs/TgGH2peavBjc2g0kgGGGVyGwy646euJY6xy+133f6Qd+qbnW7nm5cGj91TgQEsYrzRRn8UrbUwBbmIFYxs3DHZZjCbZc7iGR8FZ7kIEnE2Hn6RhraFgce5fgJt4AyBz9VUkqHZ9WrrD8qkrSSTL37DJhXxhIaUYfWHM/wZ6IoqfTuxfKp2shhYqh4xagidu6ascIxtnX1Axxf2wUGr1pFj4n6a8KY8kEKhe9qCyp4f/5S8ZhJqqFdvP7Um2sTZRI+9WFVRp9rZ5yq+HbwDKJ2w2hzPsG/6BoXpHax5p7CbP1lDl2bIZQ4+9oVy4jJHFVvH9euxB3FOQVhq/a1laI6Ge2SagF/ccIavRxS803eDHwfq+hh2k0rpgKu79xzuGyUJJrTCb/06FLTUPCVPfPLqCpIuQ5KYnXEOkPVM3BbLGmggsJELkAkiigh6qbu6W/wKbfe2fklwb8+zELz+WP7W5Jq45CJ0pHi8FLlAVbf50SUeKN4xz6HSnB4taRvPKbYSP5kUwFCM2a1WVirvoxGII2/2dHl07YVTluIkfeuwYWX+3cvGrd0rMQ0Au+9G/Arpx6Kl61G5VnI/546UW9L2DD/h1B/5bo90ZP1ru5oUWc1rvrTWP7Ti409N3CsbxF1WH+OBkiiiVTFMxmKHJrU2khAl9WSQ2Ngp2QdEIKMefoY/5h15VFDh0yH3r55TlfnIbJbYET0b5QY3NWYwZfNZ9M2Kgcn7agBAG4erRTgyGmh+zux4BxoseEx9lvRtdAl++94rRqqbXE1uZCaWc27IOHCZ5HFYmBjhyy4zksDDKsKxBVyc52zvc1p/OjZ4Wr+3hAUJ/X3Yxh17zId3uMwk5lVkpdPg97vE5NIfrcqDnoWqniY5lai4aepFOzC2NXxwzTlqF/GpdiMBoIrGhgR6dhFyBKMJD8rBGwpUUc3CgWVb5BZEz9LvQhWXxGI9Zj0J6qa/akNwIo7mXQhBiquXj3pxvsWOAe6w3x352zHEL3RanSuJLANcWUieyvtmOf8NLpIs0iZas4nn16gfRX1Qi6TmfdxjOSljtVNEuCHuTxYKghBQLkRDYYHeeXhD1eAKgEHVONgV4DxuVaN1GN97jY5zJYGTSrG5gXXAUezDs89AyX/ZMpopPSoXLXVbx14ll+0tEOkV7QcNFs/iqyfWWeo88roP8gC69VPPAYmZg/209W9WsjE8TxK+4ASZMm+Ct6t7TzUBp057ViRtyyv7GrPIN1afEo6m/KvgiYX2T2IXPq3+WvZdP6cg/LhTjpUE0/26xBieLKZWOSs0v07wdkL9F8x9LFpp3ohqXfEoOc+FQUQ3mk153EPhm+lkk8VJ5FQFjYfUq+Oif75XUIzuRCOfL1q1Gs4EKneJYyC+xKnmyfPxFka17zGyeRXKpZGGsd2BmP9k15M/eRBxczSUcHg/BI2wzslggooNKpLQ0vI+7AiYvjrgmL8Ci23Jik8C4+c/fvdjeJmd+WXwadeJSvq76wXmsXDfZygByerXSByr9YsqtNbUXGa0FOhKYuu2geuJjkN787Opj9/nYqzD67a+cvoPx7YL
4iFcO/AQYKCw68vzApDSsOC0Q0ex1x3i2YkooOCQkWh7lk7YLCM8U3ZAWxRVmsBxNXlswD3R8EGw7qsWVvr3eZgDUkZ0lnhb8R8sJonA63vT6jh2Yc6mnYLZqJZ3etBRrZV1z0nor6L1bUFkahCpBaZ+P5K+OHh5dQcklX8X64z78Nr3rfvFeEGBg/tItWLNSUcWFgGTetfDsvWyHedOoRoPz8kp3+sfe7eXi7as4WLrDIYgzDqtaHET/2rpHjdPSNj2fPyavA6yL8V2KhQBeQcbixZombAbXi5XR271ahF8QPx+43es44/Kp1k9LIAe5x4VocP5qM8zEZKW4j2UUimUM3jOJdXf/pult4OJEfx0PeMm0E50WaPwm/zRN/bQYuKrWVMry7ebEqxN7OrWKt7je936zg6Rq/IUriF3Ucdod3KutZ8cqriyF6VvOF95UcHvFAs1EpPqy7ldUxtamk/8S6KkzfiiU49CvPol0i/zxo6785y9u9sVHw6RlVVmsCia7ux7gq/+B4iaBKWavZJ5RHd0TiiFn0GUgoLiLzyrf1Q/3iAYYAQqDf6P80kdBISZRgt5+dP/ohtouryzVUnRVubuiiIxKd/WXNfcI++pZEzhQskndk9MLa2LAbZkD+clM+D+uII+WzMj0u/pfhpo/BguqUhj2hl3keFy1kOLfgJM+tolBd+NNTUPV3QvN7b/YUv25SiNSflBFpQ5Mtuf506k3z461DpQSiqHQGuSSUAbX/iQYILFrDtqnFryif66+xpi82NHBPHO+tfhYlHIYYJBeHMbSMpFpiGJkn2+ZB0PA7XGXo6duXxYVwTnXBHZcxzpJjaqCL0OlCCkI/tRJEgZypy3k1cPsWeB3cxf4UALGh9IYJuKWLKZOka4soM+piHz8HkpwaMFuSH+3K4pf7S3HQSgmtZTBV4FnRwT7Q4uBdFSgVysgWw4fCLASblM+XV/InJVpVLRz+Vg2+NExNZKT+lUoHint4/F5PRDcyQ5y3F9TvRRQPhfu3x/CSaxWgfz+jLXSy4YvCn5/Jt9LtqLHYaO2FZbdkbt9TOK53iAyWUuGgT8+zXYdq17ihMgY6WLEqDLhBHDGp7BlNiz1Kh80j13QGQBWAxwInQxurcd99CfbjdxrC5hx83wYulaeSdJZL0V82EN996gCASwgv+c9Pf5rBxaT1/8uzPgvFx2rTuoXatLcZT8dP6GN8GoK0rWY100f1mxuwvvlRWU2gqpdIV6EQNv3fdsJQWmWlvR52xPPJFziFzyjyogMuShx6e+7IP7+HR0H7G8hCP5LJQ7lOm56kKzj3ilHA1BRtkuIvhyz3qUeQ7KLrVRGqdjkmSFr6MB6K9Nix9gWm8oE/vIgTk4Afn/DZTjY+sXwMo4hoHdFZhVTIXOYZIkZRsseF7p1BDIo8gdU1EKMZv2dHvF8IozGSba32RNhogkuEuAE38dJxFPl59VM3mmzbw4OrzFU9k1sydqxFlp8Gk6PKxGgfxmwsQAZox74leQm1+UoxVv8q9cCRkUWg1gNXZTzjWZ5IdFj+hjBP7BQpQSm2BgnG7zC8FYFDU/HI/4kX6X1hVuOEfu/Cwn/lPSwWw59MQdYZXMutzvcv08j+4qCnCZRqV+fpq0zMej/nRYQWOH4vpyCpydWdSWQtdELVG/Rd5naz8ZO/XPl0LJBIDmTy6YsdS7oPG7uzryO/VyH8kqAcwIeBJq8H3OBkZoAVk4Py8A0ll1gGrYkXASPfXZWacyL67o3o654YEgnmfKkF9AkMPKTo0mFXmASTkgAVB/KrfBjd5uh7ElLicLDM17r/AcEL3wKXrbjHIcQ6qslcN2b12Kv4WHoAbjevB36ZqPuzKvfxkxq4Xs4FqWf7cS5t1Ywm2s7iyGFPgwyLYZJ7WQ4ro199s+/oAfhMr3bz8SyV+PiAIwd68lZzIA6VmH/8VDkE8R0UTW8o3P4UbNdxWuR8MLetdR2qvYVGuiZMS/se0uWU/U4bpD
ziipNTFHryooBWc1U+3DsxZmU3IYN0lXHm87fbED0F+7uQwH64iHpZT61rvnxPYU18qafw7svZMLLximy04Fy32klz8RT/+eOCDPhpmYmiuPtFrLX/6WC+iWEsrd3l8vFYfqerFsHy+Xw1OV1wmMVq3iONkhQ32Q2S9+3Txju22jEzHPjInEp61floiw/+KfD40qMqa7aqM8PTlmIxMvHuzmt71kbgjI05RDitf3B7dy1GKZnentQPVWOeTUhDoaAiiMj2LRbqPHREVTxXPS/mvkau8uTh3VbUwY6KTqV3ZdWJksHv1y2NFcF0FsUaUjaOwrus2Njn5vuUmmVKCoxmwSC9r8ql1zVcSjy9a7/vD4CHJQvdIj3gC3Vd7imrXYGx7ZBeDLm9M5xsmQVYWgCCCYijMfsrXMbLkLtyO6ViuVYEloP8JoX6ZQOHqDqqVqgHSToHDSVJWadhJlKAkV/4mYQLcByjPynHx4bKfuxN+/4VdmdHp6DWcDrVOdQ52HSHE4T4BaPm867yDnzFDvyfzm/whj6cgQLq51/gYzS21nDT4kaNvvR3eSTEelx4ee+SD6zJ6iUw4JYXSb40Uo0HOXucFT6kG4woa1jdPXWEC15JmGUPAWWrorKNlmJpEU2uYSTD9+zx/tZzncm/DwMWOJ+I9WhB6kTstWJri7zzjiia4TaZIspYP6mA2eWuqbTNG5rIibN4DtQm+ITGvAy9SJP5yDYOvQWz9fP2bMkSCrJg+hrWcZJ6g6ADr1SrQoP1dowhVRuNaLxxX5Bb5+0gQ53SFqCtflaVSdVysGPMCalEZ4sDO6qxBrRqOFdqv8cwXcY825jV8BEomeLOFjy4SMRWcfAbVQBIwJbHLJWwajHmF6ERZ192GoR+sD5eZtorXS7aH3TRqdj6i3aiQnIWvb6wgxQdhyOueEuJBSrEYazSMyhIKNAcI24lxmAysrpWnVnnJ8tIf8uDVetwXKWXYTy+xF56lM5BuFiuj3kJwJFTs/koU6iw8JvSFmKqvxLy+86TtpdHpZxQoEvSAY6FhRZtyKdig88SxqZ69fxQUumt56ki/278T+2DDo6pg7nQ/Qs8zQ88qLDqxTKhmZdxsGJ/OrgZNY7bRJmXztB23ohTyCrKXoCllftufzuLUp6z0el9tGK+ehCCK6q9xl0jirzFeFXD4yo/KacdWB+YHHtJ3zqgcVyjWqD2pVAu0my2Makk20Zj1WAPFh78sQtD4qL+CCDvNJElUuMqvm3nYUw73l5fTqYMGDwVRso498YEw/p+uyyn79ovJASFWoVtd3Z3bum5o71HTYOK8QZm/sut8f491jetNSi6hHvnwKY6R0qiU26JCyRNa2qzGX6NhJm0Sk2Ki5zg/HliRUPQn0Ho6GEqMKus4g4yipxFhU18JUezHP2THEON5L6KZJMSvRSwZ5PhdpglygeNTarvkK0WOnvOLBO9q52V+CzTEX/f1gnHME0DOwZlNqStaKvOZq3fT5/pK3MNRPnrSVAdf3ne0AZMU2K9uzXy6NGz/2YdTegkUQyT7cw4xq6dYbKHNt2R6qRcaOiulL/cJvyAWcv1lUyymbus2usoXZifr9xicXEUfaUYJvyCOa6OW9PMjm1FisD7gP2VN4PrSNmP6fqQY3GS9u51czOtinF/1g2bv4987ZQTqiQpsua7vcPnk38HOVLsojFbKZER/TnizAVFEUpvH+0hgj9hfaNPMlEvzDnyk/umc+k/kHchjDJtFaKcuIz4FcqKiv2ubrovOnu3Rjt0r6i9dM6y2i1Zw5iuTanzTVLKdWP2mpfe1lbr6C3Ili57E+pJ8sJddGReaY6Kz2CizXzk9RHbZKl3Qri828BYNdCYByaiMOMszE64LvVahimz6ZhURjnYZEF4OTmBX4n+IjE0pQwln/jRxYAlUIX74Or1DM56oQPxQ3tdKrpPIjHQQpGvv2rsLYELaMFNG/1GxJygG+/NvGlwItx8M6iBiWbxVX5HZ+
gr9ba0uv7Qe0bGynmZhhN0OujaulknGiEwAyoWRZSSJVe4fS+FJEuReIr3o8dNM37C0nR/+MkwwDNfHZBR9fY0HnGbuA7/QFiA1m5G5nu/z75M9QK34kDDZAZKTIkcFG1OoituG0LIsFr9qEV/Bs5BR82rwP3yGI17kXtS04Ou4G0hHNq7dP9rTMit7sdYH1u/cr2FsP1gxgZjPLhDK/bKQ9vj97A5E15IDsA1nhB9Q8Xa422ji5d6av+tFkNZFnZH9F+UCdQOsAPdiX2Gu/Zl1p6a3lml1uH9HYRKURfz9Xpxym61zVnOd/522HmVVV4SGGfhZPXR9xVi49rlZEPqad/SN1X/vTrnjubrmQB345iXCGPZjlyziFuLHO0BygtXoiyk1vct4jyx8xjSoGq1Qrhe69tO2k/5uKgWG4dE88x+oPQ3haPU022f/jHElhpa0sg5Tiz7kpsz+yX+llzhVNV/4XeBtwNRmfXnB5dccz2TWjyH9F9snpYb7n16dvEzXbD1bwwvpTNqXg0gdutds7loo2fJgnsL0pw8zdAkfTwQ3op3VhqQkuE4ys5+pXENAV8XPMiwltfiDChAqGi5tTrLEBw3RJ4fy4FdtMbHzNcKNzIiyHMdC2c5xnRyEASutH2K6eLMzhYdrpXWDEP+kZWThaIzntC9AMMMWyH5CVmVN55WmPIZiZpZIlLRyq59JNOhx/TV7csfdUL9cF0RvDOTh8VSJQ0VnlPMFSvTfdbYDZqeIuezc4hAutmk/yqh8qJC6r75RVs1QMhvHhTvaYasKGOXGv6p5+9YXUUCmrnuXDYNEmX3BX+CCqsX4U/d9VoLB/aWFC1CcN8Ove8kBA0oawTnHehiazfEF2aFXU9TKiNXe3eMrIxMpaQMfWqXL9tntEM+OL4MsyYOrbP/5Ktj8ulFFdCSGmaV7ZmyB+o7B++pmTDAZyjkPwilFcv2GH1r6uhM548L/nTCM+cwrE5QKCu5A5n2jHAbxNEGox2bblOuvsWaqcJlxW3f8PgMN/BLch6Qp5az6UZaQvP8qyQVTjhiSCXmMboTJsSysKSRsUA8+8s6rusz3Xcy8n6ECU6br8/D6VcGOGMQJhkEVQLB+l39r5B9V1oE0rB9XzG4ce6quTSBU9xsz6WIk3Em3EnPzsWzEUcLsGCNEYaXCs6x5oBxPdxUZdxYx34fLfRMvt+PTH+dDcER1CvDACc2rgviH+SEj5GYnWLrGn8puV+jjp6Z5ThWV8TzmwUjNOFJfF7M4YLpC2NU4biZBbRU5NShnQeUnwW1gGerGLNqMz92bhJTcPh+eMRmx4nBu2erP0Qo1q0o7eLmHPLYzDcUyXiwa9i8s8rK/PqHbZ39mkP7jFP8IzHf3FDz/vgjhrWHoiq/5XDLixZHAZP5Jv6VRXla1zBLJAKLsaPZaVhPstMxa7kkIz+Nc1EViYmVIAkKMgLFVq0YrWWLBujQ6kBMoL5PJUUlqDJhCu4gVXDUiJwRBOZVoKgQZQPsyn14/breeyK1zgEo3CZHcGuWnd1Myy1+1GeYfZ8fMDkd505/4tHhIUAilml6OQK2VZo+tV7WZLdh7T+yBef3dAd6DSTuW4v1Kdq4dfmlIqvpSUNqcO9rgZR5m4/ZZUAtIa9WXwWYZ02j4sBnKhgMd2/BbmDbKgejDX5Boegh+TBw/6yPwWINea5liBVBFJFPcz/Zo/5ZP1gcpt5v8ybaMFgYeV1RbKJGOH9XLxVMIyCgXdt3DnvHHGnXpHw2sK0wXEsbV6K7uXYptg936gV8zMrFBwY3XMhHL0KrOT4jKuhVRKTr8woxsmImIS8G6ANGB+Bx37st8K1wQkL8WjXb9nLPE6nERGwHmRpb58iVww8XFSaDzsXZhlp1rbevLzGncXQUGAFHYF1O//gshGjnRe2+326apNVVONIyQmB/V8NylQKBSmImxj0arX69G5UrHeyiu6wDEPVjGzjJQAv16XkDGvYG03R9C21fywdlUA
kPRUtgI43jBZYkpBQDBjYdEsb5owYnm0pn4lBpBaC/ElXort53mt+KI+I9S8kYvJGY3ZLsyks8CfTu3rut+S/uKVXP7WUdixitB6L5sXYjQS/Z53f5H/d2KPjVLSec5pXPjTgEwzqjPUaRXpjEhK8jZ+k9GsjkTwWzadBGuw+GyDeUfeADa79f17uEqTXnlAouhOs7izvQg7OXyG/2rbe38gZ/RqEOeh/WCiPyWrGJFsx5PEAwsYITA7YJMESods0M4uOfPx8Q+eYY1/sFKjHr2GXeSIFXDaivzr/J3tUCKEzT52ceiFmPTfrVVQFEDdflctlm6wVx8yXyE9dKD/PxYv99Kh3gUCkVK8VRASufzj1Ua8pWJ5rNf49yP1jqtJT2A0QYpdynA2NEZ7lF8EamqJjinl11EnRDNphkGQlb8mLf/Pk+3QNzlvF7/UEBxJYP6c0KKp/903pFMeXCK/AkCXlKsopvDZTPWQMtZ/McsLvQQw8xfAzicA0eD4kBXR4zJ9inBnoGr25QG+OnCHFR0gvd6AMOK3n/qvW+XWqMp8bzXwEQMAVMJ0eP+1Ul/pMEFfWUH/zB0pgovD2fSCRg12Pg2rh6eeR2qNwfJysIX36okiWwdKMsGDCktaTCFmK3JBQi8vajGUrihLkyJsK8BFJHMsPutnYub2w8rlvwC3X8r/9QfoJl9pCgZ+OEXqJW8WPbqOZRvCZzsorxWRnLqBOZhuci+lc9E63Yt4ouB9cWfSjSa/oDOOUgVKNv71Re8QxWeUoXIZ/Zv0iP8qI4zobiynb0vWTlunlmZ6neowuQehfjQayA7yRN/bwOs9BKdy1bZD6SdurPhRV3orbb+q/V+6QC6jNI/au86eZuzfQHHARBnrO0vnwXpo3iineaG7zy7L/am1x6GmGhKShGBGNQF5kqovdSY716Rz65cmxJxSiqm/rHWDRlxTmosyU/SjpPXVFSk0esgEyYVcoMOXtwmzF+jWwGWY//XOkvDnf9OHuk1f8c3F0sIEAZDyuLWC8+kO0RAu4JbuRJfDzou/1PnXJvfwXuXnk/xGlb7SnseJr2goqg9Jj1DnIYdFmp9vT/3tmuYWSP2P6rqGElX2Pkc78W+xl25lKzkKycpJR1QGdeeWxaMQR+eF79eu/InW4Smst9L9SdnG0N8GgYDNn4Ww2xFYp+SbFa+uIYNaeqAwhPntn//DgrTwN/6BuF1H34ikIsI8k+58iBPwxtMPb+4fONX6T92DRgQ5HBMfZN1MWl2u/TCjzKUfca/BjLuUllVQzBrXj7X/PbvSkK/JNYm5GGbZosjB86FUVDWmSljYtXLoYoOuOA0yT+WMEE9jL2ItAohkkhIXlezdCKNEimZWU5fdJem44Fw9W+quN0zpgLsh69KznXctT/qYy9lCTEzUB9M311wQq1U5gNdPpklxA3dH6hrci+txDVKqiXsz2ayJJk/BasXIXBn0sRQ7X42K00qfx+ImL3/acwjSF6uugQ6dbBjsq8kCzEyQ9KbywJXTSBblr9Ym0YLxi/Qh8KgkvoUsldQ+cvBeEhFgU96nwxHMLJCyMMDO7S0DEy0hkOzbPafMdtI1Dpq1TxoEJw1Kg+EuNp9ey++oX8aYSrH8av1PILgkR28TBtQ02bBLoEdmyEK2qNEOwXZPZgDY5EciBcD7xD+lYSAw9OMB9T0QPjudWkQ0ldchyAFQZptdg+UWq7A/suk9Uum6/Rcsf+Tdxc60Wvd3e457v+NtjsC/3eE+v9N3R39r+ruBLT8j/9GsjDy30j+/eG/qL2/I/A+Fltv/XsnHn7/u27L2Bbc2I3L+8kwDkAEvmy67v/4KOmaanh/zN4RK97PWTCeTZZ0zL9f9E2eg9uwZ91shTslGbjnuSTT+9ky7kMO5OR56P/NhMA4+n/OCIwi/51E/suk/Mco/O9zgv7fTwlAheO4/W/fSe+r1fqYF+A3/ic=</diagram></mxfile>
2309.16585/main_diagram/main_diagram.pdf ADDED
Binary file (82.7 kB). View file
 
2309.16585/paper_text/intro_method.md ADDED
@@ -0,0 +1,38 @@
+ # Method
+
+ Instead of directly generating 3D models, recent studies have achieved notable success by optimizing a 3D representation with a pre-trained 2D image diffusion prior via score distillation sampling (SDS), as proposed by @dreamfusion. In this paradigm, the scene is represented as a differentiable image parameterization (DIP) denoted as $\theta$, where the image can be differentiably rendered based on the given camera parameters through a transformation function $g$. The DIP $\theta$ is iteratively refined to ensure that, for any given camera pose, the rendered image $\mathbf{x}=g(\theta)$ closely resembles a plausible sample derived from the guidance diffusion model. DreamFusion achieves this by leveraging Imagen [@imagen] to provide a score estimation function denoted as $\epsilon_{\phi}(x_t;y,t)$, where $x_t$, $y$, and $t$ represent the noisy image, text embedding, and timestep, respectively. This estimated score plays a pivotal role in guiding the gradient update, as expressed by the following equation: $$\begin{equation}
+ \nabla_{\theta}\mathcal{L}_{\text{SDS}}=\mathbb{E}_{\epsilon, t}\left[w(t)(\epsilon_{\phi}(x_t;y,t)-\epsilon)\frac{\partial\mathbf{x}}{\partial\theta}\right]
+ \end{equation}$$ where $\epsilon$ is Gaussian noise and $w(t)$ is a weighting function. Our approach combines score distillation sampling with 3D Gaussian Splatting, applying different diffusion models at the 2D and 3D levels, to generate 3D assets with both detailed appearance and 3D-consistent geometry.
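The update above can be sketched numerically. In this minimal NumPy sketch the DIP is taken to be the rendered image itself (so $\partial\mathbf{x}/\partial\theta$ is the identity), `denoiser` is a placeholder for the frozen $\epsilon_{\phi}$, and the weighting $w(t)=1-\bar{\alpha}_t$ is one common choice; these names and simplifications are assumptions, not the paper's implementation.

```python
import numpy as np

def sds_grad(theta, denoiser, alphas_cumprod, rng):
    """One stochastic SDS gradient estimate for an image-valued DIP."""
    # Sample a random timestep and its cumulative schedule value.
    t = int(rng.integers(1, len(alphas_cumprod)))
    a_bar = alphas_cumprod[t]
    # Forward-diffuse the "rendered" image x = g(theta); here theta IS the
    # image, so dx/dtheta is the identity.
    eps = rng.standard_normal(theta.shape)
    x_t = np.sqrt(a_bar) * theta + np.sqrt(1.0 - a_bar) * eps
    w_t = 1.0 - a_bar  # one common choice of weighting w(t)
    # w(t) * (eps_phi(x_t; y, t) - eps) * dx/dtheta, with dx/dtheta = I.
    return w_t * (denoiser(x_t, t) - eps)
```

Note that the diffusion model is never differentiated through: the score network's output enters the gradient as a constant, which is exactly what makes SDS cheap.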
+
+ Gaussian Splatting, as introduced in @kerbl3Dgaussians, presents a pioneering method for novel view synthesis and 3D reconstruction from multi-view images. Unlike NeRF, 3D Gaussian Splatting adopts a distinctive approach in which the underlying scene is represented by a set of anisotropic 3D Gaussians parameterized by their positions, covariances, colors, and opacities. When rendering, the 3D Gaussians are projected onto the camera's imaging plane [@ewa]. Subsequently, the projected 2D Gaussians are assigned to individual tiles. The color at a pixel $\boldsymbol{p}$ on the image plane is rendered sequentially with a point-based volume rendering technique [@ewa]: $$\begin{equation}
+ C(\boldsymbol{p})=\sum_{i \in \mathcal{N}} c_i \alpha_i \prod_{j=1}^{i-1}\left(1-\alpha_j\right) \quad
+ \end{equation}$$ where $\alpha_i=o_ie^{-\frac{1}{2}(\boldsymbol{p}-\mu_i)^T \Sigma_i^{-1}(\boldsymbol{p}-\mu_i)}$ refers to the opacity of the $i$-th Gaussian at point $\boldsymbol{p}$; $c_i$, $o_i$, $\mu_i$, and $\Sigma_i$ represent the color, opacity, position, and covariance of the $i$-th Gaussian, respectively; and $\mathcal{N}$ denotes the Gaussians in this tile. To maximize the utilization of shared memory, Gaussian Splatting further designs a GPU-friendly rasterization process in which each thread block is assigned to render an image tile. These advancements enable Gaussian Splatting to achieve more detailed scene reconstruction, significantly faster rendering, and reduced memory usage during training compared to NeRF-based methods. In this study, we extend Gaussian Splatting to text-to-3D generation and introduce a novel approach that leverages the explicit nature of Gaussian Splatting by integrating direct 3D diffusion priors, highlighting the potential of 3D Gaussians as a fundamental representation for generative tasks.
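The equation above amounts to front-to-back alpha compositing. The sketch below evaluates it for a single pixel with plain NumPy; it is a toy illustration, not the paper's tile-based CUDA rasterizer, and the Gaussians are assumed already depth-sorted front to back.

```python
import numpy as np

def composite_color(p, colors, opacities, means, covs):
    """Front-to-back alpha compositing of depth-sorted 2D Gaussians at pixel p."""
    color = np.zeros(3)
    transmittance = 1.0  # running product prod_{j<i} (1 - alpha_j)
    for c, o, mu, cov in zip(colors, opacities, means, covs):
        d = p - mu
        # alpha_i = o_i * exp(-0.5 (p-mu)^T Sigma^{-1} (p-mu))
        alpha = o * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
        color = color + transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return color
```

A Gaussian with unit opacity centered on the pixel drives the transmittance to zero, so everything behind it is occluded, matching the product term in the equation.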
+
+ Our goal is to generate 3D content with accurate geometry and delicate detail. To accomplish this, [Gsgen]{.smallcaps} adopts 3D Gaussians as the representation due to their flexibility in incorporating geometry priors and their capability to represent high-frequency details. Based on the observation that a point cloud can be seen as a set of isotropic Gaussians, we propose to integrate a 3D SDS loss with a pre-trained point cloud diffusion model to shape a 3D-consistent geometry. With this additional geometry prior, our approach can mitigate the Janus problem and generate more sensible geometry. Subsequently, in appearance refinement, the Gaussians undergo an iterative optimization to gradually improve fine-grained details with a compactness-based densification strategy, while preserving the fundamental geometric information. The detailed [Gsgen]{.smallcaps} methodology is presented as follows.
+
+ Many text-to-3D methods encounter the significant challenge of overfitting to several views, resulting in assets with multiple faces and collapsed geometry [@dreamfusion; @magic3d; @fantasia3d]. This issue, known as the Janus problem [@perpneg; @seo2023let], has posed a persistent hurdle in the development of such approaches. In our early experiments, we faced a similar challenge: relying solely on 2D guidance frequently led to flawed results. However, we noticed that the geometry of 3D Gaussians can be directly rectified with a point cloud prior, which is not feasible for previous text-to-3D methods using NeRFs, as their geometries are represented as implicit density functions. Recognizing this distinctive advantage, we introduce a geometry optimization process to shape a reasonable structure. Concretely, in addition to the ordinary 2D image diffusion prior, we further optimize the positions of the Gaussians using Point-E [@pointe] guidance, a pre-trained text-to-point-cloud diffusion model. Instead of directly aligning the Gaussians with a Point-E generated point cloud, we apply a 3D SDS loss to guide the positions, inspired by image diffusion SDS, which avoids challenges such as registration, scaling, and potential degeneration. We summarize the loss in the geometry optimization stage as the following equation: $$\begin{equation}
+ \begin{split}
+ \nabla_{\theta}\mathcal{L}_{\text{geometry}}&=\mathbb{E}_{\epsilon_{I}, t}\left[w_I(t)(\epsilon_{\phi}(x_t;y,t)-\epsilon_I)\frac{\partial\mathbf{x}}{\partial\theta}\right]\\
+ &+\lambda_{\text{3D}}\cdot\mathbb{E}_{\epsilon_P, t}\left[w_P(t)(\epsilon_{\psi}(p_t;y,t)-\epsilon_P)\right],
+ \end{split}
+ \end{equation}$$ where $p_t$ and $x_t$ represent the noisy Gaussian positions and the rendered image, respectively, and $w_*$ and $\epsilon_*$ refer to the corresponding weighting functions and Gaussian noises.
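The second, 3D term of this loss follows the same recipe as image-space SDS but acts directly on the $(N, 3)$ position array. A hedged sketch follows, with `point_denoiser` standing in for the frozen Point-E noise predictor $\epsilon_{\psi}$; the function name and schedule are placeholders, not the real Point-E API.

```python
import numpy as np

def point_sds_grad(positions, point_denoiser, alphas_cumprod, lam_3d, rng):
    """3D SDS term: score distillation applied to Gaussian positions."""
    t = int(rng.integers(1, len(alphas_cumprod)))
    a_bar = alphas_cumprod[t]
    # Diffuse the (N, 3) positions exactly like an image would be diffused.
    eps = rng.standard_normal(positions.shape)
    p_t = np.sqrt(a_bar) * positions + np.sqrt(1.0 - a_bar) * eps
    # lam_3D * w_P(t) * (eps_psi(p_t; y, t) - eps); no renderer Jacobian
    # is needed because positions are optimized directly.
    return lam_3d * (1.0 - a_bar) * (point_denoiser(p_t, t) - eps)
```

In the geometry stage this term would simply be added to the image-space SDS gradient, weighted by $\lambda_{\text{3D}}$.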
+
+ While the introduction of the 3D prior does help in learning a more reasonable geometry, we experimentally find that it also disturbs the learning of appearance, resulting in insufficiently detailed assets. Based on this observation, [Gsgen]{.smallcaps} employs another appearance refinement stage that iteratively refines and densifies the Gaussians utilizing only the 2D image prior.
+
+ <figure id="fig:densification" data-latex-placement="t">
+ <embed src="new_densification.pdf" style="width:45.0%" />
+ <figcaption>An illustration of the proposed compactness-based densification.</figcaption>
+ </figure>
+
+ To densify the Gaussians, @kerbl3Dgaussians propose to split Gaussians with a large view-space spatial gradient. However, we encountered challenges in determining an appropriate threshold for this spatial gradient under score distillation sampling. Due to the stochastic nature of the SDS loss, a small threshold is prone to being misled by occasional large stochastic gradients, generating an excessive number of Gaussians, whereas a large threshold leads to a blurry appearance, as illustrated in Fig. [3](#fig:ablation_densification){reference-type="ref" reference="fig:ablation_densification"}.
+
+ To tackle this, we propose compactness-based densification as a supplement to the positional gradient-based split with a large threshold. Specifically, for each Gaussian, we first obtain its K nearest neighbors with a KD-Tree. Then, for each of the neighbors, if the distance between the Gaussian and its neighbor is smaller than the sum of their radii, a Gaussian is added between them with a radius equal to the residual. As illustrated in Fig. [2](#fig:densification){reference-type="ref" reference="fig:densification"}, compactness-based densification can "fill the holes", resulting in a more complete geometric structure. To prune unnecessary Gaussians, we add an extra loss to regularize opacity with a weight proportional to the distance to the center and periodically remove Gaussians with opacity smaller than a threshold $\alpha_{min}$. Furthermore, we recognize the importance of ensuring the geometric consistency of the Gaussians throughout the refinement phase. With this concern, we penalize Gaussians that deviate significantly from the positions obtained during the preceding geometry optimization. The loss function in the appearance refinement stage is summarized as follows: $$\begin{equation}
+ \begin{split}
+ &\nabla_\theta\mathcal{L}_{\text{refine}}=\lambda_{\text{SDS}}\mathbb{E}_{\epsilon_{I}, t}\left[w_I(t)(\epsilon_{\phi}(x_t;y,t)-\epsilon_I)\frac{\partial\mathbf{x}}{\partial\theta}\right] \\
+ &+ \lambda_{\text{mean}}\nabla_\theta\sum_i||\mathbf{p}_i|| + \lambda_{\text{opacity}}\nabla_\theta\sum_i\mathtt{sg}(||\mathbf{p}_i||)\cdot o_i,
+ \end{split}
+ \end{equation}$$ where $\mathtt{sg}(\cdot)$ refers to the stop-gradient operation, and $\mathbf{p}_i$ and $o_i$ represent the position and opacity of the $i$-th Gaussian, respectively. $\lambda_{\text{SDS}}$, $\lambda_{\text{mean}}$ and $\lambda_{\text{opacity}}$ are loss weights.
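The compactness-based densification can be sketched as follows. This toy version replaces the KD-tree with a brute-force distance matrix and treats each Gaussian as a sphere with a scalar radius; the overlap test and residual radius follow the text, but the exact growth rule is an assumption.

```python
import numpy as np

def compactness_densify(centers, radii, k=3):
    """For each Gaussian's k nearest neighbours: if distance < sum of radii,
    grow a new Gaussian at the midpoint whose radius is the residual."""
    # Pairwise distances (the paper uses a KD-tree; brute force here).
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    new_centers, new_radii = [], []
    for i in range(len(centers)):
        for j in np.argsort(d[i])[:k]:
            # i < j avoids adding the same pair twice.
            if i < j and d[i, j] < radii[i] + radii[j]:
                new_centers.append(0.5 * (centers[i] + centers[j]))
                new_radii.append(radii[i] + radii[j] - d[i, j])  # residual
    return np.array(new_centers), np.array(new_radii)
```

The new Gaussians sit in the gaps between near-touching neighbours, which is what lets the strategy close small holes left by the gradient-based split.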
+
+ Previous studies [@fantasia3d; @magic3d; @latentnerf] have demonstrated the critical importance of starting with a reasonable geometry initialization. In our early experiments, we also found that initializing with a simple pattern could lead to a degenerate 3D object. To overcome this, we initialize the positions of the Gaussians either with a generated point cloud or with a 3D shape provided by the user (either a mesh or a point cloud). In the context of general text-to-3D generation, we employ a text-to-point-cloud diffusion model, *Point-E* [@pointe], to generate a rough geometry according to the text prompt. While Point-E can produce colored point clouds, we opt for random color initialization based on empirical observations, as directly using the generated colors proved detrimental in early experiments (see the appendix for visualization). The scales and opacities of the Gaussians are assigned fixed values, and the rotation matrix is set to the identity. For user-guided generation, we convert the preferred shape to a point cloud. To avoid using too many vertices from the provided shape, we use farthest point sampling [@fps] for point clouds and uniform surface sampling for meshes to extract a subset of the original shape instead of directly using all the vertices or points.
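Farthest point sampling, used here to thin user-provided point clouds, can be written in a few lines of NumPy; this is a generic greedy sketch, not the authors' implementation.

```python
import numpy as np

def farthest_point_sampling(points, m, seed=0):
    """Greedily pick m points, each maximally far from those already chosen."""
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = [int(rng.integers(n))]      # random first sample
    dist = np.full(n, np.inf)            # distance to the chosen set
    for _ in range(m - 1):
        # Update each point's distance to the nearest chosen point ...
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        # ... and pick the point that is farthest from all chosen so far.
        chosen.append(int(np.argmax(dist)))
    return points[chosen]
```

Because each pick maximizes the minimum distance to the current subset, the result covers the shape far more evenly than uniform random subsampling.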
2310.15517/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2310.15517/paper_text/intro_method.md ADDED
@@ -0,0 +1,85 @@
+ # Introduction
+
+ Knowledge-based question answering (KBQA) aims to answer a question over a knowledge base (KB). It has emerged as a user-friendly solution for accessing the massive structured knowledge in KBs [@lan2021survey; @shu2022tiara]. Among the extensive knowledge stored in KBs, the quantitative property is the third most popular property type on Wikidata (behind only WikibaseItem and ExternalID), and there are more than 95M facts with quantitative properties. Given the large number of quantitative facts stored in KBs and the ability to perform precise symbolic reasoning through query languages, it is natural to use KBQA as a solution to real-world problems that require numerical reasoning.
+
+ However, existing KBQA datasets have been shown to be insufficient for numerical reasoning. We find that only 10% and 16.2% of questions in ComplexWebQuestions (CWQ) [@talmor2018the] and GrailQA [@Gu2021beyond], respectively, require numerical reasoning, and these questions focus only on a few aggregation operations, lacking complex multi-step numerical reasoning. The remaining questions only need to match a graph pattern on the KB (multi-hop reasoning), without performing numerical reasoning. As a result, questions that require complex numerical reasoning have not been covered by previous datasets (e.g., "How much more VAT do you have to pay to buy the most expensive iPhone 13 in Russia than in Japan?" or "How many times longer is the longest aircraft carrier than the shortest?").
+
+ In this paper, we propose a new challenging task, **NR-KBQA** (Knowledge-based Question Answering with Numerical Reasoning). Different from traditional KBQA, which mainly focuses on multi-hop reasoning, NR-KBQA focuses on numerical reasoning and its combination with multi-hop reasoning. As shown in the left part of Figure [1](#fig:main_example){reference-type="ref" reference="fig:main_example"}, multi-hop reasoning needs to match a graph pattern in the KB, the difficulty of which comes from the composition of KB items (entities, relations, and classes). On the other hand, the difficulty of numerical reasoning comes from the composition of mathematical operators. It is worth noting that computations involve multiple operands, and each operand may itself be obtained by matching a graph pattern on the KB. Consequently, the combination of multi-hop reasoning and numerical reasoning leads to a combinatorial explosion in the number of logical form structures, which poses a huge obstacle to semantic parsing models.
+
+ ![An example shows multi-hop reasoning, numerical reasoning, and their combination.](fig/fig_1.pdf){#fig:main_example width="\\textwidth"}
10
+
11
+ To support the study of this task, we construct a large-scale dataset called **MarkQA** (Co**M**plex Numeric**A**l **R**easoning over **K**nowledge Base **Q**uestion **A**nswering Dataset), which starts from 1K questions posed by humans and automatically scales to 32K examples. Each question in MarkQA is equipped with a question decomposition in the widely used QDMR format and a corresponding logical form in our proposed PyQL, written in Python. The QDMR makes the reasoning path explicit and serves as a question decomposition resource. Meanwhile, PyQL not only records the reasoning steps but can also be directly transformed into a SPARQL query. It offers a more human-readable alternative to SPARQL and can easily be extended for further development. We believe that MarkQA will serve as a valuable resource to foster further development of KBQA. [^1]
12
+
13
+ The remainder of this paper is organized as follows: Section [2](#sec:related_work){reference-type="ref" reference="sec:related_work"} reviews related work. Section [3](#nr-kbqa){reference-type="ref" reference="nr-kbqa"} defines the NR-KBQA and introduces PyQL. The construction and analysis of MarkQA are demonstrated in Section [4](#dataset_construct){reference-type="ref" reference="dataset_construct"}. Section [5](#experiments_results){reference-type="ref" reference="experiments_results"} presents experimental results. We conclude our contributions and future work in Section [6](#con_and_future){reference-type="ref" reference="con_and_future"}.
14
+
15
+ # Method
16
+
17
+ In this section, we present the formal definition of a question with numerical reasoning (NRQ), which is the basis of NR-KBQA. Then, we introduce PyQL to represent the reasoning steps of an NRQ.
18
+
19
+ An NRQ is any question that requires mathematical calculation, such as arithmetic, aggregation, or comparison, to reach the answer. An NRQ essentially consists of descriptions of values and of the computation process. The connotation of an NRQ can be defined recursively in Backus--Naur form: $$\begin{align}
20
+ \texttt{<NRQ>} &::= \texttt{<Func>} \,\, \texttt{<Arg>} \, \{\,\texttt{<Arg>\,}\} \label{eq:1} \\
21
+ \texttt{<Arg>} &::= \texttt{Num} \mid \texttt{<NRQ>} \mid \texttt{<Des>} \label{eq:2} \\
22
+ \texttt{<Des>} &::= \texttt{Rel} \,\, \texttt{<Var>}\mid \texttt{<Des>} \,\, \texttt{<Des>} \label{eq:3} \\
23
+ \texttt{<Var>} &::= \texttt{Ent} \mid \texttt{Num} \mid \texttt{<Des>} \mid \texttt{<NRQ>}\label{eq:4}
24
+ \end{align}$$
25
+
26
+ In this grammar, `<NRQ>` represents the intrinsic meaning of an NRQ, which can be regarded as the outermost function (`<Func>`) applied to one or more `<Arg>`. An `<Arg>` corresponds to a constant value (`Num`), a description of an entity's numerical attribute (`<Des>`), or another `<NRQ>`. A `<Des>` describes the relationship (`Rel`) between a variable (`<Var>`) and the entity being described, while a `<Var>` corresponds to an entity (`Ent`), a `Num`, a `<Des>`, or an `<NRQ>`. Equations [\[eq:2\]](#eq:2){reference-type="ref" reference="eq:2"} and [\[eq:4\]](#eq:4){reference-type="ref" reference="eq:4"} allow the nesting of numerical and multi-hop reasoning, thereby enabling the representation of complex NRQs.
27
+
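The recursion above can be made concrete with a small illustrative encoding; the class and field names below are ours, not part of the paper's formalism.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Des:            # <Des> ::= Rel <Var>
    rel: str          # Rel, e.g. a quantitative property
    var: "Var"        # <Var>: here the entity/class being described

@dataclass
class NRQ:            # <NRQ> ::= <Func> <Arg> {<Arg>}
    func: str         # <Func>, e.g. "max", "min", "div"
    args: List["Arg"]

Arg = Union[float, NRQ, Des]       # <Arg> ::= Num | <NRQ> | <Des>
Var = Union[str, float, Des, NRQ]  # <Var> ::= Ent | Num | <Des> | <NRQ>

# "How many times longer is the longest aircraft carrier than the shortest?"
longest = NRQ("max", [Des("length", "aircraft carrier")])
shortest = NRQ("min", [Des("length", "aircraft carrier")])
question = NRQ("div", [longest, shortest])
```

Each `Des` leaf corresponds to a graph pattern to be matched on the KB, while nested `NRQ` nodes capture the composition of operators.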
28
+ Based on this recursive nature, the query graph of an NRQ can be modeled as a tree, and that of a sub-question is a sub-tree. For the example in Figure [1](#fig:main_example){reference-type="ref" reference="fig:main_example"}, the right part (in red) is a computational tree where each intermediate node is a function node and each leaf is a constant value or an attribute value (green node). The attribute value node is acquired by matching a graph pattern in the KB (the part that previous datasets focus on) and can be seen as a description of a value.
29
+
30
+ We propose PyQL (**Py**thonic Query Language for SPAR**QL**), a logical form written in Python as a reasoning step representation for NRQ. A PyQL is a sequence of commands: {$c_{1},c_{2}, ..., c_{n}$}, where $c_i$ either initializes a PyQL object or calls a function on the object. As shown in the top left of Figure [2](#fig:construction_process){reference-type="ref" reference="fig:construction_process"}, the user should first initialize a PyQL object and sequentially add functions to construct the whole query. Each function represents a reasoning step such as stating the relation between two entities or computing the average. A valid PyQL can be directly compiled into an executable SPARQL query. In detail, PyQL encapsulates various SPARQL syntax elements, such as Basic Graph Patterns, Assignments, Filters, Aggregations, and Subqueries. The detailed function list of PyQL can be found in Appendix [14](#ref:pyql_func){reference-type="ref" reference="ref:pyql_func"}. The main features of PyQL can be summarized as follows:
31
+
32
+ - **User-friendliness and conciseness**. PyQL offers an intuitive and concise way to query a KB by shielding users from an unreadable and lengthy database query language. It alleviates the burden of learning and writing SPARQL and effectively lowers the entry barrier for the community in utilizing SPARQL. It also guarantees that the generated SPARQL is grammatically correct and uniformly formatted. Specifically, in MarkQA, the average token length of PyQL is only 60.6% of that of SPARQL, and the grammar error rate when using PyQL as the output is half that of SPARQL.
33
+
34
+ - **Step-by-step reasoning path**. PyQL shows, in a symbolic manner, the transparent reasoning pathway of a question. Compared to SPARQL or S-expression, which are presented as a whole query and are difficult to parse or decompose, PyQL exhibits how to construct a query step by step. PyQL also serves as an efficient supervision signal, with our experiments showing up to a 19% performance improvement. With the prevalence of Large Language Models (LLMs) and Chain-of-Thought (CoT) [@wei2023chainofthought], it is feasible to use PyQL as a structural CoT. Besides, the code-style format also helps LLMs understand it, owing to their pre-training on code.
35
+
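As a minimal illustration of this builder style, the sketch below mimics the initialize-then-add-functions workflow; the names (`add_fact`, `add_bind`, `to_sparql`) are hypothetical and are not the actual PyQL API, whose function list is given in the appendix.

```python
# A simplified, hypothetical PyQL-style builder; NOT the real PyQL API.
class MiniPyQL:
    def __init__(self):
        self.patterns = []   # Basic Graph Patterns
        self.binds = []      # SPARQL assignments

    def add_fact(self, s, p, o):
        # state a relation between two items as one triple pattern
        self.patterns.append(f"{s} {p} {o} .")
        return self

    def add_bind(self, expr, var):
        # bind the result of a computation to a new variable
        self.binds.append(f"BIND(({expr}) AS {var})")
        return self

    def to_sparql(self, target):
        # compile the accumulated reasoning steps into one query
        body = "\n  ".join(self.patterns + self.binds)
        return f"SELECT {target} WHERE {{\n  {body}\n}}"

query = (MiniPyQL()
         .add_fact("wd:Q1", "wdt:P2043", "?len1")
         .add_fact("wd:Q2", "wdt:P2043", "?len2")
         .add_bind("?len1 - ?len2", "?diff")
         .to_sparql("?diff"))
```

Each chained call corresponds to one reasoning step, and the final compilation yields an executable SPARQL query, mirroring how a valid PyQL program is compiled.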
36
+ ![The Seeds-to-Forest (SoF) data construction framework. We first collect some seed questions and corresponding logic forms. And then generate more examples by paraphrasing, generalization, and composition.](fig/fig_2.pdf){#fig:construction_process width="\\textwidth"}
37
+
38
+ Our construction comprises four steps: seed collection, paraphrasing, generalization, and composition, as detailed in Figure [2](#fig:construction_process){reference-type="ref" reference="fig:construction_process"}. We abstract this process into a general framework named **S**eeds-t**o**-**F**orest (SoF).
39
+
40
+ As mentioned in [2.1](#subsec:rw_kbqa){reference-type="ref" reference="subsec:rw_kbqa"}, we collect seed questions in an NLQ-to-LF manner. This allows for a greater diversity of questions, making it possible to reach questions with longer reasoning paths while ensuring the meaningfulness of each seed question. Besides, the questions are more natural, as they are posed directly by humans instead of being transformed from randomly searched logic forms.
41
+
42
+ We invite 10 graduate students familiar with KBQA to annotate seed questions. We instruct them to focus on exploring patterns specific to certain relations, thus enhancing the variety and originality of the questions. Annotators are required to annotate at least three questions for each quantitative property, and each question must involve at least one computational operation. Furthermore, for different questions related to the same relation, the computational structures must differ, meaning that two questions cannot differ merely by the replacement of entities. In addition, all the entities and relations in the questions are recorded for logic form annotation purposes.
43
+
44
+ We then invite six graduate students familiar with QDMR, PyQL, and SPARQL to annotate the QDMR and PyQL for each seed question. We instruct annotators to first write down the QDMR (a series of sub-questions) for each question, then annotate the PyQL for each sub-question. The SPARQL for each question is automatically generated from the PyQL, and the annotators must ensure that each SPARQL is executable with a unique answer. An annotated example and an auxiliary annotation system page can be found in Appendix [10](#ref:annotation_example){reference-type="ref" reference="ref:annotation_example"} and [11](#ref:auxiliary_annotation_system){reference-type="ref" reference="ref:auxiliary_annotation_system"}.
45
+
46
+ For each seed question, we ask another three annotators to check whether it is meaningful and could be posed in the real world. We only keep the questions receiving at least 2 approvals, resulting in 10.19% being dropped. For QDMR and PyQL, we ask another two annotators to independently check correctness and fix all error cases. In the end, we collect a total of 950 seed questions covering 318 quantitative properties.
47
+
48
+ We perform question paraphrasing with GPT-3.5-turbo to increase the surface-level diversity of the questions. The prompts and a paraphrased result can be found in Appendix [12](#ref:prompt_paraphrase){reference-type="ref" reference="ref:prompt_paraphrase"}. Given a question, we enclose the mentioned entities in parentheses and anonymize them as A/B/C. We then ask the model to paraphrase the anonymized question but, when outputting, to restore the specific entity labels based on the correspondence between A/B/C and the labels. In this way, we obtain paraphrases in a more controlled manner, and the model's output is directly usable. For each question, we instruct GPT-3.5-turbo to generate 10 paraphrases that preserve the original question's semantics.
49
+
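The anonymize-then-restore procedure can be sketched as follows; the function names and the regex-based restoration are our illustration, not the paper's implementation.

```python
import re

def anonymize(question, entities):
    # replace each mentioned entity with a placeholder A/B/C
    mapping = {}
    for placeholder, ent in zip("ABC", entities):
        question = question.replace(ent, placeholder)
        mapping[placeholder] = ent
    return question, mapping

def restore(paraphrase, mapping):
    # put the entity labels back into the model's paraphrase
    for placeholder, ent in mapping.items():
        paraphrase = re.sub(rf"\b{placeholder}\b", ent, paraphrase)
    return paraphrase

q, m = anonymize("How much more VAT in Russia than in Japan?",
                 ["Russia", "Japan"])
assert q == "How much more VAT in A than in B?"
assert restore(q, m) == "How much more VAT in Russia than in Japan?"
```

Anonymization keeps the model from rewording entity names, so the restored output is directly usable.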
50
+ We sample 1,000 paraphrases of 100 questions and find that only 4.3% have minor problems, so we consider the quality acceptable and obtain 9,366 paraphrases.
51
+
52
+ In this step, we extract a logic form template for each seed question by masking entities with variables. We execute the templates to acquire more entities that satisfy the SPARQL and thereby generate more examples. In addition, for numerical literals in the seed questions, we introduce perturbations to prevent the model from learning biases from the literals. We also collect entity aliases through *skos:altLabel*. To ensure that aliases are high-quality and unambiguous, we remove aliases that map to multiple entities simultaneously, as well as aliases identical to the labels of other entities.
53
+
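A minimal sketch of the template extraction, assuming Wikidata-style QIDs; the property ID and function name are illustrative.

```python
def make_template(sparql, qids):
    # mask each seed entity QID with a fresh variable ?e0, ?e1, ...
    for i, qid in enumerate(qids):
        sparql = sparql.replace(f"wd:{qid}", f"?e{i}")
    return sparql

seed = "SELECT ?v WHERE { wd:Q159 wdt:P2046 ?v . }"
template = make_template(seed, ["Q159"])
# executing the template (with ?e0 added to SELECT) retrieves new
# entities that satisfy the same graph pattern
```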
54
+ For each seed question, we sample up to 30 generalized questions. We employ the new entity's label or alias and its QID to replace the original in the question and logic form, respectively. As a consequence, we obtain about 20k examples.
55
+
56
+ ::: table*
57
+ :::
58
+
59
+ So far, our emphasis has primarily been on exploring numerical reasoning. In real-world scenarios, numerical and multi-hop reasoning often arise together. In this step, we incorporate multi-hop reasoning into MarkQA by converting entities in the questions into descriptions. This is done through two steps: Sub-graph Sampling and Naturalization.
60
+
61
+ The aim of sub-graph sampling is to sample a sub-graph centered around the target entity to describe it. We consider structures within two hops, where each variable can be restricted by at most two triple patterns, resulting in six types of structures (detailed in Appendix [16](#appendix:six_structure){reference-type="ref" reference="appendix:six_structure"}). These structures cover most structural patterns of previous complex KBQA datasets (e.g., LC-QuAD, CWQ, GrailQA). During sampling, we ensure that the sub-graph is sufficient (able to uniquely identify the entity) and non-redundant (without extra triple patterns). To guarantee the meaningfulness of the sub-graph, we collect high-quality relations from existing datasets, including SimpleQuestions, CWQ, and GrailQA. For Freebase relations, we transform them into Wikidata IDs using Wikidata's official mapping page. We use their distribution as the basis for selecting triple patterns and apply smoothing to balance the sampling probabilities and alleviate the long-tail phenomenon.
62
+
63
+ For naturalization, we prompt GPT-3.5-turbo to transform the sub-graphs into natural language descriptions. To prevent information leakage, we mask the target entity and the inner entity with special tokens and instruct GPT-3.5-turbo which one to describe. In other words, it must not output the label of the target entity and must keep the internal entity as a variable. Otherwise, the question is not a real two-hop problem, since the internal variable is already leaked. The prompt we use can be found in Appendix [13](#ref:prompt_composition){reference-type="ref" reference="ref:prompt_composition"}.
64
+
65
+ With our well-designed prompt, GPT-3.5-turbo excels at the task of converting sub-graphs into natural language descriptions. We sample 100 (sub-graph, description) pairs and find 93 acceptable. In this step, 50% of questions from [4.1.3](#s4_generlization){reference-type="ref" reference="s4_generlization"} have one or two entities replaced with descriptions. Each question can generate at most 2 new questions through this step. Finally, we obtain 31,902 examples.
66
+
67
+ Our MarkQA consists of 31,902 examples. A detailed comparison with existing datasets is shown in Table [\[tab:compare_static\]](#tab:compare_static){reference-type="ref" reference="tab:compare_static"}. Compared with existing datasets, MarkQA has far more questions that require numerical reasoning (NRQs). Besides being more numerous, MarkQA is superior in the difficulty of numerical reasoning (Avg NS) and supports more comprehensive operators (SPT-OP). Combined with multi-hop reasoning, the number of query templates far exceeds that of others (Canonical LF). This results in more diverse questions that have not been included in previous datasets. Considering the query structure (Struc), MarkQA shows great diversity compared to others. This is intuitive, as the combination of KB graph patterns and computational graphs produces a combinatorial explosion. We provide other detailed statistics in Appendix [8](#ref:data_analysis){reference-type="ref" reference="ref:data_analysis"}.
68
+
69
+ The questions of MarkQA cover 15 types of operators, including 5 arithmetic operations (addition, subtraction, multiplication, division, absolute value), 5 aggregation operations (count, argmin, argmax, average, summation), and 5 comparative operations (\>, \<, \>=, \<=, =). With their combinations, MarkQA supports most questions that require numerical reasoning. The percentages of questions that have at least one arithmetic, aggregation, or comparative operation are 88.1%, 24.8%, and 34.6%, respectively. 71.20% of questions involve different operators, and the average number of operators per question is 2.51.
70
+
71
+ :::: table*
72
+ | **Methods** | **Output** | **Overall** | **I.I.D** | **Compositional** | **Zero-shot** |
+ |:---|:---:|:---:|:---:|:---:|:---:|
+ | **T5-base** | SPARQL | 34.24 | 70.05 | 53.71 | 6.32 |
+ | | PyQL | 40.70 | 78.32 | 63.10 | 10.39 |
+ | **GMT** | SPARQL | 38.68 | 78.32 | 63.58 | 6.07 |
+ | | PyQL | 43.63 | 82.10 | 68.33 | 11.71 |
+ | **QDTQA** | SPARQL | 38.96 | 80.95 | 62.60 | 5.82 |
+ | | PyQL | 44.12 | 85.46 | 71.98 | 9.14 |
81
+ ::::
82
+
83
+ The answers in MarkQA are all unique; an answer may be a numerical value, an entity, or a Boolean value. For questions that would produce a set of entities, we transform them into questions with unique answers (such as the size of the entity set or the maximum value of one of their attributes). The percentages of answers in the dataset that are a number, an entity, and a Boolean value are 72.9%, 17.9%, and 9.1%, respectively.
84
+
85
+ We invite three workers to evaluate the final dataset; an example is accepted if it receives at least 2 approvals. In detail, we randomly sample 100 examples from the test set of MarkQA and examine their questions, QDMR, and PyQL. We find that all samples have fluent questions. 5 questions are ambiguous or not meaningful, i.e., they would not be expected as real-world questions. The QDMR or PyQL of 8 questions is problematic. In total, 89 out of 100 examples are acceptable.
2310.19180/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1,130 @@
1
+ <mxfile host="app.diagrams.net" modified="2023-10-18T05:20:59.984Z" agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" etag="FbJna-dMEHE1_VZTS64Z" version="22.0.0" type="google">
2
+ <diagram name="Page-1" id="BN-eN61mGyekvCFOMh0n">
3
+ <mxGraphModel dx="2173" dy="811" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="0" pageScale="1" pageWidth="850" pageHeight="1100" math="0" shadow="0">
4
+ <root>
5
+ <mxCell id="0" />
6
+ <mxCell id="1" parent="0" />
7
+ <mxCell id="7CoqnSFOIrunSOh43Noi-86" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-1" target="7CoqnSFOIrunSOh43Noi-3">
8
+ <mxGeometry relative="1" as="geometry" />
9
+ </mxCell>
10
+ <mxCell id="7CoqnSFOIrunSOh43Noi-87" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-1" target="7CoqnSFOIrunSOh43Noi-4">
11
+ <mxGeometry relative="1" as="geometry" />
12
+ </mxCell>
13
+ <mxCell id="7CoqnSFOIrunSOh43Noi-88" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-1" target="7CoqnSFOIrunSOh43Noi-5">
14
+ <mxGeometry relative="1" as="geometry" />
15
+ </mxCell>
16
+ <mxCell id="7CoqnSFOIrunSOh43Noi-90" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-1" target="7CoqnSFOIrunSOh43Noi-6">
17
+ <mxGeometry relative="1" as="geometry" />
18
+ </mxCell>
19
+ <mxCell id="7CoqnSFOIrunSOh43Noi-1" value="&lt;span style=&quot;font-size: 20px;&quot;&gt;JEN-COMPOSER&lt;/span&gt;" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;fontColor=#333333;strokeColor=none;fontStyle=1;fontFamily=Helvetica;fontSize=20;strokeWidth=1;" vertex="1" parent="1">
20
+ <mxGeometry x="110" y="337" width="140" height="100" as="geometry" />
21
+ </mxCell>
22
+ <mxCell id="7CoqnSFOIrunSOh43Noi-91" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-2" target="7CoqnSFOIrunSOh43Noi-1">
23
+ <mxGeometry relative="1" as="geometry" />
24
+ </mxCell>
25
+ <mxCell id="7CoqnSFOIrunSOh43Noi-2" value="&lt;span style=&quot;font-size: 19px;&quot;&gt;Prompt(s):&lt;/span&gt;&lt;br style=&quot;font-size: 19px;&quot;&gt;&lt;span style=&quot;font-weight: normal; font-size: 19px;&quot;&gt;genre, style, instruments, tags, bpm, &lt;i style=&quot;font-size: 19px;&quot;&gt;etc.&lt;/i&gt;&lt;/span&gt;" style="text;html=1;strokeColor=none;fillColor=#f5f5f5;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=1;fontColor=#333333;fontStyle=1;fontFamily=Helvetica;fontSize=19;" vertex="1" parent="1">
26
+ <mxGeometry x="-140" y="335.75" width="210" height="100" as="geometry" />
27
+ </mxCell>
28
+ <mxCell id="7CoqnSFOIrunSOh43Noi-31" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-3" target="7CoqnSFOIrunSOh43Noi-7">
29
+ <mxGeometry relative="1" as="geometry" />
30
+ </mxCell>
31
+ <mxCell id="7CoqnSFOIrunSOh43Noi-3" value="" style="rounded=1;whiteSpace=wrap;html=1;align=left;fillColor=#dae8fc;strokeColor=none;flipV=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=1;" vertex="1" parent="1">
32
+ <mxGeometry x="320" y="240" width="280" height="70" as="geometry" />
33
+ </mxCell>
34
+ <mxCell id="7CoqnSFOIrunSOh43Noi-35" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-4" target="7CoqnSFOIrunSOh43Noi-7">
35
+ <mxGeometry relative="1" as="geometry" />
36
+ </mxCell>
37
+ <mxCell id="7CoqnSFOIrunSOh43Noi-4" value="" style="rounded=1;whiteSpace=wrap;html=1;align=left;fillColor=#d5e8d4;strokeColor=none;flipV=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=1;" vertex="1" parent="1">
38
+ <mxGeometry x="320" y="315" width="280" height="70" as="geometry" />
39
+ </mxCell>
40
+ <mxCell id="7CoqnSFOIrunSOh43Noi-33" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-5" target="7CoqnSFOIrunSOh43Noi-7">
41
+ <mxGeometry relative="1" as="geometry" />
42
+ </mxCell>
43
+ <mxCell id="7CoqnSFOIrunSOh43Noi-5" value="" style="rounded=1;whiteSpace=wrap;html=1;align=left;fillColor=#fff2cc;strokeColor=none;flipV=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=1;" vertex="1" parent="1">
44
+ <mxGeometry x="320" y="390" width="280" height="70" as="geometry" />
45
+ </mxCell>
46
+ <mxCell id="7CoqnSFOIrunSOh43Noi-34" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-6" target="7CoqnSFOIrunSOh43Noi-7">
47
+ <mxGeometry relative="1" as="geometry" />
48
+ </mxCell>
49
+ <mxCell id="7CoqnSFOIrunSOh43Noi-6" value="" style="rounded=1;whiteSpace=wrap;html=1;align=left;fillColor=#f8cecc;strokeColor=none;flipV=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=1;" vertex="1" parent="1">
50
+ <mxGeometry x="320" y="469.75" width="280" height="60.25" as="geometry" />
51
+ </mxCell>
52
+ <mxCell id="7CoqnSFOIrunSOh43Noi-30" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-7" target="7CoqnSFOIrunSOh43Noi-10">
53
+ <mxGeometry relative="1" as="geometry" />
54
+ </mxCell>
55
+ <mxCell id="7CoqnSFOIrunSOh43Noi-93" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;strokeWidth=2;strokeColor=#B3B3B3;fontSize=16;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-7" target="7CoqnSFOIrunSOh43Noi-92">
56
+ <mxGeometry relative="1" as="geometry" />
57
+ </mxCell>
58
+ <mxCell id="7CoqnSFOIrunSOh43Noi-7" value="Human&lt;br style=&quot;font-size: 20px;&quot;&gt;Feedback" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;fontColor=#333333;strokeColor=none;fontStyle=1;fontFamily=Helvetica;fontSize=20;strokeWidth=1;" vertex="1" parent="1">
59
+ <mxGeometry x="662" y="335.75" width="140" height="100" as="geometry" />
60
+ </mxCell>
61
+ <mxCell id="7CoqnSFOIrunSOh43Noi-29" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=0;exitY=0.5;exitDx=0;exitDy=0;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=2;strokeColor=#B3B3B3;" edge="1" parent="1" source="7CoqnSFOIrunSOh43Noi-10" target="7CoqnSFOIrunSOh43Noi-1">
62
+ <mxGeometry relative="1" as="geometry" />
63
+ </mxCell>
64
+ <mxCell id="7CoqnSFOIrunSOh43Noi-10" value="Interactive Selection &amp;amp; Editing" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;fontColor=#333333;strokeColor=none;fontStyle=1;fontFamily=Helvetica;fontSize=20;strokeWidth=1;" vertex="1" parent="1">
65
+ <mxGeometry x="317" y="128.75" width="280" height="80" as="geometry" />
66
+ </mxCell>
67
+ <mxCell id="7CoqnSFOIrunSOh43Noi-25" value="selected tracks (optional)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontStyle=0;fontFamily=Helvetica;fontSize=16;strokeWidth=1;" vertex="1" parent="1">
68
+ <mxGeometry x="190" y="155" width="130" height="30" as="geometry" />
69
+ </mxCell>
70
+ <mxCell id="7CoqnSFOIrunSOh43Noi-46" value="" style="group;flipV=1;fontStyle=1;fontFamily=Helvetica;fontSize=20;" vertex="1" connectable="0" parent="1">
71
+ <mxGeometry x="334" y="444" width="290" height="136" as="geometry" />
72
+ </mxCell>
73
+ <mxCell id="7CoqnSFOIrunSOh43Noi-39" value="" style="shape=image;verticalLabelPosition=bottom;labelBackgroundColor=default;verticalAlign=top;imageAspect=0;image=data:image/png,iVBORw0KGgoAAAANSUhEUgAAAgAAAAIABAMAAAAGVsnJAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAodEVYdHN2ZzpiYXNlLXVyaQBmaWxlOi8vL3RtcC9tYWdpY2stb0d6aDhuTljeVWFtAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA5LTMwVDAwOjEwOjQyLTAzOjAw9ZOTigAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxNy0wOS0zMFQwMDoxMDo0Mi0wMzowMITOKzYAAAAJcEhZcwAAAEgAAABIAEbJaz4AAAAnUExURUdwTDAwMC8vLy8vLy8vLy8vLy8vLy8vLzAwMC8vLzAwMDAwMDAwMLX3vW4AAAAMdFJOUwD9BHsbozJIxGDa7aTsnz4AAAzMSURBVHja7Z3NjxTHGcZbtb277LEVAhb0odQMEHMixrFkeQ9jCwSGHNaBRErYwwB2sAKHDXYcJ+KA7SSy4j2sg634g0uARDnMxQFFsb2XElp22ak/Kv1Rs/0x1dXVPT073VvPT0hejWd6up95uqr6fd+qsiwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEI8z3gJiNFXb//h79To67/M+QOTFbjBHZd/aO71d9Zdzp0NcwWYZdzHXTP1JiCv8lCAO6YagN5zQwUeGyvAYiTAUwgAASBAix9mKBxgsgDeOxdvGS3AV5wP1gwW4AhnjrthrgB00eGcuSepqQLY6+EF/NdUB5CZ8GmOPSGmOiB6nGVbxrYB+yIBts1pAwg1WwCaCWJLBSDag+MWOuD6e9RoB7zvpoPYUgGunfpMx0q0hQLM+oMe/lAtADnnv3Cp8FCnX2ujA5b9cZ+z3VUKML/uOG7yPdK29ItvvnuxfQLMB5fL3CWVAORC8JJbkOvwXcIHK60TQAx7HisdsOwWjw3tvuMLeVBXANKUBOqF8GzZM6IQoBM9HRyihUqy7a6uA7zTzXDATZ4d+Y8IQMLbxH+pF5x33g93JcqHnNQU4PlPvv+IZj1h0WYKMCMEWKGF3b97R0+Azqr/rn9lXvt112q0AK4vAPn844+kp0mXo+u+b2kJcN7ve51BL/nSwgf8fyuNF+CPfp/5rUqAx1oCkPDdqX7FXva/ZYM2WwBv3nUch58cW4C5kabX/1o33R030gHhgMDZzBeAaQkwDLpsZNuQlCRjPeFOSIBoQLA1tgMkg63+WGEY6ZNJ/QKIcCHvjekA8R08IcCx6MhOPR2Bff3qRBwQ3brS6o8qDkgIQBaiV/hKHc3gsU/4oRcmIYAYEclC5mM6YCiAW4sAwSlsd6ciAG+CADNhh/LQXAdcyXYxhjlA9FWa7ekedEDUo+oW6u0FBxAv9aGOm3uWe9cByU/ZQoAlYxxw9stbMgcsmeKAM+t88ILBDpjrM4c9R811wHmWCWub5oDl7HN0kxzgeXTSAkRPqMnnaLMcMPzgoNtEB8zdfbk3aQHEySViFM1xgP0G5z/s7roAzXHAb7jjyKvNmukAka7TFIBQT+0AYvfdMFnWHgd03qQ1OkB8THoezXTAmX+wIF2nKcDvPr7YVbcBJ3g28dxsBxz3O1T2UFeAc8zlj9QOEIc+2JY24HaQrvPvWC0B7L7js6R0QDR9TJrraKID7GDCZzCm1hLgR2GK6JmyF7g9JQEqOkAc6GlKgDDWIhMg/HWTmZ32O2AfS/+hdsCqmz
nd1jtg2GZvJAUg1069KxWArGf7w9Y7YFgslBSAvM2DWItEAFE1lPgOiQPutcsBEgEWXOa4W1QigJ0nAG/ALaDrAEILBKC/jWItZQSo4gCvNgFoOJZPOiAzYM8EkolaABK2dM7m5B1AaxKAnKb6DhArnSgEmBdlHHU4ILOwTlKAs19ezTiAeLSSAHO/+v6fvbQDXv/5L3MyZOQVzh9RpQDi0ANa0QF6t8CZdb69VskB1KJpAX7MuPMk5YAjbk5NV1DZ6498/6QUYHYYbKrogMQtYN891ZMLcGydOXyDpgR4/dRLVRww4x/JcU8mHEBWg5quJenngvubbVFaswByBwR22+pJBbghGtqEAGf8z/1FIxr6+Z9fSgpAI8GfkViA8AoCU+ToFsyOLBbA6Y7dBpwLQkMHpQIsi4Y2FiB4znIHK4UCfM1ddikpwKqo4IoFuOkErxyWtQLRuI/ftybogPgWCO3GehIB5lzxrbEA4XOWWzhv84gvqnOoGwuwU8EVtwHROUnz5LejNx+wanBAOKBQOSC6uOS6MjsC7DS0z+8IEBUMRvUdinK/e054zFiA+WEFV+wA8cSyZI30Q2KpF7a5Kw7Yx5KpEpkArBs7IDpr1tPJh8SDFF8AcYS1HQcMKyWE8kQiwLMaHGD//l21A4aHDsJuNOOA2aEAOw4Y/n5r4aHfVOdD4kFKQgBbJsDxn7zcrShAgQPm3uCDXyh7AeunsQD23XBthlEBFrK1iP6D5vwHfJAzVYvMZjzEe8dHHODFAiz0OXs0GQd8ylg4VSh/HJAMvPr94fbKqAA8dkBCgEXHb8J7SgF2fp0CBywyv2v5mbgRaJ0OWOD+kYKpQgoHxAIcDc7jiaYDgnK/vO5AIoDCAf4N48YNK63VAReYeHRUtAGxAMth3HGNajkgLPdztmpwAD0f/j8ncNO1v/77b3U6YHHY1eb3ArEAUa/vPiRaDlhWdAflHCAud/h0sN2r0QHLbuoPtQPEHL1nRMsBfTe/2q2kA8RZ3gnmibpOcMfW5QDSd1PfoXbAPpb6DrUDRsca1R2wKiY0Rb1nMLmvLgdkBVA6YBh43dJyQFkBVA4YCmCdYMm4/oQdMPod5/ME2B0HsP8IV7r3d8UBtxvoAHFy+63pOODEtB1AxMkdsKbTBpxvigOeWlPpBeAAOAAOMMoBDewFxMmx/dZ0xgHTd8BwIGSsA6oMhfdSG1DpYWgv9ALpx2HnoDUdB+Q/Dk/aAamACC8ZEBmUCYio4wG5ARFpRKhkQETlgCgkxqqFxNQRoXvpkJjaAbkhMWlESEjaGz8mKIKiYZaUEErqjAlmgqK8WlBU6oArijUlyjlgvLC4UyYsXpAXyAuLyxzg6YTFE4kRZV5g3v/zEZlIZuhTnkyMFDggLzEizwwtBt/e002NKR3gjZEaK8gMzV1OpsZYtdSYPDOkSI0NJ4tt6uYGafXk6KBMclTaC6STo0SdHdZLjqrS452a0+PjV4go0uNHRrLD4qwj6ys2bZsJCiS2ZAUStk6BxE39AonC+gBSVB+gUyCxULZAwvs6XOZTUiITO+Bmdkph3IaIQen9XXHAREpkrI60SMpvFfSLpLhWkVQNVWJFRVIsXSTFtIqkcsvkOjplcp5+mZwzfp1gUZmcs+TFbQA56nKmUSbnN6heplDS/88PSLJS9Ih/nvJCSTKhQsmcOsFObqHk3GihJK1YKGnZUalsolKU5JfKeq9wplcqe5jWUCtMaE6pLJGUypISO3amy+XDYulUtXj+BzsTKZbOqxbPzKOfTLF0UHVO09XiinJ5S7Ncnm1Oer5AfeXy5WaMREYrmjDBy02YqDJfoL4JE3XPGQqnzDxHJ++Aps4ZsnInTdU4X6Axs8ZKTZvz8qbNtXjmqHziJCk5cbLFM0fFgQ6MN3W2vW1AtN4DLzV5OnltLZw3ONb0+U4wfd5/8NhDDggXUAieDrQECDbd4N8S5foBIuawabXEAaTcEhpvSZfQSK4gMa0lNK
qvI1TDIip8dBEVWWanmavIlFxGx5Ito5N0AJnSMjqNWUmKTGkhpeasJLUrS2mNLqbWoNXk7MkvpiZZTq9Bq8nlRjYmuqBik9YT9LzJCzCypKZpa4oGi6ryrcTHjFtX+Mw6P5xcVte4dYWzCyubt7I0oamltc1zQKazGTrgpDEOyOZ8+jw3K743HZBzKl3LUAdMdYsNZvomK7wZ2+z4R9rqmusAay7YaIkY7ABqX7+qu91aKQGKt9pijdhqSyyzVb8AxZutVXWA2IFQt+8qHhtpv7OUAMXb7bGq2+15fZfXut3eZAQo3HCRN2TDxUkJcLxoy01NB0S7C6Yu9yhrw5abwaarqYRQxTZAuukqb/6mq7Vtu0taue1u0EWRvHu05MbL9ipv0cbLYvwTLsNT09bbRLL1NpnK1ts6m6+L7n9beXplN18np5ux/fxRJtouhQCi+3+iPGG77/it+EFLUwD/927E9VvzLJ5UkSeAFXb/qTZbQlAxEEwe0nRAY1h2eFgYohJgfp05btE2ruSLb7570WqfALOibFwlgPW2P0i5VGjZ069ZLRTAet/lD2iBANa1U59pPIGF/1onAD37Hi0UIDfLlzcgaJMANN0eSwUgHt2zAmTnJ8gdoK9n+wQofFY3TAAxyb9qaKL1AgyDFU+IqQ4QI//9pjrAvwInWnjeVAGsGe4PfDcsYx1geV/5TzVrBgtgkXcu3rKMFoCWmbu5BwXI3S/ZGAHG60QgAASAAEYLcM/NxpkN41WeXTPGMKJMg7tmrADh/r3OhmUuN7jjsg8NFsC+zPkDSk1WIFqfxVyIR4nRAuhnUQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAw/g/DeNj5Q8nTPIAAAAASUVORK5CYII=;fontFamily=Helvetica;fontSize=16;fontColor=default;fontStyle=1;strokeWidth=1;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-46">
74
+ <mxGeometry width="135.34246575342465" height="109.5" as="geometry" />
75
+ </mxCell>
76
+ <mxCell id="7CoqnSFOIrunSOh43Noi-45" value="" style="shape=image;verticalLabelPosition=bottom;labelBackgroundColor=default;verticalAlign=top;imageAspect=0;image=data:image/png,iVBORw0KGgoAAAANSUhEUgAAAgAAAAIABAMAAAAGVsnJAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAodEVYdHN2ZzpiYXNlLXVyaQBmaWxlOi8vL3RtcC9tYWdpY2stb0d6aDhuTljeVWFtAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA5LTMwVDAwOjEwOjQyLTAzOjAw9ZOTigAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxNy0wOS0zMFQwMDoxMDo0Mi0wMzowMITOKzYAAAAJcEhZcwAAAEgAAABIAEbJaz4AAAAnUExURUdwTDAwMC8vLy8vLy8vLy8vLy8vLy8vLzAwMC8vLzAwMDAwMDAwMLX3vW4AAAAMdFJOUwD9BHsbozJIxGDa7aTsnz4AAAzMSURBVHja7Z3NjxTHGcZbtb277LEVAhb0odQMEHMixrFkeQ9jCwSGHNaBRErYwwB2sAKHDXYcJ+KA7SSy4j2sg634g0uARDnMxQFFsb2XElp22ak/Kv1Rs/0x1dXVPT073VvPT0hejWd6up95uqr6fd+qsiwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEI8z3gJiNFXb//h79To67/M+QOTFbjBHZd/aO71d9Zdzp0NcwWYZdzHXTP1JiCv8lCAO6YagN5zQwUeGyvAYiTAUwgAASBAix9mKBxgsgDeOxdvGS3AV5wP1gwW4AhnjrthrgB00eGcuSepqQLY6+EF/NdUB5CZ8GmOPSGmOiB6nGVbxrYB+yIBts1pAwg1WwCaCWJLBSDag+MWOuD6e9RoB7zvpoPYUgGunfpMx0q0hQLM+oMe/lAtADnnv3Cp8FCnX2ujA5b9cZ+z3VUKML/uOG7yPdK29ItvvnuxfQLMB5fL3CWVAORC8JJbkOvwXcIHK60TQAx7HisdsOwWjw3tvuMLeVBXANKUBOqF8GzZM6IQoBM9HRyihUqy7a6uA7zTzXDATZ4d+Y8IQMLbxH+pF5x33g93JcqHnNQU4PlPvv+IZj1h0WYKMCMEWKGF3b97R0+Azqr/rn9lXvt112q0AK4vAPn844+kp0mXo+u+b2kJcN7ve51BL/nSwgf8fyuNF+CPfp/5rUqAx1oCkPDdqX7FXva/ZYM2WwBv3nUch58cW4C5kabX/1o33R030gHhgMDZzBeAaQkwDLpsZNuQlCRjPeFOSIBoQLA1tgMkg63+WGEY6ZNJ/QKIcCHvjekA8R08IcCx6MhOPR2Bff3qRBwQ3brS6o8qDkgIQBaiV/hKHc3gsU/4oRcmIYAYEclC5mM6YCiAW4sAwSlsd6ciAG+CADNhh/LQXAdcyXYxhjlA9FWa7ekedEDUo+oW6u0FBxAv9aGOm3uWe9cByU/ZQoAlYxxw9stbMgcsmeKAM+t88ILBDpjrM4c9R811wHmWCWub5oDl7HN0kxzgeXTSAkRPqMnnaLMcMPzgoNtEB8zdfbk3aQHEySViFM1xgP0G5z/s7roAzXHAb7jjyKvNmukAka7TFIBQT+0AYvfdMFnWHgd03qQ1OkB8THoezXTAmX+wIF2nKcDvPr7YVbcBJ3g28dxsBxz3O1T2UFeAc8zlj9QOEIc+2JY24HaQrvPvWC0B7L7js6R0QDR9TJrraKID7GDCZzCm1hLgR2GK6JmyF7g9JQEqOkAc6GlKgDDWIhMg/HWTmZ32O2AfS/+hdsCqmz
nd1jtg2GZvJAUg1069KxWArGf7w9Y7YFgslBSAvM2DWItEAFE1lPgOiQPutcsBEgEWXOa4W1QigJ0nAG/ALaDrAEILBKC/jWItZQSo4gCvNgFoOJZPOiAzYM8EkolaABK2dM7m5B1AaxKAnKb6DhArnSgEmBdlHHU4ILOwTlKAs19ezTiAeLSSAHO/+v6fvbQDXv/5L3MyZOQVzh9RpQDi0ANa0QF6t8CZdb69VskB1KJpAX7MuPMk5YAjbk5NV1DZ6498/6QUYHYYbKrogMQtYN891ZMLcGydOXyDpgR4/dRLVRww4x/JcU8mHEBWg5quJenngvubbVFaswByBwR22+pJBbghGtqEAGf8z/1FIxr6+Z9fSgpAI8GfkViA8AoCU+ToFsyOLBbA6Y7dBpwLQkMHpQIsi4Y2FiB4znIHK4UCfM1ddikpwKqo4IoFuOkErxyWtQLRuI/ftybogPgWCO3GehIB5lzxrbEA4XOWWzhv84gvqnOoGwuwU8EVtwHROUnz5LejNx+wanBAOKBQOSC6uOS6MjsC7DS0z+8IEBUMRvUdinK/e054zFiA+WEFV+wA8cSyZI30Q2KpF7a5Kw7Yx5KpEpkArBs7IDpr1tPJh8SDFF8AcYS1HQcMKyWE8kQiwLMaHGD//l21A4aHDsJuNOOA2aEAOw4Y/n5r4aHfVOdD4kFKQgBbJsDxn7zcrShAgQPm3uCDXyh7AeunsQD23XBthlEBFrK1iP6D5vwHfJAzVYvMZjzEe8dHHODFAiz0OXs0GQd8ylg4VSh/HJAMvPr94fbKqAA8dkBCgEXHb8J7SgF2fp0CBywyv2v5mbgRaJ0OWOD+kYKpQgoHxAIcDc7jiaYDgnK/vO5AIoDCAf4N48YNK63VAReYeHRUtAGxAMth3HGNajkgLPdztmpwAD0f/j8ncNO1v/77b3U6YHHY1eb3ArEAUa/vPiRaDlhWdAflHCAud/h0sN2r0QHLbuoPtQPEHL1nRMsBfTe/2q2kA8RZ3gnmibpOcMfW5QDSd1PfoXbAPpb6DrUDRsca1R2wKiY0Rb1nMLmvLgdkBVA6YBh43dJyQFkBVA4YCmCdYMm4/oQdMPod5/ME2B0HsP8IV7r3d8UBtxvoAHFy+63pOODEtB1AxMkdsKbTBpxvigOeWlPpBeAAOAAOMMoBDewFxMmx/dZ0xgHTd8BwIGSsA6oMhfdSG1DpYWgv9ALpx2HnoDUdB+Q/Dk/aAamACC8ZEBmUCYio4wG5ARFpRKhkQETlgCgkxqqFxNQRoXvpkJjaAbkhMWlESEjaGz8mKIKiYZaUEErqjAlmgqK8WlBU6oArijUlyjlgvLC4UyYsXpAXyAuLyxzg6YTFE4kRZV5g3v/zEZlIZuhTnkyMFDggLzEizwwtBt/e002NKR3gjZEaK8gMzV1OpsZYtdSYPDOkSI0NJ4tt6uYGafXk6KBMclTaC6STo0SdHdZLjqrS452a0+PjV4go0uNHRrLD4qwj6ys2bZsJCiS2ZAUStk6BxE39AonC+gBSVB+gUyCxULZAwvs6XOZTUiITO+Bmdkph3IaIQen9XXHAREpkrI60SMpvFfSLpLhWkVQNVWJFRVIsXSTFtIqkcsvkOjplcp5+mZwzfp1gUZmcs+TFbQA56nKmUSbnN6heplDS/88PSLJS9Ih/nvJCSTKhQsmcOsFObqHk3GihJK1YKGnZUalsolKU5JfKeq9wplcqe5jWUCtMaE6pLJGUypISO3amy+XDYulUtXj+BzsTKZbOqxbPzKOfTLF0UHVO09XiinJ5S7Ncnm1Oer5AfeXy5WaMREYrmjDBy02YqDJfoL4JE3XPGQqnzDxHJ++Aps4ZsnInTdU4X6Axs8ZKTZvz8qbNtXjmqHziJCk5cbLFM0fFgQ6MN3W2vW1AtN4DLzV5OnltLZw3ONb0+U4wfd5/8NhDDggXUAieDrQECDbd4N8S5foBIuawabXEAaTcEhpvSZfQSK4gMa0lNK
qvI1TDIip8dBEVWWanmavIlFxGx5Ito5N0AJnSMjqNWUmKTGkhpeasJLUrS2mNLqbWoNXk7MkvpiZZTq9Bq8nlRjYmuqBik9YT9LzJCzCypKZpa4oGi6ryrcTHjFtX+Mw6P5xcVte4dYWzCyubt7I0oamltc1zQKazGTrgpDEOyOZ8+jw3K743HZBzKl3LUAdMdYsNZvomK7wZ2+z4R9rqmusAay7YaIkY7ABqX7+qu91aKQGKt9pijdhqSyyzVb8AxZutVXWA2IFQt+8qHhtpv7OUAMXb7bGq2+15fZfXut3eZAQo3HCRN2TDxUkJcLxoy01NB0S7C6Yu9yhrw5abwaarqYRQxTZAuukqb/6mq7Vtu0taue1u0EWRvHu05MbL9ipv0cbLYvwTLsNT09bbRLL1NpnK1ts6m6+L7n9beXplN18np5ux/fxRJtouhQCi+3+iPGG77/it+EFLUwD/927E9VvzLJ5UkSeAFXb/qTZbQlAxEEwe0nRAY1h2eFgYohJgfp05btE2ruSLb7570WqfALOibFwlgPW2P0i5VGjZ069ZLRTAet/lD2iBANa1U59pPIGF/1onAD37Hi0UIDfLlzcgaJMANN0eSwUgHt2zAmTnJ8gdoK9n+wQofFY3TAAxyb9qaKL1AgyDFU+IqQ4QI//9pjrAvwInWnjeVAGsGe4PfDcsYx1geV/5TzVrBgtgkXcu3rKMFoCWmbu5BwXI3S/ZGAHG60QgAASAAEYLcM/NxpkN41WeXTPGMKJMg7tmrADh/r3OhmUuN7jjsg8NFsC+zPkDSk1WIFqfxVyIR4nRAuhnUQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAw/g/DeNj5Q8nTPIAAAAASUVORK5CYII=;fontFamily=Helvetica;fontSize=16;fontColor=default;fontStyle=1;strokeWidth=1;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-46">
+ <mxGeometry x="124.65753424657534" width="135.34246575342465" height="109.5" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-94" value="&lt;font style=&quot;font-size: 20px;&quot;&gt;Generated Multi-Track Music&lt;/font&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=16;strokeWidth=1;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-46">
+ <mxGeometry y="91" width="260" height="30" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-74" value="" style="group;direction=west;flipV=1;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=1;" vertex="1" connectable="0" parent="1">
+ <mxGeometry x="330" y="370" width="260" height="109" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-75" value="" style="shape=image;verticalLabelPosition=bottom;labelBackgroundColor=default;verticalAlign=top;imageAspect=0;image=data:image/png,iVBORw0KGgoAAAANSUhEUgAAAgAAAAIABAMAAAAGVsnJAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAodEVYdHN2ZzpiYXNlLXVyaQBmaWxlOi8vL3RtcC9tYWdpY2stb0d6aDhuTljeVWFtAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA5LTMwVDAwOjEwOjQyLTAzOjAw9ZOTigAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxNy0wOS0zMFQwMDoxMDo0Mi0wMzowMITOKzYAAAAJcEhZcwAAAEgAAABIAEbJaz4AAAAnUExURUdwTDAwMC8vLy8vLy8vLy8vLy8vLy8vLzAwMC8vLzAwMDAwMDAwMLX3vW4AAAAMdFJOUwD9BHsbozJIxGDa7aTsnz4AAAzMSURBVHja7Z3NjxTHGcZbtb277LEVAhb0odQMEHMixrFkeQ9jCwSGHNaBRErYwwB2sAKHDXYcJ+KA7SSy4j2sg634g0uARDnMxQFFsb2XElp22ak/Kv1Rs/0x1dXVPT073VvPT0hejWd6up95uqr6fd+qsiwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEI8z3gJiNFXb//h79To67/M+QOTFbjBHZd/aO71d9Zdzp0NcwWYZdzHXTP1JiCv8lCAO6YagN5zQwUeGyvAYiTAUwgAASBAix9mKBxgsgDeOxdvGS3AV5wP1gwW4AhnjrthrgB00eGcuSepqQLY6+EF/NdUB5CZ8GmOPSGmOiB6nGVbxrYB+yIBts1pAwg1WwCaCWJLBSDag+MWOuD6e9RoB7zvpoPYUgGunfpMx0q0hQLM+oMe/lAtADnnv3Cp8FCnX2ujA5b9cZ+z3VUKML/uOG7yPdK29ItvvnuxfQLMB5fL3CWVAORC8JJbkOvwXcIHK60TQAx7HisdsOwWjw3tvuMLeVBXANKUBOqF8GzZM6IQoBM9HRyihUqy7a6uA7zTzXDATZ4d+Y8IQMLbxH+pF5x33g93JcqHnNQU4PlPvv+IZj1h0WYKMCMEWKGF3b97R0+Azqr/rn9lXvt112q0AK4vAPn844+kp0mXo+u+b2kJcN7ve51BL/nSwgf8fyuNF+CPfp/5rUqAx1oCkPDdqX7FXva/ZYM2WwBv3nUch58cW4C5kabX/1o33R030gHhgMDZzBeAaQkwDLpsZNuQlCRjPeFOSIBoQLA1tgMkg63+WGEY6ZNJ/QKIcCHvjekA8R08IcCx6MhOPR2Bff3qRBwQ3brS6o8qDkgIQBaiV/hKHc3gsU/4oRcmIYAYEclC5mM6YCiAW4sAwSlsd6ciAG+CADNhh/LQXAdcyXYxhjlA9FWa7ekedEDUo+oW6u0FBxAv9aGOm3uWe9cByU/ZQoAlYxxw9stbMgcsmeKAM+t88ILBDpjrM4c9R811wHmWCWub5oDl7HN0kxzgeXTSAkRPqMnnaLMcMPzgoNtEB8zdfbk3aQHEySViFM1xgP0G5z/s7roAzXHAb7jjyKvNmukAka7TFIBQT+0AYvfdMFnWHgd03qQ1OkB8THoezXTAmX+wIF2nKcDvPr7YVbcBJ3g28dxsBxz3O1T2UFeAc8zlj9QOEIc+2JY24HaQrvPvWC0B7L7js6R0QDR9TJrraKID7GDCZzCm1hLgR2GK6JmyF7g9JQEqOkAc6GlKgDDWIhMg/HWTmZ32O2AfS/+hdsCqmz
nd1jtg2GZvJAUg1069KxWArGf7w9Y7YFgslBSAvM2DWItEAFE1lPgOiQPutcsBEgEWXOa4W1QigJ0nAG/ALaDrAEILBKC/jWItZQSo4gCvNgFoOJZPOiAzYM8EkolaABK2dM7m5B1AaxKAnKb6DhArnSgEmBdlHHU4ILOwTlKAs19ezTiAeLSSAHO/+v6fvbQDXv/5L3MyZOQVzh9RpQDi0ANa0QF6t8CZdb69VskB1KJpAX7MuPMk5YAjbk5NV1DZ6498/6QUYHYYbKrogMQtYN891ZMLcGydOXyDpgR4/dRLVRww4x/JcU8mHEBWg5quJenngvubbVFaswByBwR22+pJBbghGtqEAGf8z/1FIxr6+Z9fSgpAI8GfkViA8AoCU+ToFsyOLBbA6Y7dBpwLQkMHpQIsi4Y2FiB4znIHK4UCfM1ddikpwKqo4IoFuOkErxyWtQLRuI/ftybogPgWCO3GehIB5lzxrbEA4XOWWzhv84gvqnOoGwuwU8EVtwHROUnz5LejNx+wanBAOKBQOSC6uOS6MjsC7DS0z+8IEBUMRvUdinK/e054zFiA+WEFV+wA8cSyZI30Q2KpF7a5Kw7Yx5KpEpkArBs7IDpr1tPJh8SDFF8AcYS1HQcMKyWE8kQiwLMaHGD//l21A4aHDsJuNOOA2aEAOw4Y/n5r4aHfVOdD4kFKQgBbJsDxn7zcrShAgQPm3uCDXyh7AeunsQD23XBthlEBFrK1iP6D5vwHfJAzVYvMZjzEe8dHHODFAiz0OXs0GQd8ylg4VSh/HJAMvPr94fbKqAA8dkBCgEXHb8J7SgF2fp0CBywyv2v5mbgRaJ0OWOD+kYKpQgoHxAIcDc7jiaYDgnK/vO5AIoDCAf4N48YNK63VAReYeHRUtAGxAMth3HGNajkgLPdztmpwAD0f/j8ncNO1v/77b3U6YHHY1eb3ArEAUa/vPiRaDlhWdAflHCAud/h0sN2r0QHLbuoPtQPEHL1nRMsBfTe/2q2kA8RZ3gnmibpOcMfW5QDSd1PfoXbAPpb6DrUDRsca1R2wKiY0Rb1nMLmvLgdkBVA6YBh43dJyQFkBVA4YCmCdYMm4/oQdMPod5/ME2B0HsP8IV7r3d8UBtxvoAHFy+63pOODEtB1AxMkdsKbTBpxvigOeWlPpBeAAOAAOMMoBDewFxMmx/dZ0xgHTd8BwIGSsA6oMhfdSG1DpYWgv9ALpx2HnoDUdB+Q/Dk/aAamACC8ZEBmUCYio4wG5ARFpRKhkQETlgCgkxqqFxNQRoXvpkJjaAbkhMWlESEjaGz8mKIKiYZaUEErqjAlmgqK8WlBU6oArijUlyjlgvLC4UyYsXpAXyAuLyxzg6YTFE4kRZV5g3v/zEZlIZuhTnkyMFDggLzEizwwtBt/e002NKR3gjZEaK8gMzV1OpsZYtdSYPDOkSI0NJ4tt6uYGafXk6KBMclTaC6STo0SdHdZLjqrS452a0+PjV4go0uNHRrLD4qwj6ys2bZsJCiS2ZAUStk6BxE39AonC+gBSVB+gUyCxULZAwvs6XOZTUiITO+Bmdkph3IaIQen9XXHAREpkrI60SMpvFfSLpLhWkVQNVWJFRVIsXSTFtIqkcsvkOjplcp5+mZwzfp1gUZmcs+TFbQA56nKmUSbnN6heplDS/88PSLJS9Ih/nvJCSTKhQsmcOsFObqHk3GihJK1YKGnZUalsolKU5JfKeq9wplcqe5jWUCtMaE6pLJGUypISO3amy+XDYulUtXj+BzsTKZbOqxbPzKOfTLF0UHVO09XiinJ5S7Ncnm1Oer5AfeXy5WaMREYrmjDBy02YqDJfoL4JE3XPGQqnzDxHJ++Aps4ZsnInTdU4X6Axs8ZKTZvz8qbNtXjmqHziJCk5cbLFM0fFgQ6MN3W2vW1AtN4DLzV5OnltLZw3ONb0+U4wfd5/8NhDDggXUAieDrQECDbd4N8S5foBIuawabXEAaTcEhpvSZfQSK4gMa0lNK
qvI1TDIip8dBEVWWanmavIlFxGx5Ito5N0AJnSMjqNWUmKTGkhpeasJLUrS2mNLqbWoNXk7MkvpiZZTq9Bq8nlRjYmuqBik9YT9LzJCzCypKZpa4oGi6ryrcTHjFtX+Mw6P5xcVte4dYWzCyubt7I0oamltc1zQKazGTrgpDEOyOZ8+jw3K743HZBzKl3LUAdMdYsNZvomK7wZ2+z4R9rqmusAay7YaIkY7ABqX7+qu91aKQGKt9pijdhqSyyzVb8AxZutVXWA2IFQt+8qHhtpv7OUAMXb7bGq2+15fZfXut3eZAQo3HCRN2TDxUkJcLxoy01NB0S7C6Yu9yhrw5abwaarqYRQxTZAuukqb/6mq7Vtu0taue1u0EWRvHu05MbL9ipv0cbLYvwTLsNT09bbRLL1NpnK1ts6m6+L7n9beXplN18np5ux/fxRJtouhQCi+3+iPGG77/it+EFLUwD/927E9VvzLJ5UkSeAFXb/qTZbQlAxEEwe0nRAY1h2eFgYohJgfp05btE2ruSLb7570WqfALOibFwlgPW2P0i5VGjZ069ZLRTAet/lD2iBANa1U59pPIGF/1onAD37Hi0UIDfLlzcgaJMANN0eSwUgHt2zAmTnJ8gdoK9n+wQofFY3TAAxyb9qaKL1AgyDFU+IqQ4QI//9pjrAvwInWnjeVAGsGe4PfDcsYx1geV/5TzVrBgtgkXcu3rKMFoCWmbu5BwXI3S/ZGAHG60QgAASAAEYLcM/NxpkN41WeXTPGMKJMg7tmrADh/r3OhmUuN7jjsg8NFsC+zPkDSk1WIFqfxVyIR4nRAuhnUQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAw/g/DeNj5Q8nTPIAAAAASUVORK5CYII=;fontFamily=Helvetica;fontSize=16;fontColor=default;direction=west;fontStyle=1;strokeWidth=1;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-74">
+ <mxGeometry y="0.4954545454545455" width="135.34246575342465" height="108.50454545454546" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-76" value="" style="shape=image;verticalLabelPosition=bottom;labelBackgroundColor=default;verticalAlign=top;imageAspect=0;image=data:image/png,iVBORw0KGgoAAAANSUhEUgAAAgAAAAIABAMAAAAGVsnJAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAodEVYdHN2ZzpiYXNlLXVyaQBmaWxlOi8vL3RtcC9tYWdpY2stb0d6aDhuTljeVWFtAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA5LTMwVDAwOjEwOjQyLTAzOjAw9ZOTigAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxNy0wOS0zMFQwMDoxMDo0Mi0wMzowMITOKzYAAAAJcEhZcwAAAEgAAABIAEbJaz4AAAAnUExURUdwTDAwMC8vLy8vLy8vLy8vLy8vLy8vLzAwMC8vLzAwMDAwMDAwMLX3vW4AAAAMdFJOUwD9BHsbozJIxGDa7aTsnz4AAAzMSURBVHja7Z3NjxTHGcZbtb277LEVAhb0odQMEHMixrFkeQ9jCwSGHNaBRErYwwB2sAKHDXYcJ+KA7SSy4j2sg634g0uARDnMxQFFsb2XElp22ak/Kv1Rs/0x1dXVPT073VvPT0hejWd6up95uqr6fd+qsiwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEI8z3gJiNFXb//h79To67/M+QOTFbjBHZd/aO71d9Zdzp0NcwWYZdzHXTP1JiCv8lCAO6YagN5zQwUeGyvAYiTAUwgAASBAix9mKBxgsgDeOxdvGS3AV5wP1gwW4AhnjrthrgB00eGcuSepqQLY6+EF/NdUB5CZ8GmOPSGmOiB6nGVbxrYB+yIBts1pAwg1WwCaCWJLBSDag+MWOuD6e9RoB7zvpoPYUgGunfpMx0q0hQLM+oMe/lAtADnnv3Cp8FCnX2ujA5b9cZ+z3VUKML/uOG7yPdK29ItvvnuxfQLMB5fL3CWVAORC8JJbkOvwXcIHK60TQAx7HisdsOwWjw3tvuMLeVBXANKUBOqF8GzZM6IQoBM9HRyihUqy7a6uA7zTzXDATZ4d+Y8IQMLbxH+pF5x33g93JcqHnNQU4PlPvv+IZj1h0WYKMCMEWKGF3b97R0+Azqr/rn9lXvt112q0AK4vAPn844+kp0mXo+u+b2kJcN7ve51BL/nSwgf8fyuNF+CPfp/5rUqAx1oCkPDdqX7FXva/ZYM2WwBv3nUch58cW4C5kabX/1o33R030gHhgMDZzBeAaQkwDLpsZNuQlCRjPeFOSIBoQLA1tgMkg63+WGEY6ZNJ/QKIcCHvjekA8R08IcCx6MhOPR2Bff3qRBwQ3brS6o8qDkgIQBaiV/hKHc3gsU/4oRcmIYAYEclC5mM6YCiAW4sAwSlsd6ciAG+CADNhh/LQXAdcyXYxhjlA9FWa7ekedEDUo+oW6u0FBxAv9aGOm3uWe9cByU/ZQoAlYxxw9stbMgcsmeKAM+t88ILBDpjrM4c9R811wHmWCWub5oDl7HN0kxzgeXTSAkRPqMnnaLMcMPzgoNtEB8zdfbk3aQHEySViFM1xgP0G5z/s7roAzXHAb7jjyKvNmukAka7TFIBQT+0AYvfdMFnWHgd03qQ1OkB8THoezXTAmX+wIF2nKcDvPr7YVbcBJ3g28dxsBxz3O1T2UFeAc8zlj9QOEIc+2JY24HaQrvPvWC0B7L7js6R0QDR9TJrraKID7GDCZzCm1hLgR2GK6JmyF7g9JQEqOkAc6GlKgDDWIhMg/HWTmZ32O2AfS/+hdsCqmz
nd1jtg2GZvJAUg1069KxWArGf7w9Y7YFgslBSAvM2DWItEAFE1lPgOiQPutcsBEgEWXOa4W1QigJ0nAG/ALaDrAEILBKC/jWItZQSo4gCvNgFoOJZPOiAzYM8EkolaABK2dM7m5B1AaxKAnKb6DhArnSgEmBdlHHU4ILOwTlKAs19ezTiAeLSSAHO/+v6fvbQDXv/5L3MyZOQVzh9RpQDi0ANa0QF6t8CZdb69VskB1KJpAX7MuPMk5YAjbk5NV1DZ6498/6QUYHYYbKrogMQtYN891ZMLcGydOXyDpgR4/dRLVRww4x/JcU8mHEBWg5quJenngvubbVFaswByBwR22+pJBbghGtqEAGf8z/1FIxr6+Z9fSgpAI8GfkViA8AoCU+ToFsyOLBbA6Y7dBpwLQkMHpQIsi4Y2FiB4znIHK4UCfM1ddikpwKqo4IoFuOkErxyWtQLRuI/ftybogPgWCO3GehIB5lzxrbEA4XOWWzhv84gvqnOoGwuwU8EVtwHROUnz5LejNx+wanBAOKBQOSC6uOS6MjsC7DS0z+8IEBUMRvUdinK/e054zFiA+WEFV+wA8cSyZI30Q2KpF7a5Kw7Yx5KpEpkArBs7IDpr1tPJh8SDFF8AcYS1HQcMKyWE8kQiwLMaHGD//l21A4aHDsJuNOOA2aEAOw4Y/n5r4aHfVOdD4kFKQgBbJsDxn7zcrShAgQPm3uCDXyh7AeunsQD23XBthlEBFrK1iP6D5vwHfJAzVYvMZjzEe8dHHODFAiz0OXs0GQd8ylg4VSh/HJAMvPr94fbKqAA8dkBCgEXHb8J7SgF2fp0CBywyv2v5mbgRaJ0OWOD+kYKpQgoHxAIcDc7jiaYDgnK/vO5AIoDCAf4N48YNK63VAReYeHRUtAGxAMth3HGNajkgLPdztmpwAD0f/j8ncNO1v/77b3U6YHHY1eb3ArEAUa/vPiRaDlhWdAflHCAud/h0sN2r0QHLbuoPtQPEHL1nRMsBfTe/2q2kA8RZ3gnmibpOcMfW5QDSd1PfoXbAPpb6DrUDRsca1R2wKiY0Rb1nMLmvLgdkBVA6YBh43dJyQFkBVA4YCmCdYMm4/oQdMPod5/ME2B0HsP8IV7r3d8UBtxvoAHFy+63pOODEtB1AxMkdsKbTBpxvigOeWlPpBeAAOAAOMMoBDewFxMmx/dZ0xgHTd8BwIGSsA6oMhfdSG1DpYWgv9ALpx2HnoDUdB+Q/Dk/aAamACC8ZEBmUCYio4wG5ARFpRKhkQETlgCgkxqqFxNQRoXvpkJjaAbkhMWlESEjaGz8mKIKiYZaUEErqjAlmgqK8WlBU6oArijUlyjlgvLC4UyYsXpAXyAuLyxzg6YTFE4kRZV5g3v/zEZlIZuhTnkyMFDggLzEizwwtBt/e002NKR3gjZEaK8gMzV1OpsZYtdSYPDOkSI0NJ4tt6uYGafXk6KBMclTaC6STo0SdHdZLjqrS452a0+PjV4go0uNHRrLD4qwj6ys2bZsJCiS2ZAUStk6BxE39AonC+gBSVB+gUyCxULZAwvs6XOZTUiITO+Bmdkph3IaIQen9XXHAREpkrI60SMpvFfSLpLhWkVQNVWJFRVIsXSTFtIqkcsvkOjplcp5+mZwzfp1gUZmcs+TFbQA56nKmUSbnN6heplDS/88PSLJS9Ih/nvJCSTKhQsmcOsFObqHk3GihJK1YKGnZUalsolKU5JfKeq9wplcqe5jWUCtMaE6pLJGUypISO3amy+XDYulUtXj+BzsTKZbOqxbPzKOfTLF0UHVO09XiinJ5S7Ncnm1Oer5AfeXy5WaMREYrmjDBy02YqDJfoL4JE3XPGQqnzDxHJ++Aps4ZsnInTdU4X6Axs8ZKTZvz8qbNtXjmqHziJCk5cbLFM0fFgQ6MN3W2vW1AtN4DLzV5OnltLZw3ONb0+U4wfd5/8NhDDggXUAieDrQECDbd4N8S5foBIuawabXEAaTcEhpvSZfQSK4gMa0lNK
qvI1TDIip8dBEVWWanmavIlFxGx5Ito5N0AJnSMjqNWUmKTGkhpeasJLUrS2mNLqbWoNXk7MkvpiZZTq9Bq8nlRjYmuqBik9YT9LzJCzCypKZpa4oGi6ryrcTHjFtX+Mw6P5xcVte4dYWzCyubt7I0oamltc1zQKazGTrgpDEOyOZ8+jw3K743HZBzKl3LUAdMdYsNZvomK7wZ2+z4R9rqmusAay7YaIkY7ABqX7+qu91aKQGKt9pijdhqSyyzVb8AxZutVXWA2IFQt+8qHhtpv7OUAMXb7bGq2+15fZfXut3eZAQo3HCRN2TDxUkJcLxoy01NB0S7C6Yu9yhrw5abwaarqYRQxTZAuukqb/6mq7Vtu0taue1u0EWRvHu05MbL9ipv0cbLYvwTLsNT09bbRLL1NpnK1ts6m6+L7n9beXplN18np5ux/fxRJtouhQCi+3+iPGG77/it+EFLUwD/927E9VvzLJ5UkSeAFXb/qTZbQlAxEEwe0nRAY1h2eFgYohJgfp05btE2ruSLb7570WqfALOibFwlgPW2P0i5VGjZ069ZLRTAet/lD2iBANa1U59pPIGF/1onAD37Hi0UIDfLlzcgaJMANN0eSwUgHt2zAmTnJ8gdoK9n+wQofFY3TAAxyb9qaKL1AgyDFU+IqQ4QI//9pjrAvwInWnjeVAGsGe4PfDcsYx1geV/5TzVrBgtgkXcu3rKMFoCWmbu5BwXI3S/ZGAHG60QgAASAAEYLcM/NxpkN41WeXTPGMKJMg7tmrADh/r3OhmUuN7jjsg8NFsC+zPkDSk1WIFqfxVyIR4nRAuhnUQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAw/g/DeNj5Q8nTPIAAAAASUVORK5CYII=;fontFamily=Helvetica;fontSize=16;fontColor=default;direction=west;fontStyle=1;strokeWidth=1;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-74">
+ <mxGeometry x="124.65753424657534" y="0.4954545454545455" width="135.34246575342465" height="108.50454545454546" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-92" value="Mix&amp;amp;Arrange Output" style="text;html=1;strokeColor=none;fillColor=#f5f5f5;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=1;fontColor=#333333;fontStyle=1;fontFamily=Helvetica;fontSize=20;" vertex="1" parent="1">
+ <mxGeometry x="830" y="335.75" width="210" height="100" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-96" value="" style="group;direction=west;flipV=1;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=1;" vertex="1" connectable="0" parent="1">
+ <mxGeometry x="330" y="300" width="260" height="130" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-97" value="" style="shape=image;verticalLabelPosition=bottom;labelBackgroundColor=default;verticalAlign=top;imageAspect=0;image=data:image/png,iVBORw0KGgoAAAANSUhEUgAAAgAAAAIABAMAAAAGVsnJAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAodEVYdHN2ZzpiYXNlLXVyaQBmaWxlOi8vL3RtcC9tYWdpY2stb0d6aDhuTljeVWFtAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA5LTMwVDAwOjEwOjQyLTAzOjAw9ZOTigAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxNy0wOS0zMFQwMDoxMDo0Mi0wMzowMITOKzYAAAAJcEhZcwAAAEgAAABIAEbJaz4AAAAnUExURUdwTDAwMC8vLy8vLy8vLy8vLy8vLy8vLzAwMC8vLzAwMDAwMDAwMLX3vW4AAAAMdFJOUwD9BHsbozJIxGDa7aTsnz4AAAzMSURBVHja7Z3NjxTHGcZbtb277LEVAhb0odQMEHMixrFkeQ9jCwSGHNaBRErYwwB2sAKHDXYcJ+KA7SSy4j2sg634g0uARDnMxQFFsb2XElp22ak/Kv1Rs/0x1dXVPT073VvPT0hejWd6up95uqr6fd+qsiwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEI8z3gJiNFXb//h79To67/M+QOTFbjBHZd/aO71d9Zdzp0NcwWYZdzHXTP1JiCv8lCAO6YagN5zQwUeGyvAYiTAUwgAASBAix9mKBxgsgDeOxdvGS3AV5wP1gwW4AhnjrthrgB00eGcuSepqQLY6+EF/NdUB5CZ8GmOPSGmOiB6nGVbxrYB+yIBts1pAwg1WwCaCWJLBSDag+MWOuD6e9RoB7zvpoPYUgGunfpMx0q0hQLM+oMe/lAtADnnv3Cp8FCnX2ujA5b9cZ+z3VUKML/uOG7yPdK29ItvvnuxfQLMB5fL3CWVAORC8JJbkOvwXcIHK60TQAx7HisdsOwWjw3tvuMLeVBXANKUBOqF8GzZM6IQoBM9HRyihUqy7a6uA7zTzXDATZ4d+Y8IQMLbxH+pF5x33g93JcqHnNQU4PlPvv+IZj1h0WYKMCMEWKGF3b97R0+Azqr/rn9lXvt112q0AK4vAPn844+kp0mXo+u+b2kJcN7ve51BL/nSwgf8fyuNF+CPfp/5rUqAx1oCkPDdqX7FXva/ZYM2WwBv3nUch58cW4C5kabX/1o33R030gHhgMDZzBeAaQkwDLpsZNuQlCRjPeFOSIBoQLA1tgMkg63+WGEY6ZNJ/QKIcCHvjekA8R08IcCx6MhOPR2Bff3qRBwQ3brS6o8qDkgIQBaiV/hKHc3gsU/4oRcmIYAYEclC5mM6YCiAW4sAwSlsd6ciAG+CADNhh/LQXAdcyXYxhjlA9FWa7ekedEDUo+oW6u0FBxAv9aGOm3uWe9cByU/ZQoAlYxxw9stbMgcsmeKAM+t88ILBDpjrM4c9R811wHmWCWub5oDl7HN0kxzgeXTSAkRPqMnnaLMcMPzgoNtEB8zdfbk3aQHEySViFM1xgP0G5z/s7roAzXHAb7jjyKvNmukAka7TFIBQT+0AYvfdMFnWHgd03qQ1OkB8THoezXTAmX+wIF2nKcDvPr7YVbcBJ3g28dxsBxz3O1T2UFeAc8zlj9QOEIc+2JY24HaQrvPvWC0B7L7js6R0QDR9TJrraKID7GDCZzCm1hLgR2GK6JmyF7g9JQEqOkAc6GlKgDDWIhMg/HWTmZ32O2AfS/+hdsCqmz
nd1jtg2GZvJAUg1069KxWArGf7w9Y7YFgslBSAvM2DWItEAFE1lPgOiQPutcsBEgEWXOa4W1QigJ0nAG/ALaDrAEILBKC/jWItZQSo4gCvNgFoOJZPOiAzYM8EkolaABK2dM7m5B1AaxKAnKb6DhArnSgEmBdlHHU4ILOwTlKAs19ezTiAeLSSAHO/+v6fvbQDXv/5L3MyZOQVzh9RpQDi0ANa0QF6t8CZdb69VskB1KJpAX7MuPMk5YAjbk5NV1DZ6498/6QUYHYYbKrogMQtYN891ZMLcGydOXyDpgR4/dRLVRww4x/JcU8mHEBWg5quJenngvubbVFaswByBwR22+pJBbghGtqEAGf8z/1FIxr6+Z9fSgpAI8GfkViA8AoCU+ToFsyOLBbA6Y7dBpwLQkMHpQIsi4Y2FiB4znIHK4UCfM1ddikpwKqo4IoFuOkErxyWtQLRuI/ftybogPgWCO3GehIB5lzxrbEA4XOWWzhv84gvqnOoGwuwU8EVtwHROUnz5LejNx+wanBAOKBQOSC6uOS6MjsC7DS0z+8IEBUMRvUdinK/e054zFiA+WEFV+wA8cSyZI30Q2KpF7a5Kw7Yx5KpEpkArBs7IDpr1tPJh8SDFF8AcYS1HQcMKyWE8kQiwLMaHGD//l21A4aHDsJuNOOA2aEAOw4Y/n5r4aHfVOdD4kFKQgBbJsDxn7zcrShAgQPm3uCDXyh7AeunsQD23XBthlEBFrK1iP6D5vwHfJAzVYvMZjzEe8dHHODFAiz0OXs0GQd8ylg4VSh/HJAMvPr94fbKqAA8dkBCgEXHb8J7SgF2fp0CBywyv2v5mbgRaJ0OWOD+kYKpQgoHxAIcDc7jiaYDgnK/vO5AIoDCAf4N48YNK63VAReYeHRUtAGxAMth3HGNajkgLPdztmpwAD0f/j8ncNO1v/77b3U6YHHY1eb3ArEAUa/vPiRaDlhWdAflHCAud/h0sN2r0QHLbuoPtQPEHL1nRMsBfTe/2q2kA8RZ3gnmibpOcMfW5QDSd1PfoXbAPpb6DrUDRsca1R2wKiY0Rb1nMLmvLgdkBVA6YBh43dJyQFkBVA4YCmCdYMm4/oQdMPod5/ME2B0HsP8IV7r3d8UBtxvoAHFy+63pOODEtB1AxMkdsKbTBpxvigOeWlPpBeAAOAAOMMoBDewFxMmx/dZ0xgHTd8BwIGSsA6oMhfdSG1DpYWgv9ALpx2HnoDUdB+Q/Dk/aAamACC8ZEBmUCYio4wG5ARFpRKhkQETlgCgkxqqFxNQRoXvpkJjaAbkhMWlESEjaGz8mKIKiYZaUEErqjAlmgqK8WlBU6oArijUlyjlgvLC4UyYsXpAXyAuLyxzg6YTFE4kRZV5g3v/zEZlIZuhTnkyMFDggLzEizwwtBt/e002NKR3gjZEaK8gMzV1OpsZYtdSYPDOkSI0NJ4tt6uYGafXk6KBMclTaC6STo0SdHdZLjqrS452a0+PjV4go0uNHRrLD4qwj6ys2bZsJCiS2ZAUStk6BxE39AonC+gBSVB+gUyCxULZAwvs6XOZTUiITO+Bmdkph3IaIQen9XXHAREpkrI60SMpvFfSLpLhWkVQNVWJFRVIsXSTFtIqkcsvkOjplcp5+mZwzfp1gUZmcs+TFbQA56nKmUSbnN6heplDS/88PSLJS9Ih/nvJCSTKhQsmcOsFObqHk3GihJK1YKGnZUalsolKU5JfKeq9wplcqe5jWUCtMaE6pLJGUypISO3amy+XDYulUtXj+BzsTKZbOqxbPzKOfTLF0UHVO09XiinJ5S7Ncnm1Oer5AfeXy5WaMREYrmjDBy02YqDJfoL4JE3XPGQqnzDxHJ++Aps4ZsnInTdU4X6Axs8ZKTZvz8qbNtXjmqHziJCk5cbLFM0fFgQ6MN3W2vW1AtN4DLzV5OnltLZw3ONb0+U4wfd5/8NhDDggXUAieDrQECDbd4N8S5foBIuawabXEAaTcEhpvSZfQSK4gMa0lNK
qvI1TDIip8dBEVWWanmavIlFxGx5Ito5N0AJnSMjqNWUmKTGkhpeasJLUrS2mNLqbWoNXk7MkvpiZZTq9Bq8nlRjYmuqBik9YT9LzJCzCypKZpa4oGi6ryrcTHjFtX+Mw6P5xcVte4dYWzCyubt7I0oamltc1zQKazGTrgpDEOyOZ8+jw3K743HZBzKl3LUAdMdYsNZvomK7wZ2+z4R9rqmusAay7YaIkY7ABqX7+qu91aKQGKt9pijdhqSyyzVb8AxZutVXWA2IFQt+8qHhtpv7OUAMXb7bGq2+15fZfXut3eZAQo3HCRN2TDxUkJcLxoy01NB0S7C6Yu9yhrw5abwaarqYRQxTZAuukqb/6mq7Vtu0taue1u0EWRvHu05MbL9ipv0cbLYvwTLsNT09bbRLL1NpnK1ts6m6+L7n9beXplN18np5ux/fxRJtouhQCi+3+iPGG77/it+EFLUwD/927E9VvzLJ5UkSeAFXb/qTZbQlAxEEwe0nRAY1h2eFgYohJgfp05btE2ruSLb7570WqfALOibFwlgPW2P0i5VGjZ069ZLRTAet/lD2iBANa1U59pPIGF/1onAD37Hi0UIDfLlzcgaJMANN0eSwUgHt2zAmTnJ8gdoK9n+wQofFY3TAAxyb9qaKL1AgyDFU+IqQ4QI//9pjrAvwInWnjeVAGsGe4PfDcsYx1geV/5TzVrBgtgkXcu3rKMFoCWmbu5BwXI3S/ZGAHG60QgAASAAEYLcM/NxpkN41WeXTPGMKJMg7tmrADh/r3OhmUuN7jjsg8NFsC+zPkDSk1WIFqfxVyIR4nRAuhnUQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAw/g/DeNj5Q8nTPIAAAAASUVORK5CYII=;fontFamily=Helvetica;fontSize=16;fontColor=default;direction=west;fontStyle=1;strokeWidth=1;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-96">
+ <mxGeometry y="0.4954545454545455" width="135.34246575342465" height="108.50454545454546" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-98" value="" style="shape=image;verticalLabelPosition=bottom;labelBackgroundColor=default;verticalAlign=top;imageAspect=0;image=data:image/png,iVBORw0KGgoAAAANSUhEUgAAAgAAAAIABAMAAAAGVsnJAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAodEVYdHN2ZzpiYXNlLXVyaQBmaWxlOi8vL3RtcC9tYWdpY2stb0d6aDhuTljeVWFtAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA5LTMwVDAwOjEwOjQyLTAzOjAw9ZOTigAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxNy0wOS0zMFQwMDoxMDo0Mi0wMzowMITOKzYAAAAJcEhZcwAAAEgAAABIAEbJaz4AAAAnUExURUdwTDAwMC8vLy8vLy8vLy8vLy8vLy8vLzAwMC8vLzAwMDAwMDAwMLX3vW4AAAAMdFJOUwD9BHsbozJIxGDa7aTsnz4AAAzMSURBVHja7Z3NjxTHGcZbtb277LEVAhb0odQMEHMixrFkeQ9jCwSGHNaBRErYwwB2sAKHDXYcJ+KA7SSy4j2sg634g0uARDnMxQFFsb2XElp22ak/Kv1Rs/0x1dXVPT073VvPT0hejWd6up95uqr6fd+qsiwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEI8z3gJiNFXb//h79To67/M+QOTFbjBHZd/aO71d9Zdzp0NcwWYZdzHXTP1JiCv8lCAO6YagN5zQwUeGyvAYiTAUwgAASBAix9mKBxgsgDeOxdvGS3AV5wP1gwW4AhnjrthrgB00eGcuSepqQLY6+EF/NdUB5CZ8GmOPSGmOiB6nGVbxrYB+yIBts1pAwg1WwCaCWJLBSDag+MWOuD6e9RoB7zvpoPYUgGunfpMx0q0hQLM+oMe/lAtADnnv3Cp8FCnX2ujA5b9cZ+z3VUKML/uOG7yPdK29ItvvnuxfQLMB5fL3CWVAORC8JJbkOvwXcIHK60TQAx7HisdsOwWjw3tvuMLeVBXANKUBOqF8GzZM6IQoBM9HRyihUqy7a6uA7zTzXDATZ4d+Y8IQMLbxH+pF5x33g93JcqHnNQU4PlPvv+IZj1h0WYKMCMEWKGF3b97R0+Azqr/rn9lXvt112q0AK4vAPn844+kp0mXo+u+b2kJcN7ve51BL/nSwgf8fyuNF+CPfp/5rUqAx1oCkPDdqX7FXva/ZYM2WwBv3nUch58cW4C5kabX/1o33R030gHhgMDZzBeAaQkwDLpsZNuQlCRjPeFOSIBoQLA1tgMkg63+WGEY6ZNJ/QKIcCHvjekA8R08IcCx6MhOPR2Bff3qRBwQ3brS6o8qDkgIQBaiV/hKHc3gsU/4oRcmIYAYEclC5mM6YCiAW4sAwSlsd6ciAG+CADNhh/LQXAdcyXYxhjlA9FWa7ekedEDUo+oW6u0FBxAv9aGOm3uWe9cByU/ZQoAlYxxw9stbMgcsmeKAM+t88ILBDpjrM4c9R811wHmWCWub5oDl7HN0kxzgeXTSAkRPqMnnaLMcMPzgoNtEB8zdfbk3aQHEySViFM1xgP0G5z/s7roAzXHAb7jjyKvNmukAka7TFIBQT+0AYvfdMFnWHgd03qQ1OkB8THoezXTAmX+wIF2nKcDvPr7YVbcBJ3g28dxsBxz3O1T2UFeAc8zlj9QOEIc+2JY24HaQrvPvWC0B7L7js6R0QDR9TJrraKID7GDCZzCm1hLgR2GK6JmyF7g9JQEqOkAc6GlKgDDWIhMg/HWTmZ32O2AfS/+hdsCqmz
nd1jtg2GZvJAUg1069KxWArGf7w9Y7YFgslBSAvM2DWItEAFE1lPgOiQPutcsBEgEWXOa4W1QigJ0nAG/ALaDrAEILBKC/jWItZQSo4gCvNgFoOJZPOiAzYM8EkolaABK2dM7m5B1AaxKAnKb6DhArnSgEmBdlHHU4ILOwTlKAs19ezTiAeLSSAHO/+v6fvbQDXv/5L3MyZOQVzh9RpQDi0ANa0QF6t8CZdb69VskB1KJpAX7MuPMk5YAjbk5NV1DZ6498/6QUYHYYbKrogMQtYN891ZMLcGydOXyDpgR4/dRLVRww4x/JcU8mHEBWg5quJenngvubbVFaswByBwR22+pJBbghGtqEAGf8z/1FIxr6+Z9fSgpAI8GfkViA8AoCU+ToFsyOLBbA6Y7dBpwLQkMHpQIsi4Y2FiB4znIHK4UCfM1ddikpwKqo4IoFuOkErxyWtQLRuI/ftybogPgWCO3GehIB5lzxrbEA4XOWWzhv84gvqnOoGwuwU8EVtwHROUnz5LejNx+wanBAOKBQOSC6uOS6MjsC7DS0z+8IEBUMRvUdinK/e054zFiA+WEFV+wA8cSyZI30Q2KpF7a5Kw7Yx5KpEpkArBs7IDpr1tPJh8SDFF8AcYS1HQcMKyWE8kQiwLMaHGD//l21A4aHDsJuNOOA2aEAOw4Y/n5r4aHfVOdD4kFKQgBbJsDxn7zcrShAgQPm3uCDXyh7AeunsQD23XBthlEBFrK1iP6D5vwHfJAzVYvMZjzEe8dHHODFAiz0OXs0GQd8ylg4VSh/HJAMvPr94fbKqAA8dkBCgEXHb8J7SgF2fp0CBywyv2v5mbgRaJ0OWOD+kYKpQgoHxAIcDc7jiaYDgnK/vO5AIoDCAf4N48YNK63VAReYeHRUtAGxAMth3HGNajkgLPdztmpwAD0f/j8ncNO1v/77b3U6YHHY1eb3ArEAUa/vPiRaDlhWdAflHCAud/h0sN2r0QHLbuoPtQPEHL1nRMsBfTe/2q2kA8RZ3gnmibpOcMfW5QDSd1PfoXbAPpb6DrUDRsca1R2wKiY0Rb1nMLmvLgdkBVA6YBh43dJyQFkBVA4YCmCdYMm4/oQdMPod5/ME2B0HsP8IV7r3d8UBtxvoAHFy+63pOODEtB1AxMkdsKbTBpxvigOeWlPpBeAAOAAOMMoBDewFxMmx/dZ0xgHTd8BwIGSsA6oMhfdSG1DpYWgv9ALpx2HnoDUdB+Q/Dk/aAamACC8ZEBmUCYio4wG5ARFpRKhkQETlgCgkxqqFxNQRoXvpkJjaAbkhMWlESEjaGz8mKIKiYZaUEErqjAlmgqK8WlBU6oArijUlyjlgvLC4UyYsXpAXyAuLyxzg6YTFE4kRZV5g3v/zEZlIZuhTnkyMFDggLzEizwwtBt/e002NKR3gjZEaK8gMzV1OpsZYtdSYPDOkSI0NJ4tt6uYGafXk6KBMclTaC6STo0SdHdZLjqrS452a0+PjV4go0uNHRrLD4qwj6ys2bZsJCiS2ZAUStk6BxE39AonC+gBSVB+gUyCxULZAwvs6XOZTUiITO+Bmdkph3IaIQen9XXHAREpkrI60SMpvFfSLpLhWkVQNVWJFRVIsXSTFtIqkcsvkOjplcp5+mZwzfp1gUZmcs+TFbQA56nKmUSbnN6heplDS/88PSLJS9Ih/nvJCSTKhQsmcOsFObqHk3GihJK1YKGnZUalsolKU5JfKeq9wplcqe5jWUCtMaE6pLJGUypISO3amy+XDYulUtXj+BzsTKZbOqxbPzKOfTLF0UHVO09XiinJ5S7Ncnm1Oer5AfeXy5WaMREYrmjDBy02YqDJfoL4JE3XPGQqnzDxHJ++Aps4ZsnInTdU4X6Axs8ZKTZvz8qbNtXjmqHziJCk5cbLFM0fFgQ6MN3W2vW1AtN4DLzV5OnltLZw3ONb0+U4wfd5/8NhDDggXUAieDrQECDbd4N8S5foBIuawabXEAaTcEhpvSZfQSK4gMa0lNK
qvI1TDIip8dBEVWWanmavIlFxGx5Ito5N0AJnSMjqNWUmKTGkhpeasJLUrS2mNLqbWoNXk7MkvpiZZTq9Bq8nlRjYmuqBik9YT9LzJCzCypKZpa4oGi6ryrcTHjFtX+Mw6P5xcVte4dYWzCyubt7I0oamltc1zQKazGTrgpDEOyOZ8+jw3K743HZBzKl3LUAdMdYsNZvomK7wZ2+z4R9rqmusAay7YaIkY7ABqX7+qu91aKQGKt9pijdhqSyyzVb8AxZutVXWA2IFQt+8qHhtpv7OUAMXb7bGq2+15fZfXut3eZAQo3HCRN2TDxUkJcLxoy01NB0S7C6Yu9yhrw5abwaarqYRQxTZAuukqb/6mq7Vtu0taue1u0EWRvHu05MbL9ipv0cbLYvwTLsNT09bbRLL1NpnK1ts6m6+L7n9beXplN18np5ux/fxRJtouhQCi+3+iPGG77/it+EFLUwD/927E9VvzLJ5UkSeAFXb/qTZbQlAxEEwe0nRAY1h2eFgYohJgfp05btE2ruSLb7570WqfALOibFwlgPW2P0i5VGjZ069ZLRTAet/lD2iBANa1U59pPIGF/1onAD37Hi0UIDfLlzcgaJMANN0eSwUgHt2zAmTnJ8gdoK9n+wQofFY3TAAxyb9qaKL1AgyDFU+IqQ4QI//9pjrAvwInWnjeVAGsGe4PfDcsYx1geV/5TzVrBgtgkXcu3rKMFoCWmbu5BwXI3S/ZGAHG60QgAASAAEYLcM/NxpkN41WeXTPGMKJMg7tmrADh/r3OhmUuN7jjsg8NFsC+zPkDSk1WIFqfxVyIR4nRAuhnUQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAw/g/DeNj5Q8nTPIAAAAASUVORK5CYII=;fontFamily=Helvetica;fontSize=16;fontColor=default;direction=east;fontStyle=1;strokeWidth=1;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-96">
+ <mxGeometry x="124.65753424657535" y="0.49545454545454604" width="135.34246575342465" height="108.50454545454546" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-99" value="" style="group;direction=west;flipV=1;fontStyle=1;fontFamily=Helvetica;fontSize=16;strokeWidth=1;" vertex="1" connectable="0" parent="1">
+ <mxGeometry x="330" y="220" width="260" height="109" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-100" value="" style="shape=image;verticalLabelPosition=bottom;labelBackgroundColor=default;verticalAlign=top;imageAspect=0;image=data:image/png,iVBORw0KGgoAAAANSUhEUgAAAgAAAAIABAMAAAAGVsnJAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAodEVYdHN2ZzpiYXNlLXVyaQBmaWxlOi8vL3RtcC9tYWdpY2stb0d6aDhuTljeVWFtAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA5LTMwVDAwOjEwOjQyLTAzOjAw9ZOTigAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxNy0wOS0zMFQwMDoxMDo0Mi0wMzowMITOKzYAAAAJcEhZcwAAAEgAAABIAEbJaz4AAAAnUExURUdwTDAwMC8vLy8vLy8vLy8vLy8vLy8vLzAwMC8vLzAwMDAwMDAwMLX3vW4AAAAMdFJOUwD9BHsbozJIxGDa7aTsnz4AAAzMSURBVHja7Z3NjxTHGcZbtb277LEVAhb0odQMEHMixrFkeQ9jCwSGHNaBRErYwwB2sAKHDXYcJ+KA7SSy4j2sg634g0uARDnMxQFFsb2XElp22ak/Kv1Rs/0x1dXVPT073VvPT0hejWd6up95uqr6fd+qsiwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEI8z3gJiNFXb//h79To67/M+QOTFbjBHZd/aO71d9Zdzp0NcwWYZdzHXTP1JiCv8lCAO6YagN5zQwUeGyvAYiTAUwgAASBAix9mKBxgsgDeOxdvGS3AV5wP1gwW4AhnjrthrgB00eGcuSepqQLY6+EF/NdUB5CZ8GmOPSGmOiB6nGVbxrYB+yIBts1pAwg1WwCaCWJLBSDag+MWOuD6e9RoB7zvpoPYUgGunfpMx0q0hQLM+oMe/lAtADnnv3Cp8FCnX2ujA5b9cZ+z3VUKML/uOG7yPdK29ItvvnuxfQLMB5fL3CWVAORC8JJbkOvwXcIHK60TQAx7HisdsOwWjw3tvuMLeVBXANKUBOqF8GzZM6IQoBM9HRyihUqy7a6uA7zTzXDATZ4d+Y8IQMLbxH+pF5x33g93JcqHnNQU4PlPvv+IZj1h0WYKMCMEWKGF3b97R0+Azqr/rn9lXvt112q0AK4vAPn844+kp0mXo+u+b2kJcN7ve51BL/nSwgf8fyuNF+CPfp/5rUqAx1oCkPDdqX7FXva/ZYM2WwBv3nUch58cW4C5kabX/1o33R030gHhgMDZzBeAaQkwDLpsZNuQlCRjPeFOSIBoQLA1tgMkg63+WGEY6ZNJ/QKIcCHvjekA8R08IcCx6MhOPR2Bff3qRBwQ3brS6o8qDkgIQBaiV/hKHc3gsU/4oRcmIYAYEclC5mM6YCiAW4sAwSlsd6ciAG+CADNhh/LQXAdcyXYxhjlA9FWa7ekedEDUo+oW6u0FBxAv9aGOm3uWe9cByU/ZQoAlYxxw9stbMgcsmeKAM+t88ILBDpjrM4c9R811wHmWCWub5oDl7HN0kxzgeXTSAkRPqMnnaLMcMPzgoNtEB8zdfbk3aQHEySViFM1xgP0G5z/s7roAzXHAb7jjyKvNmukAka7TFIBQT+0AYvfdMFnWHgd03qQ1OkB8THoezXTAmX+wIF2nKcDvPr7YVbcBJ3g28dxsBxz3O1T2UFeAc8zlj9QOEIc+2JY24HaQrvPvWC0B7L7js6R0QDR9TJrraKID7GDCZzCm1hLgR2GK6JmyF7g9JQEqOkAc6GlKgDDWIhMg/HWTmZ32O2AfS/+hdsCqm
znd1jtg2GZvJAUg1069KxWArGf7w9Y7YFgslBSAvM2DWItEAFE1lPgOiQPutcsBEgEWXOa4W1QigJ0nAG/ALaDrAEILBKC/jWItZQSo4gCvNgFoOJZPOiAzYM8EkolaABK2dM7m5B1AaxKAnKb6DhArnSgEmBdlHHU4ILOwTlKAs19ezTiAeLSSAHO/+v6fvbQDXv/5L3MyZOQVzh9RpQDi0ANa0QF6t8CZdb69VskB1KJpAX7MuPMk5YAjbk5NV1DZ6498/6QUYHYYbKrogMQtYN891ZMLcGydOXyDpgR4/dRLVRww4x/JcU8mHEBWg5quJenngvubbVFaswByBwR22+pJBbghGtqEAGf8z/1FIxr6+Z9fSgpAI8GfkViA8AoCU+ToFsyOLBbA6Y7dBpwLQkMHpQIsi4Y2FiB4znIHK4UCfM1ddikpwKqo4IoFuOkErxyWtQLRuI/ftybogPgWCO3GehIB5lzxrbEA4XOWWzhv84gvqnOoGwuwU8EVtwHROUnz5LejNx+wanBAOKBQOSC6uOS6MjsC7DS0z+8IEBUMRvUdinK/e054zFiA+WEFV+wA8cSyZI30Q2KpF7a5Kw7Yx5KpEpkArBs7IDpr1tPJh8SDFF8AcYS1HQcMKyWE8kQiwLMaHGD//l21A4aHDsJuNOOA2aEAOw4Y/n5r4aHfVOdD4kFKQgBbJsDxn7zcrShAgQPm3uCDXyh7AeunsQD23XBthlEBFrK1iP6D5vwHfJAzVYvMZjzEe8dHHODFAiz0OXs0GQd8ylg4VSh/HJAMvPr94fbKqAA8dkBCgEXHb8J7SgF2fp0CBywyv2v5mbgRaJ0OWOD+kYKpQgoHxAIcDc7jiaYDgnK/vO5AIoDCAf4N48YNK63VAReYeHRUtAGxAMth3HGNajkgLPdztmpwAD0f/j8ncNO1v/77b3U6YHHY1eb3ArEAUa/vPiRaDlhWdAflHCAud/h0sN2r0QHLbuoPtQPEHL1nRMsBfTe/2q2kA8RZ3gnmibpOcMfW5QDSd1PfoXbAPpb6DrUDRsca1R2wKiY0Rb1nMLmvLgdkBVA6YBh43dJyQFkBVA4YCmCdYMm4/oQdMPod5/ME2B0HsP8IV7r3d8UBtxvoAHFy+63pOODEtB1AxMkdsKbTBpxvigOeWlPpBeAAOAAOMMoBDewFxMmx/dZ0xgHTd8BwIGSsA6oMhfdSG1DpYWgv9ALpx2HnoDUdB+Q/Dk/aAamACC8ZEBmUCYio4wG5ARFpRKhkQETlgCgkxqqFxNQRoXvpkJjaAbkhMWlESEjaGz8mKIKiYZaUEErqjAlmgqK8WlBU6oArijUlyjlgvLC4UyYsXpAXyAuLyxzg6YTFE4kRZV5g3v/zEZlIZuhTnkyMFDggLzEizwwtBt/e002NKR3gjZEaK8gMzV1OpsZYtdSYPDOkSI0NJ4tt6uYGafXk6KBMclTaC6STo0SdHdZLjqrS452a0+PjV4go0uNHRrLD4qwj6ys2bZsJCiS2ZAUStk6BxE39AonC+gBSVB+gUyCxULZAwvs6XOZTUiITO+Bmdkph3IaIQen9XXHAREpkrI60SMpvFfSLpLhWkVQNVWJFRVIsXSTFtIqkcsvkOjplcp5+mZwzfp1gUZmcs+TFbQA56nKmUSbnN6heplDS/88PSLJS9Ih/nvJCSTKhQsmcOsFObqHk3GihJK1YKGnZUalsolKU5JfKeq9wplcqe5jWUCtMaE6pLJGUypISO3amy+XDYulUtXj+BzsTKZbOqxbPzKOfTLF0UHVO09XiinJ5S7Ncnm1Oer5AfeXy5WaMREYrmjDBy02YqDJfoL4JE3XPGQqnzDxHJ++Aps4ZsnInTdU4X6Axs8ZKTZvz8qbNtXjmqHziJCk5cbLFM0fFgQ6MN3W2vW1AtN4DLzV5OnltLZw3ONb0+U4wfd5/8NhDDggXUAieDrQECDbd4N8S5foBIuawabXEAaTcEhpvSZfQSK4gMa0lN
KqvI1TDIip8dBEVWWanmavIlFxGx5Ito5N0AJnSMjqNWUmKTGkhpeasJLUrS2mNLqbWoNXk7MkvpiZZTq9Bq8nlRjYmuqBik9YT9LzJCzCypKZpa4oGi6ryrcTHjFtX+Mw6P5xcVte4dYWzCyubt7I0oamltc1zQKazGTrgpDEOyOZ8+jw3K743HZBzKl3LUAdMdYsNZvomK7wZ2+z4R9rqmusAay7YaIkY7ABqX7+qu91aKQGKt9pijdhqSyyzVb8AxZutVXWA2IFQt+8qHhtpv7OUAMXb7bGq2+15fZfXut3eZAQo3HCRN2TDxUkJcLxoy01NB0S7C6Yu9yhrw5abwaarqYRQxTZAuukqb/6mq7Vtu0taue1u0EWRvHu05MbL9ipv0cbLYvwTLsNT09bbRLL1NpnK1ts6m6+L7n9beXplN18np5ux/fxRJtouhQCi+3+iPGG77/it+EFLUwD/927E9VvzLJ5UkSeAFXb/qTZbQlAxEEwe0nRAY1h2eFgYohJgfp05btE2ruSLb7570WqfALOibFwlgPW2P0i5VGjZ069ZLRTAet/lD2iBANa1U59pPIGF/1onAD37Hi0UIDfLlzcgaJMANN0eSwUgHt2zAmTnJ8gdoK9n+wQofFY3TAAxyb9qaKL1AgyDFU+IqQ4QI//9pjrAvwInWnjeVAGsGe4PfDcsYx1geV/5TzVrBgtgkXcu3rKMFoCWmbu5BwXI3S/ZGAHG60QgAASAAEYLcM/NxpkN41WeXTPGMKJMg7tmrADh/r3OhmUuN7jjsg8NFsC+zPkDSk1WIFqfxVyIR4nRAuhnUQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAw/g/DeNj5Q8nTPIAAAAASUVORK5CYII=;fontFamily=Helvetica;fontSize=16;fontColor=default;direction=east;fontStyle=1;strokeWidth=1;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-99">
+ <mxGeometry y="0.49545454545454604" width="135.34246575342465" height="108.50454545454546" as="geometry" />
+ </mxCell>
+ <mxCell id="7CoqnSFOIrunSOh43Noi-101" value="" style="shape=image;verticalLabelPosition=bottom;labelBackgroundColor=default;verticalAlign=top;imageAspect=0;image=data:image/png,iVBORw0KGgoAAAANSUhEUgAAAgAAAAIABAMAAAAGVsnJAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAodEVYdHN2ZzpiYXNlLXVyaQBmaWxlOi8vL3RtcC9tYWdpY2stb0d6aDhuTljeVWFtAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA5LTMwVDAwOjEwOjQyLTAzOjAw9ZOTigAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxNy0wOS0zMFQwMDoxMDo0Mi0wMzowMITOKzYAAAAJcEhZcwAAAEgAAABIAEbJaz4AAAAnUExURUdwTDAwMC8vLy8vLy8vLy8vLy8vLy8vLzAwMC8vLzAwMDAwMDAwMLX3vW4AAAAMdFJOUwD9BHsbozJIxGDa7aTsnz4AAAzMSURBVHja7Z3NjxTHGcZbtb277LEVAhb0odQMEHMixrFkeQ9jCwSGHNaBRErYwwB2sAKHDXYcJ+KA7SSy4j2sg634g0uARDnMxQFFsb2XElp22ak/Kv1Rs/0x1dXVPT073VvPT0hejWd6up95uqr6fd+qsiwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEI8z3gJiNFXb//h79To67/M+QOTFbjBHZd/aO71d9Zdzp0NcwWYZdzHXTP1JiCv8lCAO6YagN5zQwUeGyvAYiTAUwgAASBAix9mKBxgsgDeOxdvGS3AV5wP1gwW4AhnjrthrgB00eGcuSepqQLY6+EF/NdUB5CZ8GmOPSGmOiB6nGVbxrYB+yIBts1pAwg1WwCaCWJLBSDag+MWOuD6e9RoB7zvpoPYUgGunfpMx0q0hQLM+oMe/lAtADnnv3Cp8FCnX2ujA5b9cZ+z3VUKML/uOG7yPdK29ItvvnuxfQLMB5fL3CWVAORC8JJbkOvwXcIHK60TQAx7HisdsOwWjw3tvuMLeVBXANKUBOqF8GzZM6IQoBM9HRyihUqy7a6uA7zTzXDATZ4d+Y8IQMLbxH+pF5x33g93JcqHnNQU4PlPvv+IZj1h0WYKMCMEWKGF3b97R0+Azqr/rn9lXvt112q0AK4vAPn844+kp0mXo+u+b2kJcN7ve51BL/nSwgf8fyuNF+CPfp/5rUqAx1oCkPDdqX7FXva/ZYM2WwBv3nUch58cW4C5kabX/1o33R030gHhgMDZzBeAaQkwDLpsZNuQlCRjPeFOSIBoQLA1tgMkg63+WGEY6ZNJ/QKIcCHvjekA8R08IcCx6MhOPR2Bff3qRBwQ3brS6o8qDkgIQBaiV/hKHc3gsU/4oRcmIYAYEclC5mM6YCiAW4sAwSlsd6ciAG+CADNhh/LQXAdcyXYxhjlA9FWa7ekedEDUo+oW6u0FBxAv9aGOm3uWe9cByU/ZQoAlYxxw9stbMgcsmeKAM+t88ILBDpjrM4c9R811wHmWCWub5oDl7HN0kxzgeXTSAkRPqMnnaLMcMPzgoNtEB8zdfbk3aQHEySViFM1xgP0G5z/s7roAzXHAb7jjyKvNmukAka7TFIBQT+0AYvfdMFnWHgd03qQ1OkB8THoezXTAmX+wIF2nKcDvPr7YVbcBJ3g28dxsBxz3O1T2UFeAc8zlj9QOEIc+2JY24HaQrvPvWC0B7L7js6R0QDR9TJrraKID7GDCZzCm1hLgR2GK6JmyF7g9JQEqOkAc6GlKgDDWIhMg/HWTmZ32O2AfS/+hdsCqm
znd1jtg2GZvJAUg1069KxWArGf7w9Y7YFgslBSAvM2DWItEAFE1lPgOiQPutcsBEgEWXOa4W1QigJ0nAG/ALaDrAEILBKC/jWItZQSo4gCvNgFoOJZPOiAzYM8EkolaABK2dM7m5B1AaxKAnKb6DhArnSgEmBdlHHU4ILOwTlKAs19ezTiAeLSSAHO/+v6fvbQDXv/5L3MyZOQVzh9RpQDi0ANa0QF6t8CZdb69VskB1KJpAX7MuPMk5YAjbk5NV1DZ6498/6QUYHYYbKrogMQtYN891ZMLcGydOXyDpgR4/dRLVRww4x/JcU8mHEBWg5quJenngvubbVFaswByBwR22+pJBbghGtqEAGf8z/1FIxr6+Z9fSgpAI8GfkViA8AoCU+ToFsyOLBbA6Y7dBpwLQkMHpQIsi4Y2FiB4znIHK4UCfM1ddikpwKqo4IoFuOkErxyWtQLRuI/ftybogPgWCO3GehIB5lzxrbEA4XOWWzhv84gvqnOoGwuwU8EVtwHROUnz5LejNx+wanBAOKBQOSC6uOS6MjsC7DS0z+8IEBUMRvUdinK/e054zFiA+WEFV+wA8cSyZI30Q2KpF7a5Kw7Yx5KpEpkArBs7IDpr1tPJh8SDFF8AcYS1HQcMKyWE8kQiwLMaHGD//l21A4aHDsJuNOOA2aEAOw4Y/n5r4aHfVOdD4kFKQgBbJsDxn7zcrShAgQPm3uCDXyh7AeunsQD23XBthlEBFrK1iP6D5vwHfJAzVYvMZjzEe8dHHODFAiz0OXs0GQd8ylg4VSh/HJAMvPr94fbKqAA8dkBCgEXHb8J7SgF2fp0CBywyv2v5mbgRaJ0OWOD+kYKpQgoHxAIcDc7jiaYDgnK/vO5AIoDCAf4N48YNK63VAReYeHRUtAGxAMth3HGNajkgLPdztmpwAD0f/j8ncNO1v/77b3U6YHHY1eb3ArEAUa/vPiRaDlhWdAflHCAud/h0sN2r0QHLbuoPtQPEHL1nRMsBfTe/2q2kA8RZ3gnmibpOcMfW5QDSd1PfoXbAPpb6DrUDRsca1R2wKiY0Rb1nMLmvLgdkBVA6YBh43dJyQFkBVA4YCmCdYMm4/oQdMPod5/ME2B0HsP8IV7r3d8UBtxvoAHFy+63pOODEtB1AxMkdsKbTBpxvigOeWlPpBeAAOAAOMMoBDewFxMmx/dZ0xgHTd8BwIGSsA6oMhfdSG1DpYWgv9ALpx2HnoDUdB+Q/Dk/aAamACC8ZEBmUCYio4wG5ARFpRKhkQETlgCgkxqqFxNQRoXvpkJjaAbkhMWlESEjaGz8mKIKiYZaUEErqjAlmgqK8WlBU6oArijUlyjlgvLC4UyYsXpAXyAuLyxzg6YTFE4kRZV5g3v/zEZlIZuhTnkyMFDggLzEizwwtBt/e002NKR3gjZEaK8gMzV1OpsZYtdSYPDOkSI0NJ4tt6uYGafXk6KBMclTaC6STo0SdHdZLjqrS452a0+PjV4go0uNHRrLD4qwj6ys2bZsJCiS2ZAUStk6BxE39AonC+gBSVB+gUyCxULZAwvs6XOZTUiITO+Bmdkph3IaIQen9XXHAREpkrI60SMpvFfSLpLhWkVQNVWJFRVIsXSTFtIqkcsvkOjplcp5+mZwzfp1gUZmcs+TFbQA56nKmUSbnN6heplDS/88PSLJS9Ih/nvJCSTKhQsmcOsFObqHk3GihJK1YKGnZUalsolKU5JfKeq9wplcqe5jWUCtMaE6pLJGUypISO3amy+XDYulUtXj+BzsTKZbOqxbPzKOfTLF0UHVO09XiinJ5S7Ncnm1Oer5AfeXy5WaMREYrmjDBy02YqDJfoL4JE3XPGQqnzDxHJ++Aps4ZsnInTdU4X6Axs8ZKTZvz8qbNtXjmqHziJCk5cbLFM0fFgQ6MN3W2vW1AtN4DLzV5OnltLZw3ONb0+U4wfd5/8NhDDggXUAieDrQECDbd4N8S5foBIuawabXEAaTcEhpvSZfQSK4gMa0lN
KqvI1TDIip8dBEVWWanmavIlFxGx5Ito5N0AJnSMjqNWUmKTGkhpeasJLUrS2mNLqbWoNXk7MkvpiZZTq9Bq8nlRjYmuqBik9YT9LzJCzCypKZpa4oGi6ryrcTHjFtX+Mw6P5xcVte4dYWzCyubt7I0oamltc1zQKazGTrgpDEOyOZ8+jw3K743HZBzKl3LUAdMdYsNZvomK7wZ2+z4R9rqmusAay7YaIkY7ABqX7+qu91aKQGKt9pijdhqSyyzVb8AxZutVXWA2IFQt+8qHhtpv7OUAMXb7bGq2+15fZfXut3eZAQo3HCRN2TDxUkJcLxoy01NB0S7C6Yu9yhrw5abwaarqYRQxTZAuukqb/6mq7Vtu0taue1u0EWRvHu05MbL9ipv0cbLYvwTLsNT09bbRLL1NpnK1ts6m6+L7n9beXplN18np5ux/fxRJtouhQCi+3+iPGG77/it+EFLUwD/927E9VvzLJ5UkSeAFXb/qTZbQlAxEEwe0nRAY1h2eFgYohJgfp05btE2ruSLb7570WqfALOibFwlgPW2P0i5VGjZ069ZLRTAet/lD2iBANa1U59pPIGF/1onAD37Hi0UIDfLlzcgaJMANN0eSwUgHt2zAmTnJ8gdoK9n+wQofFY3TAAxyb9qaKL1AgyDFU+IqQ4QI//9pjrAvwInWnjeVAGsGe4PfDcsYx1geV/5TzVrBgtgkXcu3rKMFoCWmbu5BwXI3S/ZGAHG60QgAASAAEYLcM/NxpkN41WeXTPGMKJMg7tmrADh/r3OhmUuN7jjsg8NFsC+zPkDSk1WIFqfxVyIR4nRAuhnUQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAw/g/DeNj5Q8nTPIAAAAASUVORK5CYII=;fontFamily=Helvetica;fontSize=16;fontColor=default;direction=east;fontStyle=1;strokeWidth=1;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-99">
110
+ <mxGeometry x="124.65753424657535" y="0.49545454545454604" width="135.34246575342465" height="108.50454545454546" as="geometry" />
111
+ </mxCell>
112
+ <mxCell id="7CoqnSFOIrunSOh43Noi-102" value="Bass" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;strokeWidth=1;fontSize=16;" vertex="1" parent="7CoqnSFOIrunSOh43Noi-99">
113
+ <mxGeometry x="-18" y="16" width="60" height="30" as="geometry" />
114
+ </mxCell>
115
+ <mxCell id="7CoqnSFOIrunSOh43Noi-103" value="Drum" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;strokeWidth=1;fontSize=16;" vertex="1" parent="1">
116
+ <mxGeometry x="312" y="310" width="60" height="30" as="geometry" />
117
+ </mxCell>
118
+ <mxCell id="7CoqnSFOIrunSOh43Noi-104" value="Instrument" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;strokeWidth=1;fontSize=16;" vertex="1" parent="1">
119
+ <mxGeometry x="326" y="384" width="60" height="30" as="geometry" />
120
+ </mxCell>
121
+ <mxCell id="7CoqnSFOIrunSOh43Noi-105" value="Melody" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;strokeWidth=1;fontSize=16;" vertex="1" parent="1">
122
+ <mxGeometry x="317" y="463" width="60" height="30" as="geometry" />
123
+ </mxCell>
124
+ <mxCell id="7CoqnSFOIrunSOh43Noi-111" value="&lt;p style=&quot;line-height: 160%; font-size: 16px;&quot;&gt;yes to select&lt;br style=&quot;font-size: 16px;&quot;&gt;no to re-generate&lt;/p&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontStyle=0;fontFamily=Helvetica;fontSize=16;strokeWidth=1;" vertex="1" parent="1">
125
+ <mxGeometry x="606" y="146" width="125" height="45.5" as="geometry" />
126
+ </mxCell>
127
+ </root>
128
+ </mxGraphModel>
129
+ </diagram>
130
+ </mxfile>
2310.19180/main_diagram/main_diagram.pdf ADDED
Binary file (49.4 kB). View file
 
2310.19180/paper_text/intro_method.md ADDED
@@ -0,0 +1,105 @@
1
+ # Introduction
2
+
3
+ The rapid evolution of generative modeling has positioned AI-driven music generation as a prominent field, merging research innovation with practical applications in the music industry. Early systems like Music Transformer [@huang2018music] and MuseNet [@payne2019musenet], which utilized symbolic representations [@engel2017neural], were pivotal in translating textual descriptions into MIDI-style outputs. Although these methods were groundbreaking, their dependence on predefined virtual synthesizers often compromised the audio quality and restricted the diversity of their musical outputs.
4
+
5
+ Recent advancements in text-to-music synthesis, as demonstrated by models like MusicGen [@copet2023simple], MusicLM [@agostinelli2023musiclm], and Jen-1 [@li2023jen], represent a significant leap forward in directly generating authentic audio waveforms from textual prompts. These innovations have greatly expanded the versatility and diversity of music generation, bypassing the need for extensive musical theory knowledge and traditional symbolic representations. However, their focus on producing composite audio mixes rather than discrete, manipulable tracks limits the level of creative control necessary in professional music production environments. Similarly, the introduction of digital audio workstations and the expansion of available timbres have indeed revolutionized musical creativity---enabling composers to explore complex harmonies, melodies, and rhythms beyond the confines of physical instruments [@zhu2020pop]---but these tools still impose significant barriers. Despite their advancements, they require a deep understanding of music theory and proficiency in symbolic musical notation, which continue to pose challenges for many aspiring musicians and composers.
6
+
7
+ In response to these challenges, we propose JEN-1 Composer, a framework designed to democratize music production by streamlining the creative process. This framework employs end-to-end training to intuitively grasp the relationships between different tracks, enabling audio-to-audio orchestration that learns directly from waveform datasets. By allowing both direct audio and text input, JEN-1 Composer not only simplifies user interaction but also expands creative freedom through detailed manipulation of individual tracks.
8
+
9
+ JEN-1 Composer is a comprehensive generative framework designed to model the marginal, conditional, and joint distributions of multi-track music within a single model. By leveraging the Jen-1 [@li2023jen] audio latent diffusion model as a foundation, our approach effectively and efficiently manages these distributions concurrently. To extend the capabilities of Jen-1, we introduce several key enhancements: (a) We design specialized input-output configurations to handle latent representations of multiple music tracks, allowing the model to effectively capture the temporal relationships and harmonic coherence across these tracks. (b) We incorporate timestep vectors to govern the generation of individual tracks, providing the necessary flexibility for fine-grained control during the generation process. (c) We augment conventional text prompts with prefix prompts to clearly define generation tasks, reducing ambiguity and improving model performance. Additionally, we employ a curriculum training strategy that gradually introduces more complex tasks, from generating single tracks to orchestrating intricate combinations of multiple tracks.
10
+
11
+ To further bridge the gap between AI capabilities and human creativity, we introduce a Human-AI co-composition workflow, as illustrated in Figure [1](#fig:overview){reference-type="ref" reference="fig:overview"}. This approach enables iterative refinement of music tracks during the model's inference phase, allowing producers to collaboratively adjust and blend AI-generated tracks to align with their creative visions. Through this workflow, users can directly influence the music generation process by providing feedback on textual prompts and previously generated music tracks, ensuring that all tracks meet their precise standards.
12
+
13
+ <figure id="fig:overview" data-latex-placement="tb!">
14
+ <embed src="workflow.pdf" style="width:75.0%" />
15
+ <figcaption> The Human-AI co-composition workflow of JEN-1 Composer. JEN-1 Composer generates multiple music tracks based on user-provided text prompts (specifying genres, eras, rhythms, <em>etc.</em>) and optional audio feedback, where users can select, edit, or upload tracks. The human feedback guides the generation of target tracks, ensuring temporal alignment and musical coherence. The iterative process of human feedback and AI generation continues until a harmonious and cohesive musical piece is achieved. </figcaption>
16
+ </figure>
17
+
18
+ In summary, the contributions of this work are four-fold:
19
+
20
+ 1. We introduce a collaborative music generation workflow that seamlessly integrates human creativity with AI, designed for the iterative creation of multi-track music within an audio-based framework.
21
+
22
+ 2. We present JEN-1 Composer, a unified framework that effectively models marginal, conditional, and joint distributions for multi-track music generation using a single audio latent diffusion model.
23
+
24
+ 3. We design an intuitive curriculum training strategy that progressively enhances the model's capability to generate complex musical compositions.
25
+
26
+ 4. We provide comprehensive quantitative and qualitative evaluations demonstrating that JEN-1 Composer achieves state-of-the-art performance in generating diverse track combinations, advancing the flexibility and creativity of music production.
27
+
28
+ # Method
29
+
30
+ Diffusion models [@ho2020denoising] are generative models that produce high-quality samples through iterative denoising. The process begins by gradually corrupting the original data $x_0$ with Gaussian noise over a series of timesteps in a forward process, where each noisy sample $x_t$ is generated as: $$\begin{equation}
31
+ \label{latent:diff}
32
+ x_t = \sqrt{\bar{\alpha}_t}x_0+\sqrt{1-\bar{\alpha}_t}\epsilon_t,
33
+ \end{equation}$$ with $\epsilon_t$ as the standard Gaussian noise, $\bar{\alpha}_t=\prod^t_{i=1}\alpha_i$, and $\alpha_t=1-\beta_t$, where the sequence of $\beta_t$ is the noise schedule that controls the level of corruption over time.
34
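The closed-form corruption above can be sketched in a few lines of NumPy (a toy illustration, not the authors' implementation; the linear schedule endpoints are common DDPM defaults, not values specified here):

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear noise schedule beta_t and cumulative products alpha_bar_t."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alpha_bars

def q_sample(x0, t, alpha_bars, rng):
    """Closed-form forward process: x_t = sqrt(ab_t)*x0 + sqrt(1-ab_t)*eps."""
    eps = rng.standard_normal(x0.shape)
    ab = alpha_bars[t - 1]                    # alpha_bar_t with 1-based t
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

rng = np.random.default_rng(0)
betas, alpha_bars = make_schedule()
x0 = rng.standard_normal((8, 64))             # toy latent of shape D x S'
x_T = q_sample(x0, 1000, alpha_bars, rng)     # near-pure Gaussian noise at t = T
```

Since `alpha_bars` decays toward zero, by $t=T$ almost none of the signal remains and $x_T$ is essentially Gaussian noise.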
+
35
+ In the reverse process, the diffusion model aims to recover $x_0$ by iteratively denoising $x_t$. A noise prediction model, parameterized by $\theta$, is trained to estimate the noise $\epsilon_t$ in $x_t$ at each timestep $t$ by minimizing the following loss function: $$\begin{equation}
36
+ \min_\theta \mathbb{E}_{t,x_0,\epsilon_t}\left\Vert \epsilon_t- \epsilon_\theta\left(x_t,t \right) \right\Vert^2_2,
37
+ \end{equation}$$ where $t$ is uniformly sampled from $\{1, 2, \ldots, T\}$. With the optimized noise predictor, $x_0$ can be approximated by sampling from a Gaussian model $p\left(x_{t-1}\mid x_t\right)=\mathcal N\left(x_{t-1}\mid \mu_t\left(x_t\right), \sigma^2_t{I}\right)$ in a stepwise manner [@bao2023one]. The optimal mean for this Gaussian, under maximum likelihood estimation, is: $$\begin{equation}
38
+ \mu^*_t\left(x_t\right) = \frac{1}{\sqrt{\alpha_t}}\left(x_t-\frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\epsilon_\theta\left(x_t,t \right)\right).
39
+ \end{equation}$$ By iteratively applying this process, the diffusion model refines the noise, progressively generating new samples that closely resemble the original training data.
40
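A single reverse step built from this optimal mean might look as follows (a NumPy toy: a zero predictor stands in for the trained $\epsilon_\theta$, and $\sigma_t^2=\beta_t$ is one common variance choice, both assumptions for illustration):

```python
import numpy as np

def p_sample_step(x_t, t, eps_pred, betas, alphas, alpha_bars, rng):
    """One denoising step: form mu*_t from the predicted noise, then add
    Gaussian noise with variance sigma_t^2 = beta_t (no noise at the last step)."""
    mean = (x_t - betas[t - 1] / np.sqrt(1.0 - alpha_bars[t - 1]) * eps_pred) \
           / np.sqrt(alphas[t - 1])
    if t == 1:
        return mean
    return mean + np.sqrt(betas[t - 1]) * rng.standard_normal(x_t.shape)

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))              # start from pure noise at t = T
for t in range(T, 0, -1):
    eps_pred = np.zeros_like(x)               # stand-in for eps_theta(x_t, t)
    x = p_sample_step(x, t, eps_pred, betas, alphas, alpha_bars, rng)
```

With a real noise predictor in place of the zero stand-in, iterating this step from $t=T$ down to $t=1$ yields a sample resembling the training data.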
+
41
+ Modeling raw audio waveforms directly poses challenges due to their high dimensionality, where $x_0 \in \mathbb{R}^{C \times S}$ represents the waveform, with $C$ denoting the number of channels and $S$ indicating the sequence length. To address this, Jen-1 [@li2023jen] extends the Latent Diffusion Model (LDM) [@rombach2022high] framework, originally formulated for images, to the domain of audio generation. In the Jen-1 architecture, the audio waveform $x_0$ is mapped to a lower-dimensional latent representation $z_0 \in \mathbb{R}^{D \times S^{\prime}}$ via an audio encoder $f_\phi$, and then reconstructed back to the original waveform $\widehat{x}_0$ through an audio decoder $g_\psi$. Here $S^{\prime} \ll S$ is the compressed sequence length and $D$ is the latent dimension. This process is denoted as: $$\begin{equation}
42
+ \label{eq:latent}
43
+ z_0 = f_{\phi}(x_0), \quad \widehat{x}_0 = g_{\psi}(z_0) \approx x_0.
44
+ \end{equation}$$
45
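The shape contract of this mapping can be made concrete with toy stand-ins for $f_\phi$ and $g_\psi$; the sizes, the average-pooling "encoder", and the repeating "decoder" below are illustrative assumptions, not Jen-1's actual autoencoder:

```python
import numpy as np

C, S = 2, 48000          # waveform: channels x samples (assumed sizes)
D, hop = 8, 320          # latent dim and compression factor (assumed)
S_prime = S // hop       # compressed length S' << S

def f_phi(x0):
    """Toy encoder (C, S) -> (D, S'): average-pool in time, pad channels to D."""
    z = x0.reshape(C, S_prime, hop).mean(axis=-1)
    return np.repeat(z, D // C, axis=0)

def g_psi(z0):
    """Toy decoder (D, S') -> (C, S): collapse channel groups, upsample in time."""
    z = z0.reshape(C, D // C, S_prime).mean(axis=1)
    return np.repeat(z, hop, axis=-1)

x0 = np.random.default_rng(0).standard_normal((C, S))
z0 = f_phi(x0)           # latent representation
x0_hat = g_psi(z0)       # coarse reconstruction with the original shape
```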
+
46
+ In Jen-1 [@li2023jen], the diffusion model $\theta$ operates in the audio latent space, predicting the noise $\widehat{\epsilon}_t=\epsilon_\theta(z_t, e, t)$ and iteratively denoising from Gaussian noise to generate the final latent $\widehat{z}_0$. The variable $e$ represents the embedding of conditioning inputs, such as text prompts, guiding the generation process. Similar to LDM, Jen-1 utilizes a U-Net architecture [@ronneberger2015u] as the backbone for its diffusion process. Specifically adapted for audio data, Jen-1 replaces the 2D convolutions used in image processing with 1D convolutions tailored for audio latent representations. The model consists of a sequence of blocks, including AttnDownBlock1D, UNetMidBlock1D, and AttnUpBlock1D, which integrate residual 1D convolutional layers with cross-attention transformers [@vaswani2017attention]. The overall architecture is depicted in Figure [2](#fig:unet){reference-type="ref" reference="fig:unet"}.
47
+
48
+ <figure id="fig:unet" data-latex-placement="tb!">
49
+ <img src="diff-unet1.png" style="width:48.0%" />
50
+ <figcaption> The U-Net architecture used in Jen-1. </figcaption>
51
+ </figure>
52
+
53
+ <figure id="fig:t-illustration" data-latex-placement="tbhp">
54
+ <img src="unified.png" style="width:73.0%" />
55
+ <figcaption> Illustration of three generation modes using independent timesteps as indicators. In Marginal Generation, non-target track latents are fixed as Gaussian noise (timestep <span class="math inline"><em>T</em></span>) to minimize their impact on the target track’s latent. Conditional Generation designates a timestep of <span class="math inline">0</span> for a conditional track, guiding the target track’s generation. Joint Generation synchronizes multiple target tracks by sharing the same timestep <span class="math inline"><em>t</em></span>, allowing for coordinated denoising from <span class="math inline"><em>T</em></span> to <span class="math inline">0</span>. </figcaption>
56
+ </figure>
57
+
58
+ To enable JEN-1 Composer to handle multi-track input and output for joint modeling, we make several critical modifications to Jen-1's single-track architecture. As elaborated below, the input-output representation, timestep vectors, and prompt prefixes are adapted to fit multi-track distributions efficiently using a single model.
59
+
60
+ **Multi-track Input-Output Representation.** We extend the single-track input paradigm of Jen-1 to accommodate multi-track inputs, denoted as $\mathbf{X}=\left[x^1_0, \ldots, x^K_0\right]$, where $x^i_0$ represents the $i$-th track and $K$ denotes the total number of tracks. Each track is encoded into the audio latent space, yielding latent representations $z^i_0 = f_\phi(x^i_0) \in \mathbb{R}^{D \times S^{\prime}}$ before being passed to the audio latent diffusion model. These latent representations are concatenated along the channel dimension to form the input latent variables $\mathbf{Z} \in \mathbb{R}^{KD \times S^{\prime}}$. During inference, the output of the audio latent diffusion model is split into $K$ tracks, and each denoised latent variable is reconstructed into a waveform via the pre-trained audio decoder, denoted as $\widehat{x}_0^i = g_{\psi}(\widehat{z}^i_0)$. Extending the input-output representation to multiple tracks enables explicit modeling of inter-track dependencies and consistency, a capability absent in single-track models and crucial for high-quality multi-track generation.
61
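The concatenation and splitting described above is a simple stack along the latent channel axis; a NumPy sketch with assumed sizes:

```python
import numpy as np

K, D, S_prime = 4, 8, 150                     # tracks, latent dim, length (assumed)
rng = np.random.default_rng(0)

# Per-track latents z^i_0 = f_phi(x^i_0), each of shape (D, S').
track_latents = [rng.standard_normal((D, S_prime)) for _ in range(K)]

# Concatenate along the channel dimension to form Z in R^{K*D x S'} ...
Z = np.concatenate(track_latents, axis=0)

# ... and split the model output back into K per-track latents for decoding.
recovered = np.split(Z, K, axis=0)
```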
+
62
+ **Individual Timestep Vectors.** Introducing separate timesteps for each track not only provides precise control over the generation process but also enables unified distribution modeling. This is achieved by extending the scalar timestep $t$ in Jen-1 to a multi-dimensional vector $\mathbf{T}=\left[t_1,\ldots, t_K\right]$. Each $t_i$ determines the corresponding latent variable $z^i$ in $\mathbf{Z}=\left[z^1,\ldots, z^K\right]$ according to the diffusion forward process defined in Equation ([\[latent:diff\]](#latent:diff){reference-type="ref" reference="latent:diff"}). In the diffusion model, these timesteps are independently learned for each track and concatenated to form the conditional embedding. The process is formalized as follows: $$\begin{equation}
63
+ \label{defz}
64
+ z^i =
65
+ \begin{cases}
66
+ z^i_T, & \text{if }\ t_i = T \\
67
+ z^i_0, & \text{if }\ t_i = 0 \\
68
+ z^i_t, & \text{if }\ 0 < t_i < T
69
+ \end{cases}
70
+ \end{equation}$$
71
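The case analysis in the equation above amounts to per-track corruption driven by the timestep vector; a NumPy sketch under an assumed linear noise schedule:

```python
import numpy as np

T = 1000
alpha_bars = np.cumprod(1.0 - np.linspace(1e-4, 0.02, T))

def corrupt_tracks(z0_tracks, t_vec, rng):
    """Build Z = [z^1, ..., z^K] from the timestep vector [t_1, ..., t_K]:
    t_i = 0 keeps the clean latent (conditioning track), t_i = T is near-pure
    noise (marginalized track), and 0 < t_i < T follows the forward process."""
    out = []
    for z0, t in zip(z0_tracks, t_vec):
        if t == 0:
            out.append(z0.copy())
            continue
        eps = rng.standard_normal(z0.shape)
        ab = alpha_bars[t - 1]
        out.append(np.sqrt(ab) * z0 + np.sqrt(1.0 - ab) * eps)
    return out

rng = np.random.default_rng(0)
z0_tracks = [rng.standard_normal((8, 150)) for _ in range(3)]
# Conditional, marginalized, and in-progress tracks, respectively.
Z = corrupt_tracks(z0_tracks, [0, T, T // 2], rng)
```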
+
72
+ For $k\geq 2$ target generation tracks, we adopt a uniform timestep $t$ across these tracks, mirroring the modeling of their joint probability distribution. If $k<K$, two modes are considered for the remaining $(K-k)$ tracks. First, the conditional generation mode sets their timesteps to 0, so that their latent variables correspond to the original waveforms; the resulting latent variables and timestep vectors are denoted as $\mathbf{Z}_c$ and $\mathbf{T}_c$, respectively. Second, the unconditional generation mode fixes all non-target timesteps to $T$, perturbing the corresponding latent variables to approximately Gaussian noise, akin to marginal generation; the latent variables and timestep vectors are labeled $\mathbf{Z}_m$ and $\mathbf{T}_m$. The training loss is computed only over the channels corresponding to the target tracks, while inference benefits from the Classifier-Free Guidance (CFG) technique [@ho2022classifier]. Specifically, if $k<K$, the alignment across tracks and generation quality are enhanced via the expression: $$\begin{equation}
73
+ \label{eq.cfg}
74
+ \widehat{\epsilon} = \left(1-\lambda \right)\epsilon_\theta\left(\mathbf{Z}_m, e, \mathbf{T}_m\right) + \lambda \epsilon_\theta\left(\mathbf{Z}_c, e, \mathbf{T}_c\right),
75
+ \end{equation}$$ where $\lambda$ denotes the guidance scale. Illustrations presented in Figure [3](#fig:t-illustration){reference-type="ref" reference="fig:t-illustration"} showcase diverse scenarios of a straightforward two-track generation task, providing additional clarity on the concept of achieving unified distribution modeling through the manipulation of timestep vectors.
76
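The guided noise estimate is a weighted blend of two forward passes of the same model; in the sketch below, random arrays stand in for the two $\epsilon_\theta$ calls, and the guidance scale is an assumed value:

```python
import numpy as np

def cfg_blend(eps_marginal, eps_conditional, lam):
    """eps_hat = (1 - lam) * eps_theta(Z_m, e, T_m) + lam * eps_theta(Z_c, e, T_c)."""
    return (1.0 - lam) * eps_marginal + lam * eps_conditional

rng = np.random.default_rng(0)
eps_m = rng.standard_normal((32, 150))   # pass with non-target tracks at t = T
eps_c = rng.standard_normal((32, 150))   # pass with non-target tracks at t = 0
eps_hat = cfg_blend(eps_m, eps_c, lam=1.5)
```

Setting $\lambda=1$ recovers the purely conditional prediction; $\lambda>1$ extrapolates away from the marginal prediction to strengthen the conditioning.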
+
77
+ **Task-Specific Prefix Prompts.** We enhance traditional text prompts, which typically describe the musical content and style, by integrating task-specific tokens as prefix prompts. These tokens act as explicit directives, similar to command flags in programming, providing clear and concise instructions regarding the generation task at hand. By using specific prefixes like "\[bass & drum generation\]", we direct the model's focus to the production of target tracks, such as bass and drums. This approach not only specifies the generation objectives but also significantly diminishes ambiguity, thus improving both the fidelity and relevance of the generated content.
78
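Assembling such a prompt is plain string manipulation; the exact token format below (a bracketed track list followed by "generation") mirrors the example in the text, while the function name and interface are assumptions for illustration:

```python
def build_prompt(target_tracks, description):
    """Prepend a task token such as "[bass & drum generation]" to the text prompt."""
    prefix = "[" + " & ".join(target_tracks) + " generation]"
    return prefix + " " + description

prompt = build_prompt(["bass", "drum"], "laid-back 90s groove in a minor key")
```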
+
79
+ **Curriculum Training Strategy.** We introduce a progressive curriculum training strategy that systematically enhances the model's capability to generate coherent multi-track audio sequences while accommodating varying levels of conditioning and noise injection. This strategy includes curriculum decay and task allocation, a strategic sampler for conditional and marginal generation modes, and self-bootstrapping training to improve model generalization.
80
+
81
+ The training begins with single-track text-to-music generation, establishing a strong foundation for the model. As the model advances, more complex multi-track tasks are gradually introduced. This progression is carefully managed by reducing the sampling probabilities of simpler tasks, allowing the model to develop the ability to generate harmonically aligned tracks across multiple channels. Each task involves multi-track audio input and output, with latent representations configured as described in Equation ([\[defz\]](#defz){reference-type="ref" reference="defz"}). During this phase, the model's learning is focused on critical aspects by computing losses only for target tracks, while non-target tracks are masked. This structured approach facilitates efficient learning, enabling the model to generate high-fidelity audio compositions.
82
+
83
+ **Curriculum Decay and Task Allocation.** The curriculum starts with single-track ($k=1$) tasks, focusing on conditional generation using other tracks as signals or simpler marginal generation tasks. As training progresses, the focus shifts towards multi-track generation ($2 \leq k < K$), with increased sampling probabilities for more complex tasks over time. Ultimately, the curriculum incorporates joint generation tasks ($k=K$) driven solely by text prompts.
84
+
85
+ **Sampler for Conditional and Marginal Generation.** A strategic sampler is employed when fewer target tracks are generated than available ($k < K$). The sampler assigns non-target tracks a timestep of either $0$ or $T$: with probability $p_1$, a timestep of $0$ is chosen to encourage conditioning, and with probability $1-p_1$, a timestep of $T$ is selected for non-conditioned generation. This approach allows the model to effectively learn both conditional and marginal generation, preparing it for CFG technique implementation during inference and enhancing its overall performance in generating coherent music tracks.
86
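The sampler is a per-track Bernoulli choice between the two non-target modes (a minimal sketch; the function name and interface are assumptions):

```python
import random

def sample_nontarget_timesteps(n_nontarget, T, p1, rng):
    """Assign each non-target track t = 0 (condition on it) with probability p1,
    otherwise t = T (marginalize it out as pure noise)."""
    return [0 if rng.random() < p1 else T for _ in range(n_nontarget)]

rng = random.Random(0)
draws = [sample_nontarget_timesteps(2, T=1000, p1=0.5, rng=rng)
         for _ in range(200)]
```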
+
87
+ **Incorporation of Self-Bootstrapping Training.** In later training stages, self-bootstrapping is introduced with a probability $p_2$ to improve generalization and align with the Human-AI co-composition workflow. During this phase, tracks generated by a teacher model---using an exponential moving average of the model's parameters---replace a portion of the ground truth-aligned conditional input tracks. This technique refines the model's alignment and synchronization capabilities, expands the training dataset, and enhances generalization, which is crucial for performance in real-world, interactive environments.
88
+
89
+ :::: algorithm
90
+ ::: algorithmic
91
+ **Input:** Text prompt, user-provided tracks $\mathbf{S}$ (optional) **Output:** Set of selected and refined tracks $\mathbf{S}$
92
+
93
+ $e \leftarrow$ Embedding of the given prompt *\# Joint Generation* $(\widehat{x}^1_0, \ldots, \widehat{x}^K_0) \leftarrow$ Model.GenerateTracks$(e)$ $\mathbf{S} \leftarrow$ User.selectAndRefineTracks$(\widehat{x}^1_0, \ldots, \widehat{x}^K_0)$
94
+
95
+ *\# Using the CFG technique defined in Equation ([\[eq.cfg\]](#eq.cfg){reference-type="ref" reference="eq.cfg"})* $(\widehat{x}^1_0, \ldots, \widehat{x}^K_0) \leftarrow$ Model.GenerateTracks$(\mathbf{S}, e)$ *\# Update $\mathbf{S}$* $\mathbf{S} \leftarrow \mathbf{S}\ \cup$ User.selectAndRefineTracks$(\widehat{x}^1_0, \ldots, \widehat{x}^K_0)$
96
+ :::
97
+ ::::
98
+
99
+ During inference, our model supports the conditional generation of multiple tracks given $0$ to $K-1$ input tracks. To facilitate Human-AI collaborative music creation, we propose an interactive generation procedure, outlined in Algorithm [\[algo:1\]](#algo:1){reference-type="ref" reference="algo:1"}. The proposed interactive inference approach effectively integrates human creativity with AI capabilities, enabling a collaborative music generation process. This workflow offers three primary benefits:
100
+
101
+ - **Enhanced Refinement.** The iterative feedback mechanism allows users to progressively refine each track, enabling nuanced improvements that purely AI-driven generation may struggle to achieve. By selecting and refining satisfactory tracks, users help steer the generation process toward desired outcomes, filtering out low-quality outputs.
102
+
103
+ - **Alignment with Human Aesthetics.** The interaction between human creators and the model enhances the AI's understanding of human aesthetic preferences and sound quality standards. This ongoing collaboration ensures that the generated tracks align more closely with artistic intent and professional expectations.
104
+
105
+ - **Creative Control and Engagement.** The collaborative experience empowers human producers, providing a sense of control over the creative process. By balancing AI-driven generation with human input, the workflow ensures that both improvisation and structural coherence are maintained, enabling the realization of creative visions with AI assistance.
2312.11792/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1,336 @@
1
+ <mxfile host="app.diagrams.net" modified="2023-12-15T11:08:49.617Z" agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" version="22.1.9" etag="fjHuxVjVM_suaKpMXOsH" type="google">
2
+ <diagram name="第 1 页" id="wdCjAqkwKgtSTyoLLiAt">
3
+ <mxGraphModel dx="1437" dy="683" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="1654" pageHeight="2336" math="1" shadow="0">
4
+ <root>
5
+ <mxCell id="0" />
6
+ <mxCell id="1" parent="0" />
7
+ <mxCell id="2" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;strokeColor=#808080;strokeWidth=2;fontFamily=Helvetica;" edge="1" source="69" parent="1">
8
+ <mxGeometry relative="1" as="geometry">
9
+ <mxPoint x="820" y="1752" as="targetPoint" />
10
+ <Array as="points">
11
+ <mxPoint x="82" y="1752" />
12
+ </Array>
13
+ </mxGeometry>
14
+ </mxCell>
15
+ <mxCell id="3" value="" style="group" connectable="0" vertex="1" parent="1">
16
+ <mxGeometry x="138" y="1714" width="360" height="233" as="geometry" />
17
+ </mxCell>
18
+ <mxCell id="4" value="" style="rounded=0;whiteSpace=wrap;html=1;glass=0;shadow=0;fontSize=14;fillColor=#FAFAFA;opacity=97;strokeColor=#4D4D4D;strokeWidth=1.5;arcSize=2;dashed=1;dashPattern=1 1;fontFamily=Helvetica;" vertex="1" parent="3">
19
+ <mxGeometry width="360" height="233" as="geometry" />
20
+ </mxCell>
21
+ <mxCell id="5" value="" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#dae8fc;strokeColor=#6c8ebf;shadow=1;glass=0;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="3">
22
+ <mxGeometry x="17" y="113.02057613168724" width="98.25" height="67.11934156378601" as="geometry" />
23
+ </mxCell>
24
+ <mxCell id="6" value="&lt;font style=&quot;font-size: 16px;&quot;&gt;Progression&lt;br style=&quot;font-size: 16px;&quot;&gt;Analysis&amp;nbsp;&lt;/font&gt;🔥" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#ffe6cc;strokeColor=#d79b00;shadow=1;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="3">
25
+ <mxGeometry x="241" y="69.51646090534979" width="100" height="67.11934156378601" as="geometry" />
26
+ </mxCell>
27
+ <mxCell id="7" value="&lt;font style=&quot;font-size: 16px;&quot;&gt;Aspect&lt;br style=&quot;font-size: 16px;&quot;&gt;Promoter&amp;nbsp;&lt;/font&gt;&lt;font style=&quot;font-size: 16px;&quot;&gt;❄️&lt;/font&gt;" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#dae8fc;strokeColor=#6c8ebf;shadow=1;glass=0;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="3">
28
+ <mxGeometry x="242" y="144.78600823045267" width="100" height="67.11934156378601" as="geometry" />
29
+ </mxCell>
30
+ <mxCell id="8" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;strokeWidth=2;strokeColor=#808080;fontFamily=Helvetica;fontColor=#B3B3B3;" edge="1" parent="3" source="10">
31
+ <mxGeometry relative="1" as="geometry">
32
+ <Array as="points">
33
+ <mxPoint x="222.25" y="139.9917695473251" />
34
+ <mxPoint x="222.25" y="93.96707818930041" />
35
+ </Array>
36
+ <mxPoint x="241" y="94.02348099733715" as="targetPoint" />
37
+ </mxGeometry>
38
+ </mxCell>
39
+ <mxCell id="9" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;strokeWidth=2;strokeColor=#808080;fontFamily=Helvetica;fontColor=#B3B3B3;" edge="1" parent="3" source="10" target="7">
40
+ <mxGeometry relative="1" as="geometry" />
41
+ </mxCell>
42
+ <mxCell id="10" value="&lt;font style=&quot;font-size: 15px;&quot;&gt;Aspect&lt;br&gt;State &lt;font color=&quot;#324259&quot;&gt;\(\mathcal{S}^t_1\)&lt;/font&gt;&lt;/font&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="3">
43
+ <mxGeometry x="133.5" y="125.84876543209876" width="68.75" height="28.76543209876543" as="geometry" />
44
+ </mxCell>
45
+ <mxCell id="11" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeWidth=2;strokeColor=#808080;fontFamily=Helvetica;fontColor=#B3B3B3;" edge="1" parent="3" target="10">
46
+ <mxGeometry relative="1" as="geometry">
47
+ <mxPoint x="112.25" y="139.9917695473251" as="sourcePoint" />
48
+ </mxGeometry>
49
+ </mxCell>
50
+ <mxCell id="12" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;strokeWidth=2;strokeColor=#808080;fontFamily=Helvetica;fontColor=#B3B3B3;" edge="1" parent="3" source="13" target="6">
51
+ <mxGeometry relative="1" as="geometry" />
52
+ </mxCell>
53
+ <mxCell id="13" value="" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=7.689655172413779;fontSize=15;shadow=1;fillColor=#fff2cc;strokeColor=#d6b656;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="3">
54
+ <mxGeometry x="98.13" y="31.724279835390945" width="129.75" height="61.36625514403292" as="geometry" />
55
+ </mxCell>
56
+ <mxCell id="14" value="&lt;b style=&quot;border-color: var(--border-color); color: rgb(26, 26, 26); font-family: Helvetica; font-size: 16px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; letter-spacing: normal; orphans: 2; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;&quot;&gt;Agent #3&amp;nbsp;&lt;/b&gt;&lt;span style=&quot;color: rgb(26, 26, 26); font-family: Helvetica; font-size: 16px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;&quot;&gt;&amp;nbsp;&lt;/span&gt;" style="text;whiteSpace=wrap;html=1;align=right;" vertex="1" parent="3">
57
+ <mxGeometry x="252" y="-2" width="110" height="30" as="geometry" />
58
+ </mxCell>
59
+ <mxCell id="15" value="" style="group" connectable="0" vertex="1" parent="1">
60
+ <mxGeometry x="158" y="1744" width="360" height="233" as="geometry" />
61
+ </mxCell>
62
+ <mxCell id="16" value="" style="rounded=0;whiteSpace=wrap;html=1;glass=0;shadow=0;fontSize=14;fillColor=#FAFAFA;opacity=97;strokeColor=#4D4D4D;strokeWidth=1.5;arcSize=2;dashed=1;dashPattern=1 1;fontFamily=Helvetica;" vertex="1" parent="15">
63
+ <mxGeometry width="360" height="233" as="geometry" />
64
+ </mxCell>
65
+ <mxCell id="17" value="" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#dae8fc;strokeColor=#6c8ebf;shadow=1;glass=0;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="15">
66
+ <mxGeometry x="17" y="114.02057613168724" width="98.25" height="67.11934156378601" as="geometry" />
67
+ </mxCell>
68
+ <mxCell id="18" value="&lt;font style=&quot;font-size: 16px;&quot;&gt;Progression&lt;br style=&quot;font-size: 16px;&quot;&gt;Analysis&amp;nbsp;&lt;/font&gt;🔥" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#ffe6cc;strokeColor=#d79b00;shadow=1;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="15">
69
+ <mxGeometry x="241" y="59.92798353909465" width="100" height="67.11934156378601" as="geometry" />
70
+ </mxCell>
71
+ <mxCell id="19" value="&lt;font style=&quot;font-size: 16px;&quot;&gt;Aspect&lt;br style=&quot;font-size: 16px;&quot;&gt;Promoter&amp;nbsp;&lt;/font&gt;&lt;font style=&quot;font-size: 16px;&quot;&gt;❄️&lt;/font&gt;" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#dae8fc;strokeColor=#6c8ebf;shadow=1;glass=0;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="15">
72
+ <mxGeometry x="242" y="144.78600823045267" width="100" height="67.11934156378601" as="geometry" />
73
+ </mxCell>
74
+ <mxCell id="20" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeWidth=2;strokeColor=#808080;fontFamily=Helvetica;fontColor=#B3B3B3;" edge="1" parent="15">
75
+ <mxGeometry relative="1" as="geometry">
76
+ <mxPoint x="112.25" y="139.9917695473251" as="sourcePoint" />
77
+ <mxPoint x="133.5" y="140.23148148148175" as="targetPoint" />
78
+ </mxGeometry>
79
+ </mxCell>
80
+ <mxCell id="21" value="" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=7.689655172413779;fontSize=15;shadow=1;fillColor=#fff2cc;strokeColor=#d6b656;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="15">
81
+ <mxGeometry x="98.13" y="30.68312757201646" width="129.75" height="61.36625514403292" as="geometry" />
82
+ </mxCell>
83
+ <mxCell id="22" value="&lt;p style=&quot;font-size: 16px;&quot;&gt;&lt;font style=&quot;font-size: 16px;&quot;&gt;&lt;b&gt;Agent #2&lt;/b&gt;&amp;nbsp;&lt;/font&gt;&lt;/p&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=right;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=14;strokeWidth=2;fontFamily=Helvetica;fontColor=#1A1A1A;" vertex="1" parent="15">
84
+ <mxGeometry x="270" width="90" height="28.77" as="geometry" />
85
+ </mxCell>
86
+ <mxCell id="23" value="" style="endArrow=none;dashed=1;html=1;strokeWidth=2;rounded=0;strokeColor=#4D4D4D;fontFamily=Helvetica;" edge="1" parent="1">
87
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
88
+ <mxPoint x="779" y="2020" as="sourcePoint" />
89
+ <mxPoint x="779" y="1690" as="targetPoint" />
90
+ </mxGeometry>
91
+ </mxCell>
92
+ <mxCell id="24" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeColor=#EBA55E;strokeWidth=2;exitX=0.5;exitY=0;exitDx=0;exitDy=0;fontFamily=Helvetica;" edge="1" source="48" target="35" parent="1">
93
+ <mxGeometry relative="1" as="geometry">
94
+ <Array as="points">
95
+ <mxPoint x="676" y="1771" />
96
+ </Array>
97
+ <mxPoint x="652.7241379310344" y="1836" as="sourcePoint" />
98
+ </mxGeometry>
99
+ </mxCell>
100
+ <mxCell id="25" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=-0.038;entryY=0.683;entryDx=0;entryDy=0;entryPerimeter=0;strokeColor=#EBA55E;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
101
+ <mxGeometry relative="1" as="geometry">
102
+ <mxPoint x="499.75" y="1840.5" as="sourcePoint" />
103
+ <mxPoint x="578.75" y="1840.5" as="targetPoint" />
104
+ </mxGeometry>
105
+ </mxCell>
106
+ <mxCell id="26" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;fontSize=14;strokeColor=#EBA55E;fontColor=#666666;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
107
+ <mxGeometry relative="1" as="geometry">
108
+ <mxPoint x="477.75" y="1805" as="sourcePoint" />
109
+ <mxPoint x="556.75" y="1806" as="targetPoint" />
110
+ </mxGeometry>
111
+ </mxCell>
112
+ <mxCell id="27" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=-0.038;entryY=0.683;entryDx=0;entryDy=0;entryPerimeter=0;strokeColor=#6484B3;fontColor=#666666;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
113
+ <mxGeometry relative="1" as="geometry">
114
+ <mxPoint x="477.75" y="1896" as="sourcePoint" />
115
+ <mxPoint x="556.75" y="1896" as="targetPoint" />
116
+ </mxGeometry>
117
+ </mxCell>
118
+ <mxCell id="28" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=-0.038;entryY=0.683;entryDx=0;entryDy=0;entryPerimeter=0;strokeColor=#6484B3;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
119
+ <mxGeometry relative="1" as="geometry">
120
+ <mxPoint x="499.75" y="1927" as="sourcePoint" />
121
+ <mxPoint x="578.75" y="1927" as="targetPoint" />
122
+ </mxGeometry>
123
+ </mxCell>
124
+ <mxCell id="29" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;fontFamily=Helvetica;fontSize=12;fontColor=default;strokeColor=#808080;strokeWidth=2;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1">
125
+ <mxGeometry relative="1" as="geometry">
126
+ <mxPoint x="79" y="1918" as="sourcePoint" />
127
+ <mxPoint x="894.3750000000002" y="1982.5" as="targetPoint" />
128
+ <Array as="points">
129
+ <mxPoint x="81" y="1918" />
130
+ <mxPoint x="81" y="2015" />
131
+ <mxPoint x="894" y="2015" />
132
+ </Array>
133
+ </mxGeometry>
134
+ </mxCell>
135
+ <mxCell id="30" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0;entryY=0.75;entryDx=0;entryDy=0;strokeColor=#6484B3;fontSize=13;strokeWidth=2;fontFamily=Helvetica;" edge="1" source="31" target="35" parent="1">
136
+ <mxGeometry relative="1" as="geometry" />
137
+ </mxCell>
138
+ <mxCell id="31" value="Topic&lt;br style=&quot;font-size: 15px;&quot;&gt;Candidates" style="text;html=1;strokeColor=none;fillColor=#FFFFFF;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;opacity=90;fontFamily=Helvetica;" vertex="1" parent="1">
139
+ <mxGeometry x="724.75" y="1900.5" width="68.25" height="47.5" as="geometry" />
140
+ </mxCell>
141
+ <mxCell id="32" value="" style="endArrow=none;html=1;rounded=1;entryX=1;entryY=0.633;entryDx=0;entryDy=0;entryPerimeter=0;strokeColor=#6484B3;fontSize=13;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
142
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
143
+ <mxPoint x="679.75" y="1894" as="sourcePoint" />
144
+ <mxPoint x="719.75" y="1961" as="targetPoint" />
145
+ <Array as="points">
146
+ <mxPoint x="689.75" y="1894" />
147
+ <mxPoint x="729.75" y="1961" />
148
+ </Array>
149
+ </mxGeometry>
150
+ </mxCell>
151
+ <mxCell id="33" value="" style="endArrow=none;html=1;rounded=0;entryX=-0.002;entryY=0.577;entryDx=0;entryDy=0;entryPerimeter=0;strokeColor=#6484B3;fontSize=13;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
152
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
153
+ <mxPoint x="708.75" y="1926" as="sourcePoint" />
154
+ <mxPoint x="718.63" y="1925.81" as="targetPoint" />
155
+ </mxGeometry>
156
+ </mxCell>
157
+ <mxCell id="34" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;strokeColor=#6484B3;strokeWidth=2;fontFamily=Helvetica;" edge="1" source="35" target="37" parent="1">
158
+ <mxGeometry relative="1" as="geometry" />
159
+ </mxCell>
160
+ <mxCell id="35" value="&lt;font style=&quot;font-size: 16px;&quot;&gt;Topic Candidate&lt;br&gt;Ranker&lt;font style=&quot;font-size: 16px;&quot;&gt;🔥&lt;/font&gt;&lt;/font&gt;" style="rounded=1;whiteSpace=wrap;html=1;fontSize=14;fillColor=#ffe6cc;strokeColor=#d79b00;shadow=1;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
161
+ <mxGeometry x="822.75" y="1733" width="134.25" height="76" as="geometry" />
162
+ </mxCell>
163
+ <mxCell id="36" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;strokeColor=#6484B3;strokeWidth=2;exitX=0.628;exitY=0.984;exitDx=0;exitDy=0;fontFamily=Helvetica;exitPerimeter=0;" edge="1" source="54" target="43" parent="1">
164
+ <mxGeometry relative="1" as="geometry">
165
+ <Array as="points">
166
+ <mxPoint x="1056" y="1883" />
167
+ <mxPoint x="894" y="1883" />
168
+ </Array>
169
+ <mxPoint x="1033.44" y="1799" as="sourcePoint" />
170
+ <mxPoint x="890.065" y="1905" as="targetPoint" />
171
+ </mxGeometry>
172
+ </mxCell>
173
+ <mxCell id="37" value="&lt;div style=&quot;font-size: 15px;&quot;&gt;&lt;span style=&quot;background-color: initial; font-size: 15px;&quot;&gt;&lt;font style=&quot;font-size: 15px;&quot;&gt;Top-\(K\)&amp;nbsp;candidates&lt;/font&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;font-size: 15px;&quot;&gt;&lt;br&gt;&lt;/div&gt;" style="text;whiteSpace=wrap;html=1;fontSize=15;fontFamily=Helvetica;fontColor=default;align=center;strokeWidth=2;" vertex="1" parent="1">
174
+ <mxGeometry x="987" y="1734.25" width="101.5" height="72.75" as="geometry" />
175
+ </mxCell>
176
+ <mxCell id="38" value="" style="endArrow=none;dashed=1;html=1;strokeWidth=2;rounded=0;strokeColor=#4D4D4D;fontFamily=Helvetica;" edge="1" parent="1">
177
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
178
+ <mxPoint x="1091" y="1834" as="sourcePoint" />
179
+ <mxPoint x="776.75" y="1834" as="targetPoint" />
180
+ </mxGeometry>
181
+ </mxCell>
182
+ <mxCell id="39" value="Step 1: Local Analysis&amp;nbsp;with Specialized Agents" style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=18;fontFamily=Helvetica;fontColor=default;fontStyle=1;strokeWidth=2;" vertex="1" parent="1">
183
+ <mxGeometry x="57.75" y="1682" width="404.25" height="33" as="geometry" />
184
+ </mxCell>
185
+ <mxCell id="40" value="Step 2: Global Coordination" style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=18;fontFamily=Helvetica;fontColor=default;fontStyle=1;strokeWidth=2;" vertex="1" parent="1">
186
+ <mxGeometry x="788.25" y="1686" width="262.75" height="30" as="geometry" />
187
+ </mxCell>
188
+ <mxCell id="41" value="Step 3: Utterance Generation" style="text;html=1;strokeColor=none;fillColor=none;align=left;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=18;fontFamily=Helvetica;fontColor=default;fontStyle=1;strokeWidth=2;" vertex="1" parent="1">
189
+ <mxGeometry x="788.25" y="1841" width="262.75" height="30" as="geometry" />
190
+ </mxCell>
191
+ <mxCell id="42" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;fontFamily=Helvetica;fontSize=12;fontColor=default;strokeWidth=2;strokeColor=#4D4D4D;" edge="1" source="43" target="44" parent="1">
192
+ <mxGeometry relative="1" as="geometry" />
193
+ </mxCell>
194
+ <mxCell id="43" value="Utterance Generator&lt;br style=&quot;font-size: 16px;&quot;&gt;&lt;font style=&quot;font-size: 16px;&quot;&gt;(&lt;i style=&quot;font-size: 16px;&quot;&gt;Finetuned&lt;/i&gt;🔥&amp;nbsp;&lt;i style=&quot;font-size: 16px;&quot;&gt;or &lt;br&gt;LLM-based&lt;/i&gt;❄️)&lt;/font&gt;" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#D5E8D4;strokeColor=#82b366;shadow=1;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
195
+ <mxGeometry x="807.75" y="1908" width="173.25" height="79.5" as="geometry" />
196
+ </mxCell>
197
+ <mxCell id="44" value="&lt;font style=&quot;font-size: 15px;&quot;&gt;Utterance&lt;br&gt;&lt;br&gt;&lt;/font&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
198
+ <mxGeometry x="1009.25" y="1932.75" width="78.25" height="30" as="geometry" />
199
+ </mxCell>
200
+ <mxCell id="45" value="\(\mathcal{C}^t_{31}&lt;br style=&quot;font-size: 15px;&quot;&gt;, \mathcal{C}^t_{32},..., \mathcal{C}^t_{3m}\)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontColor=#324259;fontFamily=Helvetica;" vertex="1" parent="1">
201
+ <mxGeometry x="556.75" y="1881" width="130" height="30" as="geometry" />
202
+ </mxCell>
203
+ <mxCell id="46" value="\(\mathcal{C}^t_{11}&lt;br style=&quot;font-size: 15px;&quot;&gt;, \mathcal{C}^t_{12},..., \mathcal{C}^t_{1m}\)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontColor=#324259;fontFamily=Helvetica;" vertex="1" parent="1">
204
+ <mxGeometry x="600.75" y="1944.5" width="130.25" height="30" as="geometry" />
205
+ </mxCell>
206
+ <mxCell id="47" value="\(\mathcal{C}^t_{21}&lt;br style=&quot;font-size: 15px;&quot;&gt;, \mathcal{C}^t_{22},..., \mathcal{C}^t_{2m}\)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontColor=#324259;fontFamily=Helvetica;" vertex="1" parent="1">
207
+ <mxGeometry x="576.75" y="1913" width="130" height="30" as="geometry" />
208
+ </mxCell>
209
+ <mxCell id="48" value="&lt;span style=&quot;color: rgb(0, 0, 0); font-size: 15px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;&quot;&gt;Progression&lt;br style=&quot;font-size: 15px;&quot;&gt;&lt;/span&gt;&lt;span style=&quot;color: rgb(0, 0, 0); font-size: 15px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;&quot;&gt;Signals&lt;/span&gt;" style="text;whiteSpace=wrap;html=1;align=center;fontSize=15;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
210
+ <mxGeometry x="643.75" y="1820" width="65" height="38" as="geometry" />
211
+ </mxCell>
212
+ <mxCell id="49" value="&lt;font style=&quot;font-size: 15px;&quot;&gt;$$\mathbf{p}_1^t$$&lt;/font&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontColor=#825C39;fontFamily=Helvetica;" vertex="1" parent="1">
213
+ <mxGeometry x="603.75" y="1854" width="20" height="30" as="geometry" />
214
+ </mxCell>
215
+ <mxCell id="50" value="$$\mathbf{p}_2^t$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontColor=#825C39;fontFamily=Helvetica;" vertex="1" parent="1">
216
+ <mxGeometry x="585.75" y="1822" width="20" height="30" as="geometry" />
217
+ </mxCell>
218
+ <mxCell id="51" value="$$\mathbf{p}_3^t$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontColor=#825C39;fontFamily=Helvetica;" vertex="1" parent="1">
219
+ <mxGeometry x="563.75" y="1789" width="20" height="30" as="geometry" />
220
+ </mxCell>
221
+ <mxCell id="52" value="" style="endArrow=none;html=1;rounded=1;entryX=1;entryY=0.633;entryDx=0;entryDy=0;entryPerimeter=0;strokeColor=#EBA55E;fontSize=13;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
222
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
223
+ <mxPoint x="592.75" y="1806" as="sourcePoint" />
224
+ <mxPoint x="632.75" y="1872.9899999999998" as="targetPoint" />
225
+ <Array as="points">
226
+ <mxPoint x="602.75" y="1806" />
227
+ <mxPoint x="642.75" y="1873" />
228
+ </Array>
229
+ </mxGeometry>
230
+ </mxCell>
231
+ <mxCell id="53" value="" style="endArrow=none;html=1;rounded=0;strokeColor=#EBA55E;fontSize=13;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
232
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
233
+ <mxPoint x="622.75" y="1840" as="sourcePoint" />
234
+ <mxPoint x="632.75" y="1840" as="targetPoint" />
235
+ </mxGeometry>
236
+ </mxCell>
237
+ <mxCell id="54" value="&lt;span style=&quot;font-size: 15px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;&quot;&gt;\(\hat{\mathcal{C}}^t_1, \hat{\mathcal{C}}^t_2, ..., \hat{\mathcal{C}}^t_K\)&lt;/span&gt;" style="text;whiteSpace=wrap;html=1;fontColor=#324259;fontFamily=Helvetica;" vertex="1" parent="1">
238
+ <mxGeometry x="987" y="1768" width="110" height="40" as="geometry" />
239
+ </mxCell>
240
+ <mxCell id="55" value="&lt;span style=&quot;font-size: 15px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: center; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(251, 251, 251); text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline !important;&quot;&gt;\(\mathcal{U}^t\)&lt;/span&gt;" style="text;whiteSpace=wrap;html=1;fontColor=#262E22;fontFamily=Helvetica;" vertex="1" parent="1">
241
+ <mxGeometry x="1038.38" y="1945.5" width="20" height="40" as="geometry" />
242
+ </mxCell>
243
+ <mxCell id="56" value="" style="rounded=0;whiteSpace=wrap;html=1;glass=0;shadow=0;fontSize=14;fillColor=#FAFAFA;opacity=97;strokeColor=#4D4D4D;strokeWidth=1.5;arcSize=2;dashed=1;dashPattern=1 1;fontFamily=Helvetica;" vertex="1" parent="1">
244
+ <mxGeometry x="178.75" y="1774" width="360" height="233" as="geometry" />
245
+ </mxCell>
246
+ <mxCell id="57" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeWidth=2;strokeColor=#808080;fontFamily=Helvetica;fontColor=#B3B3B3;" edge="1" source="58" parent="1">
247
+ <mxGeometry relative="1" as="geometry">
248
+ <mxPoint x="314.25" y="1920.25" as="targetPoint" />
249
+ </mxGeometry>
250
+ </mxCell>
251
+ <mxCell id="58" value="&lt;font style=&quot;font-size: 16px;&quot;&gt;&lt;font style=&quot;font-size: 16px;&quot;&gt;State&lt;br style=&quot;font-size: 16px;&quot;&gt;Tracker&amp;nbsp;&lt;/font&gt;&lt;font style=&quot;font-size: 16px;&quot;&gt;❄️&lt;/font&gt;&lt;/font&gt;" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#dae8fc;strokeColor=#6c8ebf;shadow=1;glass=0;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
252
+ <mxGeometry x="194.75" y="1885" width="98.25" height="70" as="geometry" />
253
+ </mxCell>
254
+ <mxCell id="59" value="&lt;font style=&quot;font-size: 16px;&quot;&gt;Progression&lt;br style=&quot;font-size: 16px;&quot;&gt;Analysis&amp;nbsp;&lt;/font&gt;🔥" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#ffe6cc;strokeColor=#d79b00;shadow=1;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
255
+ <mxGeometry x="425.75" y="1836.5" width="100" height="70" as="geometry" />
256
+ </mxCell>
257
+ <mxCell id="60" value="&lt;font style=&quot;font-size: 16px;&quot;&gt;Aspect&amp;nbsp;&lt;br style=&quot;font-size: 16px;&quot;&gt;&amp;nbsp;Promoter&lt;/font&gt;&lt;font style=&quot;font-size: 16px;&quot;&gt;❄️&lt;/font&gt;" style="rounded=1;whiteSpace=wrap;html=1;fontSize=16;fillColor=#dae8fc;strokeColor=#6c8ebf;shadow=1;glass=0;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
258
+ <mxGeometry x="426.75" y="1924" width="100" height="70" as="geometry" />
259
+ </mxCell>
260
+ <mxCell id="61" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;strokeWidth=2;strokeColor=#808080;fontFamily=Helvetica;fontColor=#B3B3B3;" edge="1" source="73" target="59" parent="1">
261
+ <mxGeometry relative="1" as="geometry">
262
+ <Array as="points">
263
+ <mxPoint x="407" y="1919" />
264
+ <mxPoint x="407" y="1872" />
265
+ </Array>
266
+ <mxPoint x="387" y="1920.25" as="sourcePoint" />
267
+ </mxGeometry>
268
+ </mxCell>
269
+ <mxCell id="62" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;strokeWidth=2;strokeColor=#808080;fontFamily=Helvetica;fontColor=#B3B3B3;exitX=1;exitY=0.5;exitDx=0;exitDy=0;" edge="1" source="73" target="60" parent="1">
270
+ <mxGeometry relative="1" as="geometry">
271
+ <mxPoint x="390" y="1920" as="sourcePoint" />
272
+ <Array as="points">
273
+ <mxPoint x="390" y="1920" />
274
+ <mxPoint x="408" y="1920" />
275
+ <mxPoint x="408" y="1960" />
276
+ </Array>
277
+ </mxGeometry>
278
+ </mxCell>
279
+ <mxCell id="63" style="edgeStyle=orthogonalEdgeStyle;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;strokeWidth=2;strokeColor=#808080;fontFamily=Helvetica;fontColor=#B3B3B3;" edge="1" source="64" target="59" parent="1">
280
+ <mxGeometry relative="1" as="geometry" />
281
+ </mxCell>
282
+ <mxCell id="64" value="&lt;font style=&quot;font-size: 15px;&quot;&gt;Typical Target States \(\mathbf{E}_1\)&lt;/font&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=7.689655172413779;fontSize=15;shadow=1;fillColor=#fff2cc;strokeColor=#d6b656;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
283
+ <mxGeometry x="276.88" y="1785" width="129.75" height="64" as="geometry" />
284
+ </mxCell>
285
+ <mxCell id="65" value="&lt;p style=&quot;font-size: 16px;&quot;&gt;&lt;font style=&quot;font-size: 16px;&quot;&gt;&lt;b&gt;Agent #1&lt;/b&gt;&amp;nbsp;&lt;/font&gt;&lt;/p&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=right;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=14;strokeWidth=2;fontFamily=Helvetica;fontColor=#1A1A1A;" vertex="1" parent="1">
286
+ <mxGeometry x="450" y="1773" width="88.75" height="30" as="geometry" />
287
+ </mxCell>
288
+ <mxCell id="66" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;strokeColor=#6484B3;strokeWidth=2;fontFamily=Helvetica;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" source="60" target="46" parent="1">
289
+ <mxGeometry relative="1" as="geometry">
290
+ <mxPoint x="520.75" y="1960" as="sourcePoint" />
291
+ <mxPoint x="600" y="1960" as="targetPoint" />
292
+ <Array as="points">
293
+ <mxPoint x="550" y="1959" />
294
+ </Array>
295
+ </mxGeometry>
296
+ </mxCell>
297
+ <mxCell id="67" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;fontSize=14;strokeColor=#EBA55E;strokeWidth=2;fontFamily=Helvetica;exitX=1;exitY=0.5;exitDx=0;exitDy=0;" edge="1" source="59" parent="1">
298
+ <mxGeometry relative="1" as="geometry">
299
+ <mxPoint x="598.75" y="1871" as="targetPoint" />
300
+ <mxPoint x="530" y="1870" as="sourcePoint" />
301
+ </mxGeometry>
302
+ </mxCell>
303
+ <mxCell id="68" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;fontSize=14;strokeColor=#808080;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
304
+ <mxGeometry relative="1" as="geometry">
305
+ <mxPoint x="154.75" y="1920.79" as="sourcePoint" />
306
+ <mxPoint x="195.75" y="1920.79" as="targetPoint" />
307
+ </mxGeometry>
308
+ </mxCell>
309
+ <mxCell id="69" value="&lt;font style=&quot;font-size: 15px;&quot;&gt;Dialogue History&amp;nbsp;\(\mathcal{H}^t\)&lt;font style=&quot;font-size: 15px;&quot;&gt;&lt;br style=&quot;font-size: 15px;&quot;&gt;&lt;/font&gt;&lt;/font&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
310
+ <mxGeometry x="44" y="1871.79" width="75.75" height="40" as="geometry" />
311
+ </mxCell>
312
+ <mxCell id="70" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;fontSize=14;exitX=1;exitY=0.5;exitDx=0;exitDy=0;strokeColor=#808080;strokeWidth=2;fontFamily=Helvetica;" edge="1" source="69" parent="1">
313
+ <mxGeometry relative="1" as="geometry">
314
+ <mxPoint x="121.75" y="1891.79" as="sourcePoint" />
315
+ <mxPoint x="175.75" y="1891.31" as="targetPoint" />
316
+ </mxGeometry>
317
+ </mxCell>
318
+ <mxCell id="71" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;fontSize=14;strokeColor=#808080;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
319
+ <mxGeometry relative="1" as="geometry">
320
+ <mxPoint x="114.75" y="1860.79" as="sourcePoint" />
321
+ <mxPoint x="155.75" y="1860.79" as="targetPoint" />
322
+ </mxGeometry>
323
+ </mxCell>
324
+ <mxCell id="72" value="" style="endArrow=none;html=1;rounded=0;strokeColor=#808080;strokeWidth=2;fontFamily=Helvetica;" edge="1" parent="1">
325
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
326
+ <mxPoint x="154.75" y="1920.79" as="sourcePoint" />
327
+ <mxPoint x="114.75" y="1861.79" as="targetPoint" />
328
+ </mxGeometry>
329
+ </mxCell>
330
+ <mxCell id="73" value="&lt;font style=&quot;font-size: 15px;&quot;&gt;State&amp;nbsp;&amp;nbsp;&lt;br&gt;&amp;nbsp;Summary&amp;nbsp;&lt;br&gt;&lt;font color=&quot;#324259&quot;&gt;\(\mathcal{S}^t_1\)&lt;/font&gt;&lt;/font&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=15;strokeWidth=2;fontFamily=Helvetica;" vertex="1" parent="1">
331
+ <mxGeometry x="315.56" y="1884" width="74.44" height="70" as="geometry" />
332
+ </mxCell>
333
+ </root>
334
+ </mxGraphModel>
335
+ </diagram>
336
+ </mxfile>
2312.11792/paper_text/intro_method.md ADDED
@@ -0,0 +1,99 @@
1
+ # Introduction
2
+
3
+ The use of human language is intentional and purposeful (Austin 1975; Grice 1975). In daily communication, we use language deliberately to achieve various goals, ranging from simple inquiries about a product's pricing to complex objectives like resolving conflicts. Developing goal-oriented dialogue systems has also been a prominent research topic.
4
+
5
+ In the past few years, there has been growing research interest in dialogue tasks with more complex objectives, such as persuasion (Wang et al. 2019), negotiation (He et al. 2018), and emotional support (Liu et al. 2021b). Compared to traditional service-focused goal-oriented dialogue systems (Rieser and Moore 2005; Boyer et al. 2011; Wen et al. 2016; Liu et al. 2022), these tasks require much more sophisticated strategic reasoning and communication skills. Recent studies show that even state-of-the-art Large Language Models (LLMs) struggle with these tasks, where they exhibit weak awareness of the overall dialogue progression and
6
+
7
+ Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
8
+
9
+ fail to accomplish a complex dialogue goal through multiturn interactions strategically (Zhao et al. 2023a). Moreover, another major challenge lies in the difficulty of objectively measuring the achievement of such complex dialogue goals in a quantifiable and reliable way. Consequently, most existing research stays overly focused on how to fit the groundtruth data, without explicit consideration of how each utterance could contribute to the final objective (Zhou et al. 2019a; Joshi et al. 2021; Chen et al. 2023). In the few works that attempt to model these dialogue goals explicitly, it remains highly challenging to optimize the dialogue procedure towards them directly due to their inherent intangibility (Cheng et al. 2022; Sanders et al. 2022; Zhou et al. 2023).
10
+
11
+ In this work, we highlight the multifaceted nature of complex dialogue goals, which typically encompass multiple interdependent aspects that must be collectively promoted to approach the final objective. For instance, psychological guidelines suggest that Emotional Support Conversations (ESC) should include three key aspects:<sup>1</sup> *exploration* (identify the support-seeker's problem), *comforting* (comfort the seeker's emotion through expressing empathy), and *action* (help the seeker solve the problem) (Hill 2009; Liu et al. 2021b). These aspects are interdependent. For example, exploring the seeker's situation lays the foundation for conveying appropriate empathy, while comforting the user to be in a better emotional state makes them more willing to share details about their experiences and feelings.
12
+
13
+ Compared with directly optimizing towards the complex dialogue goal, it is more feasible to accomplish it by comprehensively considering and jointly promoting its different aspects. Nonetheless, due to the interdependence among the aspects, the interlocutor still faces the challenge of strategically coordinating their priority during the conversation. To achieve this, they must dynamically track the states of all the aspects and analyze their progression, that is, how much progress has been achieved so far and where the state of each aspect is heading. In ESC, for instance, a seasoned supporter would continuously record information about the seeker's situation and keep estimating the underlying root problem for further exploration. They would also monitor the progression of the *comforting* and *action* aspects simultaneously. Through such comprehensive analysis, the supporter can determine which aspect to prioritize at each point of the conversation.
14
+
15
+ <sup>1</sup> Some works refer to the "aspects" here as "stages", but they also emphasize that these "stages" are closely interwoven in practice rather than sequential (Liu et al. 2021b). Given that, we uniformly call them "aspects" in our work to avoid misunderstanding about their sequential nature.
16
+
17
18
+
19
+ Based on the above insight, we propose a novel dialogue framework, COOPER, which coordinates multiple specialized agents, each dedicated to a specific aspect, to approach a complex dialogue goal. Specifically, by tracking the current state of its assigned aspect, each agent analyzes the progression of this aspect and suggests several topic candidates for the next utterance that can further promote the aspect (e.g., the agent responsible for the *exploration* aspect in ESC will suggest questions to ask the seeker). Then, we coordinate the specialized agents by ranking all the topic candidates with consideration of the overall dialogue progression. Finally, the top-ranked topic candidates are used to guide the generation of the next utterance.
20
+
21
+ Through this divide-and-conquer manner, we make the complex dialogue goal more approachable and elicit greater intelligence via the collaboration of individual agents. Experiments on ESC and persuasion dialogues demonstrate the superiority of COOPER over a set of competitive LLM-based methods and previous state-of-the-art.
22
+
23
+ In summary, our contributions are as follows:
24
+
25
+ - To the best of our knowledge, this is the first work that explores how to achieve a complex dialogue goal by coordinating the joint promotion of its different aspects.
26
+ - We propose COOPER, an innovative framework that coordinates multiple specialized agents to collaboratively work towards a complex dialogue goal.
27
+ - Extensive experiments demonstrate the effectiveness of our approach and also reveal the limitations of current LLMs in handling complex dialogue goals.
28
+
29
+ # Method
30
+
31
+ **Problem Formulation** We consider the problem of how to achieve a complex dialogue goal that encompasses multiple aspects, denoted as $\{\mathcal{T}_1, \mathcal{T}_2, ..., \mathcal{T}_{n_T}\}$ , where $n_T$ is the number of aspects. Given the dialogue history $\mathcal{H}^t$ at the t-th dialogue round, the system generates the next utterance $\mathcal{U}^t$ , which promotes one or several dialogue goal aspects.
32
+
33
+ **ESC Framework** Following the ESC framework defined by Liu et al. (2021b), our implementation considers the following aspects for effective emotional support: 1) *Exploration*: identify the support-seeker's problems that cause their distress; 2) *Comforting*: comfort the seeker's emotion by expressing empathy and understanding; 3) *Action*: help the seeker conceive actionable plans to resolve the problems.
34
+
35
+ **Persuasion Dialogues** Referring to the elaboration likelihood model of persuasion proposed by Petty et al. (1986), we consider the following aspects within the broad goal of persuasion in our implementation: 1) *Attention*: capture the persuadee's attention and elicit their motivation to discuss the related topic; 2) *Appeal*: present persuasive arguments via different strategies and encourage the persuadee to think deeply about the arguments; 3) *Proposition*: explicitly state the persuader's position or call to action, and seek confirmation of the persuadee's attitude towards the proposition.
36
+
37
+ Figure 1 presents an overview of our proposed framework. In this section, we illustrate the three major steps within it, as well as its training procedure.
38
+
39
+ ![](_page_2_Figure_0.jpeg)
40
+
41
+ Figure 1: Illustration of our proposed framework COOPER (suppose the number of aspects within the dialogue goal $n_T$ =3). The icons of snowflake and flame denote that the module is frozen (LLM prompt-based) or finetuned, respectively.
42
+
43
+ We devise multiple specialized agents to separately tackle different dialogue goal aspects. We denote them as $\{\mathcal{A}_1, \mathcal{A}_2, ..., \mathcal{A}_{n_T}\}$, with agent $\mathcal{A}_i$ dedicated to the aspect $\mathcal{T}_i$ ($i$=1, 2, ..., $n_T$). Each agent consists of three modules: a *state tracker*, an *aspect promoter*, and a *progression analysis* module.
44
+
45
+ Given the context $\mathcal{H}^t$ at the t-th dialogue round, the state tracker of $\mathcal{A}_i$ utilizes an LLM to summarize the current state of its assigned aspect, producing a summary $\mathcal{S}_i^t$ . For example, in order to get the state summary for the *exploration* aspect in ESC, we prompt the LLM to "summarize the seeker's experience that caused their emotional distress".<sup>2</sup>
46
+
47
+ The aspect promoter in $\mathcal{A}_i$ then suggests m topic candidates $\{\mathcal{C}_{i1}^t, \mathcal{C}_{i2}^t, ..., \mathcal{C}_{im}^t\}$ that can be used to further promote the assigned aspect, based on $\mathcal{H}^t$ and $\mathcal{S}_i^t$ . This module is also realized by prompting an LLM. The topic candidates here can be seen as a brief content outline for the following utterance. For instance, the aspect promoter of the exploration agent in ESC is implemented by instructing an LLM to "list < m > questions that the supporter can ask the seeker to further understand their situation (each less than 20 words)".
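As a concrete illustration of how such an aspect-promoter prompt could be assembled for the exploration agent, here is a minimal sketch. The wording paraphrases the instruction quoted above; the function name and string layout are our own, since the exact templates are given in the paper's appendix:

```python
def exploration_promoter_prompt(history: str, state_summary: str, m: int = 3) -> str:
    """Assemble an illustrative aspect-promoter prompt for the exploration agent.

    history:       the dialogue context H^t
    state_summary: the state tracker's summary S_i^t for this aspect
    m:             number of topic candidates to request
    """
    return (
        "Dialogue so far:\n" + history + "\n"
        "Known seeker situation: " + state_summary + "\n"
        f"List {m} questions that the supporter can ask the seeker to further "
        "understand their situation (each less than 20 words)."
    )
```

The LLM's numbered answers would then be split into the $m$ topic candidates $\{\mathcal{C}_{i1}^t, ..., \mathcal{C}_{im}^t\}$.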
48
+
49
+ The progression analysis module in $\mathcal{A}_i$ produces a signal $\mathbf{p}_i^t$ for its assigned aspect. This signal is expected to indicate how much progress has been achieved so far regarding this aspect and its estimated target state at the end of the conversation. To achieve this, we construct a state embedding space to consider the evolving path of the past states in this space and estimate the position of the potential target state regarding each aspect. Specifically, given the state summary $\mathcal{S}_i^t$, we map it into the state embedding space by encoding it with a pretrained sentence encoder, MPNet (Song et al. 2020). We denote the encoded embedding of $\mathcal{S}_i^t$ as $\mathbf{s}_i^t \in \mathbb{R}^{n_d}$, where $n_d$ is the dimension of the state embedding. Intuitively, the information in $\mathbf{s}_i^t$ summarizes the progress that has been made so far regarding the aspect $\mathcal{T}_i$.
50
+
51
+ To estimate the target state of $\mathcal{T}_i$, we first resort to the dialogues in the training set and record the states of each aspect at the end of these conversations to obtain the typical target states of this aspect. For instance, to obtain the typical target states for the exploration aspect in ESC, for each dialogue in the training set, we adopt the same practice as in the state tracker to summarize the seeker's problem based on the complete dialogue. Then, we map these summaries to the state embedding space. Denote the matrix that encompasses all the obtained target state embeddings of this aspect as $\mathbf{E}_i \in \mathbb{R}^{N_D \times n_d}$, where $N_D$ is the number of dialogues in the training set. After that, we cluster the embeddings in $\mathbf{E}_i$ through the k-means algorithm (Hartigan and Wong 1979), where the number of clusters $k_i$ is determined based on the silhouette score (Rousseeuw 1987) of the clustering results. We denote the centroids of these clusters as $\{\mathbf{e}_i^1, \mathbf{e}_i^2, ..., \mathbf{e}_i^{k_i}\}$. Intuitively, these centroids represent the typical final states of the aspect $\mathcal{T}_i$. The above clustering process is finished offline before inference. At the inference stage, we estimate the potential target state of $\mathcal{T}_i$ for the current dialogue by attending the state embedding $\mathbf{s}_{i}^{t}$ over the above centroids. Formally, we calculate the estimated target state $\mathbf{v}_i^t$ as follows:
54
+
55
+ $$h_{ij} = (\mathbf{W}_i \mathbf{s}_i^t) \cdot (\mathbf{W}_i \mathbf{e}_i^j),$$
56
+
57
+ $$\alpha_{ij} = \frac{\exp(h_{ij})}{\sum_{l=1}^{k_i} \exp(h_{il})},$$
58
+
59
+ $$\mathbf{v}_i^t = \text{ReLU}(\sum_{j=1}^{k_i} \alpha_{ij} \mathbf{e}_i^j),$$
60
+
61
+ where $\mathbf{W}_i \in \mathbb{R}^{n_d \times n_d}$ is a trainable matrix. Finally, we get the progression signal $\mathbf{p}_i^t = [\mathbf{v}_i^t; \mathbf{s}_i^t]$ , where $\mathbf{p}_i^t \in \mathbb{R}^{2 \times n_d}$ and [;] represents the vertical concatenation operation of vectors.
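The computation above (projection, attention over the centroids, ReLU, and vertical concatenation) can be sketched in a few lines of numpy. The function name and the use of plain arrays are our own illustration, not the paper's implementation:

```python
import numpy as np

def progression_signal(s_t, centroids, W):
    """Build the progression signal p_i^t = [v_i^t; s_i^t] for one aspect.

    s_t:       (n_d,) current state embedding s_i^t (e.g. an MPNet encoding)
    centroids: (k_i, n_d) typical target-state centroids e_i^j from offline k-means
    W:         (n_d, n_d) trainable projection matrix W_i
    """
    h = (centroids @ W.T) @ (W @ s_t)          # logits h_ij = (W s_i^t) . (W e_i^j)
    a = np.exp(h - h.max())
    a /= a.sum()                               # softmax weights alpha_ij
    v_t = np.maximum(0.0, a @ centroids)       # v_i^t = ReLU(sum_j alpha_ij e_i^j)
    return np.stack([v_t, s_t])                # vertical concat: shape (2, n_d)
```

Since the attention weights form a convex combination, the estimated target $\mathbf{v}_i^t$ always lies in the (ReLU-clipped) convex hull of the typical final states.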
62
+
63
+ With the local analysis results from the specialized agents, we conduct global coordination among them by ranking all the topic candidates with consideration of the progression signals. Specifically, we learn a scoring function $f(\cdot)$ and conduct ranking based on the scoring results of the topic candidates. Here, we mainly explain the inference process in the global coordination module, and will leave the illustration of its training procedure at the end of this section.
64
+
65
+ <sup>2</sup> For all the prompt-based methods mentioned in this paper, we provide the detailed prompt templates in the appendix.
66
+
67
+ During inference at the t-th round, we calculate the score $f(\mathcal{H}^t, \mathcal{C}^t_{ij})$ for each topic candidate $\mathcal{C}^t_{ij}$ (i=1, 2, ..., $n_T$ ; j=1, 2,..., m). To achieve this, we first concatenate $\mathcal{C}^t_{ij}$ with $\mathcal{H}^t$ and encode them with a Transformer (Vaswani et al. 2017):
68
+
69
+ $$\mathbf{B}_{ij}^t = \mathsf{TRS}[\mathsf{Emb}(\texttt{[CLS]} \oplus \mathcal{H}^t \oplus \mathcal{C}_{ij}^t)],$$
70
+
71
+ where TRS denotes the Transformer encoder, $\operatorname{Emb}(\cdot)$ represents the operation of the embedding layer, and $\oplus$ refers to the operation of text concatenation. We take the encoded hidden vector corresponding to the <code>[CLS]</code> token, denoted as $\widetilde{\mathbf{b}}_{ij}^t$ . Then, to take the progression signals into account, we pass all the progression signals through a multilayer perceptron (MLP), denoted as MLP<sub>PRG</sub>:
72
+
73
+ $$\widetilde{\mathbf{p}}_t = \text{MLP}_{\text{PRG}}(\mathbf{p}_1^t; \mathbf{p}_2^t; ...; \mathbf{p}_{n_T}^t),$$
74
+
75
+ where $\widetilde{\mathbf{p}}_t \in \mathbb{R}^{n_d}$ . Finally, we obtain the score $f(\mathcal{H}^t, \mathcal{C}^t_{ij})$ by passing $\widetilde{\mathbf{p}}_t$ and $\widetilde{\mathbf{b}}^t_{ij}$ through a single feedforward layer:
76
+
77
+ $$f(\mathcal{H}^t, \mathcal{C}_{ij}^t) = \text{FF}(\widetilde{\mathbf{p}}_t \mid \widetilde{\mathbf{b}}_{ij}^t),$$
78
+
79
+ where FF(·) represents the feedforward layer and | refers to the horizontal concatenation operation of two vectors into one long vector. By sorting the scores of all the topic candidates, we obtain the top-K candidates $\{\hat{\mathcal{C}}_1^t, \hat{\mathcal{C}}_2^t, ..., \hat{\mathcal{C}}_K^t\}$ , where the subscripts represent their ranking (i.e. $\hat{\mathcal{C}}_1^t$ is the candidate with the highest score).
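The scoring-and-ranking step can be sketched as follows. The real $f(\cdot)$ uses a Transformer encoder and MLP<sub>PRG</sub>; here the encoded [CLS] vectors and the fused progression signal are taken as given, and all names are illustrative:

```python
import numpy as np

def rank_candidates(b_cls, p_tilde, w_ff, bias=0.0, top_k=2):
    """Score all topic candidates and return the top-K (toy stand-in for f).

    b_cls:   (n_cand, n_h) encoded [CLS] vectors, one per topic candidate
    p_tilde: (n_d,) fused progression signal from MLP_PRG
    w_ff:    (n_d + n_h,) weights of the final feedforward layer (scalar score)
    """
    n_cand = b_cls.shape[0]
    # horizontal concatenation [p_tilde | b_ij] for every candidate
    feats = np.concatenate([np.tile(p_tilde, (n_cand, 1)), b_cls], axis=1)
    scores = feats @ w_ff + bias
    top = np.argsort(-scores)[:top_k]   # descending order of f(H^t, C^t_ij)
    return scores, top
```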
80
+
81
+ The top-K ranked topic candidates are then used to guide the utterance generation. We experiment with two ways of implementing the utterance generator: a finetuned approach and an LLM prompt-based approach. Intuitively, the former can learn the nuanced patterns specific to the complex dialogue task directly from the dataset, while the latter can leverage the remarkable performance of the LLM, which is expected to generalize better across various scenarios. The finetuned approach is developed upon BART (Lewis et al. 2020). Specifically, we concatenate the top-K topic candidates, the state summaries of all the aspects $\{\mathcal{S}_1^t, \mathcal{S}_2^t,...,\mathcal{S}_{n_T}^t\}$, and the dialogue context $\mathcal{H}^t$ as its input, separated with the special token [SEP]. For the prompt-based approach, we directly utilize an LLM to generate the next utterance $\mathcal{U}^t$, where the prompt includes the dialogue history $\mathcal{H}^t$ and the top-K topic candidates.
82
+
83
+ In the following, we will refer to our framework that uses the finetuned generator as $\mathbf{Cooper}_{(FT-G)}$ and the one that adopts the LLM prompt-based generator as $\mathbf{Cooper}_{(PT-G)}$ .
84
+
85
+ For COOPER<sub>(PT-G)</sub>, we train the progression analysis modules and the ranker in an end-to-end manner, optimizing with the weighted sum of the triplet ranking loss (Schroff, Kalenichenko, and Philbin 2015) and the pointwise loss. Specifically, the triplet loss is defined as:
86
+
87
+ $$\mathcal{L}_t = \sum_{\hat{g}(\mathcal{C}_{ij}^t) < \hat{g}(\mathcal{C}_{i'j'}^t)} \max(0, f(\mathcal{H}^t, \mathcal{C}_{i'j'}^t) - f(\mathcal{H}^t, \mathcal{C}_{ij}^t) + \tau),$$
88
+
89
+ where $\tau$ represents the margin enforced between the positive and negative pairs, and $\hat{g}(\cdot)$ returns the ranking label of the given topic candidate. The pointwise loss is defined as:
90
+
91
+ $$\mathcal{L}_p = \frac{1}{n_T \cdot m} \sum_{i,j} (\hat{g}(\mathcal{C}_{ij}^t) - g(\mathcal{C}_{ij}^t))^2,$$
92
+
93
+ where $g(\cdot)$ returns the predicted ranking position of the given topic candidate from our method. The overall ranking loss function is the combination of them:
94
+
95
+ $$\mathcal{L}_R = \alpha \cdot \mathcal{L}_t + (1 - \alpha) \cdot \mathcal{L}_p,$$
96
+
97
+ where $\alpha$ is a hyperparameter that balances the two losses. Since the experimental datasets do not contain ground-truth labels for topic candidate ranking, we conduct pseudo-labeling and determine whether $\hat{g}(\mathcal{C}_{ij}^t) < \hat{g}(\mathcal{C}_{i'j'}^t)$ using the following criteria. First, we check whether one of the two candidates aims to promote the ground-truth dialogue goal aspect<sup>3</sup> while the other does not. In such cases, the former is ranked higher than the latter. If this criterion does not distinguish the two, we then consider the text similarity between each candidate and the ground-truth utterance, ranking the more similar one higher. The text similarity is measured by the inner product of their sentence embeddings encoded with MPNet.
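The pairwise pseudo-labeling comparison can be sketched as follows, assuming each candidate carries the aspect its agent promotes and an MPNet embedding (the tuple layout and function name are our own illustration):

```python
import numpy as np

def prefer(cand_a, cand_b, gt_aspect, gt_utt_emb):
    """Return True if candidate a should be pseudo-labeled above candidate b.

    cand_*:     (aspect_id, sentence_embedding) for a topic candidate
    gt_aspect:  the ground-truth dialogue goal aspect at this turn
    gt_utt_emb: embedding of the ground-truth utterance
    """
    a_hit, b_hit = cand_a[0] == gt_aspect, cand_b[0] == gt_aspect
    if a_hit != b_hit:                   # criterion 1: aspect agreement decides
        return a_hit
    # criterion 2: inner product with the ground-truth utterance embedding
    return float(cand_a[1] @ gt_utt_emb) > float(cand_b[1] @ gt_utt_emb)
```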
98
+
99
+ For Cooper<sub>(FT-G)</sub>, we also need to finetune the utterance generator. We train it separately from the progression analysis modules and the ranker in a pipelined manner. It is optimized with the generation loss $\mathcal{L}_G$, defined as the negative log-likelihood of the ground-truth tokens.
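The generation loss $\mathcal{L}_G$ is the standard token-level negative log-likelihood; a minimal numpy sketch over one target sequence:

```python
import numpy as np

def nll_loss(logits, target_ids):
    """Mean negative log-likelihood of the ground-truth tokens (the loss L_G).

    logits:     (seq_len, vocab_size) unnormalized scores from the decoder
    target_ids: (seq_len,) integer ids of the ground-truth tokens
    """
    z = logits - logits.max(axis=1, keepdims=True)               # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))  # log-softmax
    return -log_probs[np.arange(len(target_ids)), target_ids].mean()
```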
2401.09192/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1,116 @@
1
+ <mxfile host="Electron" modified="2023-07-28T02:58:21.587Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/21.6.5 Chrome/114.0.5735.243 Electron/25.3.1 Safari/537.36" version="21.6.5" etag="k_5773IJSBU31xe27Nry" type="device">
2
+ <diagram name="第 1 页" id="LS-x-xybpvXQwt6nW4lB">
3
+ <mxGraphModel dx="1124" dy="515" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="0" pageScale="1" pageWidth="400" pageHeight="400" math="0" shadow="0">
4
+ <root>
5
+ <mxCell id="0" />
6
+ <mxCell id="1" parent="0" />
7
+ <mxCell id="2" value="" style="rounded=1;whiteSpace=wrap;html=1;" vertex="1" parent="1">
8
+ <mxGeometry x="387" y="420" width="113" height="202" as="geometry" />
9
+ </mxCell>
10
+ <mxCell id="3" value="Layer 4" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b1ddf0;strokeColor=#10739e;dashed=1;arcSize=50;" vertex="1" parent="1">
11
+ <mxGeometry x="413.5" y="493" width="70" height="20" as="geometry" />
12
+ </mxCell>
13
+ <mxCell id="4" value="Layer 5" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b0e3e6;strokeColor=#0e8088;dashed=1;arcSize=50;" vertex="1" parent="1">
14
+ <mxGeometry x="413.5" y="463" width="70" height="20" as="geometry" />
15
+ </mxCell>
16
+ <mxCell id="5" value="Layer 6" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#fad9d5;strokeColor=#ae4132;dashed=1;arcSize=50;" vertex="1" parent="1">
17
+ <mxGeometry x="413.5" y="433" width="70" height="20" as="geometry" />
18
+ </mxCell>
19
+ <mxCell id="6" value="Layer 1" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b1ddf0;strokeColor=#10739e;" vertex="1" parent="1">
20
+ <mxGeometry x="413.5" y="593" width="70" height="20" as="geometry" />
21
+ </mxCell>
22
+ <mxCell id="7" value="Layer 2" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b0e3e6;strokeColor=#0e8088;" vertex="1" parent="1">
23
+ <mxGeometry x="413.5" y="563" width="70" height="20" as="geometry" />
24
+ </mxCell>
25
+ <mxCell id="8" value="Layer 3" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#fad9d5;strokeColor=#ae4132;" vertex="1" parent="1">
26
+ <mxGeometry x="413.5" y="533" width="70" height="20" as="geometry" />
27
+ </mxCell>
28
+ <mxCell id="9" value="" style="shape=flexArrow;endArrow=classic;html=1;rounded=0;fillColor=#fff2cc;strokeColor=#d6b656;" edge="1" parent="1">
29
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
30
+ <mxPoint x="555" y="564" as="sourcePoint" />
31
+ <mxPoint x="515" y="564" as="targetPoint" />
32
+ </mxGeometry>
33
+ </mxCell>
34
+ <mxCell id="10" value="" style="rounded=1;whiteSpace=wrap;html=1;dashed=1;" vertex="1" parent="1">
35
+ <mxGeometry x="570" y="521" width="90" height="100" as="geometry" />
36
+ </mxCell>
37
+ <mxCell id="11" value="Layer 1" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b1ddf0;strokeColor=#10739e;" vertex="1" parent="1">
38
+ <mxGeometry x="580" y="591" width="70" height="20" as="geometry" />
39
+ </mxCell>
40
+ <mxCell id="12" value="Layer 2" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b0e3e6;strokeColor=#0e8088;" vertex="1" parent="1">
41
+ <mxGeometry x="580" y="561" width="70" height="20" as="geometry" />
42
+ </mxCell>
43
+ <mxCell id="13" value="Layer 3" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#fad9d5;strokeColor=#ae4132;" vertex="1" parent="1">
44
+ <mxGeometry x="580" y="531" width="70" height="20" as="geometry" />
45
+ </mxCell>
46
+ <mxCell id="14" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=none;" vertex="1" parent="1">
47
+ <mxGeometry x="744" y="421" width="90" height="200" as="geometry" />
48
+ </mxCell>
49
+ <mxCell id="15" value="Layer 1" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b1ddf0;strokeColor=#10739e;" vertex="1" parent="1">
50
+ <mxGeometry x="754" y="592" width="70" height="20" as="geometry" />
51
+ </mxCell>
52
+ <mxCell id="16" value="Layer 3" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b0e3e6;strokeColor=#0e8088;" vertex="1" parent="1">
53
+ <mxGeometry x="754" y="529" width="70" height="20" as="geometry" />
54
+ </mxCell>
55
+ <mxCell id="17" value="Layer 6" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#fad9d5;strokeColor=#ae4132;dashed=1;arcSize=50;" vertex="1" parent="1">
56
+ <mxGeometry x="754" y="433" width="70" height="20" as="geometry" />
57
+ </mxCell>
58
+ <mxCell id="18" value="Layer 2" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b1ddf0;strokeColor=#10739e;dashed=1;arcSize=50;" vertex="1" parent="1">
59
+ <mxGeometry x="754" y="559" width="70" height="20" as="geometry" />
60
+ </mxCell>
61
+ <mxCell id="19" value="Layer 4" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#b0e3e6;strokeColor=#0e8088;dashed=1;arcSize=50;" vertex="1" parent="1">
62
+ <mxGeometry x="754" y="496" width="70" height="20" as="geometry" />
63
+ </mxCell>
64
+ <mxCell id="20" value="Layer 5" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#fad9d5;strokeColor=#ae4132;" vertex="1" parent="1">
65
+ <mxGeometry x="754" y="466" width="70" height="20" as="geometry" />
66
+ </mxCell>
67
+ <mxCell id="21" value="" style="endArrow=classic;html=1;rounded=0;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" source="15" target="18" parent="1">
68
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
69
+ <mxPoint x="520" y="658" as="sourcePoint" />
70
+ <mxPoint x="570" y="608" as="targetPoint" />
71
+ </mxGeometry>
72
+ </mxCell>
73
+ <mxCell id="22" value="" style="endArrow=classic;html=1;rounded=0;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" source="16" target="19" parent="1">
74
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
75
+ <mxPoint x="799" y="602" as="sourcePoint" />
76
+ <mxPoint x="799" y="589" as="targetPoint" />
77
+ </mxGeometry>
78
+ </mxCell>
79
+ <mxCell id="23" value="" style="endArrow=classic;html=1;rounded=0;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" source="20" target="17" parent="1">
80
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
81
+ <mxPoint x="799" y="539" as="sourcePoint" />
82
+ <mxPoint x="799" y="526" as="targetPoint" />
83
+ </mxGeometry>
84
+ </mxCell>
85
+ <mxCell id="24" value="" style="shape=flexArrow;endArrow=classic;html=1;rounded=0;fillColor=#f8cecc;strokeColor=#b85450;" edge="1" parent="1">
86
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
87
+ <mxPoint x="681" y="564.0899999999999" as="sourcePoint" />
88
+ <mxPoint x="721" y="564.0899999999999" as="targetPoint" />
89
+ </mxGeometry>
90
+ </mxCell>
91
+ <mxCell id="25" value="Stack" style="text;html=1;align=center;verticalAlign=middle;resizable=0;points=[];autosize=1;strokeColor=none;fillColor=none;fontSize=14;" vertex="1" parent="1">
92
+ <mxGeometry x="504" y="520.5" width="60" height="30" as="geometry" />
93
+ </mxCell>
94
+ <mxCell id="26" value="&lt;font style=&quot;font-size: 14px;&quot;&gt;Interpolation&lt;/font&gt;" style="text;html=1;align=center;verticalAlign=middle;resizable=0;points=[];autosize=1;strokeColor=none;fillColor=none;fontSize=14;" vertex="1" parent="1">
95
+ <mxGeometry x="652" y="520.5" width="100" height="30" as="geometry" />
96
+ </mxCell>
97
+ <mxCell id="27" value="" style="endArrow=classic;html=1;rounded=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;exitX=0;exitY=0.5;exitDx=0;exitDy=0;" edge="1" source="28" target="29" parent="1">
98
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
99
+ <mxPoint x="361.30899999999997" y="571.046" as="sourcePoint" />
100
+ <mxPoint x="361" y="471.46000000000004" as="targetPoint" />
101
+ <Array as="points">
102
+ <mxPoint x="394" y="572" />
103
+ <mxPoint x="394" y="473" />
104
+ </Array>
105
+ </mxGeometry>
106
+ </mxCell>
107
+ <mxCell id="28" value="" style="rounded=1;whiteSpace=wrap;html=1;dashed=1;fillColor=none;" vertex="1" parent="1">
108
+ <mxGeometry x="405" y="526.5" width="87.5" height="91" as="geometry" />
109
+ </mxCell>
110
+ <mxCell id="29" value="" style="rounded=1;whiteSpace=wrap;html=1;dashed=1;fillColor=none;" vertex="1" parent="1">
111
+ <mxGeometry x="405" y="427.5" width="87.5" height="91" as="geometry" />
112
+ </mxCell>
113
+ </root>
114
+ </mxGraphModel>
115
+ </diagram>
116
+ </mxfile>
2401.09192/main_diagram/main_diagram.pdf ADDED
Binary file (35.6 kB). View file
 
2401.09192/paper_text/intro_method.md ADDED
@@ -0,0 +1,113 @@
1
+ # Introduction
2
+
3
+ Transformers [@DBLP:conf/nips/VaswaniSPUJGKP17] have recently made a significant impact on the field of artificial intelligence [@DBLP:conf/iconip/WangSL0ZX20; @DBLP:journals/corr/abs-2009-10580; @DBLP:conf/iclr/DosovitskiyB0WZ21; @DBLP:journals/corr/abs-2105-05537; @DBLP:journals/corr/abs-2302-09019]. Nevertheless, the training cost keeps increasing with the growing model size, which causes substantial resource consumption and greenhouse gas emissions [@DBLP:journals/corr/abs-1907-10597; @DBLP:conf/aaai/PanXWYWBX19]. To address this problem, recent work [@DBLP:conf/acl/ChenYS0QWWCL022; @DBLP:journals/corr/ChenGS15; @wanglearning; @DBLP:journals/corr/abs-2310-10699] suggests improving training efficiency by reusing a pretrained small model as an initialization that carries prior knowledge. However, the requirement of a pretrained model can pose fatal obstacles, especially for a newly designed model structure, which limits the applicability of these studies as general training strategies. On the other hand, studies on training from scratch [@DBLP:conf/icml/GongHLQWL19; @DBLP:journals/corr/abs-2011-13635] generally stack layers to train Transformers progressively. Nevertheless, these methods usually cannot achieve significant acceleration in training and are thus slower than training from pretrained models. Against this background, it is urgent to design a universal method that trains models efficiently with reduced time and financial costs, while also benefiting the ecological environment.
4
+
5
+ To achieve this goal, progressively expanding models in depth proves to be a crucial ingredient of training from scratch. One noteworthy example is StackBERT [@DBLP:conf/icml/GongHLQWL19], whose stacking learning strategy improves training efficiency through two merits: (1) fewer layers in the initial stages of training require fewer computational resources, leading to faster training; (2) the lower trained weights provide a usable prior that benefits the training of the stacked higher weights. While StackBERT undeniably accelerates training via the second merit, two concerns remain. First, the suitability of the stacking method is questionable. For instance, directly stacking the 1st layer onto the 7th layer of a 12-layer Transformer is not intuitive, given the clear differences in semantic functionality between them [@DBLP:journals/tacl/RogersKR20]. Moreover, even though there might be some similarities across the entire model, it has been pointed out that most of the layers differ from each other [@DBLP:conf/acl/ChenYS0QWWCL022]. Consequently, it is doubtful whether normally trained weights are sufficiently well prepared to be expanded effectively, given their lack of knowledge about higher layers.
6
+
7
+ <figure id="fig:overview" data-latex-placement="t">
8
+ <embed src="figure/meta-overview.pdf" style="width:91.0%" />
9
+ <figcaption>An illustration of the Apollo for training an <span class="math inline"><em>L</em></span>-layered model within <span class="math inline"><em>T</em></span> steps. We divide this training process into <span class="math inline"><em>S</em></span> stages. In the <span class="math inline"><em>t</em></span>-th step at the <span class="math inline"><em>s</em></span>-th stage, the model weights are <span class="math inline"><em>N</em><sup>(<em>s</em>)</sup></span> layers (the left layers in each stage in the figure). To let the <span class="math inline"><em>N</em><sup>(<em>s</em>)</sup></span> layers learn functionality in high layers in advance, we construct <span class="math inline"><em>L</em><sup>(<em>t</em>)</sup></span> layers (the right layers in each stage in the figure) by sharing the <span class="math inline"><em>N</em><sup>(<em>s</em>)</sup></span> weights through an interpolation method, where <span class="math inline"><em>N</em><sup>(<em>s</em>)</sup> ≤ <em>L</em><sup>(<em>t</em>)</sup></span>. As shown in the figure, the same color denotes the same weight. We randomly choose <span class="math inline"><em>L</em><sup>(<em>t</em>)</sup></span> at <span class="math inline"><em>t</em></span>-th step through a probability function Low-Value-Prioritized Sampling (LVPS). Since LVPS tends to select shallower layers, it can greatly save computation costs. Furthermore, we progressively increase the <span class="math inline"><em>N</em><sup>(<em>s</em>)</sup></span> weights when stepping into the next stage. Since weights in the early stage can learn the properties of higher layers, Apollo can significantly contribute to the training efficiency. </figcaption>
10
+ </figure>
11
+
12
+ Motivated by this consideration, we introduce "Apollo", a novel approach that prepares lessons for low-layer weights to learn high-layer functionality during training. This strategy proves beneficial in further extending the capabilities of the model. In essence, Apollo involves two key components. First, we employ low-value-prioritized sampling (LVPS), which randomly selects a depth for training at each step, ensuring a diverse training experience. Second, we share the low-layer weights, enabling them to adapt to the layers selected by LVPS. These shared weights are well prepared not only for learning high-layer functionality but also for recurrent transformation, as supported by previous work [@DBLP:conf/iclr/LanCGGSS20; @DBLP:conf/iclr/DehghaniGVUK19; @DBLP:journals/corr/abs-2110-03848]. It is worth noting that while weight-sharing strategies have been explored previously to facilitate information exchange across layers, our approach is novel in its dynamic application to sampled layers, which is important for learning high-layer properties and improving training efficiency. Furthermore, we address training stability by introducing an interpolation method to extend the depth of the model; this is essential since directly stacking layers can cause large gradients, which are detrimental to training stability. In all, we summarize our contributions as:
13
+
14
+ - Through sharing weights in the early stage to learn the functionality of high layers, Apollo effectively expands the depth of networks, resulting in remarkable training acceleration.
15
+
16
+ - Through LVPS, Apollo achieves a substantial reduction in training FLOPs by predominantly sampling low depth layers, while retaining the benefit of expanding depth.
17
+
18
+ - Through replacing layer stacking with layer interpolation, Apollo further enhances the stability of the expanded model.
19
+
20
+ - Experiments show that Apollo attains state-of-the-art training efficiency, surpassing even the methods reliant on pretrained models.
21
+
22
+ # Method
23
+
24
+ In this section, we introduce our method, which samples higher-layer information to accelerate training.
25
+
26
+ To describe the implementation of Apollo, we introduce related notations here. We denote an $L$-layered network by $f^{(L)}(\cdot)$, and denote the function of the $l$-th layer of $f^{(L)}$ as $f_l(\cdot)$, where $l\in [L]$. Thus, given an input $x$, $f^{(L)}(x)$ can be formulated as $$\begin{align}
27
+ f^{(L)}(x) = f_L(f_{L-1}(\dots f_l(\dots f_1(x)))).
28
+ \end{align}$$ We denote the set of $N$ weights in $f$ as $\{\theta_i\}^{N}_{i=1}$, and the weights of the $l$-th layer as $\Theta(f_l)$. As our method samples information of higher layers through weight sharing, we formally consider a mapping function $g(\cdot)$ that arranges the $g(l)$-th weight to the $l$-th layer as $$\begin{align}
29
+ \Theta(f_l) = \theta_{g(l)},
30
+ \end{align}$$ where $g(\cdot) \in [N]$. Moreover, for the convenience of expressing the process of layer expansion, we denote the layer size at the $t$-th step as $L^{(t)}$, where $t\in [T]$ and $T$ is the total number of steps. Thus, a network at the $t$-th step can be denoted by $f^{(L^{(t)})}$.
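The closed form of the interpolation mapping $g$ is not spelled out at this point; a natural even-spread instantiation, consistent with Figure 1 (each low-layer weight is reused by a contiguous block of expanded layers), could look like the following sketch. The function names and the exact formula are our own assumption:

```python
def g_interpolation(n_weights, n_layers, l):
    """Map layer index l (1-based) of an expanded n_layers-deep model onto one
    of n_weights shared weights, spread evenly across depth. One plausible
    instantiation of the mapping g; the paper's exact form may differ."""
    assert 1 <= l <= n_layers and n_weights <= n_layers
    return -(-l * n_weights // n_layers)   # ceil(l * n_weights / n_layers)

def expanded_assignment(n_weights, n_layers):
    """Weight index Theta(f_l) = theta_{g(l)} for every layer l = 1..n_layers."""
    return [g_interpolation(n_weights, n_layers, l) for l in range(1, n_layers + 1)]
```

For $N^{(s)}=3$ weights interpolated to $L^{(t)}=6$ layers this yields the assignment [1, 1, 2, 2, 3, 3], i.e. each shared weight serves one original and one interpolated adjacent layer, and it reduces to the identity when no expansion occurs.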
31
+
32
+ **Transformer Architecture.** Here, we introduce the structure of the Transformer [@DBLP:conf/nips/VaswaniSPUJGKP17], each block of which mainly contains two basic layers: (1) the multi-head self-attention (MHSA) layer and (2) the feed-forward neural network (FFN) layer. Assuming the inputs of the $l$-th layer are the query $\mathbf{Q}_l \in \mathbb{R}^{d}$, key $\mathbf{K}_l \in \mathbb{R}^{d}$, and value $\mathbf{V}_l \in \mathbb{R}^{d}$, where $d$ is the dimension, the $l$-th layer can be formulated as $$\begin{align}
33
+ f_l(\textbf{Q}_l, \textbf{K}_l, \textbf{V}_l) = \operatorname{FFN}(\operatorname{MHSA}(\textbf{Q}_l, \textbf{K}_l, \textbf{V}_l)).
34
+ \end{align}$$ The MHSA and FFN layers are defined as follows:
35
+
36
\(1\) MHSA Layer. The MHSA layer is a form of the Self-Attention (SA) layer that allows the model to weigh the importance of different words in a sentence, enabling it to capture long-range dependencies and contextual information effectively. Instead of using a single attention mechanism, MHSA divides SA into a multi-head structure. Each head attends to different aspects or dependencies of the input sequence, allowing the model to capture different patterns and relationships between words and potentially improving its ability to learn complex dependencies. Specifically, the parameters of SA in the $l$-th layer are $\mathbf{W}_l^{\{Q, K, V, O\}} \in \mathbb{R}^{d\times d}$. MHSA separates these parameters into $M$ heads: $\{\mathbf{W}_l^{\{Q, K, V, O\}, m}\}_{m=1}^M$. As a result, we can denote MHSA as $$\begin{align}
&\operatorname{Att}_m(\mathbf{Q}_l, \mathbf{K}_l, \mathbf{V}_l) = \notag \\
&~~~\operatorname{softmax}\left(\frac{\mathbf{Q}_l\mathbf{W}_l^{Q, m}(\mathbf{K}_l\mathbf{W}_l^{K, m})^{T}}{\sqrt{d_k}}\right) \mathbf{V}_l\mathbf{W}_l^{V, m}{\mathbf{W}_l^{O, m}}^{T}, \notag \\
&\operatorname{MHSA}(\mathbf{Q}_l, \mathbf{K}_l, \mathbf{V}_l) = \sum^M_{m=1}{\operatorname{Att}_m(\mathbf{Q}_l, \mathbf{K}_l, \mathbf{V}_l)},
\end{align}$$ where $d_k$ is the dimensionality of the key vectors. The attention heads are computed in parallel with parameters $\mathbf{W}_l^{Q, m}$, $\mathbf{W}_l^{K, m}$, and $\mathbf{W}_l^{V, m}$, and each head output is transformed linearly through a learnable weight matrix $\mathbf{W}_l^{O, m}$. Summing the per-head outputs, as above, is equivalent to the standard formulation that concatenates the head outputs and applies a single output projection.

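As a concrete illustration, the sum-of-heads formulation above can be sketched in NumPy. The shapes, helper names, and random weights here are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mhsa(Q, K, V, heads):
    """Sum-of-heads MHSA: `heads` is a list of per-head (Wq, Wk, Wv, Wo)
    slices, each of shape (d, d_k); Wo is applied transposed, as in the
    formula above."""
    out = np.zeros_like(Q)
    for Wq, Wk, Wv, Wo in heads:
        d_k = Wq.shape[1]
        att = softmax((Q @ Wq) @ (K @ Wk).T / np.sqrt(d_k))  # (n, n) attention map
        out += att @ (V @ Wv) @ Wo.T                         # V W^{V,m} (W^{O,m})^T
    return out
```

Summing the per-head terms this way yields exactly the same result as concatenating the head outputs and multiplying by the vertically stacked output projection.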
:::: algorithm
::: algorithmic
**Input:** the input data $x$, the ground-truth $y$, the stage setting $\{s_t, t\in[1, T], s_t\in [1, S]\}$: a non-decreasing list indicating the stage of each step.
**For** each step $t \in [1, T]$ with stage $s = s_t$:
1. If the stage has advanced from $s_{prev}$, initialize the new weights as $\theta_n := \text{COPY}\left(\theta_{g_{\text{interpolation}}^{N^{(s_{prev})}:N^{(s)}}(n)}\right)$.
2. Sample the depth $L^{(t)} = \text{LVPS}(N^{(s)}, L)$.
3. Share the weights over $L^{(t)}$ layers: $\Theta(f_{l}) := \text{SHARE}\left(\theta_{g_{\text{interpolation}}^{N^{(s)}:L^{(t)}}(l)}\right)$.
4. Compute the loss $\mathcal{L} = \operatorname{Loss}\left(f^{(L^{(t)})} (x), y\right)$.
5. Back-propagate: $\{\Delta\theta_i\}^{N^{(s)}} = \mathcal{L}.\operatorname{backward}()$.
6. Update all the weights $\{\theta_i\}^{N^{(s)}}$ through $\{\Delta\theta_i\}^{N^{(s)}}$.
**Output:** the trained model $f^{(L)}$ with $\{\theta_i\}^{L}$.
:::
::::
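The training procedure above can be sketched as a minimal Python loop. Everything here is illustrative: the `object()` stand-ins for weight tensors, the uniform depth sampler used as a placeholder for LVPS, and the `train_step` callback are our own assumptions, and real code would deep-copy parameters at stage transitions rather than alias them.

```python
import random

def g_interp(l, n_src, n_dst):
    # 0-indexed floor variant of the interpolation mapping g_interpolation
    return (l * n_src) // n_dst

def apollo_training(stage_weight_counts, L, steps_per_stage, train_step,
                    sample_depth=None):
    """Skeleton of the algorithm: expand the real weights stage by stage,
    and at each step share them over a sampled depth L_t in [N_s, L]."""
    if sample_depth is None:
        sample_depth = random.randint  # uniform placeholder; the paper uses LVPS
    weights = [object() for _ in range(stage_weight_counts[0])]
    for n_s in stage_weight_counts:
        # Stage transition: initialise the N_s weights from the previous stage
        # (the COPY step; a real implementation would deep-copy the tensors).
        weights = [weights[g_interp(n, len(weights), n_s)] for n in range(n_s)]
        for _ in range(steps_per_stage):
            depth = sample_depth(n_s, L)                              # L_t
            shared = [weights[g_interp(l, n_s, depth)] for l in range(depth)]
            train_step(shared)  # forward/backward on the depth-layer shared net
    return weights
```

Note how every step trains only `n_s` real weight sets while the forward pass sees a `depth`-layer network, which is the "lesson" the shallower model receives about higher layers.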

\(2\) FFN Layer. The FFN layer applies a non-linear transformation to the output of the self-attention layer by expanding the dimension and then shrinking it back to the original size. Let $\mathbf{X}_l \in \mathbb{R}^{d}$ be the input of the $l$-th layer. The weights are $\mathbf{W}_l^{IN} \in \mathbb{R}^{d\times \alpha d}$ and $\mathbf{W}_l^{OUT} \in \mathbb{R}^{\alpha d\times d}$, where $\alpha$ is an expansion ratio that is often set to 4. The FFN layer can be formulated as $$\begin{align}
\operatorname{FFN}(\mathbf{X}_l) =
\operatorname{GeLU}(\mathbf{X}_l\mathbf{W}_l^{IN})\mathbf{W}_l^{OUT},
\end{align}$$ where $\operatorname{GeLU}(\cdot)$ is a non-linear activation function [@hendrycks2016gaussian]. The FFN layer plays a crucial role in modeling complex patterns, making it a powerful component for various natural language processing tasks.
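Under the same conventions, the FFN layer is a two-matrix transform with a GeLU in between. This NumPy sketch uses the common tanh approximation of GeLU; names and shapes are our own.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GeLU (Hendrycks & Gimpel, 2016)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def ffn(X, W_in, W_out):
    """FFN(X) = GeLU(X W_in) W_out, expanding d -> alpha*d -> d."""
    return gelu(X @ W_in) @ W_out
```

For $d=8$ and $\alpha=4$, `W_in` has shape (8, 32) and `W_out` has shape (32, 8), so the output keeps the input dimension.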

Finally, residual connections are applied to both the MHSA and FFN layers through addition and layer normalization operations. These steps safeguard the model against collapse and overfitting on the training data. Note that we omit biases in the formulation for simplicity, which does not affect the generality of the proposed method. Following the above formulation, all weights of the $l$-th layer can be denoted as $\theta_l = \mathbf{W}_l^{\{Q, K, V, O, IN, OUT\}}$.

<figure id="fig:lvps" data-latex-placement="t">
<embed src="figure/div-6.pdf" style="width:36.0%" />
<figcaption>A case of choosing the hyper-parameter <span class="math inline"><em>k</em></span> of LVPS to sample a layer number in 1-6.</figcaption>
</figure>

Given the potential similarities between high and low layers, gradually expanding the layers during training can yield greater acceleration than directly training from scratch [@DBLP:conf/icml/GongHLQWL19]. However, as previously discussed, lower layers struggle to effectively capture the intricacies of higher-level features, particularly under a stacking methodology, which can cause training instability. To address this challenge, we introduce Apollo, an approach that facilitates progressive model training by leveraging weight sharing within the training process. This approach enables the acquisition of high-level functionality prior to layer expansion, thus mitigating the aforementioned issue.

**Progressive Training.** Specifically, for an $L$-layer network, Apollo divides the whole training process into $S$ stages. We use $N^{(s)}, s\in [S]$ to denote the weight number of the $s$-th stage, satisfying $$\begin{align}
\label{eq:weight-number}
N^{(s)} < N^{(s+1)}.
\end{align}$$ As required by Eq. [\[eq:weight-number\]](#eq:weight-number){reference-type="eqref" reference="eq:weight-number"}, Apollo increases the number of actual weights as the stages progress. This progressive schedule is efficient for accelerating language models.

**Preparing Lessons.** To provide lessons for the $N^{(s)}$ weights within the $s$-th stage, thereby facilitating the pre-learning of higher-layer functionalities, Apollo employs a strategic weight-sharing approach: it distributes the $N^{(s)}$ weights across $L^{(t)}$ layers, with $L^{(t)}$ drawn from the range $[N^{(s)}, L]$, thereby establishing a connection to higher-layer elements. The appropriate $L^{(t)}$ is determined through a sampling function, denoted $P(L^{(t)})$, at each training step. The core idea of this sampling function is to prioritize shallower depths -- specifically, $N^{(s)}$ in this context -- to strike a balance between computational efficiency and sustained performance. Grounded in this consideration, we introduce a novel approach coined Low-Value-Prioritized Sampling (LVPS).

<figure id="fig:all-sampling" data-latex-placement="t">
<embed src="figure/all-sample-6.pdf" style="width:36.0%" />
<figcaption>Comparison among US, FS, ES, and LVPS to sample a layer number in 1-6.</figcaption>
</figure>

::: {#tbl:sampling}
  Method   Probability Density Function
  -------- --------------------------------------------------------------------------------------------------------------------------
  LVPS     $P_{\text{LVPS}}(L^{(t)}) = \frac{b}{(L^{(t)}+k)^2}$
  ES       $P_{\text{ES}}\left(L^{(t)}\right)=\frac{1}{k} * \left(\frac{1}{L^{(t)} - N^{(s)} - b}+\frac{1}{L + b - L^{(t)}}\right)$
  US       $P_{\text{US}}(L^{(t)}) = \frac{1}{L - N^{(s)}}$
  FS       $P_{\text{FS}}(L) = 1$

  : An overview of sampling methods: (1) Low-Value-Prioritized Sampling (LVPS), (2) Uniform Sampling (US), (3) Edge Sampling (ES), and (4) Full Sampling (FS). $L^{(t)}$ can only take values in $[N^{(s)}, L]$. Since the probability density function integrates to 1, $b$ can be solved once $k$ is determined. In this paper, we set $k=0$ and $k=10$ for LVPS and ES, respectively. FS always samples $L^{(t)}=L$.
:::

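To make the efficiency comparison concrete, the expected sampled depth (and hence the expected per-step compute) can be computed in closed form from the densities in the table. The stage values used below ($N^{(s)}=6$, $L=24$) are illustrative, not the paper's settings.

```python
import math

def expected_depth_us(n_s, L):
    # Mean of Uniform[n_s, L].
    return (n_s + L) / 2

def expected_depth_lvps(n_s, L, k=0):
    # E[L_t] = integral of x * b/(x+k)^2 over [n_s, L],
    # with b = (n_s+k)(L+k)/(L-n_s); for k=0 this is b*ln(L/n_s).
    b = (n_s + k) * (L + k) / (L - n_s)
    return b * (math.log((L + k) / (n_s + k)) + k / (L + k) - k / (n_s + k))
```

For $N^{(s)}=6$ and $L=24$ with $k=0$, LVPS trains on about $8\ln 4 \approx 11.1$ layers per step versus 15 for US, which is where its efficiency advantage comes from.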
In the development of LVPS, we employ an inverse proportional function as the cumulative distribution function for layer selection, formulated as $$\begin{align}
F_{\text{LVPS}}(x) = c-\frac{b}{x+k}, \quad c > 0, b > 0, x\in [N^{(s)}, L],
\end{align}$$ where $b$, $k$, and $c$ are hyper-parameters. This function is congruent with our sampling objective, which favors the selection of shallower depths. Then, by setting $F_{\text{LVPS}}\left(N^{(s)}\right)=0$ and $F_{\text{LVPS}}\left(L\right)=1$, the probability density function $P_{\text{LVPS}}$ can be solved as $$\begin{align}
\label{eq:lvps}
P_{\text{LVPS}}(&L^{(t)}) = \begin{cases}
\frac{b}{(L^{(t)}+k)^2}, & \text{if $L^{(t)} \in [N^{(s)}, L]$}, \\
0, & \text{otherwise},
\end{cases} \\
\label{eq:lvps-condition}
&\text{w.r.t.}~\int P_{\text{LVPS}}(L^{(t)})\,\mathrm{d}L^{(t)} = 1,
\end{align}$$ where $b$ and $c$ can be solved from Eq. [\[eq:lvps-condition\]](#eq:lvps-condition){reference-type="eqref" reference="eq:lvps-condition"} as $$\begin{align}
b=\frac{(N^{(s)} + k) * (L + k)}{L - N^{(s)}},~~
c=\frac{L+k}{L-N^{(s)}}.
\end{align}$$ Therefore, we only need to adjust $k$ to derive different sampling settings, as shown in Fig. [2](#fig:lvps){reference-type="ref" reference="fig:lvps"}. In this paper, we set $k=0$ in every experiment to obtain the lowest computational complexity. We employ the notation $\operatorname{LVPS}(\alpha, \beta)$ to signify sampling a value within the interval $[\alpha, \beta]$; $P_{\text{LVPS}}(L^{(t)})$ is the probability density function of $\operatorname{LVPS}(N^{(s)}, L)$.
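Since $F_{\text{LVPS}}$ is invertible in closed form, LVPS can be implemented by inverse-transform sampling: draw $u \sim \text{Uniform}[0,1)$ and evaluate $F^{-1}(u) = b/(c-u) - k$. The rounding to an integer depth is our own discretization choice.

```python
import random

def lvps(n_s, L, k=0):
    """Draw an integer depth in [n_s, L] with density b/(x+k)^2."""
    b = (n_s + k) * (L + k) / (L - n_s)
    c = (L + k) / (L - n_s)
    u = random.random()            # u ~ Uniform[0, 1)
    return round(b / (c - u) - k)  # x = F^{-1}(u), rounded to an integer
```

With $k=0$, $u=0$ gives $x=N^{(s)}$ and $u\to 1$ gives $x\to L$, so every draw falls in the valid range while shallow depths dominate.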

A comparison with other sampling methods is provided in Table [1](#tbl:sampling){reference-type="ref" reference="tbl:sampling"} and Fig. [3](#fig:all-sampling){reference-type="ref" reference="fig:all-sampling"}. In detail, Uniform Sampling (US) samples all layer numbers equally, mitigating potential layer bias, while Edge Sampling (ES) tends to sample layer numbers at the low and high ends to learn high-layer information while maintaining efficiency. Alternatively, Full Sampling (FS) always samples the deepest depth, helping early-trained weights adequately acquire the functionality of each layer; however, it demands significant computational resources because it selects the maximum layer number at each step. Among these sampling methods, LVPS achieves the highest efficiency in progressive training.

<figure id="fig:expanding" data-latex-placement="t">
<embed src="figure/extend-one.pdf" style="width:43.0%" />
<figcaption>A case of expanding 3 layers to 6 layers. The same color denotes the same weight. The stacking method recurrently arranges the layers, e.g., the 1-st layer <span class="math inline">→</span> the 4-th layer. By contrast, the interpolation method arranges shared layers adjacently, e.g., the 1-st layer <span class="math inline">→</span> the 2-nd layer.</figcaption>
</figure>

In the training process of Apollo, it is important to select a feasible method for expanding the layers for sharing in each step and for initializing the weights of the next stage. As shown in Fig. [4](#fig:expanding){reference-type="ref" reference="fig:expanding"}, the commonly used methods are stacking and interpolating layers. Given a target of expanding $L_1$ layers to $L_2$, the stacking method can be formulated as $$\begin{align}
\label{eq:stack}
g^{L_1:L_2}_{\text{stack}}(l_2) = l_2 \bmod{L_1},
\end{align}$$ where $l_2 \in [L_2]$ is the index of $L_2$. The stacking method is usually adopted in language models, e.g., BERT [@DBLP:conf/icml/GongHLQWL19]. By contrast, the interpolation method is often used in the computer vision field, e.g., ResNet [@DBLP:conf/iclr/ChangMHTB18], and can be formulated as $$\begin{align}
\label{eq:inter}
g^{L_1:L_2}_{\text{interpolation}}(l_2) = \left\lfloor\frac{l_2\ast L_1}{L_2}\right\rceil,
\end{align}$$ where $\ast$ denotes scalar multiplication and $\lfloor \cdot \rceil$ denotes the rounding operation. Although both expanding methods have shown good performance in their respective fields, there is still a lack of analysis comparing them, especially regarding their applicability to language models. In this paper, we investigate their influence on stability and performance, deferring the analysis experiment to the experiment section. Here, we state the conclusions: (1) there is only a small performance gap between them; (2) interpolation reaches better stability than the stacking method. As a result, we adopt interpolation instead of stacking as the expanding method of the proposed Apollo training for language models.
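The two mappings can be written down directly. We use 0-indexed layers and a floor in place of the rounding $\lfloor\cdot\rceil$, a simplifying assumption that reproduces the 3-to-6 expansion shown in the figure.

```python
def g_stack(l2, L1, L2):
    # Stacking: cycle through the L1 source layers (0-indexed).
    return l2 % L1

def g_interpolation(l2, L1, L2):
    # Interpolation (0-indexed floor variant): neighbouring target layers
    # share the same source layer.
    return (l2 * L1) // L2
```

Expanding 3 layers to 6, stacking maps source layer 0 to target layers 0 and 3, while interpolation maps it to the adjacent target layers 0 and 1, matching the figure's illustration.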
2402.14606/main_diagram/main_diagram.drawio ADDED