<!DOCTYPE html>

<head>
  <meta charset="utf-8">
  <script>
  !function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e():"function"==typeof define&&define.amd?define(e):(t="undefined"!=typeof globalThis?globalThis:t||self).Vue=e()}(this,(function(){"use strict";var t=Object.freeze({}),e=Array.isArray;function n(t){return null==t}function r(t){return null!=t}function o(t){return!0===t}function i(t){return"string"==typeof t||"number"==typeof t||"symbol"==typeof t||"boolean"==typeof t}function a(t){return"function"==typeof t}function s(t){return null!==t&&"object"==typeof t}var c=Object.prototype.toString;function u(t){return"[object Object]"===c.call(t)}function l(t){var e=parseFloat(String(t));return e>=0&&Math.floor(e)===e&&isFinite(t)}function f(t){return r(t)&&"function"==typeof t.then&&"function"==typeof t.catch}function d(t){return null==t?"":Array.isArray(t)||u(t)&&t.toString===c?JSON.stringify(t,null,2):String(t)}function p(t){var e=parseFloat(t);return isNaN(e)?t:e}function v(t,e){for(var n=Object.create(null),r=t.split(","),o=0;o<r.length;o++)n[r[o]]=!0;return e?function(t){return n[t.toLowerCase()]}:function(t){return n[t]}}var h=v("slot,component",!0),m=v("key,ref,slot,slot-scope,is");function g(t,e){var n=t.length;if(n){if(e===t[n-1])return void(t.length=n-1);var r=t.indexOf(e);if(r>-1)return t.splice(r,1)}}var y=Object.prototype.hasOwnProperty;function _(t,e){return y.call(t,e)}function b(t){var e=Object.create(null);return function(n){return e[n]||(e[n]=t(n))}}var $=/-(\w)/g,w=b((function(t){return t.replace($,(function(t,e){return e?e.toUpperCase():""}))})),x=b((function(t){return t.charAt(0).toUpperCase()+t.slice(1)})),C=/\B([A-Z])/g,k=b((function(t){return t.replace(C,"-$1").toLowerCase()}));var S=Function.prototype.bind?function(t,e){return t.bind(e)}:function(t,e){function n(n){var r=arguments.length;return r?r>1?t.apply(e,arguments):t.call(e,n):t.call(e)}return n._length=t.length,n};function O(t,e){e=e||0;for(var n=t.length-e,r=new Array(n);n--;)r[n]=t[n+e];return r}function T(t,e){for(var n in e)t[n]=e[n];return t}function A(t){for(var e={},n=0;n<t.length;n++)t[n]&&T(e,t[n]);return e}function j(t,e,n){}var E=function(t,e,n){return!1},N=function(t){return t};function P(t,e){if(t===e)return!0;var n=s(t),r=s(e);if(!n||!r)return!n&&!r&&String(t)===String(e);try{var o=Array.isArray(t),i=Array.isArray(e);if(o&&i)return t.length===e.length&&t.every((function(t,n){return P(t,e[n])}));if(t instanceof Date&&e instanceof Date)return t.getTime()===e.getTime();if(o||i)return!1;var a=Object.keys(t),c=Object.keys(e);return a.length===c.length&&a.every((function(n){return P(t[n],e[n])}))}catch(t){return!1}}function D(t,e){for(var n=0;n<t.length;n++)if(P(t[n],e))return n;return-1}function M(t){var e=!1;return function(){e||(e=!0,t.apply(this,arguments))}}function I(t,e){return t===e?0===t&&1/t!=1/e:t==t||e==e}var L="data-server-rendered",R=["component","directive","filter"],F=["beforeCreate","created","beforeMount","mounted","beforeUpdate","updated","beforeDestroy","destroyed","activated","deactivated","errorCaptured","serverPrefetch","renderTracked","renderTriggered"],H={optionMergeStrategies:Object.create(null),silent:!1,productionTip:!1,devtools:!1,performance:!1,errorHandler:null,warnHandler:null,ignoredElements:[],keyCodes:Object.create(null),isReservedTag:E,isReservedAttr:E,isUnknownElement:E,getTagNamespace:j,parsePlatformTagName:N,mustUseProp:E,async:!0,_lifecycleHooks:F},B=/a-zA-Z\u00B7\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u037D\u037F-\u1FFF\u200C-\u200D\u203F-\u2040\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD/;function U(t){var e=(t+"").charCodeAt(0);return 36===e||95===e}function z(t,e,n,r){Object.defineProperty(t,e,{value:n,enumerable:!!r,writable:!0,configurable:!0})}var V=new RegExp("[^".concat(B.source,".$_\\d]"));var K="__proto__"in{},J="undefined"!=typeof window,q=J&&window.navigator.userAgent.toLowerCase(),W=q&&/msie|trident/.test(q),Z=q&&q.indexOf("msie 9.0")>0,G=q&&q.indexOf("edge/")>0;q&&q.indexOf("android");var X=q&&/iphone|ipad|ipod|ios/.test(q);q&&/chrome\/\d+/.test(q),q&&/phantomjs/.test(q);var Y,Q=q&&q.match(/firefox\/(\d+)/),tt={}.watch,et=!1;if(J)try{var nt={};Object.defineProperty(nt,"passive",{get:function(){et=!0}}),window.addEventListener("test-passive",null,nt)}catch(t){}var rt=function(){return void 0===Y&&(Y=!J&&"undefined"!=typeof global&&(global.process&&"server"===global.process.env.VUE_ENV)),Y},ot=J&&window.__VUE_DEVTOOLS_GLOBAL_HOOK__;function it(t){return"function"==typeof t&&/native code/.test(t.toString())}var at,st="undefined"!=typeof Symbol&&it(Symbol)&&"undefined"!=typeof Reflect&&it(Reflect.ownKeys);at="undefined"!=typeof Set&&it(Set)?Set:function(){function t(){this.set=Object.create(null)}return t.prototype.has=function(t){return!0===this.set[t]},t.prototype.add=function(t){this.set[t]=!0},t.prototype.clear=function(){this.set=Object.create(null)},t}();var ct=null;function ut(t){void 0===t&&(t=null),t||ct&&ct._scope.off(),ct=t,t&&t._scope.on()}var lt=function(){function t(t,e,n,r,o,i,a,s){this.tag=t,this.data=e,this.children=n,this.text=r,this.elm=o,this.ns=void 0,this.context=i,this.fnContext=void 0,this.fnOptions=void 0,this.fnScopeId=void 0,this.key=e&&e.key,this.componentOptions=a,this.componentInstance=void 0,this.parent=void 0,this.raw=!1,this.isStatic=!1,this.isRootInsert=!0,this.isComment=!1,this.isCloned=!1,this.isOnce=!1,this.asyncFactory=s,this.asyncMeta=void 0,this.isAsyncPlaceholder=!1}return Object.defineProperty(t.prototype,"child",{get:function(){return this.componentInstance},enumerable:!1,configurable:!0}),t}(),ft=function(t){void 0===t&&(t="");var e=new lt;return e.text=t,e.isComment=!0,e};function dt(t){return new lt(void 0,void 0,void 0,String(t))}function pt(t){var e=new lt(t.tag,t.data,t.children&&t.children.slice(),t.text,t.elm,t.context,t.componentOptions,t.asyncFactory);return e.ns=t.ns,e.isStatic=t.isStatic,e.key=t.key,e.isComment=t.isComment,e.fnContext=t.fnContext,e.fnOptions=t.fnOptions,e.fnScopeId=t.fnScopeId,e.asyncMeta=t.asyncMeta,e.isCloned=!0,e}var vt=0,ht=[],mt=function(){function t(){this._pending=!1,this.id=vt++,this.subs=[]}return t.prototype.addSub=function(t){this.subs.push(t)},t.prototype.removeSub=function(t){this.subs[this.subs.indexOf(t)]=null,this._pending||(this._pending=!0,ht.push(this))},t.prototype.depend=function(e){t.target&&t.target.addDep(this)},t.prototype.notify=function(t){for(var e=this.subs.filter((function(t){return t})),n=0,r=e.length;n<r;n++){e[n].update()}},t}();mt.target=null;var gt=[];function yt(t){gt.push(t),mt.target=t}function _t(){gt.pop(),mt.target=gt[gt.length-1]}var bt=Array.prototype,$t=Object.create(bt);["push","pop","shift","unshift","splice","sort","reverse"].forEach((function(t){var e=bt[t];z($t,t,(function(){for(var n=[],r=0;r<arguments.length;r++)n[r]=arguments[r];var o,i=e.apply(this,n),a=this.__ob__;switch(t){case"push":case"unshift":o=n;break;case"splice":o=n.slice(2)}return o&&a.observeArray(o),a.dep.notify(),i}))}));var wt=Object.getOwnPropertyNames($t),xt={},Ct=!0;function kt(t){Ct=t}var St={notify:j,depend:j,addSub:j,removeSub:j},Ot=function(){function t(t,n,r){if(void 0===n&&(n=!1),void 0===r&&(r=!1),this.value=t,this.shallow=n,this.mock=r,this.dep=r?St:new mt,this.vmCount=0,z(t,"__ob__",this),e(t)){if(!r)if(K)t.__proto__=$t;else for(var o=0,i=wt.length;o<i;o++){z(t,s=wt[o],$t[s])}n||this.observeArray(t)}else{var a=Object.keys(t);for(o=0;o<a.length;o++){var s;At(t,s=a[o],xt,void 0,n,r)}}}return t.prototype.observeArray=function(t){for(var e=0,n=t.length;e<n;e++)Tt(t[e],!1,this.mock)},t}();function Tt(t,n,r){return t&&_(t,"__ob__")&&t.__ob__ instanceof Ot?t.__ob__:!Ct||!r&&rt()||!e(t)&&!u(t)||!Object.isExtensible(t)||t.__v_skip||Ft(t)||t instanceof lt?void 0:new Ot(t,n,r)}function At(t,n,r,o,i,a){var s=new mt,c=Object.getOwnPropertyDescriptor(t,n);if(!c||!1!==c.configurable){var u=c&&c.get,l=c&&c.set;u&&!l||r!==xt&&2!==arguments.length||(r=t[n]);var f=!i&&Tt(r,!1,a);return Object.defineProperty(t,n,{enumerable:!0,configurable:!0,get:function(){var n=u?u.call(t):r;return mt.target&&(s.depend(),f&&(f.dep.depend(),e(n)&&Nt(n))),Ft(n)&&!i?n.value:n},set:function(e){var n=u?u.call(t):r;if(I(n,e)){if(l)l.call(t,e);else{if(u)return;if(!i&&Ft(n)&&!Ft(e))return void(n.value=e);r=e}f=!i&&Tt(e,!1,a),s.notify()}}}),s}}function jt(t,n,r){if(!Lt(t)){var o=t.__ob__;return e(t)&&l(n)?(t.length=Math.max(t.length,n),t.splice(n,1,r),o&&!o.shallow&&o.mock&&Tt(r,!1,!0),r):n in t&&!(n in Object.prototype)?(t[n]=r,r):t._isVue||o&&o.vmCount?r:o?(At(o.value,n,r,void 0,o.shallow,o.mock),o.dep.notify(),r):(t[n]=r,r)}}function Et(t,n){if(e(t)&&l(n))t.splice(n,1);else{var r=t.__ob__;t._isVue||r&&r.vmCount||Lt(t)||_(t,n)&&(delete t[n],r&&r.dep.notify())}}function Nt(t){for(var n=void 0,r=0,o=t.length;r<o;r++)(n=t[r])&&n.__ob__&&n.__ob__.dep.depend(),e(n)&&Nt(n)}function Pt(t){return Dt(t,!0),z(t,"__v_isShallow",!0),t}function Dt(t,e){Lt(t)||Tt(t,e,rt())}function Mt(t){return Lt(t)?Mt(t.__v_raw):!(!t||!t.__ob__)}function It(t){return!(!t||!t.__v_isShallow)}function Lt(t){return!(!t||!t.__v_isReadonly)}var Rt="__v_isRef";function Ft(t){return!(!t||!0!==t.__v_isRef)}function Ht(t,e){if(Ft(t))return t;var n={};return z(n,Rt,!0),z(n,"__v_isShallow",e),z(n,"dep",At(n,"value",t,null,e,rt())),n}function Bt(t,e,n){Object.defineProperty(t,n,{enumerable:!0,configurable:!0,get:function(){var t=e[n];if(Ft(t))return t.value;var r=t&&t.__ob__;return r&&r.dep.depend(),t},set:function(t){var r=e[n];Ft(r)&&!Ft(t)?r.value=t:e[n]=t}})}function Ut(t,e,n){var r=t[e];if(Ft(r))return r;var o={get value(){var r=t[e];return void 0===r?n:r},set value(n){t[e]=n}};return z(o,Rt,!0),o}function zt(t){return Vt(t,!1)}function Vt(t,e){if(!u(t))return t;if(Lt(t))return t;var n=e?"__v_rawToShallowReadonly":"__v_rawToReadonly",r=t[n];if(r)return r;var o=Object.create(Object.getPrototypeOf(t));z(t,n,o),z(o,"__v_isReadonly",!0),z(o,"__v_raw",t),Ft(t)&&z(o,Rt,!0),(e||It(t))&&z(o,"__v_isShallow",!0);for(var i=Object.keys(t),a=0;a<i.length;a++)Kt(o,t,i[a],e);return o}function Kt(t,e,n,r){Object.defineProperty(t,n,{enumerable:!0,configurable:!0,get:function(){var t=e[n];return r||!u(t)?t:zt(t)},set:function(){}})}var Jt=b((function(t){var e="&"===t.charAt(0),n="~"===(t=e?t.slice(1):t).charAt(0),r="!"===(t=n?t.slice(1):t).charAt(0);return{name:t=r?t.slice(1):t,once:n,capture:r,passive:e}}));function qt(t,n){function r(){var t=r.fns;if(!e(t))return dn(t,null,arguments,n,"v-on handler");for(var o=t.slice(),i=0;i<o.length;i++)dn(o[i],null,arguments,n,"v-on handler")}return r.fns=t,r}function Wt(t,e,r,i,a,s){var c,u,l,f;for(c in t)u=t[c],l=e[c],f=Jt(c),n(u)||(n(l)?(n(u.fns)&&(u=t[c]=qt(u,s)),o(f.once)&&(u=t[c]=a(f.name,u,f.capture)),r(f.name,u,f.capture,f.passive,f.params)):u!==l&&(l.fns=u,t[c]=l));for(c in e)n(t[c])&&i((f=Jt(c)).name,e[c],f.capture)}function Zt(t,e,i){var a;t instanceof lt&&(t=t.data.hook||(t.data.hook={}));var s=t[e];function c(){i.apply(this,arguments),g(a.fns,c)}n(s)?a=qt([c]):r(s.fns)&&o(s.merged)?(a=s).fns.push(c):a=qt([s,c]),a.merged=!0,t[e]=a}function Gt(t,e,n,o,i){if(r(e)){if(_(e,n))return t[n]=e[n],i||delete e[n],!0;if(_(e,o))return t[n]=e[o],i||delete e[o],!0}return!1}function Xt(t){return i(t)?[dt(t)]:e(t)?Qt(t):void 0}function Yt(t){return r(t)&&r(t.text)&&!1===t.isComment}function Qt(t,a){var s,c,u,l,f=[];for(s=0;s<t.length;s++)n(c=t[s])||"boolean"==typeof c||(l=f[u=f.length-1],e(c)?c.length>0&&(Yt((c=Qt(c,"".concat(a||"","_").concat(s)))[0])&&Yt(l)&&(f[u]=dt(l.text+c[0].text),c.shift()),f.push.apply(f,c)):i(c)?Yt(l)?f[u]=dt(l.text+c):""!==c&&f.push(dt(c)):Yt(c)&&Yt(l)?f[u]=dt(l.text+c.text):(o(t._isVList)&&r(c.tag)&&n(c.key)&&r(a)&&(c.key="__vlist".concat(a,"_").concat(s,"__")),f.push(c)));return f}function te(t,n,c,u,l,f){return(e(c)||i(c))&&(l=u,u=c,c=void 0),o(f)&&(l=2),function(t,n,o,i,c){if(r(o)&&r(o.__ob__))return ft();r(o)&&r(o.is)&&(n=o.is);if(!n)return ft();e(i)&&a(i[0])&&((o=o||{}).scopedSlots={default:i[0]},i.length=0);2===c?i=Xt(i):1===c&&(i=function(t){for(var n=0;n<t.length;n++)if(e(t[n]))return Array.prototype.concat.apply([],t);return t}(i));var u,l;if("string"==typeof n){var f=void 0;l=t.$vnode&&t.$vnode.ns||H.getTagNamespace(n),u=H.isReservedTag(n)?new lt(H.parsePlatformTagName(n),o,i,void 0,void 0,t):o&&o.pre||!r(f=yr(t.$options,"components",n))?new lt(n,o,i,void 0,void 0,t):cr(f,o,t,i,n)}else u=cr(n,o,t,i);return e(u)?u:r(u)?(r(l)&&ee(u,l),r(o)&&function(t){s(t.style)&&Bn(t.style);s(t.class)&&Bn(t.class)}(o),u):ft()}(t,n,c,u,l)}function ee(t,e,i){if(t.ns=e,"foreignObject"===t.tag&&(e=void 0,i=!0),r(t.children))for(var a=0,s=t.children.length;a<s;a++){var c=t.children[a];r(c.tag)&&(n(c.ns)||o(i)&&"svg"!==c.tag)&&ee(c,e,i)}}function ne(t,n){var o,i,a,c,u=null;if(e(t)||"string"==typeof t)for(u=new Array(t.length),o=0,i=t.length;o<i;o++)u[o]=n(t[o],o);else if("number"==typeof t)for(u=new Array(t),o=0;o<t;o++)u[o]=n(o+1,o);else if(s(t))if(st&&t[Symbol.iterator]){u=[];for(var l=t[Symbol.iterator](),f=l.next();!f.done;)u.push(n(f.value,u.length)),f=l.next()}else for(a=Object.keys(t),u=new Array(a.length),o=0,i=a.length;o<i;o++)c=a[o],u[o]=n(t[c],c,o);return r(u)||(u=[]),u._isVList=!0,u}function re(t,e,n,r){var o,i=this.$scopedSlots[t];i?(n=n||{},r&&(n=T(T({},r),n)),o=i(n)||(a(e)?e():e)):o=this.$slots[t]||(a(e)?e():e);var s=n&&n.slot;return s?this.$createElement("template",{slot:s},o):o}function oe(t){return yr(this.$options,"filters",t)||N}function ie(t,n){return e(t)?-1===t.indexOf(n):t!==n}function ae(t,e,n,r,o){var i=H.keyCodes[e]||n;return o&&r&&!H.keyCodes[e]?ie(o,r):i?ie(i,t):r?k(r)!==e:void 0===t}function se(t,n,r,o,i){if(r)if(s(r)){e(r)&&(r=A(r));var a=void 0,c=function(e){if("class"===e||"style"===e||m(e))a=t;else{var s=t.attrs&&t.attrs.type;a=o||H.mustUseProp(n,s,e)?t.domProps||(t.domProps={}):t.attrs||(t.attrs={})}var c=w(e),u=k(e);c in a||u in a||(a[e]=r[e],i&&((t.on||(t.on={}))["update:".concat(e)]=function(t){r[e]=t}))};for(var u in r)c(u)}else;return t}function ce(t,e){var n=this._staticTrees||(this._staticTrees=[]),r=n[t];return r&&!e||le(r=n[t]=this.$options.staticRenderFns[t].call(this._renderProxy,this._c,this),"__static__".concat(t),!1),r}function ue(t,e,n){return le(t,"__once__".concat(e).concat(n?"_".concat(n):""),!0),t}function le(t,n,r){if(e(t))for(var o=0;o<t.length;o++)t[o]&&"string"!=typeof t[o]&&fe(t[o],"".concat(n,"_").concat(o),r);else fe(t,n,r)}function fe(t,e,n){t.isStatic=!0,t.key=e,t.isOnce=n}function de(t,e){if(e)if(u(e)){var n=t.on=t.on?T({},t.on):{};for(var r in e){var o=n[r],i=e[r];n[r]=o?[].concat(o,i):i}}else;return t}function pe(t,n,r,o){n=n||{$stable:!r};for(var i=0;i<t.length;i++){var a=t[i];e(a)?pe(a,n,r):a&&(a.proxy&&(a.fn.proxy=!0),n[a.key]=a.fn)}return o&&(n.$key=o),n}function ve(t,e){for(var n=0;n<e.length;n+=2){var r=e[n];"string"==typeof r&&r&&(t[e[n]]=e[n+1])}return t}function he(t,e){return"string"==typeof t?e+t:t}function me(t){t._o=ue,t._n=p,t._s=d,t._l=ne,t._t=re,t._q=P,t._i=D,t._m=ce,t._f=oe,t._k=ae,t._b=se,t._v=dt,t._e=ft,t._u=pe,t._g=de,t._d=ve,t._p=he}function ge(t,e){if(!t||!t.length)return{};for(var n={},r=0,o=t.length;r<o;r++){var i=t[r],a=i.data;if(a&&a.attrs&&a.attrs.slot&&delete a.attrs.slot,i.context!==e&&i.fnContext!==e||!a||null==a.slot)(n.default||(n.default=[])).push(i);else{var s=a.slot,c=n[s]||(n[s]=[]);"template"===i.tag?c.push.apply(c,i.children||[]):c.push(i)}}for(var u in n)n[u].every(ye)&&delete n[u];return n}function ye(t){return t.isComment&&!t.asyncFactory||" "===t.text}function _e(t){return t.isComment&&t.asyncFactory}function be(e,n,r,o){var i,a=Object.keys(r).length>0,s=n?!!n.$stable:!a,c=n&&n.$key;if(n){if(n._normalized)return n._normalized;if(s&&o&&o!==t&&c===o.$key&&!a&&!o.$hasNormal)return o;for(var u in i={},n)n[u]&&"$"!==u[0]&&(i[u]=$e(e,r,u,n[u]))}else i={};for(var l in r)l in i||(i[l]=we(r,l));return n&&Object.isExtensible(n)&&(n._normalized=i),z(i,"$stable",s),z(i,"$key",c),z(i,"$hasNormal",a),i}function $e(t,n,r,o){var i=function(){var n=ct;ut(t);var r=arguments.length?o.apply(null,arguments):o({}),i=(r=r&&"object"==typeof r&&!e(r)?[r]:Xt(r))&&r[0];return ut(n),r&&(!i||1===r.length&&i.isComment&&!_e(i))?void 0:r};return o.proxy&&Object.defineProperty(n,r,{get:i,enumerable:!0,configurable:!0}),i}function we(t,e){return function(){return t[e]}}function xe(e){return{get attrs(){if(!e._attrsProxy){var n=e._attrsProxy={};z(n,"_v_attr_proxy",!0),Ce(n,e.$attrs,t,e,"$attrs")}return e._attrsProxy},get listeners(){e._listenersProxy||Ce(e._listenersProxy={},e.$listeners,t,e,"$listeners");return e._listenersProxy},get slots(){return function(t){t._slotsProxy||Se(t._slotsProxy={},t.$scopedSlots);return t._slotsProxy}(e)},emit:S(e.$emit,e),expose:function(t){t&&Object.keys(t).forEach((function(n){return Bt(e,t,n)}))}}}function Ce(t,e,n,r,o){var i=!1;for(var a in e)a in t?e[a]!==n[a]&&(i=!0):(i=!0,ke(t,a,r,o));for(var a in t)a in e||(i=!0,delete t[a]);return i}function ke(t,e,n,r){Object.defineProperty(t,e,{enumerable:!0,configurable:!0,get:function(){return n[r][e]}})}function Se(t,e){for(var n in e)t[n]=e[n];for(var n in t)n in e||delete t[n]}function Oe(){var t=ct;return t._setupContext||(t._setupContext=xe(t))}var Te,Ae=null;function je(t,e){return(t.__esModule||st&&"Module"===t[Symbol.toStringTag])&&(t=t.default),s(t)?e.extend(t):t}function Ee(t){if(e(t))for(var n=0;n<t.length;n++){var o=t[n];if(r(o)&&(r(o.componentOptions)||_e(o)))return o}}function Ne(t,e){Te.$on(t,e)}function Pe(t,e){Te.$off(t,e)}function De(t,e){var n=Te;return function r(){var o=e.apply(null,arguments);null!==o&&n.$off(t,r)}}function Me(t,e,n){Te=t,Wt(e,n||{},Ne,Pe,De,t),Te=void 0}var Ie=null;function Le(t){var e=Ie;return Ie=t,function(){Ie=e}}function Re(t){for(;t&&(t=t.$parent);)if(t._inactive)return!0;return!1}function Fe(t,e){if(e){if(t._directInactive=!1,Re(t))return}else if(t._directInactive)return;if(t._inactive||null===t._inactive){t._inactive=!1;for(var n=0;n<t.$children.length;n++)Fe(t.$children[n]);Be(t,"activated")}}function He(t,e){if(!(e&&(t._directInactive=!0,Re(t))||t._inactive)){t._inactive=!0;for(var n=0;n<t.$children.length;n++)He(t.$children[n]);Be(t,"deactivated")}}function Be(t,e,n,r){void 0===r&&(r=!0),yt();var o=ct;r&&ut(t);var i=t.$options[e],a="".concat(e," hook");if(i)for(var s=0,c=i.length;s<c;s++)dn(i[s],t,n||null,t,a);t._hasHookEvent&&t.$emit("hook:"+e),r&&ut(o),_t()}var Ue=[],ze=[],Ve={},Ke=!1,Je=!1,qe=0;var We=0,Ze=Date.now;if(J&&!W){var Ge=window.performance;Ge&&"function"==typeof Ge.now&&Ze()>document.createEvent("Event").timeStamp&&(Ze=function(){return Ge.now()})}var Xe=function(t,e){if(t.post){if(!e.post)return 1}else if(e.post)return-1;return t.id-e.id};function Ye(){var t,e;for(We=Ze(),Je=!0,Ue.sort(Xe),qe=0;qe<Ue.length;qe++)(t=Ue[qe]).before&&t.before(),e=t.id,Ve[e]=null,t.run();var n=ze.slice(),r=Ue.slice();qe=Ue.length=ze.length=0,Ve={},Ke=Je=!1,function(t){for(var e=0;e<t.length;e++)t[e]._inactive=!0,Fe(t[e],!0)}(n),function(t){var e=t.length;for(;e--;){var n=t[e],r=n.vm;r&&r._watcher===n&&r._isMounted&&!r._isDestroyed&&Be(r,"updated")}}(r),function(){for(var t=0;t<ht.length;t++){var e=ht[t];e.subs=e.subs.filter((function(t){return t})),e._pending=!1}ht.length=0}(),ot&&H.devtools&&ot.emit("flush")}function Qe(t){var e=t.id;if(null==Ve[e]&&(t!==mt.target||!t.noRecurse)){if(Ve[e]=!0,Je){for(var n=Ue.length-1;n>qe&&Ue[n].id>t.id;)n--;Ue.splice(n+1,0,t)}else Ue.push(t);Ke||(Ke=!0,Cn(Ye))}}var tn="watcher",en="".concat(tn," callback"),nn="".concat(tn," getter"),rn="".concat(tn," cleanup");function on(t,e){return cn(t,null,{flush:"post"})}var an,sn={};function cn(n,r,o){var i=void 0===o?t:o,s=i.immediate,c=i.deep,u=i.flush,l=void 0===u?"pre":u;i.onTrack,i.onTrigger;var f,d,p=ct,v=function(t,e,n){return void 0===n&&(n=null),dn(t,null,n,p,e)},h=!1,m=!1;if(Ft(n)?(f=function(){return n.value},h=It(n)):Mt(n)?(f=function(){return n.__ob__.dep.depend(),n},c=!0):e(n)?(m=!0,h=n.some((function(t){return Mt(t)||It(t)})),f=function(){return n.map((function(t){return Ft(t)?t.value:Mt(t)?Bn(t):a(t)?v(t,nn):void 0}))}):f=a(n)?r?function(){return v(n,nn)}:function(){if(!p||!p._isDestroyed)return d&&d(),v(n,tn,[y])}:j,r&&c){var g=f;f=function(){return Bn(g())}}var y=function(t){d=_.onStop=function(){v(t,rn)}};if(rt())return y=j,r?s&&v(r,en,[f(),m?[]:void 0,y]):f(),j;var _=new Vn(ct,f,j,{lazy:!0});_.noRecurse=!r;var b=m?[]:sn;return _.run=function(){if(_.active)if(r){var t=_.get();(c||h||(m?t.some((function(t,e){return I(t,b[e])})):I(t,b)))&&(d&&d(),v(r,en,[t,b===sn?void 0:b,y]),b=t)}else _.get()},"sync"===l?_.update=_.run:"post"===l?(_.post=!0,_.update=function(){return Qe(_)}):_.update=function(){if(p&&p===ct&&!p._isMounted){var t=p._preWatchers||(p._preWatchers=[]);t.indexOf(_)<0&&t.push(_)}else Qe(_)},r?s?_.run():b=_.get():"post"===l&&p?p.$once("hook:mounted",(function(){return _.get()})):_.get(),function(){_.teardown()}}var un=function(){function t(t){void 0===t&&(t=!1),this.detached=t,this.active=!0,this.effects=[],this.cleanups=[],this.parent=an,!t&&an&&(this.index=(an.scopes||(an.scopes=[])).push(this)-1)}return t.prototype.run=function(t){if(this.active){var e=an;try{return an=this,t()}finally{an=e}}},t.prototype.on=function(){an=this},t.prototype.off=function(){an=this.parent},t.prototype.stop=function(t){if(this.active){var e=void 0,n=void 0;for(e=0,n=this.effects.length;e<n;e++)this.effects[e].teardown();for(e=0,n=this.cleanups.length;e<n;e++)this.cleanups[e]();if(this.scopes)for(e=0,n=this.scopes.length;e<n;e++)this.scopes[e].stop(!0);if(!this.detached&&this.parent&&!t){var r=this.parent.scopes.pop();r&&r!==this&&(this.parent.scopes[this.index]=r,r.index=this.index)}this.parent=void 0,this.active=!1}},t}();function ln(t){var e=t._provided,n=t.$parent&&t.$parent._provided;return n===e?t._provided=Object.create(n):e}function fn(t,e,n){yt();try{if(e)for(var r=e;r=r.$parent;){var o=r.$options.errorCaptured;if(o)for(var i=0;i<o.length;i++)try{if(!1===o[i].call(r,t,e,n))return}catch(t){pn(t,r,"errorCaptured hook")}}pn(t,e,n)}finally{_t()}}function dn(t,e,n,r,o){var i;try{(i=n?t.apply(e,n):t.call(e))&&!i._isVue&&f(i)&&!i._handled&&(i.catch((function(t){return fn(t,r,o+" (Promise/async)")})),i._handled=!0)}catch(t){fn(t,r,o)}return i}function pn(t,e,n){if(H.errorHandler)try{return H.errorHandler.call(null,t,e,n)}catch(e){e!==t&&vn(e)}vn(t)}function vn(t,e,n){if(!J||"undefined"==typeof console)throw t;console.error(t)}var hn,mn=!1,gn=[],yn=!1;function _n(){yn=!1;var t=gn.slice(0);gn.length=0;for(var e=0;e<t.length;e++)t[e]()}if("undefined"!=typeof Promise&&it(Promise)){var bn=Promise.resolve();hn=function(){bn.then(_n),X&&setTimeout(j)},mn=!0}else if(W||"undefined"==typeof MutationObserver||!it(MutationObserver)&&"[object MutationObserverConstructor]"!==MutationObserver.toString())hn="undefined"!=typeof setImmediate&&it(setImmediate)?function(){setImmediate(_n)}:function(){setTimeout(_n,0)};else{var $n=1,wn=new MutationObserver(_n),xn=document.createTextNode(String($n));wn.observe(xn,{characterData:!0}),hn=function(){$n=($n+1)%2,xn.data=String($n)},mn=!0}function Cn(t,e){var n;if(gn.push((function(){if(t)try{t.call(e)}catch(t){fn(t,e,"nextTick")}else n&&n(e)})),yn||(yn=!0,hn()),!t&&"undefined"!=typeof Promise)return new Promise((function(t){n=t}))}function kn(t){return function(e,n){if(void 0===n&&(n=ct),n)return function(t,e,n){var r=t.$options;r[e]=vr(r[e],n)}(n,t,e)}}var Sn=kn("beforeMount"),On=kn("mounted"),Tn=kn("beforeUpdate"),An=kn("updated"),jn=kn("beforeDestroy"),En=kn("destroyed"),Nn=kn("activated"),Pn=kn("deactivated"),Dn=kn("serverPrefetch"),Mn=kn("renderTracked"),In=kn("renderTriggered"),Ln=kn("errorCaptured");var Rn="2.7.14";var Fn=Object.freeze({__proto__:null,version:Rn,defineComponent:function(t){return t},ref:function(t){return Ht(t,!1)},shallowRef:function(t){return Ht(t,!0)},isRef:Ft,toRef:Ut,toRefs:function(t){var n=e(t)?new Array(t.length):{};for(var r in t)n[r]=Ut(t,r);return n},unref:function(t){return Ft(t)?t.value:t},proxyRefs:function(t){if(Mt(t))return t;for(var e={},n=Object.keys(t),r=0;r<n.length;r++)Bt(e,t,n[r]);return e},customRef:function(t){var e=new mt,n=t((function(){e.depend()}),(function(){e.notify()})),r=n.get,o=n.set,i={get value(){return r()},set value(t){o(t)}};return z(i,Rt,!0),i},triggerRef:function(t){t.dep&&t.dep.notify()},reactive:function(t){return Dt(t,!1),t},isReactive:Mt,isReadonly:Lt,isShallow:It,isProxy:function(t){return Mt(t)||Lt(t)},shallowReactive:Pt,markRaw:function(t){return Object.isExtensible(t)&&z(t,"__v_skip",!0),t},toRaw:function t(e){var n=e&&e.__v_raw;return n?t(n):e},readonly:zt,shallowReadonly:function(t){return Vt(t,!0)},computed:function(t,e){var n,r,o=a(t);o?(n=t,r=j):(n=t.get,r=t.set);var i=rt()?null:new Vn(ct,n,j,{lazy:!0}),s={effect:i,get value(){return i?(i.dirty&&i.evaluate(),mt.target&&i.depend(),i.value):n()},set value(t){r(t)}};return z(s,Rt,!0),z(s,"__v_isReadonly",o),s},watch:function(t,e,n){return cn(t,e,n)},watchEffect:function(t,e){return cn(t,null,e)},watchPostEffect:on,watchSyncEffect:function(t,e){return cn(t,null,{flush:"sync"})},EffectScope:un,effectScope:function(t){return new un(t)},onScopeDispose:function(t){an&&an.cleanups.push(t)},getCurrentScope:function(){return an},provide:function(t,e){ct&&(ln(ct)[t]=e)},inject:function(t,e,n){void 0===n&&(n=!1);var r=ct;if(r){var o=r.$parent&&r.$parent._provided;if(o&&t in o)return o[t];if(arguments.length>1)return n&&a(e)?e.call(r):e}},h:function(t,e,n){return te(ct,t,e,n,2,!0)},getCurrentInstance:function(){return ct&&{proxy:ct}},useSlots:function(){return Oe().slots},useAttrs:function(){return Oe().attrs},useListeners:function(){return Oe().listeners},mergeDefaults:function(t,n){var r=e(t)?t.reduce((function(t,e){return t[e]={},t}),{}):t;for(var o in n){var i=r[o];i?e(i)||a(i)?r[o]={type:i,default:n[o]}:i.default=n[o]:null===i&&(r[o]={default:n[o]})}return r},nextTick:Cn,set:jt,del:Et,useCssModule:function(e){return t},useCssVars:function(t){if(J){var e=ct;e&&on((function(){var n=e.$el,r=t(e,e._setupProxy);if(n&&1===n.nodeType){var o=n.style;for(var i in r)o.setProperty("--".concat(i),r[i])}}))}},defineAsyncComponent:function(t){a(t)&&(t={loader:t});var e=t.loader,n=t.loadingComponent,r=t.errorComponent,o=t.delay,i=void 0===o?200:o,s=t.timeout;t.suspensible;var c=t.onError,u=null,l=0,f=function(){var t;return u||(t=u=e().catch((function(t){if(t=t instanceof Error?t:new Error(String(t)),c)return new Promise((function(e,n){c(t,(function(){return e((l++,u=null,f()))}),(function(){return n(t)}),l+1)}));throw t})).then((function(e){return t!==u&&u?u:(e&&(e.__esModule||"Module"===e[Symbol.toStringTag])&&(e=e.default),e)})))};return function(){return{component:f(),delay:i,timeout:s,error:r,loading:n}}},onBeforeMount:Sn,onMounted:On,onBeforeUpdate:Tn,onUpdated:An,onBeforeUnmount:jn,onUnmounted:En,onActivated:Nn,onDeactivated:Pn,onServerPrefetch:Dn,onRenderTracked:Mn,onRenderTriggered:In,onErrorCaptured:function(t,e){void 0===e&&(e=ct),Ln(t,e)}}),Hn=new at;function Bn(t){return Un(t,Hn),Hn.clear(),t}function Un(t,n){var r,o,i=e(t);if(!(!i&&!s(t)||t.__v_skip||Object.isFrozen(t)||t instanceof lt)){if(t.__ob__){var a=t.__ob__.dep.id;if(n.has(a))return;n.add(a)}if(i)for(r=t.length;r--;)Un(t[r],n);else if(Ft(t))Un(t.value,n);else for(r=(o=Object.keys(t)).length;r--;)Un(t[o[r]],n)}}var zn=0,Vn=function(){function t(t,e,n,r,o){!function(t,e){void 0===e&&(e=an),e&&e.active&&e.effects.push(t)}(this,an&&!an._vm?an:t?t._scope:void 0),(this.vm=t)&&o&&(t._watcher=this),r?(this.deep=!!r.deep,this.user=!!r.user,this.lazy=!!r.lazy,this.sync=!!r.sync,this.before=r.before):this.deep=this.user=this.lazy=this.sync=!1,this.cb=n,this.id=++zn,this.active=!0,this.post=!1,this.dirty=this.lazy,this.deps=[],this.newDeps=[],this.depIds=new at,this.newDepIds=new at,this.expression="",a(e)?this.getter=e:(this.getter=function(t){if(!V.test(t)){var e=t.split(".");return function(t){for(var n=0;n<e.length;n++){if(!t)return;t=t[e[n]]}return t}}}(e),this.getter||(this.getter=j)),this.value=this.lazy?void 0:this.get()}return t.prototype.get=function(){var t;yt(this);var e=this.vm;try{t=this.getter.call(e,e)}catch(t){if(!this.user)throw t;fn(t,e,'getter for watcher "'.concat(this.expression,'"'))}finally{this.deep&&Bn(t),_t(),this.cleanupDeps()}return t},t.prototype.addDep=function(t){var e=t.id;this.newDepIds.has(e)||(this.newDepIds.add(e),this.newDeps.push(t),this.depIds.has(e)||t.addSub(this))},t.prototype.cleanupDeps=function(){for(var t=this.deps.length;t--;){var e=this.deps[t];this.newDepIds.has(e.id)||e.removeSub(this)}var n=this.depIds;this.depIds=this.newDepIds,this.newDepIds=n,this.newDepIds.clear(),n=this.deps,this.deps=this.newDeps,this.newDeps=n,this.newDeps.length=0},t.prototype.update=function(){this.lazy?this.dirty=!0:this.sync?this.run():Qe(this)},t.prototype.run=function(){if(this.active){var t=this.get();if(t!==this.value||s(t)||this.deep){var e=this.value;if(this.value=t,this.user){var n='callback for watcher "'.concat(this.expression,'"');dn(this.cb,this.vm,[t,e],this.vm,n)}else this.cb.call(this.vm,t,e)}}},t.prototype.evaluate=function(){this.value=this.get(),this.dirty=!1},t.prototype.depend=function(){for(var t=this.deps.length;t--;)this.deps[t].depend()},t.prototype.teardown=function(){if(this.vm&&!this.vm._isBeingDestroyed&&g(this.vm._scope.effects,this),this.active){for(var t=this.deps.length;t--;)this.deps[t].removeSub(this);this.active=!1,this.onStop&&this.onStop()}},t}(),Kn={enumerable:!0,configurable:!0,get:j,set:j};function Jn(t,e,n){Kn.get=function(){return this[e][n]},Kn.set=function(t){this[e][n]=t},Object.defineProperty(t,n,Kn)}function qn(t){var n=t.$options;if(n.props&&function(t,e){var n=t.$options.propsData||{},r=t._props=Pt({}),o=t.$options._propKeys=[];t.$parent&&kt(!1);var i=function(i){o.push(i);var a=_r(i,e,n,t);At(r,i,a),i in t||Jn(t,"_props",i)};for(var a in e)i(a);kt(!0)}(t,n.props),function(t){var e=t.$options,n=e.setup;if(n){var r=t._setupContext=xe(t);ut(t),yt();var o=dn(n,null,[t._props||Pt({}),r],t,"setup");if(_t(),ut(),a(o))e.render=o;else if(s(o))if(t._setupState=o,o.__sfc){var i=t._setupProxy={};for(var c in o)"__sfc"!==c&&Bt(i,o,c)}else for(var c in o)U(c)||Bt(t,o,c)}}(t),n.methods&&function(t,e){for(var n in t.$options.props,e)t[n]="function"!=typeof e[n]?j:S(e[n],t)}(t,n.methods),n.data)!function(t){var e=t.$options.data;u(e=t._data=a(e)?function(t,e){yt();try{return t.call(e,e)}catch(t){return fn(t,e,"data()"),{}}finally{_t()}}(e,t):e||{})||(e={});var n=Object.keys(e),r=t.$options.props;t.$options.methods;var o=n.length;for(;o--;){var i=n[o];r&&_(r,i)||U(i)||Jn(t,"_data",i)}var s=Tt(e);s&&s.vmCount++}(t);else{var r=Tt(t._data={});r&&r.vmCount++}n.computed&&function(t,e){var n=t._computedWatchers=Object.create(null),r=rt();for(var o in e){var i=e[o],s=a(i)?i:i.get;r||(n[o]=new Vn(t,s||j,j,Wn)),o in t||Zn(t,o,i)}}(t,n.computed),n.watch&&n.watch!==tt&&function(t,n){for(var r in n){var o=n[r];if(e(o))for(var i=0;i<o.length;i++)Yn(t,r,o[i]);else Yn(t,r,o)}}(t,n.watch)}var Wn={lazy:!0};function Zn(t,e,n){var r=!rt();a(n)?(Kn.get=r?Gn(e):Xn(n),Kn.set=j):(Kn.get=n.get?r&&!1!==n.cache?Gn(e):Xn(n.get):j,Kn.set=n.set||j),Object.defineProperty(t,e,Kn)}function Gn(t){return function(){var e=this._computedWatchers&&this._computedWatchers[t];if(e)return e.dirty&&e.evaluate(),mt.target&&e.depend(),e.value}}function Xn(t){return function(){return t.call(this,this)}}function Yn(t,e,n,r){return u(n)&&(r=n,n=n.handler),"string"==typeof n&&(n=t[n]),t.$watch(e,n,r)}function Qn(t,e){if(t){for(var n=Object.create(null),r=st?Reflect.ownKeys(t):Object.keys(t),o=0;o<r.length;o++){var i=r[o];if("__ob__"!==i){var s=t[i].from;if(s in e._provided)n[i]=e._provided[s];else if("default"in t[i]){var c=t[i].default;n[i]=a(c)?c.call(e):c}}}return n}}var tr=0;function er(t){var e=t.options;if(t.super){var n=er(t.super);if(n!==t.superOptions){t.superOptions=n;var r=function(t){var e,n=t.options,r=t.sealedOptions;for(var o in n)n[o]!==r[o]&&(e||(e={}),e[o]=n[o]);return e}(t);r&&T(t.extendOptions,r),(e=t.options=gr(n,t.extendOptions)).name&&(e.components[e.name]=t)}}return e}function nr(n,r,i,a,s){var c,u=this,l=s.options;_(a,"_uid")?(c=Object.create(a))._original=a:(c=a,a=a._original);var f=o(l._compiled),d=!f;this.data=n,this.props=r,this.children=i,this.parent=a,this.listeners=n.on||t,this.injections=Qn(l.inject,a),this.slots=function(){return u.$slots||be(a,n.scopedSlots,u.$slots=ge(i,a)),u.$slots},Object.defineProperty(this,"scopedSlots",{enumerable:!0,get:function(){return be(a,n.scopedSlots,this.slots())}}),f&&(this.$options=l,this.$slots=this.slots(),this.$scopedSlots=be(a,n.scopedSlots,this.$slots)),l._scopeId?this._c=function(t,n,r,o){var i=te(c,t,n,r,o,d);return i&&!e(i)&&(i.fnScopeId=l._scopeId,i.fnContext=a),i}:this._c=function(t,e,n,r){return te(c,t,e,n,r,d)}}function rr(t,e,n,r,o){var i=pt(t);return i.fnContext=n,i.fnOptions=r,e.slot&&((i.data||(i.data={})).slot=e.slot),i}function or(t,e){for(var n in e)t[w(n)]=e[n]}function ir(t){return t.name||t.__name||t._componentTag}me(nr.prototype);var ar={init:function(t,e){if(t.componentInstance&&!t.componentInstance._isDestroyed&&t.data.keepAlive){var n=t;ar.prepatch(n,n)}else{(t.componentInstance=function(t,e){var n={_isComponent:!0,_parentVnode:t,parent:e},o=t.data.inlineTemplate;r(o)&&(n.render=o.render,n.staticRenderFns=o.staticRenderFns);return new t.componentOptions.Ctor(n)}(t,Ie)).$mount(e?t.elm:void 0,e)}},prepatch:function(e,n){var r=n.componentOptions;!function(e,n,r,o,i){var a=o.data.scopedSlots,s=e.$scopedSlots,c=!!(a&&!a.$stable||s!==t&&!s.$stable||a&&e.$scopedSlots.$key!==a.$key||!a&&e.$scopedSlots.$key),u=!!(i||e.$options._renderChildren||c),l=e.$vnode;e.$options._parentVnode=o,e.$vnode=o,e._vnode&&(e._vnode.parent=o),e.$options._renderChildren=i;var f=o.data.attrs||t;e._attrsProxy&&Ce(e._attrsProxy,f,l.data&&l.data.attrs||t,e,"$attrs")&&(u=!0),e.$attrs=f,r=r||t;var d=e.$options._parentListeners;if(e._listenersProxy&&Ce(e._listenersProxy,r,d||t,e,"$listeners"),e.$listeners=e.$options._parentListeners=r,Me(e,r,d),n&&e.$options.props){kt(!1);for(var p=e._props,v=e.$options._propKeys||[],h=0;h<v.length;h++){var m=v[h],g=e.$options.props;p[m]=_r(m,g,n,e)}kt(!0),e.$options.propsData=n}u&&(e.$slots=ge(i,o.context),e.$forceUpdate())}(n.componentInstance=e.componentInstance,r.propsData,r.listeners,n,r.children)},insert:function(t){var e,n=t.context,r=t.componentInstance;r._isMounted||(r._isMounted=!0,Be(r,"mounted")),t.data.keepAlive&&(n._isMounted?((e=r)._inactive=!1,ze.push(e)):Fe(r,!0))},destroy:function(t){var e=t.componentInstance;e._isDestroyed||(t.data.keepAlive?He(e,!0):e.$destroy())}},sr=Object.keys(ar);function cr(i,a,c,u,l){if(!n(i)){var d=c.$options._base;if(s(i)&&(i=d.extend(i)),"function"==typeof i){var p;if(n(i.cid)&&(i=function(t,e){if(o(t.error)&&r(t.errorComp))return t.errorComp;if(r(t.resolved))return t.resolved;var i=Ae;if(i&&r(t.owners)&&-1===t.owners.indexOf(i)&&t.owners.push(i),o(t.loading)&&r(t.loadingComp))return t.loadingComp;if(i&&!r(t.owners)){var a=t.owners=[i],c=!0,u=null,l=null;i.$on("hook:destroyed",(function(){return g(a,i)}));var d=function(t){for(var e=0,n=a.length;e<n;e++)a[e].$forceUpdate();t&&(a.length=0,null!==u&&(clearTimeout(u),u=null),null!==l&&(clearTimeout(l),l=null))},p=M((function(n){t.resolved=je(n,e),c?a.length=0:d(!0)})),v=M((function(e){r(t.errorComp)&&(t.error=!0,d(!0))})),h=t(p,v);return s(h)&&(f(h)?n(t.resolved)&&h.then(p,v):f(h.component)&&(h.component.then(p,v),r(h.error)&&(t.errorComp=je(h.error,e)),r(h.loading)&&(t.loadingComp=je(h.loading,e),0===h.delay?t.loading=!0:u=setTimeout((function(){u=null,n(t.resolved)&&n(t.error)&&(t.loading=!0,d(!1))}),h.delay||200)),r(h.timeout)&&(l=setTimeout((function(){l=null,n(t.resolved)&&v(null)}),h.timeout)))),c=!1,t.loading?t.loadingComp:t.resolved}}(p=i,d),void 0===i))return function(t,e,n,r,o){var i=ft();return i.asyncFactory=t,i.asyncMeta={data:e,context:n,children:r,tag:o},i}(p,a,c,u,l);a=a||{},er(i),r(a.model)&&function(t,n){var o=t.model&&t.model.prop||"value",i=t.model&&t.model.event||"input";(n.attrs||(n.attrs={}))[o]=n.model.value;var a=n.on||(n.on={}),s=a[i],c=n.model.callback;r(s)?(e(s)?-1===s.indexOf(c):s!==c)&&(a[i]=[c].concat(s)):a[i]=c}(i.options,a);var v=function(t,e,o){var i=e.options.props;if(!n(i)){var a={},s=t.attrs,c=t.props;if(r(s)||r(c))for(var u in i){var l=k(u);Gt(a,c,u,l,!0)||Gt(a,s,u,l,!1)}return a}}(a,i);if(o(i.options.functional))return function(n,o,i,a,s){var c=n.options,u={},l=c.props;if(r(l))for(var f in l)u[f]=_r(f,l,o||t);else r(i.attrs)&&or(u,i.attrs),r(i.props)&&or(u,i.props);var d=new nr(i,u,s,a,n),p=c.render.call(null,d._c,d);if(p instanceof lt)return rr(p,i,d.parent,c);if(e(p)){for(var v=Xt(p)||[],h=new Array(v.length),m=0;m<v.length;m++)h[m]=rr(v[m],i,d.parent,c);return h}}(i,v,a,c,u);var h=a.on;if(a.on=a.nativeOn,o(i.options.abstract)){var m=a.slot;a={},m&&(a.slot=m)}!function(t){for(var e=t.hook||(t.hook={}),n=0;n<sr.length;n++){var r=sr[n],o=e[r],i=ar[r];o===i||o&&o._merged||(e[r]=o?ur(i,o):i)}}(a);var y=ir(i.options)||l;return new lt("vue-component-".concat(i.cid).concat(y?"-".concat(y):""),a,void 0,void 0,void 0,c,{Ctor:i,propsData:v,listeners:h,tag:l,children:u},p)}}}function ur(t,e){var n=function(n,r){t(n,r),e(n,r)};return n._merged=!0,n}var lr=j,fr=H.optionMergeStrategies;function dr(t,e,n){if(void 0===n&&(n=!0),!e)return t;for(var r,o,i,a=st?Reflect.ownKeys(e):Object.keys(e),s=0;s<a.length;s++)"__ob__"!==(r=a[s])&&(o=t[r],i=e[r],n&&_(t,r)?o!==i&&u(o)&&u(i)&&dr(o,i):jt(t,r,i));return t}function pr(t,e,n){return n?function(){var r=a(e)?e.call(n,n):e,o=a(t)?t.call(n,n):t;return r?dr(r,o):o}:e?t?function(){return dr(a(e)?e.call(this,this):e,a(t)?t.call(this,this):t)}:e:t}function vr(t,n){var r=n?t?t.concat(n):e(n)?n:[n]:t;return r?function(t){for(var e=[],n=0;n<t.length;n++)-1===e.indexOf(t[n])&&e.push(t[n]);return e}(r):r}function hr(t,e,n,r){var o=Object.create(t||null);return e?T(o,e):o}fr.data=function(t,e,n){return n?pr(t,e,n):e&&"function"!=typeof e?t:pr(t,e)},F.forEach((function(t){fr[t]=vr})),R.forEach((function(t){fr[t+"s"]=hr})),fr.watch=function(t,n,r,o){if(t===tt&&(t=void 0),n===tt&&(n=void 0),!n)return Object.create(t||null);if(!t)return n;var i={};for(var a in T(i,t),n){var s=i[a],c=n[a];s&&!e(s)&&(s=[s]),i[a]=s?s.concat(c):e(c)?c:[c]}return i},fr.props=fr.methods=fr.inject=fr.computed=function(t,e,n,r){if(!t)return e;var o=Object.create(null);return T(o,t),e&&T(o,e),o},fr.provide=function(t,e){return t?function(){var n=Object.create(null);return dr(n,a(t)?t.call(this):t),e&&dr(n,a(e)?e.call(this):e,!1),n}:e};var mr=function(t,e){return void 0===e?t:e};function gr(t,n,r){if(a(n)&&(n=n.options),function(t,n){var r=t.props;if(r){var o,i,a={};if(e(r))for(o=r.length;o--;)"string"==typeof(i=r[o])&&(a[w(i)]={type:null});else if(u(r))for(var s in r)i=r[s],a[w(s)]=u(i)?i:{type:i};t.props=a}}(n),function(t,n){var r=t.inject;if(r){var o=t.inject={};if(e(r))for(var i=0;i<r.length;i++)o[r[i]]={from:r[i]};else if(u(r))for(var a in r){var s=r[a];o[a]=u(s)?T({from:a},s):{from:s}}}}(n),function(t){var e=t.directives;if(e)for(var n in e){var r=e[n];a(r)&&(e[n]={bind:r,update:r})}}(n),!n._base&&(n.extends&&(t=gr(t,n.extends,r)),n.mixins))for(var o=0,i=n.mixins.length;o<i;o++)t=gr(t,n.mixins[o],r);var s,c={};for(s in t)l(s);for(s in n)_(t,s)||l(s);function l(e){var o=fr[e]||mr;c[e]=o(t[e],n[e],r,e)}return c}function yr(t,e,n,r){if("string"==typeof n){var o=t[e];if(_(o,n))return o[n];var i=w(n);if(_(o,i))return o[i];var a=x(i);return _(o,a)?o[a]:o[n]||o[i]||o[a]}}function _r(t,e,n,r){var o=e[t],i=!_(n,t),s=n[t],c=xr(Boolean,o.type);if(c>-1)if(i&&!_(o,"default"))s=!1;else if(""===s||s===k(t)){var u=xr(String,o.type);(u<0||c<u)&&(s=!0)}if(void 0===s){s=function(t,e,n){if(!_(e,"default"))return;var r=e.default;if(t&&t.$options.propsData&&void 0===t.$options.propsData[n]&&void 0!==t._props[n])return t._props[n];return a(r)&&"Function"!==$r(e.type)?r.call(t):r}(r,o,t);var l=Ct;kt(!0),Tt(s),kt(l)}return s}var br=/^\s*function (\w+)/;function $r(t){var e=t&&t.toString().match(br);return e?e[1]:""}function wr(t,e){return $r(t)===$r(e)}function xr(t,n){if(!e(n))return wr(n,t)?0:-1;for(var r=0,o=n.length;r<o;r++)if(wr(n[r],t))return r;return-1}function Cr(t){this._init(t)}function kr(t){t.cid=0;var e=1;t.extend=function(t){t=t||{};var n=this,r=n.cid,o=t._Ctor||(t._Ctor={});if(o[r])return o[r];var i=ir(t)||ir(n.options),a=function(t){this._init(t)};return(a.prototype=Object.create(n.prototype)).constructor=a,a.cid=e++,a.options=gr(n.options,t),a.super=n,a.options.props&&function(t){var e=t.options.props;for(var n in e)Jn(t.prototype,"_props",n)}(a),a.options.computed&&function(t){var e=t.options.computed;for(var n in e)Zn(t.prototype,n,e[n])}(a),a.extend=n.extend,a.mixin=n.mixin,a.use=n.use,R.forEach((function(t){a[t]=n[t]})),i&&(a.options.components[i]=a),a.superOptions=n.options,a.extendOptions=t,a.sealedOptions=T({},a.options),o[r]=a,a}}function Sr(t){return t&&(ir(t.Ctor.options)||t.tag)}function Or(t,n){return e(t)?t.indexOf(n)>-1:"string"==typeof t?t.split(",").indexOf(n)>-1:(r=t,"[object RegExp]"===c.call(r)&&t.test(n));var r}function Tr(t,e){var n=t.cache,r=t.keys,o=t._vnode;for(var i in n){var a=n[i];if(a){var s=a.name;s&&!e(s)&&Ar(n,i,r,o)}}}function Ar(t,e,n,r){var o=t[e];!o||r&&o.tag===r.tag||o.componentInstance.$destroy(),t[e]=null,g(n,e)}!function(e){e.prototype._init=function(e){var n=this;n._uid=tr++,n._isVue=!0,n.__v_skip=!0,n._scope=new un(!0),n._scope._vm=!0,e&&e._isComponent?function(t,e){var n=t.$options=Object.create(t.constructor.options),r=e._parentVnode;n.parent=e.parent,n._parentVnode=r;var o=r.componentOptions;n.propsData=o.propsData,n._parentListeners=o.listeners,n._renderChildren=o.children,n._componentTag=o.tag,e.render&&(n.render=e.render,n.staticRenderFns=e.staticRenderFns)}(n,e):n.$options=gr(er(n.constructor),e||{},n),n._renderProxy=n,n._self=n,function(t){var e=t.$options,n=e.parent;if(n&&!e.abstract){for(;n.$options.abstract&&n.$parent;)n=n.$parent;n.$children.push(t)}t.$parent=n,t.$root=n?n.$root:t,t.$children=[],t.$refs={},t._provided=n?n._provided:Object.create(null),t._watcher=null,t._inactive=null,t._directInactive=!1,t._isMounted=!1,t._isDestroyed=!1,t._isBeingDestroyed=!1}(n),function(t){t._events=Object.create(null),t._hasHookEvent=!1;var e=t.$options._parentListeners;e&&Me(t,e)}(n),function(e){e._vnode=null,e._staticTrees=null;var n=e.$options,r=e.$vnode=n._parentVnode,o=r&&r.context;e.$slots=ge(n._renderChildren,o),e.$scopedSlots=r?be(e.$parent,r.data.scopedSlots,e.$slots):t,e._c=function(t,n,r,o){return te(e,t,n,r,o,!1)},e.$createElement=function(t,n,r,o){return te(e,t,n,r,o,!0)};var i=r&&r.data;At(e,"$attrs",i&&i.attrs||t,null,!0),At(e,"$listeners",n._parentListeners||t,null,!0)}(n),Be(n,"beforeCreate",void 0,!1),function(t){var e=Qn(t.$options.inject,t);e&&(kt(!1),Object.keys(e).forEach((function(n){At(t,n,e[n])})),kt(!0))}(n),qn(n),function(t){var e=t.$options.provide;if(e){var n=a(e)?e.call(t):e;if(!s(n))return;for(var r=ln(t),o=st?Reflect.ownKeys(n):Object.keys(n),i=0;i<o.length;i++){var c=o[i];Object.defineProperty(r,c,Object.getOwnPropertyDescriptor(n,c))}}}(n),Be(n,"created"),n.$options.el&&n.$mount(n.$options.el)}}(Cr),function(t){var e={get:function(){return this._data}},n={get:function(){return this._props}};Object.defineProperty(t.prototype,"$data",e),Object.defineProperty(t.prototype,"$props",n),t.prototype.$set=jt,t.prototype.$delete=Et,t.prototype.$watch=function(t,e,n){var r=this;if(u(e))return Yn(r,t,e,n);(n=n||{}).user=!0;var o=new Vn(r,t,e,n);if(n.immediate){var i='callback for immediate watcher "'.concat(o.expression,'"');yt(),dn(e,r,[o.value],r,i),_t()}return function(){o.teardown()}}}(Cr),function(t){var n=/^hook:/;t.prototype.$on=function(t,r){var o=this;if(e(t))for(var i=0,a=t.length;i<a;i++)o.$on(t[i],r);else(o._events[t]||(o._events[t]=[])).push(r),n.test(t)&&(o._hasHookEvent=!0);return o},t.prototype.$once=function(t,e){var n=this;function r(){n.$off(t,r),e.apply(n,arguments)}return r.fn=e,n.$on(t,r),n},t.prototype.$off=function(t,n){var r=this;if(!arguments.length)return r._events=Object.create(null),r;if(e(t)){for(var o=0,i=t.length;o<i;o++)r.$off(t[o],n);return r}var a,s=r._events[t];if(!s)return r;if(!n)return r._events[t]=null,r;for(var c=s.length;c--;)if((a=s[c])===n||a.fn===n){s.splice(c,1);break}return r},t.prototype.$emit=function(t){var e=this,n=e._events[t];if(n){n=n.length>1?O(n):n;for(var r=O(arguments,1),o='event handler for "'.concat(t,'"'),i=0,a=n.length;i<a;i++)dn(n[i],e,r,e,o)}return e}}(Cr),function(t){t.prototype._update=function(t,e){var n=this,r=n.$el,o=n._vnode,i=Le(n);n._vnode=t,n.$el=o?n.__patch__(o,t):n.__patch__(n.$el,t,e,!1),i(),r&&(r.__vue__=null),n.$el&&(n.$el.__vue__=n);for(var a=n;a&&a.$vnode&&a.$parent&&a.$vnode===a.$parent._vnode;)a.$parent.$el=a.$el,a=a.$parent},t.prototype.$forceUpdate=function(){this._watcher&&this._watcher.update()},t.prototype.$destroy=function(){var t=this;if(!t._isBeingDestroyed){Be(t,"beforeDestroy"),t._isBeingDestroyed=!0;var e=t.$parent;!e||e._isBeingDestroyed||t.$options.abstract||g(e.$children,t),t._scope.stop(),t._data.__ob__&&t._data.__ob__.vmCount--,t._isDestroyed=!0,t.__patch__(t._vnode,null),Be(t,"destroyed"),t.$off(),t.$el&&(t.$el.__vue__=null),t.$vnode&&(t.$vnode.parent=null)}}}(Cr),function(t){me(t.prototype),t.prototype.$nextTick=function(t){return Cn(t,this)},t.prototype._render=function(){var t,n=this,r=n.$options,o=r.render,i=r._parentVnode;i&&n._isMounted&&(n.$scopedSlots=be(n.$parent,i.data.scopedSlots,n.$slots,n.$scopedSlots),n._slotsProxy&&Se(n._slotsProxy,n.$scopedSlots)),n.$vnode=i;try{ut(n),Ae=n,t=o.call(n._renderProxy,n.$createElement)}catch(e){fn(e,n,"render"),t=n._vnode}finally{Ae=null,ut()}return e(t)&&1===t.length&&(t=t[0]),t instanceof lt||(t=ft()),t.parent=i,t}}(Cr);var jr=[String,RegExp,Array],Er={name:"keep-alive",abstract:!0,props:{include:jr,exclude:jr,max:[String,Number]},methods:{cacheVNode:function(){var t=this,e=t.cache,n=t.keys,r=t.vnodeToCache,o=t.keyToCache;if(r){var i=r.tag,a=r.componentInstance,s=r.componentOptions;e[o]={name:Sr(s),tag:i,componentInstance:a},n.push(o),this.max&&n.length>parseInt(this.max)&&Ar(e,n[0],n,this._vnode),this.vnodeToCache=null}}},created:function(){this.cache=Object.create(null),this.keys=[]},destroyed:function(){for(var t in this.cache)Ar(this.cache,t,this.keys)},mounted:function(){var t=this;this.cacheVNode(),this.$watch("include",(function(e){Tr(t,(function(t){return Or(e,t)}))})),this.$watch("exclude",(function(e){Tr(t,(function(t){return!Or(e,t)}))}))},updated:function(){this.cacheVNode()},render:function(){var t=this.$slots.default,e=Ee(t),n=e&&e.componentOptions;if(n){var r=Sr(n),o=this.include,i=this.exclude;if(o&&(!r||!Or(o,r))||i&&r&&Or(i,r))return e;var a=this.cache,s=this.keys,c=null==e.key?n.Ctor.cid+(n.tag?"::".concat(n.tag):""):e.key;a[c]?(e.componentInstance=a[c].componentInstance,g(s,c),s.push(c)):(this.vnodeToCache=e,this.keyToCache=c),e.data.keepAlive=!0}return e||t&&t[0]}},Nr={KeepAlive:Er};!function(t){var e={get:function(){return H}};Object.defineProperty(t,"config",e),t.util={warn:lr,extend:T,mergeOptions:gr,defineReactive:At},t.set=jt,t.delete=Et,t.nextTick=Cn,t.observable=function(t){return Tt(t),t},t.options=Object.create(null),R.forEach((function(e){t.options[e+"s"]=Object.create(null)})),t.options._base=t,T(t.options.components,Nr),function(t){t.use=function(t){var e=this._installedPlugins||(this._installedPlugins=[]);if(e.indexOf(t)>-1)return this;var n=O(arguments,1);return n.unshift(this),a(t.install)?t.install.apply(t,n):a(t)&&t.apply(null,n),e.push(t),this}}(t),function(t){t.mixin=function(t){return this.options=gr(this.options,t),this}}(t),kr(t),function(t){R.forEach((function(e){t[e]=function(t,n){return n?("component"===e&&u(n)&&(n.name=n.name||t,n=this.options._base.extend(n)),"directive"===e&&a(n)&&(n={bind:n,update:n}),this.options[e+"s"][t]=n,n):this.options[e+"s"][t]}}))}(t)}(Cr),Object.defineProperty(Cr.prototype,"$isServer",{get:rt}),Object.defineProperty(Cr.prototype,"$ssrContext",{get:function(){return this.$vnode&&this.$vnode.ssrContext}}),Object.defineProperty(Cr,"FunctionalRenderContext",{value:nr}),Cr.version=Rn;var Pr=v("style,class"),Dr=v("input,textarea,option,select,progress"),Mr=function(t,e,n){return"value"===n&&Dr(t)&&"button"!==e||"selected"===n&&"option"===t||"checked"===n&&"input"===t||"muted"===n&&"video"===t},Ir=v("contenteditable,draggable,spellcheck"),Lr=v("events,caret,typing,plaintext-only"),Rr=v("allowfullscreen,async,autofocus,autoplay,checked,compact,controls,declare,default,defaultchecked,defaultmuted,defaultselected,defer,disabled,enabled,formnovalidate,hidden,indeterminate,inert,ismap,itemscope,loop,multiple,muted,nohref,noresize,noshade,novalidate,nowrap,open,pauseonexit,readonly,required,reversed,scoped,seamless,selected,sortable,truespeed,typemustmatch,visible"),Fr="http://www.w3.org/1999/xlink",Hr=function(t){return":"===t.charAt(5)&&"xlink"===t.slice(0,5)},Br=function(t){return Hr(t)?t.slice(6,t.length):""},Ur=function(t){return null==t||!1===t};function zr(t){for(var e=t.data,n=t,o=t;r(o.componentInstance);)(o=o.componentInstance._vnode)&&o.data&&(e=Vr(o.data,e));for(;r(n=n.parent);)n&&n.data&&(e=Vr(e,n.data));return function(t,e){if(r(t)||r(e))return Kr(t,Jr(e));return""}(e.staticClass,e.class)}function Vr(t,e){return{staticClass:Kr(t.staticClass,e.staticClass),class:r(t.class)?[t.class,e.class]:e.class}}function Kr(t,e){return t?e?t+" "+e:t:e||""}function Jr(t){return Array.isArray(t)?function(t){for(var e,n="",o=0,i=t.length;o<i;o++)r(e=Jr(t[o]))&&""!==e&&(n&&(n+=" "),n+=e);return n}(t):s(t)?function(t){var e="";for(var n in t)t[n]&&(e&&(e+=" "),e+=n);return e}(t):"string"==typeof t?t:""}var qr={svg:"http://www.w3.org/2000/svg",math:"http://www.w3.org/1998/Math/MathML"},Wr=v("html,body,base,head,link,meta,style,title,address,article,aside,footer,header,h1,h2,h3,h4,h5,h6,hgroup,nav,section,div,dd,dl,dt,figcaption,figure,picture,hr,img,li,main,ol,p,pre,ul,a,b,abbr,bdi,bdo,br,cite,code,data,dfn,em,i,kbd,mark,q,rp,rt,rtc,ruby,s,samp,small,span,strong,sub,sup,time,u,var,wbr,area,audio,map,track,video,embed,object,param,source,canvas,script,noscript,del,ins,caption,col,colgroup,table,thead,tbody,td,th,tr,button,datalist,fieldset,form,input,label,legend,meter,optgroup,option,output,progress,select,textarea,details,dialog,menu,menuitem,summary,content,element,shadow,template,blockquote,iframe,tfoot"),Zr=v("svg,animate,circle,clippath,cursor,defs,desc,ellipse,filter,font-face,foreignobject,g,glyph,image,line,marker,mask,missing-glyph,path,pattern,polygon,polyline,rect,switch,symbol,text,textpath,tspan,use,view",!0),Gr=function(t){return Wr(t)||Zr(t)};function Xr(t){return Zr(t)?"svg":"math"===t?"math":void 0}var Yr=Object.create(null);var Qr=v("text,number,password,search,email,tel,url");function to(t){if("string"==typeof t){var e=document.querySelector(t);return e||document.createElement("div")}return t}var eo=Object.freeze({__proto__:null,createElement:function(t,e){var n=document.createElement(t);return"select"!==t||e.data&&e.data.attrs&&void 0!==e.data.attrs.multiple&&n.setAttribute("multiple","multiple"),n},createElementNS:function(t,e){return document.createElementNS(qr[t],e)},createTextNode:function(t){return document.createTextNode(t)},createComment:function(t){return document.createComment(t)},insertBefore:function(t,e,n){t.insertBefore(e,n)},removeChild:function(t,e){t.removeChild(e)},appendChild:function(t,e){t.appendChild(e)},parentNode:function(t){return t.parentNode},nextSibling:function(t){return t.nextSibling},tagName:function(t){return t.tagName},setTextContent:function(t,e){t.textContent=e},setStyleScope:function(t,e){t.setAttribute(e,"")}}),no={create:function(t,e){ro(e)},update:function(t,e){t.data.ref!==e.data.ref&&(ro(t,!0),ro(e))},destroy:function(t){ro(t,!0)}};function ro(t,n){var o=t.data.ref;if(r(o)){var i=t.context,s=t.componentInstance||t.elm,c=n?null:s,u=n?void 0:s;if(a(o))dn(o,i,[c],i,"template ref function");else{var l=t.data.refInFor,f="string"==typeof o||"number"==typeof o,d=Ft(o),p=i.$refs;if(f||d)if(l){var v=f?p[o]:o.value;n?e(v)&&g(v,s):e(v)?v.includes(s)||v.push(s):f?(p[o]=[s],oo(i,o,p[o])):o.value=[s]}else if(f){if(n&&p[o]!==s)return;p[o]=u,oo(i,o,c)}else if(d){if(n&&o.value!==s)return;o.value=c}}}}function oo(t,e,n){var r=t._setupState;r&&_(r,e)&&(Ft(r[e])?r[e].value=n:r[e]=n)}var io=new lt("",{},[]),ao=["create","activate","update","remove","destroy"];function so(t,e){return t.key===e.key&&t.asyncFactory===e.asyncFactory&&(t.tag===e.tag&&t.isComment===e.isComment&&r(t.data)===r(e.data)&&function(t,e){if("input"!==t.tag)return!0;var n,o=r(n=t.data)&&r(n=n.attrs)&&n.type,i=r(n=e.data)&&r(n=n.attrs)&&n.type;return o===i||Qr(o)&&Qr(i)}(t,e)||o(t.isAsyncPlaceholder)&&n(e.asyncFactory.error))}function co(t,e,n){var o,i,a={};for(o=e;o<=n;++o)r(i=t[o].key)&&(a[i]=o);return a}var uo={create:lo,update:lo,destroy:function(t){lo(t,io)}};function lo(t,e){(t.data.directives||e.data.directives)&&function(t,e){var n,r,o,i=t===io,a=e===io,s=po(t.data.directives,t.context),c=po(e.data.directives,e.context),u=[],l=[];for(n in c)r=s[n],o=c[n],r?(o.oldValue=r.value,o.oldArg=r.arg,ho(o,"update",e,t),o.def&&o.def.componentUpdated&&l.push(o)):(ho(o,"bind",e,t),o.def&&o.def.inserted&&u.push(o));if(u.length){var f=function(){for(var n=0;n<u.length;n++)ho(u[n],"inserted",e,t)};i?Zt(e,"insert",f):f()}l.length&&Zt(e,"postpatch",(function(){for(var n=0;n<l.length;n++)ho(l[n],"componentUpdated",e,t)}));if(!i)for(n in s)c[n]||ho(s[n],"unbind",t,t,a)}(t,e)}var fo=Object.create(null);function po(t,e){var n,r,o=Object.create(null);if(!t)return o;for(n=0;n<t.length;n++){if((r=t[n]).modifiers||(r.modifiers=fo),o[vo(r)]=r,e._setupState&&e._setupState.__sfc){var i=r.def||yr(e,"_setupState","v-"+r.name);r.def="function"==typeof i?{bind:i,update:i}:i}r.def=r.def||yr(e.$options,"directives",r.name)}return o}function vo(t){return t.rawName||"".concat(t.name,".").concat(Object.keys(t.modifiers||{}).join("."))}function ho(t,e,n,r,o){var i=t.def&&t.def[e];if(i)try{i(n.elm,t,n,r,o)}catch(r){fn(r,n.context,"directive ".concat(t.name," ").concat(e," hook"))}}var mo=[no,uo];function go(t,e){var i=e.componentOptions;if(!(r(i)&&!1===i.Ctor.options.inheritAttrs||n(t.data.attrs)&&n(e.data.attrs))){var a,s,c=e.elm,u=t.data.attrs||{},l=e.data.attrs||{};for(a in(r(l.__ob__)||o(l._v_attr_proxy))&&(l=e.data.attrs=T({},l)),l)s=l[a],u[a]!==s&&yo(c,a,s,e.data.pre);for(a in(W||G)&&l.value!==u.value&&yo(c,"value",l.value),u)n(l[a])&&(Hr(a)?c.removeAttributeNS(Fr,Br(a)):Ir(a)||c.removeAttribute(a))}}function yo(t,e,n,r){r||t.tagName.indexOf("-")>-1?_o(t,e,n):Rr(e)?Ur(n)?t.removeAttribute(e):(n="allowfullscreen"===e&&"EMBED"===t.tagName?"true":e,t.setAttribute(e,n)):Ir(e)?t.setAttribute(e,function(t,e){return Ur(e)||"false"===e?"false":"contenteditable"===t&&Lr(e)?e:"true"}(e,n)):Hr(e)?Ur(n)?t.removeAttributeNS(Fr,Br(e)):t.setAttributeNS(Fr,e,n):_o(t,e,n)}function _o(t,e,n){if(Ur(n))t.removeAttribute(e);else{if(W&&!Z&&"TEXTAREA"===t.tagName&&"placeholder"===e&&""!==n&&!t.__ieph){var r=function(e){e.stopImmediatePropagation(),t.removeEventListener("input",r)};t.addEventListener("input",r),t.__ieph=!0}t.setAttribute(e,n)}}var bo={create:go,update:go};function $o(t,e){var o=e.elm,i=e.data,a=t.data;if(!(n(i.staticClass)&&n(i.class)&&(n(a)||n(a.staticClass)&&n(a.class)))){var s=zr(e),c=o._transitionClasses;r(c)&&(s=Kr(s,Jr(c))),s!==o._prevClass&&(o.setAttribute("class",s),o._prevClass=s)}}var wo,xo,Co,ko,So,Oo,To={create:$o,update:$o},Ao=/[\w).+\-_$\]]/;function jo(t){var e,n,r,o,i,a=!1,s=!1,c=!1,u=!1,l=0,f=0,d=0,p=0;for(r=0;r<t.length;r++)if(n=e,e=t.charCodeAt(r),a)39===e&&92!==n&&(a=!1);else if(s)34===e&&92!==n&&(s=!1);else if(c)96===e&&92!==n&&(c=!1);else if(u)47===e&&92!==n&&(u=!1);else if(124!==e||124===t.charCodeAt(r+1)||124===t.charCodeAt(r-1)||l||f||d){switch(e){case 34:s=!0;break;case 39:a=!0;break;case 96:c=!0;break;case 40:d++;break;case 41:d--;break;case 91:f++;break;case 93:f--;break;case 123:l++;break;case 125:l--}if(47===e){for(var v=r-1,h=void 0;v>=0&&" "===(h=t.charAt(v));v--);h&&Ao.test(h)||(u=!0)}}else void 0===o?(p=r+1,o=t.slice(0,r).trim()):m();function m(){(i||(i=[])).push(t.slice(p,r).trim()),p=r+1}if(void 0===o?o=t.slice(0,r).trim():0!==p&&m(),i)for(r=0;r<i.length;r++)o=Eo(o,i[r]);return o}function Eo(t,e){var n=e.indexOf("(");if(n<0)return'_f("'.concat(e,'")(').concat(t,")");var r=e.slice(0,n),o=e.slice(n+1);return'_f("'.concat(r,'")(').concat(t).concat(")"!==o?","+o:o)}function No(t,e){console.error("[Vue compiler]: ".concat(t))}function Po(t,e){return t?t.map((function(t){return t[e]})).filter((function(t){return t})):[]}function Do(t,e,n,r,o){(t.props||(t.props=[])).push(zo({name:e,value:n,dynamic:o},r)),t.plain=!1}function Mo(t,e,n,r,o){(o?t.dynamicAttrs||(t.dynamicAttrs=[]):t.attrs||(t.attrs=[])).push(zo({name:e,value:n,dynamic:o},r)),t.plain=!1}function Io(t,e,n,r){t.attrsMap[e]=n,t.attrsList.push(zo({name:e,value:n},r))}function Lo(t,e,n,r,o,i,a,s){(t.directives||(t.directives=[])).push(zo({name:e,rawName:n,value:r,arg:o,isDynamicArg:i,modifiers:a},s)),t.plain=!1}function Ro(t,e,n){return n?"_p(".concat(e,',"').concat(t,'")'):t+e}function Fo(e,n,r,o,i,a,s,c){var u;(o=o||t).right?c?n="(".concat(n,")==='click'?'contextmenu':(").concat(n,")"):"click"===n&&(n="contextmenu",delete o.right):o.middle&&(c?n="(".concat(n,")==='click'?'mouseup':(").concat(n,")"):"click"===n&&(n="mouseup")),o.capture&&(delete o.capture,n=Ro("!",n,c)),o.once&&(delete o.once,n=Ro("~",n,c)),o.passive&&(delete o.passive,n=Ro("&",n,c)),o.native?(delete o.native,u=e.nativeEvents||(e.nativeEvents={})):u=e.events||(e.events={});var l=zo({value:r.trim(),dynamic:c},s);o!==t&&(l.modifiers=o);var f=u[n];Array.isArray(f)?i?f.unshift(l):f.push(l):u[n]=f?i?[l,f]:[f,l]:l,e.plain=!1}function Ho(t,e,n){var r=Bo(t,":"+e)||Bo(t,"v-bind:"+e);if(null!=r)return jo(r);if(!1!==n){var o=Bo(t,e);if(null!=o)return JSON.stringify(o)}}function Bo(t,e,n){var r;if(null!=(r=t.attrsMap[e]))for(var o=t.attrsList,i=0,a=o.length;i<a;i++)if(o[i].name===e){o.splice(i,1);break}return n&&delete t.attrsMap[e],r}function Uo(t,e){for(var n=t.attrsList,r=0,o=n.length;r<o;r++){var i=n[r];if(e.test(i.name))return n.splice(r,1),i}}function zo(t,e){return e&&(null!=e.start&&(t.start=e.start),null!=e.end&&(t.end=e.end)),t}function Vo(t,e,n){var r=n||{},o=r.number,i="$$v",a=i;r.trim&&(a="(typeof ".concat(i," === 'string'")+"? ".concat(i,".trim()")+": ".concat(i,")")),o&&(a="_n(".concat(a,")"));var s=Ko(e,a);t.model={value:"(".concat(e,")"),expression:JSON.stringify(e),callback:"function (".concat(i,") {").concat(s,"}")}}function Ko(t,e){var n=function(t){if(t=t.trim(),wo=t.length,t.indexOf("[")<0||t.lastIndexOf("]")<wo-1)return(ko=t.lastIndexOf("."))>-1?{exp:t.slice(0,ko),key:'"'+t.slice(ko+1)+'"'}:{exp:t,key:null};xo=t,ko=So=Oo=0;for(;!qo();)Wo(Co=Jo())?Go(Co):91===Co&&Zo(Co);return{exp:t.slice(0,So),key:t.slice(So+1,Oo)}}(t);return null===n.key?"".concat(t,"=").concat(e):"$set(".concat(n.exp,", ").concat(n.key,", ").concat(e,")")}function Jo(){return xo.charCodeAt(++ko)}function qo(){return ko>=wo}function Wo(t){return 34===t||39===t}function Zo(t){var e=1;for(So=ko;!qo();)if(Wo(t=Jo()))Go(t);else if(91===t&&e++,93===t&&e--,0===e){Oo=ko;break}}function Go(t){for(var e=t;!qo()&&(t=Jo())!==e;);}var Xo,Yo="__r";function Qo(t,e,n){var r=Xo;return function o(){var i=e.apply(null,arguments);null!==i&&ni(t,o,n,r)}}var ti=mn&&!(Q&&Number(Q[1])<=53);function ei(t,e,n,r){if(ti){var o=We,i=e;e=i._wrapper=function(t){if(t.target===t.currentTarget||t.timeStamp>=o||t.timeStamp<=0||t.target.ownerDocument!==document)return i.apply(this,arguments)}}Xo.addEventListener(t,e,et?{capture:n,passive:r}:n)}function ni(t,e,n,r){(r||Xo).removeEventListener(t,e._wrapper||e,n)}function ri(t,e){if(!n(t.data.on)||!n(e.data.on)){var o=e.data.on||{},i=t.data.on||{};Xo=e.elm||t.elm,function(t){if(r(t.__r)){var e=W?"change":"input";t[e]=[].concat(t.__r,t[e]||[]),delete t.__r}r(t.__c)&&(t.change=[].concat(t.__c,t.change||[]),delete t.__c)}(o),Wt(o,i,ei,ni,Qo,e.context),Xo=void 0}}var oi,ii={create:ri,update:ri,destroy:function(t){return ri(t,io)}};function ai(t,e){if(!n(t.data.domProps)||!n(e.data.domProps)){var i,a,s=e.elm,c=t.data.domProps||{},u=e.data.domProps||{};for(i in(r(u.__ob__)||o(u._v_attr_proxy))&&(u=e.data.domProps=T({},u)),c)i in u||(s[i]="");for(i in u){if(a=u[i],"textContent"===i||"innerHTML"===i){if(e.children&&(e.children.length=0),a===c[i])continue;1===s.childNodes.length&&s.removeChild(s.childNodes[0])}if("value"===i&&"PROGRESS"!==s.tagName){s._value=a;var l=n(a)?"":String(a);si(s,l)&&(s.value=l)}else if("innerHTML"===i&&Zr(s.tagName)&&n(s.innerHTML)){(oi=oi||document.createElement("div")).innerHTML="<svg>".concat(a,"</svg>");for(var f=oi.firstChild;s.firstChild;)s.removeChild(s.firstChild);for(;f.firstChild;)s.appendChild(f.firstChild)}else if(a!==c[i])try{s[i]=a}catch(t){}}}}function si(t,e){return!t.composing&&("OPTION"===t.tagName||function(t,e){var n=!0;try{n=document.activeElement!==t}catch(t){}return n&&t.value!==e}(t,e)||function(t,e){var n=t.value,o=t._vModifiers;if(r(o)){if(o.number)return p(n)!==p(e);if(o.trim)return n.trim()!==e.trim()}return n!==e}(t,e))}var ci={create:ai,update:ai},ui=b((function(t){var e={},n=/:(.+)/;return t.split(/;(?![^(]*\))/g).forEach((function(t){if(t){var r=t.split(n);r.length>1&&(e[r[0].trim()]=r[1].trim())}})),e}));function li(t){var e=fi(t.style);return t.staticStyle?T(t.staticStyle,e):e}function fi(t){return Array.isArray(t)?A(t):"string"==typeof t?ui(t):t}var di,pi=/^--/,vi=/\s*!important$/,hi=function(t,e,n){if(pi.test(e))t.style.setProperty(e,n);else if(vi.test(n))t.style.setProperty(k(e),n.replace(vi,""),"important");else{var r=gi(e);if(Array.isArray(n))for(var o=0,i=n.length;o<i;o++)t.style[r]=n[o];else t.style[r]=n}},mi=["Webkit","Moz","ms"],gi=b((function(t){if(di=di||document.createElement("div").style,"filter"!==(t=w(t))&&t in di)return t;for(var e=t.charAt(0).toUpperCase()+t.slice(1),n=0;n<mi.length;n++){var r=mi[n]+e;if(r in di)return r}}));function yi(t,e){var o=e.data,i=t.data;if(!(n(o.staticStyle)&&n(o.style)&&n(i.staticStyle)&&n(i.style))){var a,s,c=e.elm,u=i.staticStyle,l=i.normalizedStyle||i.style||{},f=u||l,d=fi(e.data.style)||{};e.data.normalizedStyle=r(d.__ob__)?T({},d):d;var p=function(t,e){var n,r={};if(e)for(var o=t;o.componentInstance;)(o=o.componentInstance._vnode)&&o.data&&(n=li(o.data))&&T(r,n);(n=li(t.data))&&T(r,n);for(var i=t;i=i.parent;)i.data&&(n=li(i.data))&&T(r,n);return r}(e,!0);for(s in f)n(p[s])&&hi(c,s,"");for(s in p)(a=p[s])!==f[s]&&hi(c,s,null==a?"":a)}}var _i={create:yi,update:yi},bi=/\s+/;function $i(t,e){if(e&&(e=e.trim()))if(t.classList)e.indexOf(" ")>-1?e.split(bi).forEach((function(e){return t.classList.add(e)})):t.classList.add(e);else{var n=" ".concat(t.getAttribute("class")||""," ");n.indexOf(" "+e+" ")<0&&t.setAttribute("class",(n+e).trim())}}function wi(t,e){if(e&&(e=e.trim()))if(t.classList)e.indexOf(" ")>-1?e.split(bi).forEach((function(e){return t.classList.remove(e)})):t.classList.remove(e),t.classList.length||t.removeAttribute("class");else{for(var n=" ".concat(t.getAttribute("class")||""," "),r=" "+e+" ";n.indexOf(r)>=0;)n=n.replace(r," ");(n=n.trim())?t.setAttribute("class",n):t.removeAttribute("class")}}function xi(t){if(t){if("object"==typeof t){var e={};return!1!==t.css&&T(e,Ci(t.name||"v")),T(e,t),e}return"string"==typeof t?Ci(t):void 0}}var Ci=b((function(t){return{enterClass:"".concat(t,"-enter"),enterToClass:"".concat(t,"-enter-to"),enterActiveClass:"".concat(t,"-enter-active"),leaveClass:"".concat(t,"-leave"),leaveToClass:"".concat(t,"-leave-to"),leaveActiveClass:"".concat(t,"-leave-active")}})),ki=J&&!Z,Si="transition",Oi="animation",Ti="transition",Ai="transitionend",ji="animation",Ei="animationend";ki&&(void 0===window.ontransitionend&&void 0!==window.onwebkittransitionend&&(Ti="WebkitTransition",Ai="webkitTransitionEnd"),void 0===window.onanimationend&&void 0!==window.onwebkitanimationend&&(ji="WebkitAnimation",Ei="webkitAnimationEnd"));var Ni=J?window.requestAnimationFrame?window.requestAnimationFrame.bind(window):setTimeout:function(t){return t()};function Pi(t){Ni((function(){Ni(t)}))}function Di(t,e){var n=t._transitionClasses||(t._transitionClasses=[]);n.indexOf(e)<0&&(n.push(e),$i(t,e))}function Mi(t,e){t._transitionClasses&&g(t._transitionClasses,e),wi(t,e)}function Ii(t,e,n){var r=Ri(t,e),o=r.type,i=r.timeout,a=r.propCount;if(!o)return n();var s=o===Si?Ai:Ei,c=0,u=function(){t.removeEventListener(s,l),n()},l=function(e){e.target===t&&++c>=a&&u()};setTimeout((function(){c<a&&u()}),i+1),t.addEventListener(s,l)}var Li=/\b(transform|all)(,|$)/;function Ri(t,e){var n,r=window.getComputedStyle(t),o=(r[Ti+"Delay"]||"").split(", "),i=(r[Ti+"Duration"]||"").split(", "),a=Fi(o,i),s=(r[ji+"Delay"]||"").split(", "),c=(r[ji+"Duration"]||"").split(", "),u=Fi(s,c),l=0,f=0;return e===Si?a>0&&(n=Si,l=a,f=i.length):e===Oi?u>0&&(n=Oi,l=u,f=c.length):f=(n=(l=Math.max(a,u))>0?a>u?Si:Oi:null)?n===Si?i.length:c.length:0,{type:n,timeout:l,propCount:f,hasTransform:n===Si&&Li.test(r[Ti+"Property"])}}function Fi(t,e){for(;t.length<e.length;)t=t.concat(t);return Math.max.apply(null,e.map((function(e,n){return Hi(e)+Hi(t[n])})))}function Hi(t){return 1e3*Number(t.slice(0,-1).replace(",","."))}function Bi(t,e){var o=t.elm;r(o._leaveCb)&&(o._leaveCb.cancelled=!0,o._leaveCb());var i=xi(t.data.transition);if(!n(i)&&!r(o._enterCb)&&1===o.nodeType){for(var c=i.css,u=i.type,l=i.enterClass,f=i.enterToClass,d=i.enterActiveClass,v=i.appearClass,h=i.appearToClass,m=i.appearActiveClass,g=i.beforeEnter,y=i.enter,_=i.afterEnter,b=i.enterCancelled,$=i.beforeAppear,w=i.appear,x=i.afterAppear,C=i.appearCancelled,k=i.duration,S=Ie,O=Ie.$vnode;O&&O.parent;)S=O.context,O=O.parent;var T=!S._isMounted||!t.isRootInsert;if(!T||w||""===w){var A=T&&v?v:l,j=T&&m?m:d,E=T&&h?h:f,N=T&&$||g,P=T&&a(w)?w:y,D=T&&x||_,I=T&&C||b,L=p(s(k)?k.enter:k),R=!1!==c&&!Z,F=Vi(P),H=o._enterCb=M((function(){R&&(Mi(o,E),Mi(o,j)),H.cancelled?(R&&Mi(o,A),I&&I(o)):D&&D(o),o._enterCb=null}));t.data.show||Zt(t,"insert",(function(){var e=o.parentNode,n=e&&e._pending&&e._pending[t.key];n&&n.tag===t.tag&&n.elm._leaveCb&&n.elm._leaveCb(),P&&P(o,H)})),N&&N(o),R&&(Di(o,A),Di(o,j),Pi((function(){Mi(o,A),H.cancelled||(Di(o,E),F||(zi(L)?setTimeout(H,L):Ii(o,u,H)))}))),t.data.show&&(e&&e(),P&&P(o,H)),R||F||H()}}}function Ui(t,e){var o=t.elm;r(o._enterCb)&&(o._enterCb.cancelled=!0,o._enterCb());var i=xi(t.data.transition);if(n(i)||1!==o.nodeType)return e();if(!r(o._leaveCb)){var a=i.css,c=i.type,u=i.leaveClass,l=i.leaveToClass,f=i.leaveActiveClass,d=i.beforeLeave,v=i.leave,h=i.afterLeave,m=i.leaveCancelled,g=i.delayLeave,y=i.duration,_=!1!==a&&!Z,b=Vi(v),$=p(s(y)?y.leave:y),w=o._leaveCb=M((function(){o.parentNode&&o.parentNode._pending&&(o.parentNode._pending[t.key]=null),_&&(Mi(o,l),Mi(o,f)),w.cancelled?(_&&Mi(o,u),m&&m(o)):(e(),h&&h(o)),o._leaveCb=null}));g?g(x):x()}function x(){w.cancelled||(!t.data.show&&o.parentNode&&((o.parentNode._pending||(o.parentNode._pending={}))[t.key]=t),d&&d(o),_&&(Di(o,u),Di(o,f),Pi((function(){Mi(o,u),w.cancelled||(Di(o,l),b||(zi($)?setTimeout(w,$):Ii(o,c,w)))}))),v&&v(o,w),_||b||w())}}function zi(t){return"number"==typeof t&&!isNaN(t)}function Vi(t){if(n(t))return!1;var e=t.fns;return r(e)?Vi(Array.isArray(e)?e[0]:e):(t._length||t.length)>1}function Ki(t,e){!0!==e.data.show&&Bi(e)}var Ji=function(t){var a,s,c={},u=t.modules,l=t.nodeOps;for(a=0;a<ao.length;++a)for(c[ao[a]]=[],s=0;s<u.length;++s)r(u[s][ao[a]])&&c[ao[a]].push(u[s][ao[a]]);function f(t){var e=l.parentNode(t);r(e)&&l.removeChild(e,t)}function d(t,e,n,i,a,s,u){if(r(t.elm)&&r(s)&&(t=s[u]=pt(t)),t.isRootInsert=!a,!function(t,e,n,i){var a=t.data;if(r(a)){var s=r(t.componentInstance)&&a.keepAlive;if(r(a=a.hook)&&r(a=a.init)&&a(t,!1),r(t.componentInstance))return p(t,e),h(n,t.elm,i),o(s)&&function(t,e,n,o){var i,a=t;for(;a.componentInstance;)if(r(i=(a=a.componentInstance._vnode).data)&&r(i=i.transition)){for(i=0;i<c.activate.length;++i)c.activate[i](io,a);e.push(a);break}h(n,t.elm,o)}(t,e,n,i),!0}}(t,e,n,i)){var f=t.data,d=t.children,v=t.tag;r(v)?(t.elm=t.ns?l.createElementNS(t.ns,v):l.createElement(v,t),_(t),m(t,d,e),r(f)&&y(t,e),h(n,t.elm,i)):o(t.isComment)?(t.elm=l.createComment(t.text),h(n,t.elm,i)):(t.elm=l.createTextNode(t.text),h(n,t.elm,i))}}function p(t,e){r(t.data.pendingInsert)&&(e.push.apply(e,t.data.pendingInsert),t.data.pendingInsert=null),t.elm=t.componentInstance.$el,g(t)?(y(t,e),_(t)):(ro(t),e.push(t))}function h(t,e,n){r(t)&&(r(n)?l.parentNode(n)===t&&l.insertBefore(t,e,n):l.appendChild(t,e))}function m(t,n,r){if(e(n))for(var o=0;o<n.length;++o)d(n[o],r,t.elm,null,!0,n,o);else i(t.text)&&l.appendChild(t.elm,l.createTextNode(String(t.text)))}function g(t){for(;t.componentInstance;)t=t.componentInstance._vnode;return r(t.tag)}function y(t,e){for(var n=0;n<c.create.length;++n)c.create[n](io,t);r(a=t.data.hook)&&(r(a.create)&&a.create(io,t),r(a.insert)&&e.push(t))}function _(t){var e;if(r(e=t.fnScopeId))l.setStyleScope(t.elm,e);else for(var n=t;n;)r(e=n.context)&&r(e=e.$options._scopeId)&&l.setStyleScope(t.elm,e),n=n.parent;r(e=Ie)&&e!==t.context&&e!==t.fnContext&&r(e=e.$options._scopeId)&&l.setStyleScope(t.elm,e)}function b(t,e,n,r,o,i){for(;r<=o;++r)d(n[r],i,t,e,!1,n,r)}function $(t){var e,n,o=t.data;if(r(o))for(r(e=o.hook)&&r(e=e.destroy)&&e(t),e=0;e<c.destroy.length;++e)c.destroy[e](t);if(r(e=t.children))for(n=0;n<t.children.length;++n)$(t.children[n])}function w(t,e,n){for(;e<=n;++e){var o=t[e];r(o)&&(r(o.tag)?(x(o),$(o)):f(o.elm))}}function x(t,e){if(r(e)||r(t.data)){var n,o=c.remove.length+1;for(r(e)?e.listeners+=o:e=function(t,e){function n(){0==--n.listeners&&f(t)}return n.listeners=e,n}(t.elm,o),r(n=t.componentInstance)&&r(n=n._vnode)&&r(n.data)&&x(n,e),n=0;n<c.remove.length;++n)c.remove[n](t,e);r(n=t.data.hook)&&r(n=n.remove)?n(t,e):e()}else f(t.elm)}function C(t,e,n,o){for(var i=n;i<o;i++){var a=e[i];if(r(a)&&so(t,a))return i}}function k(t,e,i,a,s,u){if(t!==e){r(e.elm)&&r(a)&&(e=a[s]=pt(e));var f=e.elm=t.elm;if(o(t.isAsyncPlaceholder))r(e.asyncFactory.resolved)?T(t.elm,e,i):e.isAsyncPlaceholder=!0;else if(o(e.isStatic)&&o(t.isStatic)&&e.key===t.key&&(o(e.isCloned)||o(e.isOnce)))e.componentInstance=t.componentInstance;else{var p,v=e.data;r(v)&&r(p=v.hook)&&r(p=p.prepatch)&&p(t,e);var h=t.children,m=e.children;if(r(v)&&g(e)){for(p=0;p<c.update.length;++p)c.update[p](t,e);r(p=v.hook)&&r(p=p.update)&&p(t,e)}n(e.text)?r(h)&&r(m)?h!==m&&function(t,e,o,i,a){for(var s,c,u,f=0,p=0,v=e.length-1,h=e[0],m=e[v],g=o.length-1,y=o[0],_=o[g],$=!a;f<=v&&p<=g;)n(h)?h=e[++f]:n(m)?m=e[--v]:so(h,y)?(k(h,y,i,o,p),h=e[++f],y=o[++p]):so(m,_)?(k(m,_,i,o,g),m=e[--v],_=o[--g]):so(h,_)?(k(h,_,i,o,g),$&&l.insertBefore(t,h.elm,l.nextSibling(m.elm)),h=e[++f],_=o[--g]):so(m,y)?(k(m,y,i,o,p),$&&l.insertBefore(t,m.elm,h.elm),m=e[--v],y=o[++p]):(n(s)&&(s=co(e,f,v)),n(c=r(y.key)?s[y.key]:C(y,e,f,v))?d(y,i,t,h.elm,!1,o,p):so(u=e[c],y)?(k(u,y,i,o,p),e[c]=void 0,$&&l.insertBefore(t,u.elm,h.elm)):d(y,i,t,h.elm,!1,o,p),y=o[++p]);f>v?b(t,n(o[g+1])?null:o[g+1].elm,o,p,g,i):p>g&&w(e,f,v)}(f,h,m,i,u):r(m)?(r(t.text)&&l.setTextContent(f,""),b(f,null,m,0,m.length-1,i)):r(h)?w(h,0,h.length-1):r(t.text)&&l.setTextContent(f,""):t.text!==e.text&&l.setTextContent(f,e.text),r(v)&&r(p=v.hook)&&r(p=p.postpatch)&&p(t,e)}}}function S(t,e,n){if(o(n)&&r(t.parent))t.parent.data.pendingInsert=e;else for(var i=0;i<e.length;++i)e[i].data.hook.insert(e[i])}var O=v("attrs,class,staticClass,staticStyle,key");function T(t,e,n,i){var a,s=e.tag,c=e.data,u=e.children;if(i=i||c&&c.pre,e.elm=t,o(e.isComment)&&r(e.asyncFactory))return e.isAsyncPlaceholder=!0,!0;if(r(c)&&(r(a=c.hook)&&r(a=a.init)&&a(e,!0),r(a=e.componentInstance)))return p(e,n),!0;if(r(s)){if(r(u))if(t.hasChildNodes())if(r(a=c)&&r(a=a.domProps)&&r(a=a.innerHTML)){if(a!==t.innerHTML)return!1}else{for(var l=!0,f=t.firstChild,d=0;d<u.length;d++){if(!f||!T(f,u[d],n,i)){l=!1;break}f=f.nextSibling}if(!l||f)return!1}else m(e,u,n);if(r(c)){var v=!1;for(var h in c)if(!O(h)){v=!0,y(e,n);break}!v&&c.class&&Bn(c.class)}}else t.data!==e.text&&(t.data=e.text);return!0}return function(t,e,i,a){if(!n(e)){var s,u=!1,f=[];if(n(t))u=!0,d(e,f);else{var p=r(t.nodeType);if(!p&&so(t,e))k(t,e,f,null,null,a);else{if(p){if(1===t.nodeType&&t.hasAttribute(L)&&(t.removeAttribute(L),i=!0),o(i)&&T(t,e,f))return S(e,f,!0),t;s=t,t=new lt(l.tagName(s).toLowerCase(),{},[],void 0,s)}var v=t.elm,h=l.parentNode(v);if(d(e,f,v._leaveCb?null:h,l.nextSibling(v)),r(e.parent))for(var m=e.parent,y=g(e);m;){for(var _=0;_<c.destroy.length;++_)c.destroy[_](m);if(m.elm=e.elm,y){for(var b=0;b<c.create.length;++b)c.create[b](io,m);var x=m.data.hook.insert;if(x.merged)for(var C=1;C<x.fns.length;C++)x.fns[C]()}else ro(m);m=m.parent}r(h)?w([t],0,0):r(t.tag)&&$(t)}}return S(e,f,u),e.elm}r(t)&&$(t)}}({nodeOps:eo,modules:[bo,To,ii,ci,_i,J?{create:Ki,activate:Ki,remove:function(t,e){!0!==t.data.show?Ui(t,e):e()}}:{}].concat(mo)});Z&&document.addEventListener("selectionchange",(function(){var t=document.activeElement;t&&t.vmodel&&ta(t,"input")}));var qi={inserted:function(t,e,n,r){"select"===n.tag?(r.elm&&!r.elm._vOptions?Zt(n,"postpatch",(function(){qi.componentUpdated(t,e,n)})):Wi(t,e,n.context),t._vOptions=[].map.call(t.options,Xi)):("textarea"===n.tag||Qr(t.type))&&(t._vModifiers=e.modifiers,e.modifiers.lazy||(t.addEventListener("compositionstart",Yi),t.addEventListener("compositionend",Qi),t.addEventListener("change",Qi),Z&&(t.vmodel=!0)))},componentUpdated:function(t,e,n){if("select"===n.tag){Wi(t,e,n.context);var r=t._vOptions,o=t._vOptions=[].map.call(t.options,Xi);if(o.some((function(t,e){return!P(t,r[e])})))(t.multiple?e.value.some((function(t){return Gi(t,o)})):e.value!==e.oldValue&&Gi(e.value,o))&&ta(t,"change")}}};function Wi(t,e,n){Zi(t,e),(W||G)&&setTimeout((function(){Zi(t,e)}),0)}function Zi(t,e,n){var r=e.value,o=t.multiple;if(!o||Array.isArray(r)){for(var i,a,s=0,c=t.options.length;s<c;s++)if(a=t.options[s],o)i=D(r,Xi(a))>-1,a.selected!==i&&(a.selected=i);else if(P(Xi(a),r))return void(t.selectedIndex!==s&&(t.selectedIndex=s));o||(t.selectedIndex=-1)}}function Gi(t,e){return e.every((function(e){return!P(e,t)}))}function Xi(t){return"_value"in t?t._value:t.value}function Yi(t){t.target.composing=!0}function Qi(t){t.target.composing&&(t.target.composing=!1,ta(t.target,"input"))}function ta(t,e){var n=document.createEvent("HTMLEvents");n.initEvent(e,!0,!0),t.dispatchEvent(n)}function ea(t){return!t.componentInstance||t.data&&t.data.transition?t:ea(t.componentInstance._vnode)}var na={bind:function(t,e,n){var r=e.value,o=(n=ea(n)).data&&n.data.transition,i=t.__vOriginalDisplay="none"===t.style.display?"":t.style.display;r&&o?(n.data.show=!0,Bi(n,(function(){t.style.display=i}))):t.style.display=r?i:"none"},update:function(t,e,n){var r=e.value;!r!=!e.oldValue&&((n=ea(n)).data&&n.data.transition?(n.data.show=!0,r?Bi(n,(function(){t.style.display=t.__vOriginalDisplay})):Ui(n,(function(){t.style.display="none"}))):t.style.display=r?t.__vOriginalDisplay:"none")},unbind:function(t,e,n,r,o){o||(t.style.display=t.__vOriginalDisplay)}},ra={model:qi,show:na},oa={name:String,appear:Boolean,css:Boolean,mode:String,type:String,enterClass:String,leaveClass:String,enterToClass:String,leaveToClass:String,enterActiveClass:String,leaveActiveClass:String,appearClass:String,appearActiveClass:String,appearToClass:String,duration:[Number,String,Object]};function ia(t){var e=t&&t.componentOptions;return e&&e.Ctor.options.abstract?ia(Ee(e.children)):t}function aa(t){var e={},n=t.$options;for(var r in n.propsData)e[r]=t[r];var o=n._parentListeners;for(var r in o)e[w(r)]=o[r];return e}function sa(t,e){if(/\d-keep-alive$/.test(e.tag))return t("keep-alive",{props:e.componentOptions.propsData})}var ca=function(t){return t.tag||_e(t)},ua=function(t){return"show"===t.name},la={name:"transition",props:oa,abstract:!0,render:function(t){var e=this,n=this.$slots.default;if(n&&(n=n.filter(ca)).length){var r=this.mode,o=n[0];if(function(t){for(;t=t.parent;)if(t.data.transition)return!0}(this.$vnode))return o;var a=ia(o);if(!a)return o;if(this._leaving)return sa(t,o);var s="__transition-".concat(this._uid,"-");a.key=null==a.key?a.isComment?s+"comment":s+a.tag:i(a.key)?0===String(a.key).indexOf(s)?a.key:s+a.key:a.key;var c=(a.data||(a.data={})).transition=aa(this),u=this._vnode,l=ia(u);if(a.data.directives&&a.data.directives.some(ua)&&(a.data.show=!0),l&&l.data&&!function(t,e){return e.key===t.key&&e.tag===t.tag}(a,l)&&!_e(l)&&(!l.componentInstance||!l.componentInstance._vnode.isComment)){var f=l.data.transition=T({},c);if("out-in"===r)return this._leaving=!0,Zt(f,"afterLeave",(function(){e._leaving=!1,e.$forceUpdate()})),sa(t,o);if("in-out"===r){if(_e(a))return u;var d,p=function(){d()};Zt(c,"afterEnter",p),Zt(c,"enterCancelled",p),Zt(f,"delayLeave",(function(t){d=t}))}}return o}}},fa=T({tag:String,moveClass:String},oa);delete fa.mode;var da={props:fa,beforeMount:function(){var t=this,e=this._update;this._update=function(n,r){var o=Le(t);t.__patch__(t._vnode,t.kept,!1,!0),t._vnode=t.kept,o(),e.call(t,n,r)}},render:function(t){for(var e=this.tag||this.$vnode.data.tag||"span",n=Object.create(null),r=this.prevChildren=this.children,o=this.$slots.default||[],i=this.children=[],a=aa(this),s=0;s<o.length;s++){(l=o[s]).tag&&null!=l.key&&0!==String(l.key).indexOf("__vlist")&&(i.push(l),n[l.key]=l,(l.data||(l.data={})).transition=a)}if(r){var c=[],u=[];for(s=0;s<r.length;s++){var l;(l=r[s]).data.transition=a,l.data.pos=l.elm.getBoundingClientRect(),n[l.key]?c.push(l):u.push(l)}this.kept=t(e,null,c),this.removed=u}return t(e,null,i)},updated:function(){var t=this.prevChildren,e=this.moveClass||(this.name||"v")+"-move";t.length&&this.hasMove(t[0].elm,e)&&(t.forEach(pa),t.forEach(va),t.forEach(ha),this._reflow=document.body.offsetHeight,t.forEach((function(t){if(t.data.moved){var n=t.elm,r=n.style;Di(n,e),r.transform=r.WebkitTransform=r.transitionDuration="",n.addEventListener(Ai,n._moveCb=function t(r){r&&r.target!==n||r&&!/transform$/.test(r.propertyName)||(n.removeEventListener(Ai,t),n._moveCb=null,Mi(n,e))})}})))},methods:{hasMove:function(t,e){if(!ki)return!1;if(this._hasMove)return this._hasMove;var n=t.cloneNode();t._transitionClasses&&t._transitionClasses.forEach((function(t){wi(n,t)})),$i(n,e),n.style.display="none",this.$el.appendChild(n);var r=Ri(n);return this.$el.removeChild(n),this._hasMove=r.hasTransform}}};function pa(t){t.elm._moveCb&&t.elm._moveCb(),t.elm._enterCb&&t.elm._enterCb()}function va(t){t.data.newPos=t.elm.getBoundingClientRect()}function ha(t){var e=t.data.pos,n=t.data.newPos,r=e.left-n.left,o=e.top-n.top;if(r||o){t.data.moved=!0;var i=t.elm.style;i.transform=i.WebkitTransform="translate(".concat(r,"px,").concat(o,"px)"),i.transitionDuration="0s"}}var ma={Transition:la,TransitionGroup:da};Cr.config.mustUseProp=Mr,Cr.config.isReservedTag=Gr,Cr.config.isReservedAttr=Pr,Cr.config.getTagNamespace=Xr,Cr.config.isUnknownElement=function(t){if(!J)return!0;if(Gr(t))return!1;if(t=t.toLowerCase(),null!=Yr[t])return Yr[t];var e=document.createElement(t);return t.indexOf("-")>-1?Yr[t]=e.constructor===window.HTMLUnknownElement||e.constructor===window.HTMLElement:Yr[t]=/HTMLUnknownElement/.test(e.toString())},T(Cr.options.directives,ra),T(Cr.options.components,ma),Cr.prototype.__patch__=J?Ji:j,Cr.prototype.$mount=function(t,e){return function(t,e,n){var r;t.$el=e,t.$options.render||(t.$options.render=ft),Be(t,"beforeMount"),r=function(){t._update(t._render(),n)},new Vn(t,r,j,{before:function(){t._isMounted&&!t._isDestroyed&&Be(t,"beforeUpdate")}},!0),n=!1;var o=t._preWatchers;if(o)for(var i=0;i<o.length;i++)o[i].run();return null==t.$vnode&&(t._isMounted=!0,Be(t,"mounted")),t}(this,t=t&&J?to(t):void 0,e)},J&&setTimeout((function(){H.devtools&&ot&&ot.emit("init",Cr)}),0);var ga=/\{\{((?:.|\r?\n)+?)\}\}/g,ya=/[-.*+?^${}()|[\]\/\\]/g,_a=b((function(t){var e=t[0].replace(ya,"\\$&"),n=t[1].replace(ya,"\\$&");return new RegExp(e+"((?:.|\\n)+?)"+n,"g")}));var ba={staticKeys:["staticClass"],transformNode:function(t,e){e.warn;var n=Bo(t,"class");n&&(t.staticClass=JSON.stringify(n.replace(/\s+/g," ").trim()));var r=Ho(t,"class",!1);r&&(t.classBinding=r)},genData:function(t){var e="";return t.staticClass&&(e+="staticClass:".concat(t.staticClass,",")),t.classBinding&&(e+="class:".concat(t.classBinding,",")),e}};var $a,wa={staticKeys:["staticStyle"],transformNode:function(t,e){e.warn;var n=Bo(t,"style");n&&(t.staticStyle=JSON.stringify(ui(n)));var r=Ho(t,"style",!1);r&&(t.styleBinding=r)},genData:function(t){var e="";return t.staticStyle&&(e+="staticStyle:".concat(t.staticStyle,",")),t.styleBinding&&(e+="style:(".concat(t.styleBinding,"),")),e}},xa=function(t){return($a=$a||document.createElement("div")).innerHTML=t,$a.textContent},Ca=v("area,base,br,col,embed,frame,hr,img,input,isindex,keygen,link,meta,param,source,track,wbr"),ka=v("colgroup,dd,dt,li,options,p,td,tfoot,th,thead,tr,source"),Sa=v("address,article,aside,base,blockquote,body,caption,col,colgroup,dd,details,dialog,div,dl,dt,fieldset,figcaption,figure,footer,form,h1,h2,h3,h4,h5,h6,head,header,hgroup,hr,html,legend,li,menuitem,meta,optgroup,option,param,rp,rt,source,style,summary,tbody,td,tfoot,th,thead,title,tr,track"),Oa=/^\s*([^\s"'<>\/=]+)(?:\s*(=)\s*(?:"([^"]*)"+|'([^']*)'+|([^\s"'=<>`]+)))?/,Ta=/^\s*((?:v-[\w-]+:|@|:|#)\[[^=]+?\][^\s"'<>\/=]*)(?:\s*(=)\s*(?:"([^"]*)"+|'([^']*)'+|([^\s"'=<>`]+)))?/,Aa="[a-zA-Z_][\\-\\.0-9_a-zA-Z".concat(B.source,"]*"),ja="((?:".concat(Aa,"\\:)?").concat(Aa,")"),Ea=new RegExp("^<".concat(ja)),Na=/^\s*(\/?)>/,Pa=new RegExp("^<\\/".concat(ja,"[^>]*>")),Da=/^<!DOCTYPE [^>]+>/i,Ma=/^<!\--/,Ia=/^<!\[/,La=v("script,style,textarea",!0),Ra={},Fa={"&lt;":"<","&gt;":">","&quot;":'"',"&amp;":"&","&#10;":"\n","&#9;":"\t","&#39;":"'"},Ha=/&(?:lt|gt|quot|amp|#39);/g,Ba=/&(?:lt|gt|quot|amp|#39|#10|#9);/g,Ua=v("pre,textarea",!0),za=function(t,e){return t&&Ua(t)&&"\n"===e[0]};function Va(t,e){var n=e?Ba:Ha;return t.replace(n,(function(t){return Fa[t]}))}function Ka(t,e){for(var n,r,o=[],i=e.expectHTML,a=e.isUnaryTag||E,s=e.canBeLeftOpenTag||E,c=0,u=function(){if(n=t,r&&La(r)){var u=0,d=r.toLowerCase(),p=Ra[d]||(Ra[d]=new RegExp("([\\s\\S]*?)(</"+d+"[^>]*>)","i"));w=t.replace(p,(function(t,n,r){return u=r.length,La(d)||"noscript"===d||(n=n.replace(/<!\--([\s\S]*?)-->/g,"$1").replace(/<!\[CDATA\[([\s\S]*?)]]>/g,"$1")),za(d,n)&&(n=n.slice(1)),e.chars&&e.chars(n),""}));c+=t.length-w.length,t=w,f(d,c-u,c)}else{var v=t.indexOf("<");if(0===v){if(Ma.test(t)){var h=t.indexOf("--\x3e");if(h>=0)return e.shouldKeepComment&&e.comment&&e.comment(t.substring(4,h),c,c+h+3),l(h+3),"continue"}if(Ia.test(t)){var m=t.indexOf("]>");if(m>=0)return l(m+2),"continue"}var g=t.match(Da);if(g)return l(g[0].length),"continue";var y=t.match(Pa);if(y){var _=c;return l(y[0].length),f(y[1],_,c),"continue"}var b=function(){var e=t.match(Ea);if(e){var n={tagName:e[1],attrs:[],start:c};l(e[0].length);for(var r=void 0,o=void 0;!(r=t.match(Na))&&(o=t.match(Ta)||t.match(Oa));)o.start=c,l(o[0].length),o.end=c,n.attrs.push(o);if(r)return n.unarySlash=r[1],l(r[0].length),n.end=c,n}}();if(b)return function(t){var n=t.tagName,c=t.unarySlash;i&&("p"===r&&Sa(n)&&f(r),s(n)&&r===n&&f(n));for(var u=a(n)||!!c,l=t.attrs.length,d=new Array(l),p=0;p<l;p++){var v=t.attrs[p],h=v[3]||v[4]||v[5]||"",m="a"===n&&"href"===v[1]?e.shouldDecodeNewlinesForHref:e.shouldDecodeNewlines;d[p]={name:v[1],value:Va(h,m)}}u||(o.push({tag:n,lowerCasedTag:n.toLowerCase(),attrs:d,start:t.start,end:t.end}),r=n);e.start&&e.start(n,d,u,t.start,t.end)}(b),za(b.tagName,t)&&l(1),"continue"}var $=void 0,w=void 0,x=void 0;if(v>=0){for(w=t.slice(v);!(Pa.test(w)||Ea.test(w)||Ma.test(w)||Ia.test(w)||(x=w.indexOf("<",1))<0);)v+=x,w=t.slice(v);$=t.substring(0,v)}v<0&&($=t),$&&l($.length),e.chars&&$&&e.chars($,c-$.length,c)}if(t===n)return e.chars&&e.chars(t),"break"};t;){if("break"===u())break}function l(e){c+=e,t=t.substring(e)}function f(t,n,i){var a,s;if(null==n&&(n=c),null==i&&(i=c),t)for(s=t.toLowerCase(),a=o.length-1;a>=0&&o[a].lowerCasedTag!==s;a--);else a=0;if(a>=0){for(var u=o.length-1;u>=a;u--)e.end&&e.end(o[u].tag,n,i);o.length=a,r=a&&o[a-1].tag}else"br"===s?e.start&&e.start(t,[],!0,n,i):"p"===s&&(e.start&&e.start(t,[],!1,n,i),e.end&&e.end(t,n,i))}f()}var Ja,qa,Wa,Za,Ga,Xa,Ya,Qa,ts=/^@|^v-on:/,es=/^v-|^@|^:|^#/,ns=/([\s\S]*?)\s+(?:in|of)\s+([\s\S]*)/,rs=/,([^,\}\]]*)(?:,([^,\}\]]*))?$/,os=/^\(|\)$/g,is=/^\[.*\]$/,as=/:(.*)$/,ss=/^:|^\.|^v-bind:/,cs=/\.[^.\]]+(?=[^\]]*$)/g,us=/^v-slot(:|$)|^#/,ls=/[\r\n]/,fs=/[ \f\t\r\n]+/g,ds=b(xa),ps="_empty_";function vs(t,e,n){return{type:1,tag:t,attrsList:e,attrsMap:$s(e),rawAttrsMap:{},parent:n,children:[]}}function hs(t,e){Ja=e.warn||No,Xa=e.isPreTag||E,Ya=e.mustUseProp||E,Qa=e.getTagNamespace||E,e.isReservedTag,Wa=Po(e.modules,"transformNode"),Za=Po(e.modules,"preTransformNode"),Ga=Po(e.modules,"postTransformNode"),qa=e.delimiters;var n,r,o=[],i=!1!==e.preserveWhitespace,a=e.whitespace,s=!1,c=!1;function u(t){if(l(t),s||t.processed||(t=ms(t,e)),o.length||t===n||n.if&&(t.elseif||t.else)&&ys(n,{exp:t.elseif,block:t}),r&&!t.forbidden)if(t.elseif||t.else)a=t,u=function(t){for(var e=t.length;e--;){if(1===t[e].type)return t[e];t.pop()}}(r.children),u&&u.if&&ys(u,{exp:a.elseif,block:a});else{if(t.slotScope){var i=t.slotTarget||'"default"';(r.scopedSlots||(r.scopedSlots={}))[i]=t}r.children.push(t),t.parent=r}var a,u;t.children=t.children.filter((function(t){return!t.slotScope})),l(t),t.pre&&(s=!1),Xa(t.tag)&&(c=!1);for(var f=0;f<Ga.length;f++)Ga[f](t,e)}function l(t){if(!c)for(var e=void 0;(e=t.children[t.children.length-1])&&3===e.type&&" "===e.text;)t.children.pop()}return Ka(t,{warn:Ja,expectHTML:e.expectHTML,isUnaryTag:e.isUnaryTag,canBeLeftOpenTag:e.canBeLeftOpenTag,shouldDecodeNewlines:e.shouldDecodeNewlines,shouldDecodeNewlinesForHref:e.shouldDecodeNewlinesForHref,shouldKeepComment:e.comments,outputSourceRange:e.outputSourceRange,start:function(t,i,a,l,f){var d=r&&r.ns||Qa(t);W&&"svg"===d&&(i=function(t){for(var e=[],n=0;n<t.length;n++){var r=t[n];ws.test(r.name)||(r.name=r.name.replace(xs,""),e.push(r))}return e}(i));var p,v=vs(t,i,r);d&&(v.ns=d),"style"!==(p=v).tag&&("script"!==p.tag||p.attrsMap.type&&"text/javascript"!==p.attrsMap.type)||rt()||(v.forbidden=!0);for(var h=0;h<Za.length;h++)v=Za[h](v,e)||v;s||(!function(t){null!=Bo(t,"v-pre")&&(t.pre=!0)}(v),v.pre&&(s=!0)),Xa(v.tag)&&(c=!0),s?function(t){var e=t.attrsList,n=e.length;if(n)for(var r=t.attrs=new Array(n),o=0;o<n;o++)r[o]={name:e[o].name,value:JSON.stringify(e[o].value)},null!=e[o].start&&(r[o].start=e[o].start,r[o].end=e[o].end);else t.pre||(t.plain=!0)}(v):v.processed||(gs(v),function(t){var e=Bo(t,"v-if");if(e)t.if=e,ys(t,{exp:e,block:t});else{null!=Bo(t,"v-else")&&(t.else=!0);var n=Bo(t,"v-else-if");n&&(t.elseif=n)}}(v),function(t){null!=Bo(t,"v-once")&&(t.once=!0)}(v)),n||(n=v),a?u(v):(r=v,o.push(v))},end:function(t,e,n){var i=o[o.length-1];o.length-=1,r=o[o.length-1],u(i)},chars:function(t,e,n){if(r&&(!W||"textarea"!==r.tag||r.attrsMap.placeholder!==t)){var o,u=r.children;if(t=c||t.trim()?"script"===(o=r).tag||"style"===o.tag?t:ds(t):u.length?a?"condense"===a&&ls.test(t)?"":" ":i?" ":"":""){c||"condense"!==a||(t=t.replace(fs," "));var l=void 0,f=void 0;!s&&" "!==t&&(l=function(t,e){var n=e?_a(e):ga;if(n.test(t)){for(var r,o,i,a=[],s=[],c=n.lastIndex=0;r=n.exec(t);){(o=r.index)>c&&(s.push(i=t.slice(c,o)),a.push(JSON.stringify(i)));var u=jo(r[1].trim());a.push("_s(".concat(u,")")),s.push({"@binding":u}),c=o+r[0].length}return c<t.length&&(s.push(i=t.slice(c)),a.push(JSON.stringify(i))),{expression:a.join("+"),tokens:s}}}(t,qa))?f={type:2,expression:l.expression,tokens:l.tokens,text:t}:" "===t&&u.length&&" "===u[u.length-1].text||(f={type:3,text:t}),f&&u.push(f)}}},comment:function(t,e,n){if(r){var o={type:3,text:t,isComment:!0};r.children.push(o)}}}),n}function ms(t,e){var n,r;(r=Ho(n=t,"key"))&&(n.key=r),t.plain=!t.key&&!t.scopedSlots&&!t.attrsList.length,function(t){var e=Ho(t,"ref");e&&(t.ref=e,t.refInFor=function(t){var e=t;for(;e;){if(void 0!==e.for)return!0;e=e.parent}return!1}(t))}(t),function(t){var e;"template"===t.tag?(e=Bo(t,"scope"),t.slotScope=e||Bo(t,"slot-scope")):(e=Bo(t,"slot-scope"))&&(t.slotScope=e);var n=Ho(t,"slot");n&&(t.slotTarget='""'===n?'"default"':n,t.slotTargetDynamic=!(!t.attrsMap[":slot"]&&!t.attrsMap["v-bind:slot"]),"template"===t.tag||t.slotScope||Mo(t,"slot",n,function(t,e){return t.rawAttrsMap[":"+e]||t.rawAttrsMap["v-bind:"+e]||t.rawAttrsMap[e]}(t,"slot")));if("template"===t.tag){if(a=Uo(t,us)){var r=_s(a),o=r.name,i=r.dynamic;t.slotTarget=o,t.slotTargetDynamic=i,t.slotScope=a.value||ps}}else{var a;if(a=Uo(t,us)){var s=t.scopedSlots||(t.scopedSlots={}),c=_s(a),u=c.name,l=(i=c.dynamic,s[u]=vs("template",[],t));l.slotTarget=u,l.slotTargetDynamic=i,l.children=t.children.filter((function(t){if(!t.slotScope)return t.parent=l,!0})),l.slotScope=a.value||ps,t.children=[],t.plain=!1}}}(t),function(t){"slot"===t.tag&&(t.slotName=Ho(t,"name"))}(t),function(t){var e;(e=Ho(t,"is"))&&(t.component=e);null!=Bo(t,"inline-template")&&(t.inlineTemplate=!0)}(t);for(var o=0;o<Wa.length;o++)t=Wa[o](t,e)||t;return function(t){var e,n,r,o,i,a,s,c,u=t.attrsList;for(e=0,n=u.length;e<n;e++)if(r=o=u[e].name,i=u[e].value,es.test(r))if(t.hasBindings=!0,(a=bs(r.replace(es,"")))&&(r=r.replace(cs,"")),ss.test(r))r=r.replace(ss,""),i=jo(i),(c=is.test(r))&&(r=r.slice(1,-1)),a&&(a.prop&&!c&&"innerHtml"===(r=w(r))&&(r="innerHTML"),a.camel&&!c&&(r=w(r)),a.sync&&(s=Ko(i,"$event"),c?Fo(t,'"update:"+('.concat(r,")"),s,null,!1,0,u[e],!0):(Fo(t,"update:".concat(w(r)),s,null,!1,0,u[e]),k(r)!==w(r)&&Fo(t,"update:".concat(k(r)),s,null,!1,0,u[e])))),a&&a.prop||!t.component&&Ya(t.tag,t.attrsMap.type,r)?Do(t,r,i,u[e],c):Mo(t,r,i,u[e],c);else if(ts.test(r))r=r.replace(ts,""),(c=is.test(r))&&(r=r.slice(1,-1)),Fo(t,r,i,a,!1,0,u[e],c);else{var l=(r=r.replace(es,"")).match(as),f=l&&l[1];c=!1,f&&(r=r.slice(0,-(f.length+1)),is.test(f)&&(f=f.slice(1,-1),c=!0)),Lo(t,r,o,i,f,c,a,u[e])}else Mo(t,r,JSON.stringify(i),u[e]),!t.component&&"muted"===r&&Ya(t.tag,t.attrsMap.type,r)&&Do(t,r,"true",u[e])}(t),t}function gs(t){var e;if(e=Bo(t,"v-for")){var n=function(t){var e=t.match(ns);if(!e)return;var n={};n.for=e[2].trim();var r=e[1].trim().replace(os,""),o=r.match(rs);o?(n.alias=r.replace(rs,"").trim(),n.iterator1=o[1].trim(),o[2]&&(n.iterator2=o[2].trim())):n.alias=r;return n}(e);n&&T(t,n)}}function ys(t,e){t.ifConditions||(t.ifConditions=[]),t.ifConditions.push(e)}function _s(t){var e=t.name.replace(us,"");return e||"#"!==t.name[0]&&(e="default"),is.test(e)?{name:e.slice(1,-1),dynamic:!0}:{name:'"'.concat(e,'"'),dynamic:!1}}function bs(t){var e=t.match(cs);if(e){var n={};return e.forEach((function(t){n[t.slice(1)]=!0})),n}}function $s(t){for(var e={},n=0,r=t.length;n<r;n++)e[t[n].name]=t[n].value;return e}var ws=/^xmlns:NS\d+/,xs=/^NS\d+:/;function Cs(t){return vs(t.tag,t.attrsList.slice(),t.parent)}var ks=[ba,wa,{preTransformNode:function(t,e){if("input"===t.tag){var n=t.attrsMap;if(!n["v-model"])return;var r=void 0;if((n[":type"]||n["v-bind:type"])&&(r=Ho(t,"type")),n.type||r||!n["v-bind"]||(r="(".concat(n["v-bind"],").type")),r){var o=Bo(t,"v-if",!0),i=o?"&&(".concat(o,")"):"",a=null!=Bo(t,"v-else",!0),s=Bo(t,"v-else-if",!0),c=Cs(t);gs(c),Io(c,"type","checkbox"),ms(c,e),c.processed=!0,c.if="(".concat(r,")==='checkbox'")+i,ys(c,{exp:c.if,block:c});var u=Cs(t);Bo(u,"v-for",!0),Io(u,"type","radio"),ms(u,e),ys(c,{exp:"(".concat(r,")==='radio'")+i,block:u});var l=Cs(t);return Bo(l,"v-for",!0),Io(l,":type",r),ms(l,e),ys(c,{exp:o,block:l}),a?c.else=!0:s&&(c.elseif=s),c}}}}];var Ss,Os,Ts={model:function(t,e,n){var r=e.value,o=e.modifiers,i=t.tag,a=t.attrsMap.type;if(t.component)return Vo(t,r,o),!1;if("select"===i)!function(t,e,n){var r=n&&n.number,o='Array.prototype.filter.call($event.target.options,function(o){return o.selected}).map(function(o){var val = "_value" in o ? o._value : o.value;'+"return ".concat(r?"_n(val)":"val","})"),i="$event.target.multiple ? $$selectedVal : $$selectedVal[0]",a="var $$selectedVal = ".concat(o,";");a="".concat(a," ").concat(Ko(e,i)),Fo(t,"change",a,null,!0)}(t,r,o);else if("input"===i&&"checkbox"===a)!function(t,e,n){var r=n&&n.number,o=Ho(t,"value")||"null",i=Ho(t,"true-value")||"true",a=Ho(t,"false-value")||"false";Do(t,"checked","Array.isArray(".concat(e,")")+"?_i(".concat(e,",").concat(o,")>-1")+("true"===i?":(".concat(e,")"):":_q(".concat(e,",").concat(i,")"))),Fo(t,"change","var $$a=".concat(e,",")+"$$el=$event.target,"+"$$c=$$el.checked?(".concat(i,"):(").concat(a,");")+"if(Array.isArray($$a)){"+"var $$v=".concat(r?"_n("+o+")":o,",")+"$$i=_i($$a,$$v);"+"if($$el.checked){$$i<0&&(".concat(Ko(e,"$$a.concat([$$v])"),")}")+"else{$$i>-1&&(".concat(Ko(e,"$$a.slice(0,$$i).concat($$a.slice($$i+1))"),")}")+"}else{".concat(Ko(e,"$$c"),"}"),null,!0)}(t,r,o);else if("input"===i&&"radio"===a)!function(t,e,n){var r=n&&n.number,o=Ho(t,"value")||"null";o=r?"_n(".concat(o,")"):o,Do(t,"checked","_q(".concat(e,",").concat(o,")")),Fo(t,"change",Ko(e,o),null,!0)}(t,r,o);else if("input"===i||"textarea"===i)!function(t,e,n){var r=t.attrsMap.type,o=n||{},i=o.lazy,a=o.number,s=o.trim,c=!i&&"range"!==r,u=i?"change":"range"===r?Yo:"input",l="$event.target.value";s&&(l="$event.target.value.trim()");a&&(l="_n(".concat(l,")"));var f=Ko(e,l);c&&(f="if($event.target.composing)return;".concat(f));Do(t,"value","(".concat(e,")")),Fo(t,u,f,null,!0),(s||a)&&Fo(t,"blur","$forceUpdate()")}(t,r,o);else if(!H.isReservedTag(i))return Vo(t,r,o),!1;return!0},text:function(t,e){e.value&&Do(t,"textContent","_s(".concat(e.value,")"),e)},html:function(t,e){e.value&&Do(t,"innerHTML","_s(".concat(e.value,")"),e)}},As={expectHTML:!0,modules:ks,directives:Ts,isPreTag:function(t){return"pre"===t},isUnaryTag:Ca,mustUseProp:Mr,canBeLeftOpenTag:ka,isReservedTag:Gr,getTagNamespace:Xr,staticKeys:function(t){return t.reduce((function(t,e){return t.concat(e.staticKeys||[])}),[]).join(",")}(ks)},js=b((function(t){return v("type,tag,attrsList,attrsMap,plain,parent,children,attrs,start,end,rawAttrsMap"+(t?","+t:""))}));function Es(t,e){t&&(Ss=js(e.staticKeys||""),Os=e.isReservedTag||E,Ns(t),Ps(t,!1))}function Ns(t){if(t.static=function(t){if(2===t.type)return!1;if(3===t.type)return!0;return!(!t.pre&&(t.hasBindings||t.if||t.for||h(t.tag)||!Os(t.tag)||function(t){for(;t.parent;){if("template"!==(t=t.parent).tag)return!1;if(t.for)return!0}return!1}(t)||!Object.keys(t).every(Ss)))}(t),1===t.type){if(!Os(t.tag)&&"slot"!==t.tag&&null==t.attrsMap["inline-template"])return;for(var e=0,n=t.children.length;e<n;e++){var r=t.children[e];Ns(r),r.static||(t.static=!1)}if(t.ifConditions)for(e=1,n=t.ifConditions.length;e<n;e++){var o=t.ifConditions[e].block;Ns(o),o.static||(t.static=!1)}}}function Ps(t,e){if(1===t.type){if((t.static||t.once)&&(t.staticInFor=e),t.static&&t.children.length&&(1!==t.children.length||3!==t.children[0].type))return void(t.staticRoot=!0);if(t.staticRoot=!1,t.children)for(var n=0,r=t.children.length;n<r;n++)Ps(t.children[n],e||!!t.for);if(t.ifConditions)for(n=1,r=t.ifConditions.length;n<r;n++)Ps(t.ifConditions[n].block,e)}}var Ds=/^([\w$_]+|\([^)]*?\))\s*=>|^function(?:\s+[\w$]+)?\s*\(/,Ms=/\([^)]*?\);*$/,Is=/^[A-Za-z_$][\w$]*(?:\.[A-Za-z_$][\w$]*|\['[^']*?']|\["[^"]*?"]|\[\d+]|\[[A-Za-z_$][\w$]*])*$/,Ls={esc:27,tab:9,enter:13,space:32,up:38,left:37,right:39,down:40,delete:[8,46]},Rs={esc:["Esc","Escape"],tab:"Tab",enter:"Enter",space:[" ","Spacebar"],up:["Up","ArrowUp"],left:["Left","ArrowLeft"],right:["Right","ArrowRight"],down:["Down","ArrowDown"],delete:["Backspace","Delete","Del"]},Fs=function(t){return"if(".concat(t,")return null;")},Hs={stop:"$event.stopPropagation();",prevent:"$event.preventDefault();",self:Fs("$event.target !== $event.currentTarget"),ctrl:Fs("!$event.ctrlKey"),shift:Fs("!$event.shiftKey"),alt:Fs("!$event.altKey"),meta:Fs("!$event.metaKey"),left:Fs("'button' in $event && $event.button !== 0"),middle:Fs("'button' in $event && $event.button !== 1"),right:Fs("'button' in $event && $event.button !== 2")};function Bs(t,e){var n=e?"nativeOn:":"on:",r="",o="";for(var i in t){var a=Us(t[i]);t[i]&&t[i].dynamic?o+="".concat(i,",").concat(a,","):r+='"'.concat(i,'":').concat(a,",")}return r="{".concat(r.slice(0,-1),"}"),o?n+"_d(".concat(r,",[").concat(o.slice(0,-1),"])"):n+r}function Us(t){if(!t)return"function(){}";if(Array.isArray(t))return"[".concat(t.map((function(t){return Us(t)})).join(","),"]");var e=Is.test(t.value),n=Ds.test(t.value),r=Is.test(t.value.replace(Ms,""));if(t.modifiers){var o="",i="",a=[],s=function(e){if(Hs[e])i+=Hs[e],Ls[e]&&a.push(e);else if("exact"===e){var n=t.modifiers;i+=Fs(["ctrl","shift","alt","meta"].filter((function(t){return!n[t]})).map((function(t){return"$event.".concat(t,"Key")})).join("||"))}else a.push(e)};for(var c in t.modifiers)s(c);a.length&&(o+=function(t){return"if(!$event.type.indexOf('key')&&"+"".concat(t.map(zs).join("&&"),")return null;")}(a)),i&&(o+=i);var u=e?"return ".concat(t.value,".apply(null, arguments)"):n?"return (".concat(t.value,").apply(null, arguments)"):r?"return ".concat(t.value):t.value;return"function($event){".concat(o).concat(u,"}")}return e||n?t.value:"function($event){".concat(r?"return ".concat(t.value):t.value,"}")}function zs(t){var e=parseInt(t,10);if(e)return"$event.keyCode!==".concat(e);var n=Ls[t],r=Rs[t];return"_k($event.keyCode,"+"".concat(JSON.stringify(t),",")+"".concat(JSON.stringify(n),",")+"$event.key,"+"".concat(JSON.stringify(r))+")"}var Vs={on:function(t,e){t.wrapListeners=function(t){return"_g(".concat(t,",").concat(e.value,")")}},bind:function(t,e){t.wrapData=function(n){return"_b(".concat(n,",'").concat(t.tag,"',").concat(e.value,",").concat(e.modifiers&&e.modifiers.prop?"true":"false").concat(e.modifiers&&e.modifiers.sync?",true":"",")")}},cloak:j},Ks=function(t){this.options=t,this.warn=t.warn||No,this.transforms=Po(t.modules,"transformCode"),this.dataGenFns=Po(t.modules,"genData"),this.directives=T(T({},Vs),t.directives);var e=t.isReservedTag||E;this.maybeComponent=function(t){return!!t.component||!e(t.tag)},this.onceId=0,this.staticRenderFns=[],this.pre=!1};function Js(t,e){var n=new Ks(e),r=t?"script"===t.tag?"null":qs(t,n):'_c("div")';return{render:"with(this){return ".concat(r,"}"),staticRenderFns:n.staticRenderFns}}function qs(t,e){if(t.parent&&(t.pre=t.pre||t.parent.pre),t.staticRoot&&!t.staticProcessed)return Ws(t,e);if(t.once&&!t.onceProcessed)return Zs(t,e);if(t.for&&!t.forProcessed)return Ys(t,e);if(t.if&&!t.ifProcessed)return Gs(t,e);if("template"!==t.tag||t.slotTarget||e.pre){if("slot"===t.tag)return function(t,e){var n=t.slotName||'"default"',r=nc(t,e),o="_t(".concat(n).concat(r?",function(){return ".concat(r,"}"):""),i=t.attrs||t.dynamicAttrs?ic((t.attrs||[]).concat(t.dynamicAttrs||[]).map((function(t){return{name:w(t.name),value:t.value,dynamic:t.dynamic}}))):null,a=t.attrsMap["v-bind"];!i&&!a||r||(o+=",null");i&&(o+=",".concat(i));a&&(o+="".concat(i?"":",null",",").concat(a));return o+")"}(t,e);var n=void 0;if(t.component)n=function(t,e,n){var r=e.inlineTemplate?null:nc(e,n,!0);return"_c(".concat(t,",").concat(Qs(e,n)).concat(r?",".concat(r):"",")")}(t.component,t,e);else{var r=void 0,o=e.maybeComponent(t);(!t.plain||t.pre&&o)&&(r=Qs(t,e));var i=void 0,a=e.options.bindings;o&&a&&!1!==a.__isScriptSetup&&(i=function(t,e){var n=w(e),r=x(n),o=function(o){return t[e]===o?e:t[n]===o?n:t[r]===o?r:void 0},i=o("setup-const")||o("setup-reactive-const");if(i)return i;var a=o("setup-let")||o("setup-ref")||o("setup-maybe-ref");if(a)return a}(a,t.tag)),i||(i="'".concat(t.tag,"'"));var s=t.inlineTemplate?null:nc(t,e,!0);n="_c(".concat(i).concat(r?",".concat(r):"").concat(s?",".concat(s):"",")")}for(var c=0;c<e.transforms.length;c++)n=e.transforms[c](t,n);return n}return nc(t,e)||"void 0"}function Ws(t,e){t.staticProcessed=!0;var n=e.pre;return t.pre&&(e.pre=t.pre),e.staticRenderFns.push("with(this){return ".concat(qs(t,e),"}")),e.pre=n,"_m(".concat(e.staticRenderFns.length-1).concat(t.staticInFor?",true":"",")")}function Zs(t,e){if(t.onceProcessed=!0,t.if&&!t.ifProcessed)return Gs(t,e);if(t.staticInFor){for(var n="",r=t.parent;r;){if(r.for){n=r.key;break}r=r.parent}return n?"_o(".concat(qs(t,e),",").concat(e.onceId++,",").concat(n,")"):qs(t,e)}return Ws(t,e)}function Gs(t,e,n,r){return t.ifProcessed=!0,Xs(t.ifConditions.slice(),e,n,r)}function Xs(t,e,n,r){if(!t.length)return r||"_e()";var o=t.shift();return o.exp?"(".concat(o.exp,")?").concat(i(o.block),":").concat(Xs(t,e,n,r)):"".concat(i(o.block));function i(t){return n?n(t,e):t.once?Zs(t,e):qs(t,e)}}function Ys(t,e,n,r){var o=t.for,i=t.alias,a=t.iterator1?",".concat(t.iterator1):"",s=t.iterator2?",".concat(t.iterator2):"";return t.forProcessed=!0,"".concat(r||"_l","((").concat(o,"),")+"function(".concat(i).concat(a).concat(s,"){")+"return ".concat((n||qs)(t,e))+"})"}function Qs(t,e){var n="{",r=function(t,e){var n=t.directives;if(!n)return;var r,o,i,a,s="directives:[",c=!1;for(r=0,o=n.length;r<o;r++){i=n[r],a=!0;var u=e.directives[i.name];u&&(a=!!u(t,i,e.warn)),a&&(c=!0,s+='{name:"'.concat(i.name,'",rawName:"').concat(i.rawName,'"').concat(i.value?",value:(".concat(i.value,"),expression:").concat(JSON.stringify(i.value)):"").concat(i.arg?",arg:".concat(i.isDynamicArg?i.arg:'"'.concat(i.arg,'"')):"").concat(i.modifiers?",modifiers:".concat(JSON.stringify(i.modifiers)):"","},"))}if(c)return s.slice(0,-1)+"]"}(t,e);r&&(n+=r+","),t.key&&(n+="key:".concat(t.key,",")),t.ref&&(n+="ref:".concat(t.ref,",")),t.refInFor&&(n+="refInFor:true,"),t.pre&&(n+="pre:true,"),t.component&&(n+='tag:"'.concat(t.tag,'",'));for(var o=0;o<e.dataGenFns.length;o++)n+=e.dataGenFns[o](t);if(t.attrs&&(n+="attrs:".concat(ic(t.attrs),",")),t.props&&(n+="domProps:".concat(ic(t.props),",")),t.events&&(n+="".concat(Bs(t.events,!1),",")),t.nativeEvents&&(n+="".concat(Bs(t.nativeEvents,!0),",")),t.slotTarget&&!t.slotScope&&(n+="slot:".concat(t.slotTarget,",")),t.scopedSlots&&(n+="".concat(function(t,e,n){var r=t.for||Object.keys(e).some((function(t){var n=e[t];return n.slotTargetDynamic||n.if||n.for||tc(n)})),o=!!t.if;if(!r)for(var i=t.parent;i;){if(i.slotScope&&i.slotScope!==ps||i.for){r=!0;break}i.if&&(o=!0),i=i.parent}var a=Object.keys(e).map((function(t){return ec(e[t],n)})).join(",");return"scopedSlots:_u([".concat(a,"]").concat(r?",null,true":"").concat(!r&&o?",null,false,".concat(function(t){var e=5381,n=t.length;for(;n;)e=33*e^t.charCodeAt(--n);return e>>>0}(a)):"",")")}(t,t.scopedSlots,e),",")),t.model&&(n+="model:{value:".concat(t.model.value,",callback:").concat(t.model.callback,",expression:").concat(t.model.expression,"},")),t.inlineTemplate){var i=function(t,e){var n=t.children[0];if(n&&1===n.type){var r=Js(n,e.options);return"inlineTemplate:{render:function(){".concat(r.render,"},staticRenderFns:[").concat(r.staticRenderFns.map((function(t){return"function(){".concat(t,"}")})).join(","),"]}")}}(t,e);i&&(n+="".concat(i,","))}return n=n.replace(/,$/,"")+"}",t.dynamicAttrs&&(n="_b(".concat(n,',"').concat(t.tag,'",').concat(ic(t.dynamicAttrs),")")),t.wrapData&&(n=t.wrapData(n)),t.wrapListeners&&(n=t.wrapListeners(n)),n}function tc(t){return 1===t.type&&("slot"===t.tag||t.children.some(tc))}function ec(t,e){var n=t.attrsMap["slot-scope"];if(t.if&&!t.ifProcessed&&!n)return Gs(t,e,ec,"null");if(t.for&&!t.forProcessed)return Ys(t,e,ec);var r=t.slotScope===ps?"":String(t.slotScope),o="function(".concat(r,"){")+"return ".concat("template"===t.tag?t.if&&n?"(".concat(t.if,")?").concat(nc(t,e)||"undefined",":undefined"):nc(t,e)||"undefined":qs(t,e),"}"),i=r?"":",proxy:true";return"{key:".concat(t.slotTarget||'"default"',",fn:").concat(o).concat(i,"}")}function nc(t,e,n,r,o){var i=t.children;if(i.length){var a=i[0];if(1===i.length&&a.for&&"template"!==a.tag&&"slot"!==a.tag){var s=n?e.maybeComponent(a)?",1":",0":"";return"".concat((r||qs)(a,e)).concat(s)}var c=n?function(t,e){for(var n=0,r=0;r<t.length;r++){var o=t[r];if(1===o.type){if(rc(o)||o.ifConditions&&o.ifConditions.some((function(t){return rc(t.block)}))){n=2;break}(e(o)||o.ifConditions&&o.ifConditions.some((function(t){return e(t.block)})))&&(n=1)}}return n}(i,e.maybeComponent):0,u=o||oc;return"[".concat(i.map((function(t){return u(t,e)})).join(","),"]").concat(c?",".concat(c):"")}}function rc(t){return void 0!==t.for||"template"===t.tag||"slot"===t.tag}function oc(t,e){return 1===t.type?qs(t,e):3===t.type&&t.isComment?function(t){return"_e(".concat(JSON.stringify(t.text),")")}(t):function(t){return"_v(".concat(2===t.type?t.expression:ac(JSON.stringify(t.text)),")")}(t)}function ic(t){for(var e="",n="",r=0;r<t.length;r++){var o=t[r],i=ac(o.value);o.dynamic?n+="".concat(o.name,",").concat(i,","):e+='"'.concat(o.name,'":').concat(i,",")}return e="{".concat(e.slice(0,-1),"}"),n?"_d(".concat(e,",[").concat(n.slice(0,-1),"])"):e}function ac(t){return t.replace(/\u2028/g,"\\u2028").replace(/\u2029/g,"\\u2029")}function sc(t,e){try{return new Function(t)}catch(n){return e.push({err:n,code:t}),j}}function cc(t){var e=Object.create(null);return function(n,r,o){(r=T({},r)).warn,delete r.warn;var i=r.delimiters?String(r.delimiters)+n:n;if(e[i])return e[i];var a=t(n,r),s={},c=[];return s.render=sc(a.render,c),s.staticRenderFns=a.staticRenderFns.map((function(t){return sc(t,c)})),e[i]=s}}new RegExp("\\b"+"do,if,for,let,new,try,var,case,else,with,await,break,catch,class,const,super,throw,while,yield,delete,export,import,return,switch,default,extends,finally,continue,debugger,function,arguments".split(",").join("\\b|\\b")+"\\b"),new RegExp("\\b"+"delete,typeof,void".split(",").join("\\s*\\([^\\)]*\\)|\\b")+"\\s*\\([^\\)]*\\)");var uc,lc,fc=(uc=function(t,e){var n=hs(t.trim(),e);!1!==e.optimize&&Es(n,e);var r=Js(n,e);return{ast:n,render:r.render,staticRenderFns:r.staticRenderFns}},function(t){function e(e,n){var r=Object.create(t),o=[],i=[];if(n)for(var a in n.modules&&(r.modules=(t.modules||[]).concat(n.modules)),n.directives&&(r.directives=T(Object.create(t.directives||null),n.directives)),n)"modules"!==a&&"directives"!==a&&(r[a]=n[a]);r.warn=function(t,e,n){(n?i:o).push(t)};var s=uc(e.trim(),r);return s.errors=o,s.tips=i,s}return{compile:e,compileToFunctions:cc(e)}}),dc=fc(As).compileToFunctions;function pc(t){return(lc=lc||document.createElement("div")).innerHTML=t?'<a href="\n"/>':'<div a="\n"/>',lc.innerHTML.indexOf("&#10;")>0}var vc=!!J&&pc(!1),hc=!!J&&pc(!0),mc=b((function(t){var e=to(t);return e&&e.innerHTML})),gc=Cr.prototype.$mount;return Cr.prototype.$mount=function(t,e){if((t=t&&to(t))===document.body||t===document.documentElement)return this;var n=this.$options;if(!n.render){var r=n.template;if(r)if("string"==typeof r)"#"===r.charAt(0)&&(r=mc(r));else{if(!r.nodeType)return this;r=r.innerHTML}else t&&(r=function(t){if(t.outerHTML)return t.outerHTML;var e=document.createElement("div");return e.appendChild(t.cloneNode(!0)),e.innerHTML}(t));if(r){var o=dc(r,{outputSourceRange:!1,shouldDecodeNewlines:vc,shouldDecodeNewlinesForHref:hc,delimiters:n.delimiters,comments:n.comments},this),i=o.render,a=o.staticRenderFns;n.render=i,n.staticRenderFns=a}}return gc.call(this,t,e)},Cr.compile=dc,T(Cr,Fn),Cr.effect=function(t,e){var n=new Vn(ct,t,j,{sync:!0});e&&(n.update=function(){e((function(){return n.run()}))})},Cr}));
  </script>
  <title>__TITLE__</title>
  <style>
    .row {
      display: flex;
      flex-wrap: wrap;
    }

    .column {
      flex: 1;
      padding: 10px;
    }

    .table-header {
      font-weight: bold;
      border-bottom: 1px solid black;
    }

    /* 更换卡片背景色 */
    .table-row {
      border-bottom: 1px solid lightgray;
    }

    .table-cell {
      padding: 5px;
    }
  </style>
</head>
<!-- 更换整体页面背景色 -->

<body style="padding: 0 200px;background-color: #f5f5f5;">
  <div id="app">
    <h1 style="padding-left: 20px;font-size: 40px;">文章目录</h1>

    <ul>
      <li v-if="num == 1 || num == 0" v-for="(i,index) in contentList" :key="index"><a :href="`#${i.primary_col.header}`">{{ i.primary_col.header ? i.primary_col.header : '' }}</a></li>
      <li v-if="num == 2 " v-for="(i,index) in contentList" :key="index"><a :href="`#${i.secondary_rol.header}`">{{i.secondary_rol.header ?i.secondary_rol.header : '' }}</a></li </ul>
      <!-- 按钮的样式调整 -->
      <button style="cursor: pointer;cursor: pointer;height: 30px;" @click="showStatus">{{ text }}</button>
      <!-- border-radius调整圆角弧度 box-shadow调整阴影-->
      <div class="row table-row" v-for="(i,index) in contentList" :key="index" style="border-radius: 10px;background-color: rgb(255, 255, 255);margin: 40px 0px;padding: 20px 40px;position: relative;box-shadow: 0px 0px 15px -8px;">
        <div class="column table-cell" v-if="num == 1 || num == 0">
          <div class="markdown-body">
            <h1 :id="i.primary_col.header">{{ i.primary_col.header }}</h1>
            <div v-html="i.primary_col.msg"></div>
          </div>
        </div>
        <div class="column table-cell" v-if="num == 2 || num == 0">
          <div class="markdown-body">
            <h1 :id="i.secondary_rol.header">{{ i.secondary_rol.header }}</h1>
            <div v-html="i.secondary_rol.msg"></div>
          </div>
        </div>
      </div>
  </div>
</body>

<script>
  new Vue({
    el: '#app',
    data() {
      return {
        // 添加内容的数组，示例请看数组底部
        contentList: [
          
            {
                primary_col: {
                    header: String.raw``,
                    msg: String.raw`<div class="markdown-body"></div>`,
                },
                secondary_rol: {
                    header: String.raw``,
                    msg: String.raw`<div class="markdown-body"></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`一、论文概况`,
                    msg: String.raw`<div class="markdown-body"><p>一、论文概况</p></div>`,
                },
                secondary_rol: {
                    header: String.raw``,
                    msg: String.raw`<div class="markdown-body"></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`Abstract`,
                    msg: String.raw`<div class="markdown-body"><p>Abstract</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`标题：KAN：柯尔莫戈洛夫-阿诺德网络`,
                    msg: String.raw`<div class="markdown-body"><p>作者：Ziming Liu；Yixuan Wang；Sachin Vaidya；Fabian Ruehle；James Halverson；Marin Soljačić；Thomas Y Hou</p>
<p>摘要：受到柯尔莫戈洛夫-阿诺德表示定理的启发，我们提出了柯尔莫戈洛夫-阿诺德网络（KANs）作为多层感知器（MLPs）的有潜力替代方案。当MLPs在网络节点（“神经元”）上有固定的激活函数时，KANs在网络边（“权重”）上具有可学习的激活函数。KANs完全不包含线性权重——每个权重参数都由作为样条函数参数化的单变量函数替代。我们表明，这一看似简单的改变使KANs在准确性与可解释性方面超越了MLPs，尤其是在小型AI+科学任务中。对于准确性，较小规模的KANs在函数拟合任务中能够达到与较大规模MLPs相当或更优的准确度。理论上和经验上，KANs拥有关于神经元缩放律相较于MLPs更快的优势。对于可解释性，KANs可以直观地进行可视化，并能轻松地与人类用户交互。通过数学和物理学中的两个示例，KANs被展示为有用的“合作者”，帮助科学家（重新）发现数学和物理定律。总之，KANs作为MLPs的有前景替代品，为改进当今依赖MLPs的深度学习模型开辟了新的可能性。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`二、论文翻译`,
                    msg: String.raw`<div class="markdown-body"><p>二、论文翻译</p></div>`,
                },
                secondary_rol: {
                    header: String.raw``,
                    msg: String.raw`<div class="markdown-body"></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`#  Part-1`,
                    msg: String.raw`<div class="markdown-body"><p>MLP(x) = (W 3 ∘ σ 2 ∘ W 2 ∘ σ 1 ∘ W 1 )(x) KAN(x) = (Φ 3 ∘ Φ 2 ∘ Φ 1 )(x) W 1 σ 1 W 2 σ 2 W 3 Φ 3 Φ 2 Φ 1 x x MLP(x) KAN(x)
1 Introduction
Multi-layer perceptrons (MLPs) [1,2,3], also known as fully-connected feedforward neural networks, are foundational building blocks of today's deep learning models. The importance of MLPs can never be overstated, since they are the default models in machine learning for approximating nonlinear functions, due to their expressive power guaranteed by the universal approximation theorem [3]. However, are MLPs the best nonlinear regressors we can build? Despite the prevalent use of MLPs, they have significant drawbacks. In transformers [4] for example, MLPs consume almost all non-embedding parameters and are typically less interpretable (relative to attention layers) without post-analysis tools [5].
We propose a promising alternative to MLPs, called Kolmogorov-Arnold Networks (KANs). Whereas MLPs are inspired by the universal approximation theorem, KANs are inspired by the Kolmogorov-Arnold representation theorem [6,7,8]. Like MLPs, KANs have fully-connected structures. However, while MLPs place fixed activation functions on nodes ("neurons"), KANs place learnable activation functions on edges ("weights"), as illustrated in Figure 0.1. As a result, KANs have no linear weight matrices at all: instead, each weight parameter is replaced by a learnable 1D function parametrized as a spline. KANs' nodes simply sum incoming signals without applying any non-linearities. One might worry that KANs are hopelessly expensive, since each MLP's weight parameter becomes KAN's spline function. Fortunately, KANs usually allow much smaller computation graphs than MLPs.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# `,
                    msg: String.raw`<div class="markdown-body"><h3>第1部分</h3>
<p><strong>MLP(x) = (W₃ ◦ σ₂ ◦ W₂ ◦ σ₁ ◦ W₁)(x)</strong><br />
<strong>KAN(x) = (Φ₃ ◦ Φ₂ ◦ Φ₁)(x)</strong>  </p>
<p>在这里，<code>W₁, W₂, W₃</code> 表示多层感知器（MLP）中的权重矩阵，而<code>σ₁, σ₂</code>代表固定的激活函数。相反，在Kolmogorov-Arnold网络（KAN）中，<code>Φ₁, Φ₂, Φ₃</code> 是边缘（权重）上的可学习激活函数，这些函数被可视化为从<code>x</code>到<code>x</code>的连续转换过程。</p>
<p><strong>第1章 引言</strong></p>
<p>多层感知器（MLP）[1,2,3]，也称为全连接前馈神经网络，是现代深度学习模型的基本构建单元。MLP的重要性怎么强调都不为过，因为它们凭借通用近似定理[3]保障的表达能力，成为了机器学习中非线性函数逼近的默认模型。然而，MLP是否是我们能构建的最佳非线性回归器？尽管MLP应用广泛，但存在显著缺陷。例如，在变压器[4]中，MLP几乎消耗了所有非嵌入参数，并且在没有后分析工具的情况下解释性通常较低（相比于注意力层）[5]。</p>
<p>我们提出了一种有前景的MLP替代方案——称为Kolmogorov-Arnold网络（KAN）。MLP的设计灵感源自通用近似定理，而KAN的灵感则来源于Kolmogorov-Arnold表示定理[6,7,8]。与MLP类似，KAN同样具有全连接的结构。不同之处在于，MLP在节点（“神经元”）上放置了固定的激活函数，而KAN则在边（“权重”）上设定了可学习的激活函数，如图0.1所示。结果，KAN完全没有线性权重矩阵：每一个权重参数都被一个作为样条曲线参数化的单变量可学习函数所取代。KAN的节点只是简单地累加输入信号，不施加任何非线性变换。人们可能担忧KAN会异常昂贵，因为MLP中的每个权重参数在KAN中变成了一个样条函数。幸运的是，实际上KAN通常允许比MLP小得多的计算图。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`#  Part-2`,
                    msg: String.raw`<div class="markdown-body"><p>Unsurprisingly, the possibility of using Kolmogorov-Arnold representation theorem to build neural networks has been studied [9,10,11,12,13,14,15,16]. However, most work has stuck with the original depth-2 width-(2n + 1) representation, and many did not have the chance to leverage more modern techniques (e.g., back propagation) to train the networks. In [12], a depth-2 width-(2n + 1) representation was investigated, with breaking of the curse of dimensionality observed both empirically and with an approximation theory given compositional structures of the function.
Our contribution lies in generalizing the original Kolmogorov-Arnold representation to arbitrary widths and depths, revitalizing and contextualizing it in today's deep learning world, as well as using extensive empirical experiments to highlight its potential for AI + Science due to its accuracy and interpretability.
Despite their elegant mathematical interpretation, KANs are nothing more than combinations of splines and MLPs, leveraging their respective strengths and avoiding their respective weaknesses. Splines are accurate for low-dimensional functions, easy to adjust locally, and able to switch between different resolutions. However, splines have a serious curse of dimensionality (COD) problem, because of their inability to exploit compositional structures. MLPs, On the other hand, suffer less from COD thanks to their feature learning, but are less accurate than splines in low dimensions, because of their inability to optimize univariate functions. The link between MLPs using ReLU-k as activation functions and splines have been established in [17,18]. To learn a function accurately, a model should not only learn the compositional structure (external degrees of freedom), but should also approximate well the univariate functions (internal degrees of freedom). KANs are such models since they have MLPs on the outside and splines on the inside. As a result, KANs can not only learn features (thanks to their external similarity to MLPs), but can also optimize these learned features to great accuracy (thanks to their internal similarity to splines). For example, given a high dimensional function splines would fail for large N due to COD; MLPs can potentially learn the the generalized additive structure, but they are very inefficient for approximating the exponential and sine functions with say, ReLU activations. In contrast, KANs can learn both the compositional structure and the univariate functions quite well, hence outperforming MLPs by a large margin (see Figure 3.1).</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# `,
                    msg: String.raw`<div class="markdown-body"><p>毫不意外地，利用柯尔莫戈洛夫-阿诺德表示定理构建神经网络的可能性已被研究[9,10,11,12,13,14,15,16]。然而，大多数工作仍停留在初始的深度2、宽度（2n + 1）的表示上，并且许多研究未能利用更多现代技术（如反向传播）来训练网络。在[12]中，研究了深度2、宽度（2n + 1）的表示形式，通过函数的组合结构，在经验上和近似理论上都观察到了维度灾难的缓解。
我们的贡献在于将原始的柯尔莫戈洛夫-阿诺德表示推广到任意宽度和深度，使其在当今深度学习的世界中焕发活力并赋予其新的背景，同时利用广泛的实证实验强调其由于高准确度和可解释性，在AI + 科学领域的潜在价值。
尽管KANs具有优雅的数学解释，但它们实质上不过是样条函数与多层感知器（MLPs）的结合，吸取了两者的优点而避免了各自的缺点。样条函数对于低维函数精确度高，易于局部调整，并能够在不同分辨率间切换。然而，由于不能利用函数的组合结构，样条函数存在严重的维度灾难（COD）问题。另一方面，多层感知器（MLPs）则因特征学习的关系而在COD问题上受到的影响较小，但由于它们优化单变量函数的能力较弱，在低维度下不如样条函数准确。ReLU-k作为激活函数的MLPs与样条函数之间的联系已在[17,18]中建立。为了准确学习一个函数，模型不仅应学习组合结构（外部自由度），还应当很好地逼近单变量函数（内部自由度）。KANs正是这样的模型，因为它们外部类似MLP，内部则是样条函数。因此，KANs既能学习特征（得益于其对MLPs的外在相似性），又能通过其内部类似于样条的特性，将学到的特征优化至很高的精度。例如，鉴于高维函数，当N很大时，样条函数会因COD而失效；MLPs有潜力学习广义加性结构，但对于使用ReLU激活函数来近似指数和正弦函数等问题，则效率极低。相比之下，KANs能很好地学习组合结构及单变量函数，从而在很大程度上超越MLPs的表现（见图3.1）。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`#  Part-3`,
                    msg: String.raw`<div class="markdown-body"><p>f (x 1 , • • • , x N ) = exp 1 N N i=1 sin 2 (x i ) ,(1.1)
Throughout this paper, we will use extensive numerical experiments to show that KANs can lead to accuracy and interpretability improvement over MLPs, at least on small-scale AI + Science tasks. The organization of the paper is illustrated in Figure 2.1. In Section 2, we introduce the KAN architecture and its mathematical foundation, introduce network simplification techniques to make KANs interpretable, and introduce a grid extension technique to make KANs more accurate. In Section 3, we show that KANs are more accurate than MLPs for data fitting: KANs can beat the curse of dimensionality when there is a compositional structure in data, achieving better scaling laws than MLPs.
We also demonstrate the potential of KANs in PDE solving via a simple example of the Poisson equation. In Section 4, we show that KANs are interpretable and can be used for scientific discoveries. We use two examples from mathematics (knot theory) and physics (Anderson localization) to demonstrate that KANs can be helpful "collaborators" for scientists to (re)discover math and physical laws. Section 5 summarizes related works. In Section 6, we conclude by discussing broad impacts and future directions. Codes are available at https://github.com/KindXiaoming/pykan and can also be installed via pip install pykan.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# `,
                    msg: String.raw`<div class="markdown-body"><p>函数 ( f(x_1, \ldots, x_N) = \exp\left(\frac{1}{N} \sum_{i=1}^{N} \sin^2(x_i)\right) ), (1.1)</p>
<p>在本文中，我们将通过广泛的数值实验来证明，在至少小型AI+科学任务上，KANs相比MLPs能够实现准确性和可解释性的提升。论文的组织结构如图2.1所示。在第二部分中，我们介绍KAN架构及其数学基础，引入网络简化技术以提高KANs的可解释性，并提出网格扩展技术以增强KANs的准确性。第三部分展示KANs在数据拟合任务上的更高准确性：当数据中存在组合结构时，KANs能克服维度灾难，展现出比MLPs更优的缩放定律。我们还通过泊松方程这一简单实例，展示了KANs在求解偏微分方程方面的潜力。第四部分说明KANs具有可解释性，可用于科研发现。我们利用来自数学（纽结理论）和物理（安德森局域化）领域的两个例子，展示KANs能作为科学家有益的“合作伙伴”，帮助（重新）发现数学与物理规律。第五部分总结了相关工作。第六部分通过讨论广泛影响及未来发展方向进行总结。代码可在 https://github.com/KindXiaoming/pykan 获取，也可通过pip install pykan进行安装。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Kolmogorov-Arnold Networks (KAN)`,
                    msg: String.raw`<div class="markdown-body"><p>Multi-Layer Perceptrons (MLPs) are inspired by the universal approximation theorem. We instead focus on the Kolmogorov-Arnold representation theorem, which can be realized by a new type of neural network called Kolmogorov-Arnold networks (KAN). We review the Kolmogorov-Arnold theorem in Section 2.1, to inspire the design of Kolmogorov-Arnold Networks in Section 2.2. In Section 2.3, we provide theoretical guarantees for the expressive power of KANs and their neural scaling laws, relating them to existing approximation and generalization theories in the literature. In Section 2.4, we propose a grid extension technique to make KANs increasingly more accurate. In Section 2.5, we propose simplification techniques to make KANs interpretable.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Kolmogorov-Arnold Networks (KAN)`,
                    msg: String.raw`<div class="markdown-body"><p>多层感知器(MLPs)的设计灵感源自于通用近似定理。相反，我们关注的是可以由一种新型神经网络——Kolmogorov-Arnold网络（KAN）实现的Kolmogorov-Arnold表示定理。我们在第2.1节中回顾Kolmogorov-Arnold定理，旨在启发第2.2节中Kolmogorov-Arnold网络的设计。在第2.3节中，我们为KANs的表达能力及其神经缩放定律提供理论保障，并将其与文献中现有的近似和泛化理论相联系。第2.4节中，我们提出了一种网格扩展技术，以不断提高KANs的准确性。在第2.5节，我们提出了简化技术，使KANs具有可解释性。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Kolmogorov-Arnold Representation theorem`,
                    msg: String.raw`<div class="markdown-body"><p>Vladimir Arnold and Andrey Kolmogorov established that if f is a multivariate continuous function on a bounded domain, then f can be written as a finite composition of continuous functions of a single variable and the binary operation of addition. More specifically, for a smooth f : [0, 1] n → R,
f (x) = f (x 1 , • • • , x n ) = 2n+1 q=1 Φ q n p=1 ϕ q,p (x p ) ,(2.1)
where ϕ q,p : [0, 1] → R and Φ q : R → R. In a sense, they showed that the only true multivariate function is addition, since every other function can be written using univariate functions and sum.
One might naively consider this great news for machine learning: learning a high-dimensional function boils down to learning a polynomial number of 1D functions. However, these 1D functions can be non-smooth and even fractal, so they may not be learnable in practice [19]. Because of this pathological behavior, the Kolmogorov-Arnold representation theorem was basically sentenced to death in machine learning, regarded as theoretically sound but practically useless [19].
However, we are more optimistic about the usefulness of the Kolmogorov-Arnold theorem for machine learning. First of all, we need not stick to the original Eq. (2.1) which has only two-layer nonlinearities and a small number of terms (2n + 1) in the hidden layer: we will generalize the network to arbitrary widths and depths. Secondly, most functions in science and daily life are often smooth and have sparse compositional structures, potentially facilitating smooth Kolmogorov-Arnold representations. The philosophy here is close to the mindset of physicists, who often care more about typical cases rather than worst cases. After all, our physical world and machine learning tasks must have structures to make physics and machine learning useful or generalizable at all [20].</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Kolmogorov-Arnold Representation theorem`,
                    msg: String.raw`<div class="markdown-body"><p>Vladimir Arnold 与 Andrey Kolmogorov 建立了这样一个理论：如果f是一个在有界域上的多变量连续函数，那么f可以写成单变量连续函数的有限组合以及加法这一二元运算的形式。更具体地说，对于一个平滑的函数f : [0, 1]^n → R，</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF"> f(x) = f(x_1,\ldots,x_n) = \sum_{q=1}^{2^n+1} \Phi_q \left( \sum_{p=1}^n \phi_{q,p}(x_p) \right), \quad (2.1) </font><font color="#00FF00">$$</font>
</p>
<p>其中，<font color="#00FF00">$</font><font color="#FF00FF">\phi_{q,p} : [0, 1] → R</font><font color="#00FF00">$</font> 和 <font color="#00FF00">$</font><font color="#FF00FF">\Phi_q : R → R</font><font color="#00FF00">$</font>。从某种意义上说，他们证明了唯一真正的多变量函数是加法，因为任何其他函数都可以通过单变量函数和求和操作来表示。</p>
<p>乍看起来，这对于机器学习来说似乎是好消息：学习一个高维函数归结为学习多项式数量的一维函数。然而，这些一维函数可能是非光滑的，甚至是分形的，因此在实践中可能无法学习[19]。由于这种病态行为，Kolmogorov-Arnold表示定理在机器学习中几乎被判了死刑，被认为理论上正确但实际上无用[19]。</p>
<p>然而，我们对Kolmogorov-Arnold定理在机器学习中的实用性持更加乐观的态度。首先，我们不必拘泥于原始公式(2.1)，它仅具有两层非线性和隐藏层中数量较少的项（2^n + 1）：我们将网络推广到任意宽度和深度。其次，科学和日常生活中大多数函数往往是平滑的，并具有稀疏的组合结构，这可能有助于形成平滑的Kolmogorov-Arnold表示方法。这里的理念接近物理学家的心态，他们通常更关心典型情况而非最坏情况。毕竟，无论是我们的物理世界还是机器学习任务，都必然存在某种结构，这才能使物理学和机器学习在根本上有用或可泛化[20]。</p><hr /><p>Vladimir Arnold 与 Andrey Kolmogorov 建立了这样一个理论：如果f是一个在有界域上的多变量连续函数，那么f可以写成单变量连续函数的有限组合以及加法这一二元运算的形式。更具体地说，对于一个平滑的函数f : [0, 1]^n → R，</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><mi>&#x02026;</mi><mo>&#x0002C;</mo><msub><mi>x</mi><mi>n</mi></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><msubsup><mo>&#x02211;</mo><mrow><mi>q</mi><mo>&#x0003D;</mo><mn>1</mn></mrow><mrow><msup><mn>2</mn><mi>n</mi></msup><mo>&#x0002B;</mo><mn>1</mn></mrow></msubsup><msub><mi>&#x003A6;</mi><mi>q</mi></msub><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><msubsup><mo>&#x02211;</mo><mrow><mi>p</mi><mo>&#x0003D;</mo><mn>1</mn></mrow><mi>n</mi></msubsup><msub><mi>&#x003D5;</mi><mrow><mi>q</mi><mo>&#x0002C;</mo><mi>p</mi></mrow></msub><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mi>p</mi></msub><mo stretchy="false">&#x00029;</mo><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow><mo>&#x0002C;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.1</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
</p>
<p>其中，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003D5;</mi><mrow><mi>q</mi><mo>&#x0002C;</mo><mi>p</mi></mrow></msub><mi>:</mi><mo stretchy="false">[</mo><mn>0</mn><mo>&#x0002C;</mo><mn>1</mn><mo stretchy="false">]</mo><mi>→</mi><mi>R</mi></mrow></math> 和 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003A6;</mi><mi>q</mi></msub><mi>:</mi><mi>R</mi><mi>→</mi><mi>R</mi></mrow></math>。从某种意义上说，他们证明了唯一真正的多变量函数是加法，因为任何其他函数都可以通过单变量函数和求和操作来表示。</p>
<p>乍看起来，这对于机器学习来说似乎是好消息：学习一个高维函数归结为学习多项式数量的一维函数。然而，这些一维函数可能是非光滑的，甚至是分形的，因此在实践中可能无法学习[19]。由于这种病态行为，Kolmogorov-Arnold表示定理在机器学习中几乎被判了死刑，被认为理论上正确但实际上无用[19]。</p>
<p>然而，我们对Kolmogorov-Arnold定理在机器学习中的实用性持更加乐观的态度。首先，我们不必拘泥于原始公式(2.1)，它仅具有两层非线性和隐藏层中数量较少的项（2^n + 1）：我们将网络推广到任意宽度和深度。其次，科学和日常生活中大多数函数往往是平滑的，并具有稀疏的组合结构，这可能有助于形成平滑的Kolmogorov-Arnold表示方法。这里的理念接近物理学家的心态，他们通常更关心典型情况而非最坏情况。毕竟，无论是我们的物理世界还是机器学习任务，都必然存在某种结构，这才能使物理学和机器学习在根本上有用或可泛化[20]。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# KAN architecture Part-1`,
                    msg: String.raw`<div class="markdown-body"><p>Suppose we have a supervised learning task consisting of input-output pairs {x i , y i }, where we want to find f such that y i ≈ f (x i ) for all data points. Eq. (2.1) implies that we are done if we can find appropriate univariate functions ϕ q,p and Φ q . This inspires us to design a neural network which explicitly parametrizes Eq. (2.1). Since all functions to be learned are univariate functions, we can parametrize each 1D function as a B-spline curve, with learnable coefficients of local B-spline basis functions (see Figure 2.2 right). Now we have a prototype of KAN, whose computation graph is exactly specified by Eq. (2.1) and illustrated in Figure 0.1 (b) (with the input dimension n = 2), appearing as a two-layer neural network with activation functions placed on edges instead of nodes (simple summation is performed on nodes), and with width 2n + 1 in the middle layer.
As mentioned, such a network is known to be too simple to approximate any function arbitrarily well in practice with smooth splines! We therefore generalize our KAN to be wider and deeper. It is not immediately clear how to make KANs deeper, since Kolmogorov-Arnold representations correspond to two-layer KANs. To the best of our knowledge, there is not yet a "generalized" version of the theorem that corresponds to deeper KANs.
The breakthrough occurs when we notice the analogy between MLPs and KANs. In MLPs, once we define a layer (which is composed of a linear transformation and nonlinearties), we can stack more layers to make the network deeper. To build deep KANs, we should first answer: "what is a KAN layer?" It turns out that a KAN layer with n in -dimensional inputs and n out -dimensional outputs can be defined as a matrix of 1D functions
Φ = {ϕ q,p }, p = 1, 2, • • • , n in , q = 1, 2 • • • , n out ,(2.2)
where the functions ϕ q,p have trainable parameters, as detaild below. 
[n 0 , n 1 , • • • , n L ],(2.3)
where n i is the number of nodes in the i th layer of the computational graph. We denote the i th neuron in the l th layer by (l, i), and the activation value of the (l, i)-neuron by x l,i . Between layer l and layer l + 1, there are n l n l+1 activation functions: the activation function that connects (l, i) and (l + 1, j) is denoted by
ϕ l,j,i , l = 0, • • • , L -1, i = 1, • • • , n l , j = 1, • • • , n l+1 . (2.4)
The pre-activation of ϕ l,j,i is simply x l,i ; the post-activation of ϕ l,j,i is denoted by xl,j,i ≡ ϕ l,j,i (x l,i ). The activation value of the (l + 1, j) neuron is simply the sum of all incoming postactivations:
x l+1,j = n l i=1 xl,j,i = n l i=1 ϕ l,j,i (x l,i ), j = 1, • • • , n l+1 .(2.5)
In matrix form, this reads
x l+1 =       ϕ l,1,1 (•) ϕ l,1,2 (•) • • • ϕ l,1,n l (•) ϕ l,2,1 (•) ϕ l,2,2 (•) • • • ϕ l,2,n l (•) . . . . . . . . . ϕ l,n l+1 ,1 (•) ϕ l,n l+1 ,2 (•) • • • ϕ l,n l+1 ,n l (•)       Φ l x l ,(2.6)</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# KAN architecture`,
                    msg: String.raw`<div class="markdown-body"><p>假设我们有一个监督学习任务，由输入-输出对 ({x_i, y_i}) 组成，我们的目标是找到一个函数 (f) 使得对于所有数据点都有 (y_i \approx f(x_i))。式（2.1）表明，如果能找到合适的单变量函数 (\phi_{q,p}) 和 (\Phi_q)，问题即可解决。这启发我们设计一种神经网络，该网络明确地参数化了式（2.1）。由于所有需要学习的函数都是单变量函数，我们可以将每个一维函数参数化为B样条曲线，其中局部B样条基函数的系数是可以学习的（见图 2.2 右）。现在我们有了KAN的一个原型，其计算图完全由式（2.1）限定，并在图 0.1（b）中展示（其中输入维度 (n = 2)），表现为一个两层神经网络，但激活函数布置在边而非节点上（节点上执行简单的求和运算），并且中间层的宽度为 (2n + 1)。</p>
<p>如前所述，这种网络实则过于简单，以至于在实践中无法仅利用平滑的样条曲线任意逼近任何函数！因此，我们需要将KAN推广得更宽和更深。然而，如何使KAN变得更深并非显而易见，因为Kolmogorov-Arnold表示形式对应的是两层KAN。据我们所知，目前尚没有与更深的KAN相对应的“广义”定理版本。</p>
<p>突破出现在我们注意到多层感知机（MLP）与KAN之间的类比时。在MLP中，一旦定义了一层（包括线性变换和非线性部分），我们就可以堆叠更多层以使网络更深。为了构建深度KAN，我们首先需要回答：“什么是KAN的一层？” 现实中，一个具有 (n_{in}) 维输入和 (n_{out}) 维输出的KAN层可以被定义为一维函数的矩阵：</p>
<p>[
\Phi = {\phi_{q,p}}, \quad p = 1, 2, \ldots, n_{in}, \quad q = 1, 2, \ldots, n_{out}, \quad \text{(2.2)}
]</p>
<p>其中函数 (\phi_{q,p}) 具有可训练参数，具体细节如下。</p>
<p>对于一般的KAN架构，我们可以用序列标记层数和每层的节点数量：</p>
<p>[
[n_0, n_1, \ldots, n_L], \quad \text{(2.3)}
]</p>
<p>其中 (n_i) 表示计算图第 (i) 层的节点数。我们用 ((l, i)) 标记第 (l) 层的第 (i) 个神经元，其激活值记为 (x_{l,i})。在第 (l) 层与第 (l+1) 层之间，存在 (n_l \times n_{l+1}) 个激活函数：连接 ((l, i)) 和 ((l + 1, j)) 的激活函数标记为</p>
<p>[
\phi_{l,j,i}, \quad l = 0, \ldots, L - 1, \quad i = 1, \ldots, n_l, \quad j = 1, \ldots, n_{l+1}. \quad \text{(2.4)}
]</p>
<p>函数 (\phi_{l,j,i}) 的预激活值就是 (x_{l,i})；其后激活值标记为 (x_{l,j,i} \equiv \phi_{l,j,i}(x_{l,i}))。第 ((l + 1, j)) 个神经元的激活值仅仅是所有传入后激活值的总和：</p>
<p>[
x_{l+1,j} = \sum_{i=1}^{n_l} x_{l,j,i} = \sum_{i=1}^{n_l} \phi_{l,j,i}(x_{l,i}), \quad j = 1, \ldots, n_{l+1}. \quad \text{(2.5)}
]</p>
<p>以矩阵形式表述，则有</p>
<p>[
x_{l+1} = \begin{pmatrix}
\phi_{l,1,1}(\cdot) &amp; \phi_{l,1,2}(\cdot) &amp; \cdots &amp; \phi_{l,1,n_l}(\cdot) \
\phi_{l,2,1}(\cdot) &amp; \phi_{l,2,2}(\cdot) &amp; \cdots &amp; \phi_{l,2,n_l}(\cdot) \
\vdots &amp; \vdots &amp; \ddots &amp; \vdots \
\phi_{l,n_{l+1},1}(\cdot) &amp; \phi_{l,n_{l+1},2}(\cdot) &amp; \cdots &amp; \phi_{l,n_{l+1},n_l}(\cdot)
\end{pmatrix} \Phi_l x_{l}, \quad \text{(2.6)}
]</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# KAN architecture Part-2`,
                    msg: String.raw`<div class="markdown-body"><p>where Φ l is the function matrix corresponding to the l th KAN layer. A general KAN network is a composition of L layers: given an input vector x 0 ∈ R n0 , the output of KAN is
KAN(x) = (Φ L-1 • Φ L-2 • • • • • Φ 1 • Φ 0 )x. (2.7)
We can also rewrite the above equation to make it more analogous to Eq. (2.1), assuming output dimension n L = 1, and define f (x) ≡ KAN(x):
f (x) = n L-1 i L-1 =1 ϕ L-1,i L ,i L-1   n L-2 i L-2 =1 • • • n2 i2=1 ϕ 2,i3,i2 n1 i1=1 ϕ 1,i2,i1 n0 i0=1 ϕ 0,i1,i0 (x i0 ) • • •   ,(2.8
) which is quite cumbersome. In contrast, our abstraction of KAN layers and their visualizations are cleaner and intuitive. The original Kolmogorov-Arnold representation Eq. (2.1) corresponds to a 2-Layer KAN with shape [n, 2n + 1, 1]. Notice that all the operations are differentiable, so we can train KANs with back propagation. For comparison, an MLP can be written as interleaving of affine transformations W and non-linearities σ:
MLP(x) = (W L-1 • σ • W L-2 • σ • • • • • W 1 • σ • W 0 )x.
(2.9)
It is clear that MLPs treat linear transformations and nonlinearities separately as W and σ, while KANs treat them all together in Φ. In Figure 0.1 (c) and (d), we visualize a three-layer MLP and a three-layer KAN, to clarify their differences.
Implementation details. Although a KAN layer Eq. (2.5) looks extremely simple, it is non-trivial to make it well optimizable. The key tricks are:
(1) Residual activation functions. We include a basis function b(x) (similar to residual connections) such that the activation function ϕ(x) is the sum of the basis function b(x) and the spline function:
ϕ(x) = w b b(x) + w s spline(x).(2.10)
We set
b(x) = silu(x) = x/(1 + e -x ) (2.11)
in most cases. spline(x) is parametrized as a linear combination of B-splines such that
spline(x) = i c i B i (x)(2.12)
where c i s are trainable (see Figure 2.2 for an illustration). In principle w b and w s are redundant since it can be absorbed into b(x) and spline(x). However, we still include these factors (which are by default trainable) to better control the overall magnitude of the activation function.
(2) Initialization scales. Each activation function is initialized to have w s = 1 and spline(x) ≈ 02 . w b is initialized according to the Xavier initialization, which has been used to initialize linear layers in MLPs.
(3) Update of spline grids. We update each grid on the fly according to its input activations, to address the issue that splines are defined on bounded regions but activation values can evolve out of the fixed region during training3 .
Parameter count. For simplicity, let us assume a network
(1) of depth L,
(2) with layers of equal width
n 0 = n 1 = • • • = n L = N ,(3)
with each spline of order k (usually k = 3) on G intervals (for G + 1 grid points).
Then there are in total</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# KAN architecture`,
                    msg: String.raw`<div class="markdown-body"><p>在KAN架构中，Φ_l表示第l层KAN的函数矩阵。一个一般的KAN网络是由L层复合而成：给定输入向量 <font color="#00FF00">$</font><font color="#FF00FF">x_0 \in \mathbb{R}^{n_0}</font><font color="#00FF00">$</font>，KAN的输出为
<font color="#00FF00">$$</font><font color="#FF00FF"> KAN(x) = (\Phi_{L-1} \circ \Phi_{L-2} \circ \ldots \circ \Phi_1 \circ \Phi_0)x. \quad (2.7) </font><font color="#00FF00">$$</font>
我们也能重写上述等式，使其更类似于（2.1）式，假设输出维度 <font color="#00FF00">$</font><font color="#FF00FF">n_L = 1</font><font color="#00FF00">$</font>，并定义 <font color="#00FF00">$</font><font color="#FF00FF">f(x) \equiv KAN(x)</font><font color="#00FF00">$</font> 为：
<font color="#00FF00">$$</font><font color="#FF00FF"> f(x) = \sum_{i_{L-1}=1}^{n_{L-1}} \phi_{L-1,i_{L-1},i_{L}} \left( \sum_{i_{L-2}=1}^{n_{L-2}} \ldots \left( \sum_{i_1=1}^{n_1} \phi_{1,i_2,i_1} \left( \sum_{i_0=1}^{n_0} \phi_{0,i_1,i_0}(x_{i_0}) \right) \ldots \right) \right), \quad (2.8) </font><font color="#00FF00">$$</font>
这一表达式颇为烦琐。相比之下，我们对KAN层的抽象及其可视化更为简洁直观。原始的Kolmogorov-Arnold表示式（2.1）对应于一个形状为[n, 2n + 1, 1]的两层KAN网络。值得注意的是，所有操作都是可微分的，因此我们可以使用反向传播来训练KANs。为了比较，多层感知机（MLP）可以写作线性变换<font color="#00FF00">$</font><font color="#FF00FF">W</font><font color="#00FF00">$</font>与非线性激活<font color="#00FF00">$</font><font color="#FF00FF">\sigma</font><font color="#00FF00">$</font>的交织结构：
<font color="#00FF00">$$</font><font color="#FF00FF"> MLP(x) = (W_{L-1} \circ \sigma \circ W_{L-2} \circ \sigma \ldots \circ W_1 \circ \sigma \circ W_0)x. \quad (2.9) </font><font color="#00FF00">$$</font>
显然，MLP将线性变换和非线性分别处理，而KAN把它们统一在Φ中处理。图0.1(c)和(d)可视化了一个三层MLP和一个三层KAN，以明确它们的区别。</p>
<h3>实现细节</h3>
<p>虽然KAN层方程(2.5)看似极其简单，但要使其优化良好却非易事。关键技巧包括：</p>
<ol>
<li>
<p><strong>残差激活函数</strong>。我们包含一个基函数<font color="#00FF00">$</font><font color="#FF00FF">b(x)</font><font color="#00FF00">$</font>（类似残差连接），使得激活函数<font color="#00FF00">$</font><font color="#FF00FF">\phi(x)</font><font color="#00FF00">$</font>是基函数<font color="#00FF00">$</font><font color="#FF00FF">b(x)</font><font color="#00FF00">$</font>和样条函数的和：
<font color="#00FF00">$$</font><font color="#FF00FF"> \phi(x) = w_b b(x) + w_s \text{spline}(x). \quad (2.10) </font><font color="#00FF00">$$</font>
通常我们设置
<font color="#00FF00">$$</font><font color="#FF00FF"> b(x) = \text{silu}(x) = \frac{x}{1 + e^{-x}}. \quad (2.11) </font><font color="#00FF00">$$</font>
<font color="#00FF00">$</font><font color="#FF00FF">\text{spline}(x)</font><font color="#00FF00">$</font>被参数化为B样条的线性组合，即
<font color="#00FF00">$$</font><font color="#FF00FF"> \text{spline}(x) = \sum_i c_i B_i(x), \quad (2.12) </font><font color="#00FF00">$$</font>
其中<font color="#00FF00">$</font><font color="#FF00FF">c_i</font><font color="#00FF00">$</font>s是可训练的（见图2.2示例）。原则上讲，<font color="#00FF00">$</font><font color="#FF00FF">w_b</font><font color="#00FF00">$</font>和<font color="#00FF00">$</font><font color="#FF00FF">w_s</font><font color="#00FF00">$</font>是冗余的，因为它们可以被吸收进<font color="#00FF00">$</font><font color="#FF00FF">b(x)</font><font color="#00FF00">$</font>和<font color="#00FF00">$</font><font color="#FF00FF">\text{spline}(x)</font><font color="#00FF00">$</font>中。然而，我们仍保留这些因子（默认情况下它们是可训练的），以便更好地控制激活函数的整体幅度。</p>
</li>
<li>
<p><strong>初始化尺度</strong>。每个激活函数初始化时让<font color="#00FF00">$</font><font color="#FF00FF">w_s = 1</font><font color="#00FF00">$</font>且<font color="#00FF00">$</font><font color="#FF00FF">\text{spline}(x) \approx 0^2</font><font color="#00FF00">$</font>。<font color="#00FF00">$</font><font color="#FF00FF">w_b</font><font color="#00FF00">$</font>根据Xavier初始化进行设置，该方法已被用于初始化MLP中的线性层。</p>
</li>
<li>
<p><strong>样条网格的更新</strong>。我们根据输入激活的情况动态更新每个网格，以解决样条定义在有界的区域上，但在训练过程中激活值可能超出固定区域的问题。</p>
</li>
</ol>
<h3>参数数量</h3>
<p>为简化起见，假设网络：</p>
<ul>
<li>（1）深度为L；</li>
<li>（2）各层宽度相等，<font color="#00FF00">$</font><font color="#FF00FF">n_0 = n_1 = \ldots = n_L = N</font><font color="#00FF00">$</font>；</li>
<li>每个样条函数阶数为k（通常<font color="#00FF00">$</font><font color="#FF00FF">k=3</font><font color="#00FF00">$</font>），跨越G个区间（对G + 1个网格点）。</li>
</ul>
<p>那么总共的参数数量为...（此处因篇幅限制未给出直接计算结果，按照上述假设应包括基于KAN网络结构的具体参数统计公式，考虑到每层的激活函数数量、每个激活函数参数数量等因素进行综合评估）。</p><hr /><p>在KAN架构中，Φ_l表示第l层KAN的函数矩阵。一个一般的KAN网络是由L层复合而成：给定输入向量 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>x</mi><mn>0</mn></msub><mo>&#x02208;</mo><msup><mi>&#x0211D;</mi><mrow><msub><mi>n</mi><mn>0</mn></msub></mrow></msup></mrow></math>，KAN的输出为
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>K</mi><mi>A</mi><mi>N</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>&#x003A6;</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x02218;</mo><msub><mi>&#x003A6;</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>2</mn></mrow></msub><mo>&#x02218;</mo><mi>&#x02026;</mi><mo>&#x02218;</mo><msub><mi>&#x003A6;</mi><mn>1</mn></msub><mo>&#x02218;</mo><msub><mi>&#x003A6;</mi><mn>0</mn></msub><mo stretchy="false">&#x00029;</mo><mi>x</mi><mo>&#x0002E;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.7</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
我们也能重写上述等式，使其更类似于（2.1）式，假设输出维度 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>n</mi><mi>L</mi></msub><mo>&#x0003D;</mo><mn>1</mn></mrow></math>，并定义 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x02261;</mo><mi>K</mi><mi>A</mi><mi>N</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math> 为：
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><msubsup><mo>&#x02211;</mo><mrow><msub><mi>i</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x0003D;</mo><mn>1</mn></mrow><mrow><msub><mi>n</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub></mrow></msubsup><msub><mi>&#x003D5;</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn><mo>&#x0002C;</mo><msub><mi>i</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x0002C;</mo><msub><mi>i</mi><mrow><mi>L</mi></mrow></msub></mrow></msub><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><msubsup><mo>&#x02211;</mo><mrow><msub><mi>i</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>2</mn></mrow></msub><mo>&#x0003D;</mo><mn>1</mn></mrow><mrow><msub><mi>n</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>2</mn></mrow></msub></mrow></msubsup><mi>&#x02026;</mi><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><msubsup><mo>&#x02211;</mo><mrow><msub><mi>i</mi><mn>1</mn></msub><mo>&#x0003D;</mo><mn>1</mn></mrow><mrow><msub><mi>n</mi><mn>1</mn></msub></mrow></msubsup><msub><mi>&#x003D5;</mi><mrow><mn>1</mn><mo>&#x0002C;</mo><msub><mi>i</mi><mn>2</mn></msub><mo>&#x0002C;</mo><msub><mi>i</mi><mn>1</mn></msub></mrow></msub><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><msubsup><mo>&#x02211;</mo><mrow><msub><mi>i</mi><mn>0</mn></msub><mo>&#x0003D;</mo><mn>1</mn></mrow><mrow><msub><mi>n</mi><mn>0</mn></msub></mrow></msubsup><msub><mi>&#x003D5;</mi><mrow><mn>0</mn><mo>&#x0002C;</mo><msub><mi>i</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>i</mi><mn>0</mn></msub></mrow></msub><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mrow><msub><mi>i</mi><mn>0</mn></msub></mrow></msub><mo stretchy="false">&#x00029;</mo><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow><mi>&#x02026;</mi><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow><mo>&#x0002C;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.8</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
这一表达式颇为烦琐。相比之下，我们对KAN层的抽象及其可视化更为简洁直观。原始的Kolmogorov-Arnold表示式（2.1）对应于一个形状为[n, 2n + 1, 1]的两层KAN网络。值得注意的是，所有操作都是可微分的，因此我们可以使用反向传播来训练KANs。为了比较，多层感知机（MLP）可以写作线性变换<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>W</mi></mrow></math>与非线性激活<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003C3;</mi></mrow></math>的交织结构：
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>M</mi><mi>L</mi><mi>P</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>W</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x02218;</mo><mi>&#x003C3;</mi><mo>&#x02218;</mo><msub><mi>W</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>2</mn></mrow></msub><mo>&#x02218;</mo><mi>&#x003C3;</mi><mi>&#x02026;</mi><mo>&#x02218;</mo><msub><mi>W</mi><mn>1</mn></msub><mo>&#x02218;</mo><mi>&#x003C3;</mi><mo>&#x02218;</mo><msub><mi>W</mi><mn>0</mn></msub><mo stretchy="false">&#x00029;</mo><mi>x</mi><mo>&#x0002E;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.9</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
显然，MLP将线性变换和非线性分别处理，而KAN把它们统一在Φ中处理。图0.1(c)和(d)可视化了一个三层MLP和一个三层KAN，以明确它们的区别。</p>
<h3>实现细节</h3>
<p>虽然KAN层方程(2.5)看似极其简单，但要使其优化良好却非易事。关键技巧包括：</p>
<ol>
<li>
<p><strong>残差激活函数</strong>。我们包含一个基函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>b</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>（类似残差连接），使得激活函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003D5;</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>是基函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>b</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>和样条函数的和：
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>&#x003D5;</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><msub><mi>w</mi><mi>b</mi></msub><mi>b</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><msub><mi>w</mi><mi>s</mi></msub><mtext>spline</mtext><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0002E;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.10</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
通常我们设置
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>b</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mtext>silu</mtext><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mfrac><mrow><mi>x</mi></mrow><mrow><mn>1</mn><mo>&#x0002B;</mo><msup><mi>e</mi><mrow><mo>&#x02212;</mo><mi>x</mi></mrow></msup></mrow></mfrac><mo>&#x0002E;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.11</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mtext>spline</mtext><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>被参数化为B样条的线性组合，即
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mtext>spline</mtext><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><msub><mo>&#x02211;</mo><mi>i</mi></msub><msub><mi>c</mi><mi>i</mi></msub><msub><mi>B</mi><mi>i</mi></msub><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0002C;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.12</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
其中<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>c</mi><mi>i</mi></msub></mrow></math>s是可训练的（见图2.2示例）。原则上讲，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>w</mi><mi>b</mi></msub></mrow></math>和<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>w</mi><mi>s</mi></msub></mrow></math>是冗余的，因为它们可以被吸收进<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>b</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>和<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mtext>spline</mtext><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>中。然而，我们仍保留这些因子（默认情况下它们是可训练的），以便更好地控制激活函数的整体幅度。</p>
</li>
<li>
<p><strong>初始化尺度</strong>。每个激活函数初始化时让<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>w</mi><mi>s</mi></msub><mo>&#x0003D;</mo><mn>1</mn></mrow></math>且<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mtext>spline</mtext><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x02248;</mo><msup><mn>0</mn><mn>2</mn></msup></mrow></math>。<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>w</mi><mi>b</mi></msub></mrow></math>根据Xavier初始化进行设置，该方法已被用于初始化MLP中的线性层。</p>
</li>
<li>
<p><strong>样条网格的更新</strong>。我们根据输入激活的情况动态更新每个网格，以解决样条定义在有界的区域上，但在训练过程中激活值可能超出固定区域的问题。</p>
</li>
</ol>
<h3>参数数量</h3>
<p>为简化起见，假设网络：</p>
<ul>
<li>（1）深度为L；</li>
<li>（2）各层宽度相等，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>n</mi><mn>0</mn></msub><mo>&#x0003D;</mo><msub><mi>n</mi><mn>1</mn></msub><mo>&#x0003D;</mo><mi>&#x02026;</mi><mo>&#x0003D;</mo><msub><mi>n</mi><mi>L</mi></msub><mo>&#x0003D;</mo><mi>N</mi></mrow></math>；</li>
<li>每个样条函数阶数为k（通常<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>k</mi><mo>&#x0003D;</mo><mn>3</mn></mrow></math>），跨越G个区间（对G + 1个网格点）。</li>
</ul>
<p>那么总共的参数数量为...（此处因篇幅限制未给出直接计算结果，按照上述假设应包括基于KAN网络结构的具体参数统计公式，考虑到每层的激活函数数量、每个激活函数参数数量等因素进行综合评估）。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# KAN architecture Part-3`,
                    msg: String.raw`<div class="markdown-body"><p>O(N 2 L(G + k)) ∼ O(N 2
LG) parameters. In contrast, an MLP with depth L and width N only needs O(N 2 L) parameters, which appears to be more efficient than KAN. Fortunately, KANs usually require much smaller N than MLPs, which not only saves parameters, but also achieves better generalization (see e.g., Figure 3.1 and 3.3) and facilitates interpretability. We remark that for 1D problems, we can take N = L = 1 and the KAN network in our implementation is nothing but a spline approximation. For higher dimensions, we characterize the generalization behavior of KANs with a theorem below.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# KAN architecture`,
                    msg: String.raw`<div class="markdown-body"><p>KAN架构 Part-3:</p>
<p>KAN需要大约(O(N^2L(G+k)) \sim O(N^2LG))个参数。相比之下，一个深度为L、宽度为N的多层感知器(MLP)仅需(O(N^2L))个参数，这看上去比KAN更有效率。幸运的是，KAN通常需要比MLP小得多的N值，这不仅节省了参数数量，还实现了更好的泛化性能（例如，参见图3.1和3.3）并促进了可解释性。我们特别指出，对于一维问题，我们可以取(N=L=1)，在这种情况下，我们实现的KAN网络实质上就是一条样条曲线近似。对于更高维度的问题，我们通过以下定理来描述KAN的泛化行为。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# KAN's Approximatio`,
                    msg: String.raw`<div class="markdown-body"><h1>KAN's Approximation Abilities and Scaling Laws Part-1</h1>
<p>Recall that in Eq. (2.1), the 2-Layer width-(2n + 1) representation may be non-smooth. However, deeper representations may bring the advantages of smoother activations. For example, the 4-variable function
f (x 1 , x 2 , x 3 , x 4 ) = exp sin(x 2 1 + x 2 2 ) + sin(x 2 3 + x 2 4 ) (2.13)
can be smoothly represented by a [4, 2, 1, 1] KAN which is 3-Layer, but may not admit a 2-Layer KAN with smooth activations. To facilitate an approximation analysis, we still assume smoothness of activations, but allow the representations to be arbitrarily wide and deep, as in Eq. (2.7). To emphasize the dependence of our KAN on the finite set of grid points, we use Φ G l and Φ G l,i,j below to replace the notation Φ l and Φ l,i,j used in Eq. (2.5) and (2.6).
Theorem 2.1 (Approximation theory, KAT). Let x = (x 1 , x 2 , • • • , x n ). Suppose that a function f (x) admits a representation f = (Φ L-1 • Φ L-2 • • • • • Φ 1 • Φ 0 )x ,(2.14)
as in Eq. (2.7), where each one of the Φ l,i,j are (k + 1)-times continuously differentiable. Then there exists a constant C depending on f and its representation, such that we have the following approximation bound in terms of the grid size G: there exist k-th order B-spline functions Φ G l,i,j such that for any 0 ≤ m ≤ k, we have the bound
∥f -(Φ G L-1 • Φ G L-2 • • • • • Φ G 1 • Φ G 0 )x∥ C m ≤ CG -k-1+m . (2.15)
Here we adopt the notation of C m -norm measuring the magnitude of derivatives up to order m:
∥g∥ C m = max |β|≤m sup x∈[0,1] n D β g(x) .</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# KAN's Approximation Abilities and Scaling Laws`,
                    msg: String.raw`<div class="markdown-body"><p>回忆在等式(2.1)中，两层宽度为(2n + 1)的表示可能不平滑。然而，更深的表示可以带来激活函数更平滑的优点。例如，四变量函数</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF">f(x_1, x_2, x_3, x_4) = \exp(\sin(x_1^2 + x_2^2)) + \sin(x_3^2 + x_4^2)\quad(2.13)</font><font color="#00FF00">$$</font>
</p>
<p>可以通过一个三层、宽度配置为[4, 2, 1, 1]的KAN平滑表示，而可能无法通过具有平滑激活函数的两层KAN来表示。为了便于进行逼近分析，我们仍然假设激活函数的平滑性，但允许表示形式任意宽和深，如等式(2.7)所示。为了强调我们的KAN依赖于有限的网格点集，我们在下文使用<font color="#00FF00">$</font><font color="#FF00FF">\Phi_{Gl}</font><font color="#00FF00">$</font>和<font color="#00FF00">$</font><font color="#FF00FF">\Phi_{Gl,i,j}</font><font color="#00FF00">$</font>替代等式(2.5)和(2.6)中使用的符号<font color="#00FF00">$</font><font color="#FF00FF">\Phi_l</font><font color="#00FF00">$</font>和<font color="#00FF00">$</font><font color="#FF00FF">\Phi_{l,i,j}</font><font color="#00FF00">$</font>。</p>
<p><strong>定理2.1</strong>（近似理论，KAT）。设<font color="#00FF00">$</font><font color="#FF00FF">x=(x_1, x_2, \ldots, x_n)</font><font color="#00FF00">$</font>。假设函数<font color="#00FF00">$</font><font color="#FF00FF">f(x)</font><font color="#00FF00">$</font>具有如下形式的表示</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF">f = (\Phi_{L-1} \circ \Phi_{L-2} \circ \cdots \circ \Phi_1 \circ \Phi_0)x\quad(2.14)</font><font color="#00FF00">$$</font>
</p>
<p>如同等式(2.7)所述，其中每个<font color="#00FF00">$</font><font color="#FF00FF">\Phi_{l,i,j}</font><font color="#00FF00">$</font>都是<font color="#00FF00">$</font><font color="#FF00FF">(k+1)</font><font color="#00FF00">$</font>阶连续可微的。则存在一个依赖于<font color="#00FF00">$</font><font color="#FF00FF">f</font><font color="#00FF00">$</font>及其表示的常数<font color="#00FF00">$</font><font color="#FF00FF">C</font><font color="#00FF00">$</font>，使得根据网格大小<font color="#00FF00">$</font><font color="#FF00FF">G</font><font color="#00FF00">$</font>存在以下近似界限：存在<font color="#00FF00">$</font><font color="#FF00FF">k</font><font color="#00FF00">$</font>阶B样条函数<font color="#00FF00">$</font><font color="#FF00FF">\Phi_{Gl,i,j}</font><font color="#00FF00">$</font>，使得对于任何<font color="#00FF00">$</font><font color="#FF00FF">0 \leq m \leq k</font><font color="#00FF00">$</font>，有界</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF">\|f - (\Phi_{G{L-1}} \circ \Phi_{G{L-2}} \circ \cdots \circ \Phi_{G1} \circ \Phi_{G0})x\|_{C^m} \leq CG^{-(k+1)+m}.\quad(2.15)</font><font color="#00FF00">$$</font>
</p>
<p>这里采用<font color="#00FF00">$</font><font color="#FF00FF">C^m</font><font color="#00FF00">$</font>范数的表示，用来衡量直到<font color="#00FF00">$</font><font color="#FF00FF">m</font><font color="#00FF00">$</font>阶导数的大小：</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF">\|g\|_{C^m} = \max_{|\beta| \leq m} \sup_{x \in [0,1]^n} |D^\beta g(x)|.</font><font color="#00FF00">$$</font>
</p><hr /><p>回忆在等式(2.1)中，两层宽度为(2n + 1)的表示可能不平滑。然而，更深的表示可以带来激活函数更平滑的优点。例如，四变量函数</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>2</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>3</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>4</mn></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>exp</mi><mo stretchy="false">&#x00028;</mo><mi>sin</mi><mo stretchy="false">&#x00028;</mo><msubsup><mi>x</mi><mn>1</mn><mn>2</mn></msubsup><mo>&#x0002B;</mo><msubsup><mi>x</mi><mn>2</mn><mn>2</mn></msubsup><mo stretchy="false">&#x00029;</mo><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><mi>sin</mi><mo stretchy="false">&#x00028;</mo><msubsup><mi>x</mi><mn>3</mn><mn>2</mn></msubsup><mo>&#x0002B;</mo><msubsup><mi>x</mi><mn>4</mn><mn>2</mn></msubsup><mo stretchy="false">&#x00029;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.13</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
</p>
<p>可以通过一个三层、宽度配置为[4, 2, 1, 1]的KAN平滑表示，而可能无法通过具有平滑激活函数的两层KAN来表示。为了便于进行逼近分析，我们仍然假设激活函数的平滑性，但允许表示形式任意宽和深，如等式(2.7)所示。为了强调我们的KAN依赖于有限的网格点集，我们在下文使用<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003A6;</mi><mrow><mi>G</mi><mi>l</mi></mrow></msub></mrow></math>和<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003A6;</mi><mrow><mi>G</mi><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub></mrow></math>替代等式(2.5)和(2.6)中使用的符号<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003A6;</mi><mi>l</mi></msub></mrow></math>和<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub></mrow></math>。</p>
<p><strong>定理2.1</strong>（近似理论，KAT）。设<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>x</mi><mo>&#x0003D;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>2</mn></msub><mo>&#x0002C;</mo><mi>&#x02026;</mi><mo>&#x0002C;</mo><msub><mi>x</mi><mi>n</mi></msub><mo stretchy="false">&#x00029;</mo></mrow></math>。假设函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>具有如下形式的表示</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>f</mi><mo>&#x0003D;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>&#x003A6;</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x02218;</mo><msub><mi>&#x003A6;</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>2</mn></mrow></msub><mo>&#x02218;</mo><mo>&#x022EF;</mo><mo>&#x02218;</mo><msub><mi>&#x003A6;</mi><mn>1</mn></msub><mo>&#x02218;</mo><msub><mi>&#x003A6;</mi><mn>0</mn></msub><mo stretchy="false">&#x00029;</mo><mi>x</mi><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.14</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
</p>
<p>如同等式(2.7)所述，其中每个<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub></mrow></math>都是<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mo stretchy="false">&#x00028;</mo><mi>k</mi><mo>&#x0002B;</mo><mn>1</mn><mo stretchy="false">&#x00029;</mo></mrow></math>阶连续可微的。则存在一个依赖于<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi></mrow></math>及其表示的常数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>C</mi></mrow></math>，使得根据网格大小<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>G</mi></mrow></math>存在以下近似界限：存在<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>k</mi></mrow></math>阶B样条函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003A6;</mi><mrow><mi>G</mi><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub></mrow></math>，使得对于任何<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mn>0</mn><mo>&#x02264;</mo><mi>m</mi><mo>&#x02264;</mo><mi>k</mi></mrow></math>，有界</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mo fence="false" stretchy="false">&#x02016;</mo><mi>f</mi><mo>&#x02212;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>&#x003A6;</mi><mrow><mi>G</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></mrow></msub><mo>&#x02218;</mo><msub><mi>&#x003A6;</mi><mrow><mi>G</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>2</mn></mrow></mrow></msub><mo>&#x02218;</mo><mo>&#x022EF;</mo><mo>&#x02218;</mo><msub><mi>&#x003A6;</mi><mrow><mi>G</mi><mn>1</mn></mrow></msub><mo>&#x02218;</mo><msub><mi>&#x003A6;</mi><mrow><mi>G</mi><mn>0</mn></mrow></msub><mo stretchy="false">&#x00029;</mo><mi>x</mi><msub><mo fence="false" stretchy="false">&#x02016;</mo><mrow><msup><mi>C</mi><mi>m</mi></msup></mrow></msub><mo>&#x02264;</mo><mi>C</mi><msup><mi>G</mi><mrow><mo>&#x02212;</mo><mo stretchy="false">&#x00028;</mo><mi>k</mi><mo>&#x0002B;</mo><mn>1</mn><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><mi>m</mi></mrow></msup><mo>&#x0002E;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>2.15</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
</p>
<p>这里采用<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msup><mi>C</mi><mi>m</mi></msup></mrow></math>范数的表示，用来衡量直到<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>m</mi></mrow></math>阶导数的大小：</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mo fence="false" stretchy="false">&#x02016;</mo><mi>g</mi><msub><mo fence="false" stretchy="false">&#x02016;</mo><mrow><msup><mi>C</mi><mi>m</mi></msup></mrow></msub><mo>&#x0003D;</mo><msub><mo>max</mo><mrow><mo stretchy="false">&#x0007C;</mo><mi>&#x003B2;</mi><mo stretchy="false">&#x0007C;</mo><mo>&#x02264;</mo><mi>m</mi></mrow></msub><msub><mo>sup</mo><mrow><mi>x</mi><mo>&#x02208;</mo><mo stretchy="false">[</mo><mn>0</mn><mo>&#x0002C;</mo><mn>1</mn><msup><mo stretchy="false">]</mo><mi>n</mi></msup></mrow></msub><mo stretchy="false">&#x0007C;</mo><msup><mi>D</mi><mi>&#x003B2;</mi></msup><mi>g</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo stretchy="false">&#x0007C;</mo><mo>&#x0002E;</mo></mrow></math>
</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# KAN's Approximatio`,
                    msg: String.raw`<div class="markdown-body"><h1>KAN's Approximation Abilities and Scaling Laws Part-2</h1>
<p>Proof. By the classical 1D B-spline theory [22] and the fact that Φ l,i,j as continuous functions can be uniformly bounded on a bounded domain, we know that there exist finite-grid B-spline functions Φ G l,i,j such that for any 0 ≤ m ≤ k,
∥(Φ l,i,j •Φ l-1 •Φ l-2 •• • ••Φ 1 •Φ 0 )x-(Φ G l,i,j •Φ l-1 •Φ l-2 •• • ••Φ 1 •Φ 0 )x∥ C m ≤ CG -k-1+m ,
with a constant C independent of G. We fix those B-spline approximations. Therefore we have that the residue R l defined via
R l := (Φ G L-1 • • • • • Φ G l+1 • Φ l • Φ l-1 • • • • • Φ 0 )x -(Φ G L-1 • • • • • Φ G l+1 • Φ G l • Φ l-1 • • • • • Φ 0 )x satisfies ∥R l ∥ C m ≤ CG -k-1+m , with a constant independent of G. Finally notice that f -(Φ G L-1 • Φ G L-2 • • • • • Φ G 1 • Φ G 0 )x = R L-1 + R L-2 + • • • + R 1 + R 0 ,
we know that (2.15) holds.
We know that asymptotically, provided that the assumption in Theorem 2.1 holds, KANs with finite grid size can approximate the function well with a residue rate independent of the dimension, hence beating curse of dimensionality! This comes naturally since we only use splines to approximate 1D functions. In particular, for m = 0, we recover the accuracy in L ∞ norm, which in turn provides a bound of RMSE on the finite domain, which gives a scaling exponent k + 1. Of course, the constant C is dependent on the representation; hence it will depend on the dimension. We will leave the discussion of the dependence of the constant on the dimension as a future work.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# KAN's Approximation Abilities and Scaling Laws`,
                    msg: String.raw`<div class="markdown-body"><p>证明。根据经典的1维B-样条理论[22]以及<font color="#00FF00">$</font><font color="#FF00FF">\Phi_{l,i,j}</font><font color="#00FF00">$</font>作为在有界域上连续的函数可以被一致有界这一事实，我们知道存在有限网格B-样条函数<font color="#00FF00">$</font><font color="#FF00FF">\Phi_{G_l,i,j}</font><font color="#00FF00">$</font>，使得对于任何<font color="#00FF00">$</font><font color="#FF00FF">0 \leq m \leq k</font><font color="#00FF00">$</font>，
<font color="#00FF00">$$</font><font color="#FF00FF"></br>\left\| \left(\Phi_{l,i,j} * \Phi_{l-1} * \Phi_{l-2} * \cdots * \Phi_1 * \Phi_0\right)(x) - \left(\Phi_{G_l,i,j} * \Phi_{l-1} * \Phi_{l-2} * \cdots * \Phi_1 * \Phi_0\right)(x) \right\|_{C^m} \leq CG^{-k-1+m},</br></font><font color="#00FF00">$$</font>
其中<font color="#00FF00">$</font><font color="#FF00FF">C</font><font color="#00FF00">$</font>是一个与<font color="#00FF00">$</font><font color="#FF00FF">G</font><font color="#00FF00">$</font>无关的常数。我们固定这些B-样条近似。因此，我们得到通过
<font color="#00FF00">$$</font><font color="#FF00FF"></br>R_l := (\Phi_{G_{L-1}} * \cdots * \Phi_{G_{l+1}} * \Phi_l * \Phi_{l-1} * \cdots * \Phi_0)(x) - (\Phi_{G_{L-1}} * \cdots * \Phi_{G_{l+1}} * \Phi_{G_l} * \Phi_{l-1} * \cdots * \Phi_0)(x)</br></font><font color="#00FF00">$$</font>
定义的残差<font color="#00FF00">$</font><font color="#FF00FF">R_l</font><font color="#00FF00">$</font>满足<font color="#00FF00">$</font><font color="#FF00FF">\|R_l\|_{C^m} \leq CG^{-k-1+m}</font><font color="#00FF00">$</font>，其中常数独立于<font color="#00FF00">$</font><font color="#FF00FF">G</font><font color="#00FF00">$</font>。最后注意到<font color="#00FF00">$</font><font color="#FF00FF">f - (\Phi_{G_{L-1}} * \Phi_{G_{L-2}} * \cdots * \Phi_{G_1} * \Phi_{G_0})(x) = R_{L-1} + R_{L-2} + \cdots + R_1 + R_0</font><font color="#00FF00">$</font>，
由此我们知道(2.15)式成立。</p>
<p>我们了解到，从渐进的角度看，假设定理2.1中的条件成立，具有有限网格大小的KAN能够以一个与维度无关的残差率很好地逼近函数，从而克服维度灾难！这一点自然而然地得出，因为我们仅使用样条来逼近1维函数。特别地，对于<font color="#00FF00">$</font><font color="#FF00FF">m=0</font><font color="#00FF00">$</font>时，我们在<font color="#00FF00">$</font><font color="#FF00FF">L^\infty</font><font color="#00FF00">$</font>范数下恢复了精度，进而为有限域上的均方根误差（RMSE）提供了一个界限，给出了尺度指数<font color="#00FF00">$</font><font color="#FF00FF">k+1</font><font color="#00FF00">$</font>。当然，常数<font color="#00FF00">$</font><font color="#FF00FF">C</font><font color="#00FF00">$</font>取决于表示形式，因此它将依赖于维度。我们将对常数关于维度的依赖性的讨论留给未来的研究工作。</p><hr /><p>证明。根据经典的1维B-样条理论[22]以及<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub></mrow></math>作为在有界域上连续的函数可以被一致有界这一事实，我们知道存在有限网格B-样条函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mi>l</mi></msub><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub></mrow></math>，使得对于任何<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mn>0</mn><mo>&#x02264;</mo><mi>m</mi><mo>&#x02264;</mo><mi>k</mi></mrow></math>，
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mrow><mo stretchy="true" fence="true" form="prefix">&#x02016;</mo><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x02212;</mo><mn>2</mn></mrow></msub><mo>&#x0002A;</mo><mo>&#x022EF;</mo><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mn>1</mn></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mn>0</mn></msub><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x02212;</mo><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mi>l</mi></msub><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x02212;</mo><mn>2</mn></mrow></msub><mo>&#x0002A;</mo><mo>&#x022EF;</mo><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mn>1</mn></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mn>0</mn></msub><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo stretchy="true" fence="true" form="postfix">&#x02016;</mo></mrow><mrow><msup><mi>C</mi><mi>m</mi></msup></mrow></msub><mo>&#x02264;</mo><mi>C</mi><msup><mi>G</mi><mrow><mo>&#x02212;</mo><mi>k</mi><mo>&#x02212;</mo><mn>1</mn><mo>&#x0002B;</mo><mi>m</mi></mrow></msup><mo>&#x0002C;</mo></mrow></math>
其中<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>C</mi></mrow></math>是一个与<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>G</mi></mrow></math>无关的常数。我们固定这些B-样条近似。因此，我们得到通过
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mi>R</mi><mi>l</mi></msub><mi>:</mi><mo>&#x0003D;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub></mrow></msub><mo>&#x0002A;</mo><mo>&#x022EF;</mo><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mrow><mi>l</mi><mo>&#x0002B;</mo><mn>1</mn></mrow></msub></mrow></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mi>l</mi></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x0002A;</mo><mo>&#x022EF;</mo><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mn>0</mn></msub><mo stretchy="false">&#x00029;</mo><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x02212;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub></mrow></msub><mo>&#x0002A;</mo><mo>&#x022EF;</mo><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mrow><mi>l</mi><mo>&#x0002B;</mo><mn>1</mn></mrow></msub></mrow></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mi>l</mi></msub></mrow></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><mi>l</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x0002A;</mo><mo>&#x022EF;</mo><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mn>0</mn></msub><mo stretchy="false">&#x00029;</mo><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>
定义的残差<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>R</mi><mi>l</mi></msub></mrow></math>满足<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mo fence="false" stretchy="false">&#x02016;</mo><msub><mi>R</mi><mi>l</mi></msub><msub><mo fence="false" stretchy="false">&#x02016;</mo><mrow><msup><mi>C</mi><mi>m</mi></msup></mrow></msub><mo>&#x02264;</mo><mi>C</mi><msup><mi>G</mi><mrow><mo>&#x02212;</mo><mi>k</mi><mo>&#x02212;</mo><mn>1</mn><mo>&#x0002B;</mo><mi>m</mi></mrow></msup></mrow></math>，其中常数独立于<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>G</mi></mrow></math>。最后注意到<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo>&#x02212;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub></mrow></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>2</mn></mrow></msub></mrow></msub><mo>&#x0002A;</mo><mo>&#x022EF;</mo><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mn>1</mn></msub></mrow></msub><mo>&#x0002A;</mo><msub><mi>&#x003A6;</mi><mrow><msub><mi>G</mi><mn>0</mn></msub></mrow></msub><mo stretchy="false">&#x00029;</mo><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><msub><mi>R</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msub><mo>&#x0002B;</mo><msub><mi>R</mi><mrow><mi>L</mi><mo>&#x02212;</mo><mn>2</mn></mrow></msub><mo>&#x0002B;</mo><mo>&#x022EF;</mo><mo>&#x0002B;</mo><msub><mi>R</mi><mn>1</mn></msub><mo>&#x0002B;</mo><msub><mi>R</mi><mn>0</mn></msub></mrow></math>，
由此我们知道(2.15)式成立。</p>
<p>我们了解到，从渐进的角度看，假设定理2.1中的条件成立，具有有限网格大小的KAN能够以一个与维度无关的残差率很好地逼近函数，从而克服维度灾难！这一点自然而然地得出，因为我们仅使用样条来逼近1维函数。特别地，对于<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>m</mi><mo>&#x0003D;</mo><mn>0</mn></mrow></math>时，我们在<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msup><mi>L</mi><mo>&#x0221E;</mo></msup></mrow></math>范数下恢复了精度，进而为有限域上的均方根误差（RMSE）提供了一个界限，给出了尺度指数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>k</mi><mo>&#x0002B;</mo><mn>1</mn></mrow></math>。当然，常数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>C</mi></mrow></math>取决于表示形式，因此它将依赖于维度。我们将对常数关于维度的依赖性的讨论留给未来的研究工作。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# KAN's Approximatio`,
                    msg: String.raw`<div class="markdown-body"><h1>KAN's Approximation Abilities and Scaling Laws Part-3</h1>
<p>We remark that although the Kolmogorov-Arnold theorem Eq. (2.1) corresponds to a KAN representation with shape [d, 2d + 1, 1], its functions are not necessarily smooth. On the other hand, if we are able to identify a smooth representation (maybe at the cost of extra layers or making the KAN wider than the theory prescribes), then Theorem 2.1 indicates that we can beat the curse of dimensionality (COD). This should not come as a surprise since we can inherently learn the structure of the function and make our finite-sample KAN approximation interpretable.
Neural scaling laws: comparison to other theories. Neural scaling laws are the phenomenon where test loss decreases with more model parameters, i.e., ℓ ∝ N -α where ℓ is test RMSE, N is the number of parameters, and α is the scaling exponent. A larger α promises more improvement by simply scaling up the model. Different theories have been proposed to predict α. Sharma &amp; Kaplan [23] suggest that α comes from data fitting on an input manifold of intrinsic dimensionality d. If the model function class is piecewise polynomials of order k (k = 1 for ReLU), then the standard approximation theory implies α = (k + 1)/d from the approximation theory. This bound suffers from the curse of dimensionality, so people have sought other bounds independent of d by leveraging compositional structures. In particular, Michaud et al. [24] considered computational graphs that only involve unary (e.g., squared, sine, exp) and binary (+ and ×) operations, finding α = (k + 1)/d * = (k + 1)/2, where d * = 2 is the maximum arity. Poggio et al. [19] leveraged the idea of compositional sparsity and proved that given function class W m (function whose derivatives are continuous up to m-th order), one needs N = O(ϵ -2 m ) number of parameters to achieve error ϵ, which is equivalent to α = m 2 . Our approach, which assumes the existence of smooth Kolmogorov-Arnold representations, decomposes the high-dimensional function into several 1D functions, giving α = k+1 (where k is the piecewise polynomial order of the splines). We choose k = 3 cubic splines so α = 4 which is the largest and best scaling exponent compared to other works. We will show in Section 3.1 that this bound α = 4 can in fact be achieved empirically with KANs, while previous work [24] reported that MLPs have problems even saturating slower bounds (e.g., α = 1) and plateau quickly. Of course, we can increase k to match the smoothness of functions, but too high k might be too oscillatory, leading to optimization issues.
Comparison between KAT and UAT. The power of fully-connected neural networks is justified by the universal approximation theorem (UAT), which states that given a function and error tolerance ϵ &gt; 0, a two-layer network with k &gt; N (ϵ) neurons can approximate the function within error ϵ. However, the UAT guarantees no bound for how N (ϵ) scales with ϵ. Indeed, it suffers from the COD, and N has been shown to grow exponentially with d in some cases [20]. The difference between test loss KAT and UAT is a consequence that KANs take advantage of the intrinsically low-dimensional representation of the function while MLPs do not. In KAT, we highlight quantifying the approximation error in the compositional space. In the literature, generalization error bounds, taking into account finite samples of training data, for a similar space have been studied for regression problems; see [25,26], and also specifically for MLPs with ReLU activations [27]. On the other hand, for general function spaces like Sobolev or Besov spaces, the nonlinear n-widths theory [28,29,30] indicates that we can never beat the curse of dimensionality, while MLPs with ReLU activations can achieve the tight rate [31,32,33]. This fact again motivates us to consider functions of compositional structure, the much "nicer" functions that we encounter in practice and in science, to overcome the COD.
G 2 G 3 G 4 KAN [2,
Compared with MLPs, we may use a smaller architecture in practice, since we learn general nonlinear activation functions; see also [27] where the depth of the ReLU MLPs needs to reach at least log n to have the desired rate, where n is the number of samples. Indeed, we will show that KANs are nicely aligned with symbolic functions while MLPs are not.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# KAN's Approximation Abilities and Scaling Laws`,
                    msg: String.raw`<div class="markdown-body"><p>我们注意到，虽然Kolmogorov-Arnold定理公式(2.1)对应于形状为[d, 2d + 1, 1]的KAN表示，但其函数不一定平滑。另一方面，如果我们能够识别出一个平滑的表示（哪怕是以增加层或使KAN比理论规定的更宽为代价），则定理2.1表明，我们可以克服维度诅咒(COD)。这一点不应感到意外，因为我们能够从本质上学习函数的结构，并使我们有限样本的KAN近似可解释。</p>
<p>神经尺度法则：与其他理论的比较。神经尺度法则是指随着模型参数增多，测试损失降低的现象，即 ℓ ∝ N^{-α}，其中ℓ是测试均方根误差(RMSE)，N是参数数量，α是尺度指数。较大的α意味着仅通过扩大模型就能带来更大的改进。不同的理论已被提出以预测α。Sharma &amp; Kaplan [23]认为α来自于在固有维数为d的输入流形上的数据拟合。如果模型函数类是由阶数为k（对于ReLU，k=1）的分段多项式构成，那么标准的近似理论意味着α=d^{-1}(k+1)。此界限受到维度诅咒的影响，因此人们寻求其他不依赖于d的界限，通过利用组合结构来实现。特别地，Michaud等人[24]考虑了只涉及一元（例如，平方、正弦、指数）和二元（加法和乘法）运算的计算图，发现α=d^{<em>^{-1}}(k+1)=2^{-1}(k+1)，其中最大元数d </em>= 2。Poggio等人[19]利用了组合稀疏性的概念，并证明了给定函数类W_m（其导数至多连续到m阶的函数），为了达到误差ε，需要N=O(ε^{-2m})个参数，这等价于α=m^2。我们的方法基于平滑的Kolmogorov-Arnold表示的存在性，将高维函数分解为多个一维函数，给出α=k+1（其中k是样条函数的分段多项式阶）。我们选择k=3的三次样条，因此α=4，这是与其他工作相比最大且最优的尺度指数。我们将在第3.1节中展示，这一α=4的界限实际上可以通过KAN在经验上实现，而之前的工作[24]报告称，MLP即使达到较慢的界限（如α=1）也会迅速达到平台期。</p>
<p>KAT与UAT之间的比较。全连接神经网络的力量得到了通用近似定理(UAT)的证明，该定理指出，给定一个函数及错误容忍度ϵ &gt; 0，一个两层网络k &gt; N(ϵ)个神经元可以将该函数在误差ϵ内近似。然而，UAT不能保证N(ϵ)是如何随ϵ缩放的。事实上，它也受到维度诅咒的影响，并且已显示在某些情况下N随着d呈指数增长[20]。测试损失KAT与UAT之间的区别在于KAN利用了函数本征的低维表示，而MLP则不行。在KAT中，我们强调在组合空间中量化近似误差。文献中，针对回归问题，考虑到有限训练样本的数据，对于类似空间的泛化误差界已有所研究；参见[25, 26]，以及特别针对ReLU激活的MLP[27]。另一方面，对于Sobolev或Besov空间这类一般函数空间，非线性n-宽度理论[28, 29, 30]表明，我们永远无法克服维度诅咒，而具有ReLU激活的MLP可以达到紧致率[31, 32, 33]。这一事实再次激发我们将注意力转向具有组合结构的函数，这类在实践和科学中遇到的更为“良态”的函数，以克服维度诅咒。</p>
<p>与MLP相比，在实际应用中我们可能会使用更小的架构，因为我们在学习一般的非线性激活函数；参看[27]，其中ReLU MLP的深度需至少达到对数n以达到预期的速度，其中n为样本数量。事实上，我们将展示KAN很好地与符号函数对齐，而MLP则没有做到。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# For accuracy: Grid Extension Part-1`,
                    msg: String.raw`<div class="markdown-body"><p>In principle, a spline can be made arbitrarily accurate to a target function as the grid can be made arbitrarily fine-grained. This good feature is inherited by KANs. By contrast, MLPs do not have the notion of "fine-graining". Admittedly, increasing the width and depth of MLPs can lead to improvement in performance ("neural scaling laws"). However, these neural scaling laws are slow (discussed in the last section). They are also expensive to obtain, because models of varying sizes are trained independently. By contrast, for KANs, one can first train a KAN with fewer parameters and then extend it to a KAN with more parameters by simply making its spline grids finer, without the need to retraining the larger model from scratch.
We next describe how to perform grid extension (illustrated in Figure 2 
(i = 0, • • • , G 1 + k -1)
. Then f on the coarse grid is expressed in terms of linear combination of these B-splines basis functions
f coarse (x) = G1+k-1 i=0 c i B i (x)
. Given a finer grid with G 2 intervals, f on the fine grid is correspondingly</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# For accuracy: Grid Extension`,
                    msg: String.raw`<div class="markdown-body"><p>为了提高准确性：网格扩展部分-1</p>
<p>原则上，随着网格变得越来越细粒度，样条可以被构造得无限接近目标函数，这一良好特性被KANs所继承。相比之下，MLP并没有“精细化”的概念。诚然，增加MLP的宽度和深度可以导致性能的提升（即“神经缩放定律”）。然而，这些神经缩放定律的效果缓慢（在上一节已有讨论）。并且，获取这些提升也比较昂贵，因为需要独立训练不同规模的模型。相反地，对于KAN而言，可以首先使用较少参数训练一个KAN，之后通过简单地将其样条网格细化来扩展到具有更多参数的KAN，而无需从头开始重新训练较大的模型。</p>
<p>接下来，我们将描述如何执行网格扩展操作（如图2所示）。在粗略网格上定义的$f$可以通过这些B样条基函数的线性组合来表示，
[f_{\text{粗}}(x) = \sum_{i=0}^{G_1+k-1} c_i B_i(x)]
给定一个更精细的网格，该网格有$G_2$个区间，处于精细网格上的函数$f$相应地...（此句似乎未完，根据上下文推测应继续描述$f$在精细网格上的表达形式，但原文未给出，请根据实际内容补充或确认）</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# For accuracy: Grid Extension Part-2`,
                    msg: String.raw`<div class="markdown-body"><p>f fine (x) = G2+k-1 j=0 c ′ j B ′ j (x).
The parameters c ′ j s can be initialized from the parameters c i by minimizing the distance between f fine (x) to f coarse (x) (over some distribution of x):
{c ′ j } = argmin {c ′ j } E x∼p(x)   G2+k-1 j=0 c ′ j B ′ j (x) - G1+k-1 i=0 c i B i (x)   2 ,(2.16)
which can be implemented by the least squares algorithm. We perform grid extension for all splines in a KAN independently.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# For accuracy: Grid Extension`,
                    msg: String.raw`<div class="markdown-body"><p>对于精度而言：精化函数 <font color="#00FF00">$</font><font color="#FF00FF">f_{\text{fine}}(x)</font><font color="#00FF00">$</font> 可以表示为</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF">f_{\text{fine}}(x) = \sum_{j=0}^{G_2+k-1} c'_j B'_j(x).</font><font color="#00FF00">$$</font>
</p>
<p>参数 <font color="#00FF00">$</font><font color="#FF00FF">c'_j</font><font color="#00FF00">$</font> 可以从 <font color="#00FF00">$</font><font color="#FF00FF">c_i</font><font color="#00FF00">$</font> 初始化，方法是通过最小化 <font color="#00FF00">$</font><font color="#FF00FF">f_{\text{fine}}(x)</font><font color="#00FF00">$</font> 和 <font color="#00FF00">$</font><font color="#FF00FF">f_{\text{coarse}}(x)</font><font color="#00FF00">$</font> 在某个 <font color="#00FF00">$</font><font color="#FF00FF">x</font><font color="#00FF00">$</font> 分布上的距离：</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF">\{c'_j\} = \argmin_{\{c'_j\}} \mathbb{E}_{x \sim p(x)} \left[ \left(\sum_{j=0}^{G_2+k-1} c'_j B'_j(x) - \sum_{i=0}^{G_1+k-1} c_i B_i(x)\right)^2 \right],</font><font color="#00FF00">$$</font>
</p>
<p>(2.16)</p>
<p>这一优化过程可以通过最小二乘算法实现。在KAN中，我们对所有样条函数独立执行网格扩展操作。</p><hr /><p>对于精度而言：精化函数 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>f</mi><mrow><mtext>fine</mtext></mrow></msub><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math> 可以表示为</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mi>f</mi><mrow><mtext>fine</mtext></mrow></msub><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><msubsup><mo>&#x02211;</mo><mrow><mi>j</mi><mo>&#x0003D;</mo><mn>0</mn></mrow><mrow><msub><mi>G</mi><mn>2</mn></msub><mo>&#x0002B;</mo><mi>k</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msubsup><msubsup><mi>c</mi><mi>j</mi><mi>&#x02032;</mi></msubsup><msubsup><mi>B</mi><mi>j</mi><mi>&#x02032;</mi></msubsup><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0002E;</mo></mrow></math>
</p>
<p>参数 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msubsup><mi>c</mi><mi>j</mi><mi>&#x02032;</mi></msubsup></mrow></math> 可以从 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>c</mi><mi>i</mi></msub></mrow></math> 初始化，方法是通过最小化 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>f</mi><mrow><mtext>fine</mtext></mrow></msub><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math> 和 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>f</mi><mrow><mtext>coarse</mtext></mrow></msub><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math> 在某个 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>x</mi></mrow></math> 分布上的距离：</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mo stretchy="false">&#x0007B;</mo><msubsup><mi>c</mi><mi>j</mi><mi>&#x02032;</mi></msubsup><mo stretchy="false">&#x0007D;</mo><mo>&#x0003D;</mo><msub><mi>\argmin</mi><mrow><mo stretchy="false">&#x0007B;</mo><msubsup><mi>c</mi><mi>j</mi><mi>&#x02032;</mi></msubsup><mo stretchy="false">&#x0007D;</mo></mrow></msub><msub><mi>&#x1D53C;</mi><mrow><mi>x</mi><mi>&#x0007E;</mi><mi>p</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></msub><mrow><mo stretchy="true" fence="true" form="prefix">[</mo><msup><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><msubsup><mo>&#x02211;</mo><mrow><mi>j</mi><mo>&#x0003D;</mo><mn>0</mn></mrow><mrow><msub><mi>G</mi><mn>2</mn></msub><mo>&#x0002B;</mo><mi>k</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msubsup><msubsup><mi>c</mi><mi>j</mi><mi>&#x02032;</mi></msubsup><msubsup><mi>B</mi><mi>j</mi><mi>&#x02032;</mi></msubsup><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x02212;</mo><msubsup><mo>&#x02211;</mo><mrow><mi>i</mi><mo>&#x0003D;</mo><mn>0</mn></mrow><mrow><msub><mi>G</mi><mn>1</mn></msub><mo>&#x0002B;</mo><mi>k</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msubsup><msub><mi>c</mi><mi>i</mi></msub><msub><mi>B</mi><mi>i</mi></msub><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow><mn>2</mn></msup><mo stretchy="true" fence="true" form="postfix">]</mo></mrow><mo>&#x0002C;</mo></mrow></math>
</p>
<p>(2.16)</p>
<p>这一优化过程可以通过最小二乘算法实现。在KAN中，我们对所有样条函数独立执行网格扩展操作。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# For accuracy: Grid Extension Part-3`,
                    msg: String.raw`<div class="markdown-body"><p>Toy example: staricase-like loss curves. We use a toy example f (x, y) = exp(sin(πx) + y 2 ) to demonstrate the effect of grid extension. In Figure 2.3 (top left), we show the train and test RMSE for a [2, 5, 1] KAN. The number of grid points starts as 3, increases to a higher value every 200 LBFGS steps, ending up with 1000 grid points. It is clear that every time fine graining happens, the training loss drops faster than before (except for the finest grid with 1000 points, where optimization ceases to work probably due to bad loss landscapes). However, the test losses first go down then go up, displaying a U-shape, due to the bias-variance tradeoff (underfitting vs. overfitting). We conjecture that the optimal test loss is achieved at the interpolation threshold when the number of parameters match the number of data points. Since our training samples are 1000 and the total parameters of a [2, 5, 1] KAN is 15G (G is the number of grid intervals), we expect the interpolation threshold to be G = 1000/15 ≈ 67, which roughly agrees with our experimentally observed value G ∼ 50.
Small KANs generalize better. Is this the best test performance we can achieve? Notice that the synthetic task can be represented exactly by a [2, 1, 1] KAN, so we train a [2, 1, 1] KAN and present the training dynamics in Figure 2.3 top right. Interestingly, it can achieve even lower test losses than the [2, 5, 1] KAN, with clearer staircase structures and the interpolation threshold is delayed to a larger grid size as a result of fewer parameters. This highlights a subtlety of choosing KAN architectures. If we do not know the problem structure, how can we determine the minimal KAN shape? In Section 2.5, we will propose a method to auto-discover such minimal KAN architecture via regularization and pruning.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# For accuracy: Grid Extension`,
                    msg: String.raw`<div class="markdown-body"><p><strong>准确性：网格扩展示例——楼梯状损失曲线</strong></p>
<p><strong>玩具示例：楼梯状的损失曲线。</strong> 我们采用一个玩具函数 (f(x, y) = e^{\sin(\pi x) + y^2}) 来展示网格扩展的效果。在图2.3（左上）中，展示了对一个[2, 5, 1]结构的KAN训练和测试的RMSE（均方根误差）。起始时，网格点数为3，在每200次LBFGS优化步骤后增加，最终达到1000个网格点。明显地，每次精细化处理发生时，训练损失的下降速度都快于之前（除了达到最精细的1000点网格时，优化效果停止提升，这可能是由于损失景观不佳所致）。然而，测试损失先降后升，呈现出U形，这是由偏差-方差权衡（欠拟合与过拟合之间的平衡问题）引起的。我们推测，当参数数量与数据点数量匹配的插值阈值处可以达到最优的测试损失。鉴于我们的训练样本数为1000，而一个[2, 5, 1]结构的KAN总共有大约15G（G代表网格间隔的数量）个参数，我们预期插值阈值应为G=1000/15≈67，这一数值大致与实验观测到的结果G∼50相符。</p>
<p><strong>小型KAN具有更优的泛化能力。</strong> 这是否是我们能实现的最佳测试表现呢？请注意，该合成任务能被一个[2, 1, 1]结构的KAN精确表示，因此我们也训练了一个[2, 1, 1]结构的KAN，并在图2.3的右上方展示了其训练动态。有趣的是，它甚至能达到比[2, 5, 1]结构的KAN更低的测试损失，展现出更清晰的楼梯状结构，且由于参数较少，插值阈值推迟到了更大的网格尺寸。这突显了选择KAN结构的一个微妙之处。如果我们不知道问题的结构，应如何确定最小化的KAN形态呢？在第2.5节中，我们将提出一种通过正则化和剪枝自动生成这种最小化KAN结构的方法。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# For accuracy: Grid Extension Part-4`,
                    msg: String.raw`<div class="markdown-body"><p>Scaling laws: comparison with theory. We are also interested in how the test loss decreases as the number of grid parameters increases. In Figure 2.3 (bottom left), a [2,1,1] KAN scales roughly as test RMSE ∝ G -3 . However, according to the Theorem 2.1, we would expect test RMSE ∝ G -4 . We found that the errors across samples are not uniform. This is probably attributed to boundary effects [24]. In fact, there are a few samples that have significantly larger errors than others, making the overall scaling slow down. If we plot the square root of the median (not mean) of the squared losses, we get a scaling closer to G -4 . Despite this suboptimality (probably due to optimization), KANs still have much better scaling laws than MLPs, for data fitting (Figure 3.1) and PDE solving (Figure 3.3). In addition, the training time scales favorably with the number of grid points G, shown in Figure 2.3 bottom right 4 .
External vs Internal degrees of freedom. A new concept that KANs highlights is a distinction between external versus internal degrees of freedom (parameters). The computational graph of how nodes are connected represents external degrees of freedom ("dofs"), while the grid points inside an activation function are internal degrees of freedom. KANs benefit from the fact that they have both external dofs and internal dofs. External dofs (that MLPs also have but splines do not) are responsible for learning compositional structures of multiple variables. Internal dofs (that splines also have but MLPs do not) are responsible for learning univariate functions.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# For accuracy: Grid Extension`,
                    msg: String.raw`<div class="markdown-body"><p><strong>精度：网格扩展部分-4</strong></p>
<p><strong>缩放定律：与理论的比较。</strong> 我们也关注随着网格参数数量的增加，测试损失如何降低。在图2.3（底部左侧），一个[2,1,1]结构的KAN其测试RMSE（均方根误差）大约遵循G^(-3)的比例关系。然而，根据定理2.1，我们期望测试RMSE应遵循G^(-4)的比例。我们发现样本间的误差并不均匀，这可能归因于边界效应[24]。实际上，有少数样本所产生的误差远大于其他样本，这使得整体缩放速度减慢。如果我们绘制平方损失的中位数（而非平均数）的平方根，得到的缩放比例会更接近G^(-4)。尽管存在这种次优性（可能由于优化问题所致），KANs在数据拟合（图3.1）和偏微分方程求解（图3.3）方面仍然具有比MLP更优越的缩放规律。此外，训练时间随着网格点数量G的增加表现出有利的缩放，如图2.3底部右侧所示。</p>
<p><strong>外部与内部自由度。</strong> KANs强调了一个新概念，即外部自由度与内部自由度（参数）之间的区别。节点间连接方式所形成的计算图代表了外部自由度（"dofs"），而激活函数内部的网格点则代表了内部自由度。KANs得益于既拥有外部自由度也拥有内部自由度这一事实。外部自由度（MLP亦拥有但样条函数不具备）负责学习多变量的组合结构。内部自由度（样条函数具备而MLP不具备）则负责学习单变量函数。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# For Interpretabili`,
                    msg: String.raw`<div class="markdown-body"><h1>For Interpretability: Simplifying KANs and Making them interactive</h1>
<p>One loose end from the last subsection is that we do not know how to choose the KAN shape that best matches the structure of a dataset. For example, if we know that the dataset is generated via the symbolic formula f (x, y) = exp(sin(πx)+y 2 ), then we know that a [2, 1, 1] KAN is able to express this function. However, in practice we do not know the information a priori, so it would be nice to have approaches to determine this shape automatically. The idea is to start from a large enough KAN and train it with sparsity regularization followed by pruning. We will show that these pruned KANs are much more interpretable than non-pruned ones. To make KANs maximally interpretable, we propose a few simplification techniques in Section 2.5.1, and an example of how users can interact with KANs to make them more interpretable in Section 2.5.2.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# For Interpretabili`,
                    msg: String.raw`<div class="markdown-body"><h1>For Interpretability: Simplifying KANs and Making them interactive</h1>
<p><strong>可解释性：简化KAN并使其互动</strong></p>
<p>上一节遗留的一个待解问题是我们尚不清楚如何选择最能匹配数据集结构的KAN形态。例如，如果我们知道数据集是通过符号公式 (f(x, y) = \exp(\sin(\pi x) + y^2)) 生成的，那么我们知道一个形状为[2, 1, 1]的KAN能够表达这个函数。然而，在实际应用中，我们并不预先具备这些信息，因此能够自动确定这一形态的方法将非常有用。我们的想法是从足够大的KAN开始，使用稀疏正则化进行训练，并随后进行剪枝。我们将展示这些经过剪枝的KAN比未剪枝的在可解释性方面有显著提升。</p>
<p>为了使KAN达到最大化的可解释性，我们在2.5.1节提出了一些简化技巧。同时，在2.5.2节中，我们通过示例说明了用户如何与KAN互动，进一步增强其可解释性。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Simplification techniques`,
                    msg: String.raw`<div class="markdown-body"><ol>
<li>Sparsification. For MLPs, L1 regularization of linear weights is used to favor sparsity. KANs can adapt this high-level idea, but need two modifications:
(1) There is no linear "weight" in KANs. Linear weights are replaced by learnable activation functions, so we should define the L1 norm of these activation functions.
(2) We find L1 to be insufficient for sparsification of KANs; instead an additional entropy regularization is necessary (see Appendix C for more details).
We define the L1 norm of an activation function ϕ to be its average magnitude over its N p inputs, i.e.,
|ϕ| 1 ≡ 1 N p Np s=1 ϕ(x (s) ) .(2.17)
Then for a KAN layer Φ with n in inputs and n out outputs, we define the L1 norm of Φ to be the sum of L1 norms of all activation functions, i.e.,
|Φ| 1 ≡ nin i=1 nout j=1 |ϕ i,j | 1 . (2.18)
In addition, we define the entropy of Φ to be
S(Φ) ≡ - nin i=1 nout j=1 |ϕ i,j | 1 |Φ| 1 log |ϕ i,j | 1 |Φ| 1 . (2.19)
The total training objective ℓ total is the prediction loss ℓ pred plus L1 and entropy regularization of all KAN layers:
ℓ total = ℓ pred + λ µ 1 L-1 l=0 |Φ l | 1 + µ 2 L-1 l=0 S(Φ l ) ,(2.20)
where µ 1 , µ 2 are relative magnitudes usually set to µ 1 = µ 2 = 1, and λ controls overall regularization magnitude. 2. Visualization. When we visualize a KAN, to get a sense of magnitudes, we set the transparency of an activation function ϕ l,i,j proportional to tanh(βA l,i,j ) where β = 3 . Hence, functions with small magnitude appear faded out to allow us to focus on important ones.</li>
</ol></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Simplification techniques`,
                    msg: String.raw`<div class="markdown-body"><p><strong>简化技术</strong></p>
<ol>
<li><strong>稀疏化（Sparsification）</strong>。对于多层感知器（MLPs），通常采用线性权重的L1正则化来促进稀疏性。KANs可以借鉴这一高层思想，但需要进行两方面的调整：
   (1) KANs中并不存在线性“权重”。线性权重被可学习的激活函数所替代，因此我们需要定义这些激活函数的L1范数。
   (2) 我们发现仅使用L1范数对于KANs的稀疏化是不够的；还需要额外引入熵正则化（详情请参见附录C）。</li>
</ol>
<p>我们将激活函数φ的L1范数定义为其在Np个输入样本上的平均绝对值，即，
   |\phi|<em>1 \equiv \frac{1}{N_p} \sum</em>{s=1}^{N_p} |\phi(x^{(s)})|,\tag{2.17}</p>
<p>针对一个具有nin个输入和nout个输出的KAN层Φ，我们将其L1范数定义为所有激活函数L1范数的总和，即，
   |\Phi|<em>1 \equiv \sum</em>{i=1}^{nin} \sum_{j=1}^{nout} |\phi_{i,j}|_1.\tag{2.18}</p>
<p>此外，我们将Φ的熵定义为
   S(\Phi) \equiv -\sum_{i=1}^{nin} \sum_{j=1}^{nout} \frac{|\phi_{i,j}|<em>1}{|\Phi|_1} \log\left(\frac{|\phi</em>{i,j}|_1}{|\Phi|_1}\right).\tag{2.19}</p>
<p>总训练目标<font color="#00FF00">$</font><font color="#FF00FF">\ell_{total}</font><font color="#00FF00">$</font>是预测损失<font color="#00FF00">$</font><font color="#FF00FF">\ell_{pred}</font><font color="#00FF00">$</font>加上所有KAN层的L1范数及熵的正则化：
   <font color="#00FF00">$$</font><font color="#FF00FF">\ell_{total} = \ell_{pred} + \mu_1 \sum_{l=0}^{L-1} |\Phi_l|_1 + \mu_2 \sum_{l=0}^{L-1} S(\Phi_l),\tag{2.20}</font><font color="#00FF00">$$</font>
   其中<font color="#00FF00">$</font><font color="#FF00FF">\mu_1, \mu_2</font><font color="#00FF00">$</font>是相对大小，一般设置为<font color="#00FF00">$</font><font color="#FF00FF">\mu_1 = \mu_2 = 1</font><font color="#00FF00">$</font>，<font color="#00FF00">$</font><font color="#FF00FF">\lambda</font><font color="#00FF00">$</font>控制整体的正则化强度。</p>
<ol start="2">
<li><strong>可视化（Visualization）</strong>。在对KAN进行可视化以直观感受各部分重要性时，我们设置激活函数<font color="#00FF00">$</font><font color="#FF00FF">\phi_{l,i,j}</font><font color="#00FF00">$</font>的透明度与其绝对值<font color="#00FF00">$</font><font color="#FF00FF">\tanh(\beta A_{l,i,j})</font><font color="#00FF00">$</font>成比例，这里选用<font color="#00FF00">$</font><font color="#FF00FF">\beta = 3</font><font color="#00FF00">$</font>。这样一来，绝对值较小的函数会显得较淡，使我们能集中关注重要部分。</li>
</ol><hr /><p><strong>简化技术</strong></p>
<ol>
<li><strong>稀疏化（Sparsification）</strong>。对于多层感知器（MLPs），通常采用线性权重的L1正则化来促进稀疏性。KANs可以借鉴这一高层思想，但需要进行两方面的调整：
   (1) KANs中并不存在线性“权重”。线性权重被可学习的激活函数所替代，因此我们需要定义这些激活函数的L1范数。
   (2) 我们发现仅使用L1范数对于KANs的稀疏化是不够的；还需要额外引入熵正则化（详情请参见附录C）。</li>
</ol>
<p>我们将激活函数φ的L1范数定义为其在Np个输入样本上的平均绝对值，即，
   |\phi|<em>1 \equiv \frac{1}{N_p} \sum</em>{s=1}^{N_p} |\phi(x^{(s)})|,\tag{2.17}</p>
<p>针对一个具有nin个输入和nout个输出的KAN层Φ，我们将其L1范数定义为所有激活函数L1范数的总和，即，
   |\Phi|<em>1 \equiv \sum</em>{i=1}^{nin} \sum_{j=1}^{nout} |\phi_{i,j}|_1.\tag{2.18}</p>
<p>此外，我们将Φ的熵定义为
   S(\Phi) \equiv -\sum_{i=1}^{nin} \sum_{j=1}^{nout} \frac{|\phi_{i,j}|<em>1}{|\Phi|_1} \log\left(\frac{|\phi</em>{i,j}|_1}{|\Phi|_1}\right).\tag{2.19}</p>
<p>总训练目标<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x02113;</mi><mrow><mi>t</mi><mi>o</mi><mi>t</mi><mi>a</mi><mi>l</mi></mrow></msub></mrow></math>是预测损失<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x02113;</mi><mrow><mi>p</mi><mi>r</mi><mi>e</mi><mi>d</mi></mrow></msub></mrow></math>加上所有KAN层的L1范数及熵的正则化：
   <math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mi>&#x02113;</mi><mrow><mi>t</mi><mi>o</mi><mi>t</mi><mi>a</mi><mi>l</mi></mrow></msub><mo>&#x0003D;</mo><msub><mi>&#x02113;</mi><mrow><mi>p</mi><mi>r</mi><mi>e</mi><mi>d</mi></mrow></msub><mo>&#x0002B;</mo><msub><mi>&#x003BC;</mi><mn>1</mn></msub><msubsup><mo>&#x02211;</mo><mrow><mi>l</mi><mo>&#x0003D;</mo><mn>0</mn></mrow><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msubsup><mo stretchy="false">&#x0007C;</mo><msub><mi>&#x003A6;</mi><mi>l</mi></msub><msub><mo stretchy="false">&#x0007C;</mo><mn>1</mn></msub><mo>&#x0002B;</mo><msub><mi>&#x003BC;</mi><mn>2</mn></msub><msubsup><mo>&#x02211;</mo><mrow><mi>l</mi><mo>&#x0003D;</mo><mn>0</mn></mrow><mrow><mi>L</mi><mo>&#x02212;</mo><mn>1</mn></mrow></msubsup><mi>S</mi><mo stretchy="false">&#x00028;</mo><msub><mi>&#x003A6;</mi><mi>l</mi></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0002C;</mo><mi>\tag</mi><mrow><mn>2.20</mn></mrow></mrow></math>
   其中<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003BC;</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>&#x003BC;</mi><mn>2</mn></msub></mrow></math>是相对大小，一般设置为<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003BC;</mi><mn>1</mn></msub><mo>&#x0003D;</mo><msub><mi>&#x003BC;</mi><mn>2</mn></msub><mo>&#x0003D;</mo><mn>1</mn></mrow></math>，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003BB;</mi></mrow></math>控制整体的正则化强度。</p>
<ol start="2">
<li><strong>可视化（Visualization）</strong>。在对KAN进行可视化以直观感受各部分重要性时，我们设置激活函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>&#x003D5;</mi><mrow><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub></mrow></math>的透明度与其绝对值<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>tanh</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003B2;</mi><msub><mi>A</mi><mrow><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>j</mi></mrow></msub><mo stretchy="false">&#x00029;</mo></mrow></math>成比例，这里选用<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003B2;</mi><mo>&#x0003D;</mo><mn>3</mn></mrow></math>。这样一来，绝对值较小的函数会显得较淡，使我们能集中关注重要部分。</li>
</ol></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Pruning.`,
                    msg: String.raw`<div class="markdown-body"><p>After training with sparsification penalty, we may also want to prune the network to a smaller subnetwork. We sparsify KANs on the node level (rather than on the edge level). For each node (say the i th neuron in the l th layer), we define its incoming and outgoing score as
I l,i = max k (|ϕ l-1,i,k | 1 ), O l,i = max j (|ϕ l+1,j,i | 1 ),(2.21)
and consider a node to be important if both incoming and outgoing scores are greater than a threshold hyperparameter θ = 10 -2 by default. All unimportant neurons are pruned.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Pruning.`,
                    msg: String.raw`<div class="markdown-body"><p>在使用稀疏化惩罚进行训练之后，我们可能还想将网络修剪为更小的子网络。我们在节点级别（而非边级别）对KANs进行稀疏化处理。对于每个节点（比如说第l层的第i个神经元），我们定义其输入分数和输出分数分别为</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF"> I_{l,i} = \max_k(|\phi_{l-1,i,k}|_1),\, O_{l,i} = \max_j(|\phi_{l+1,j,i}|_1), \tag{2.21} </font><font color="#00FF00">$$</font>
</p>
<p>并认为当一个节点的输入分数和输出分数都大于一个阈值超参数 <font color="#00FF00">$</font><font color="#FF00FF">\theta</font><font color="#00FF00">$</font>（默认为<font color="#00FF00">$</font><font color="#FF00FF">10^{-2}</font><font color="#00FF00">$</font>）时，它是重要的。所有不重要的神经元都将被剪除。</p><hr /><p>在使用稀疏化惩罚进行训练之后，我们可能还想将网络修剪为更小的子网络。我们在节点级别（而非边级别）对KANs进行稀疏化处理。对于每个节点（比如说第l层的第i个神经元），我们定义其输入分数和输出分数分别为</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mi>I</mi><mrow><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi></mrow></msub><mo>&#x0003D;</mo><msub><mo>max</mo><mi>k</mi></msub><mo stretchy="false">&#x00028;</mo><mo stretchy="false">&#x0007C;</mo><msub><mi>&#x003D5;</mi><mrow><mi>l</mi><mo>&#x02212;</mo><mn>1</mn><mo>&#x0002C;</mo><mi>i</mi><mo>&#x0002C;</mo><mi>k</mi></mrow></msub><msub><mo stretchy="false">&#x0007C;</mo><mn>1</mn></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0002C;</mo><mspace width="0.167em" /><msub><mi>O</mi><mrow><mi>l</mi><mo>&#x0002C;</mo><mi>i</mi></mrow></msub><mo>&#x0003D;</mo><msub><mo>max</mo><mi>j</mi></msub><mo stretchy="false">&#x00028;</mo><mo stretchy="false">&#x0007C;</mo><msub><mi>&#x003D5;</mi><mrow><mi>l</mi><mo>&#x0002B;</mo><mn>1</mn><mo>&#x0002C;</mo><mi>j</mi><mo>&#x0002C;</mo><mi>i</mi></mrow></msub><msub><mo stretchy="false">&#x0007C;</mo><mn>1</mn></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0002C;</mo><mi>\tag</mi><mrow><mn>2.21</mn></mrow></mrow></math>
</p>
<p>并认为当一个节点的输入分数和输出分数都大于一个阈值超参数 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003B8;</mi></mrow></math>（默认为<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msup><mn>10</mn><mrow><mo>&#x02212;</mo><mn>2</mn></mrow></msup></mrow></math>）时，它是重要的。所有不重要的神经元都将被剪除。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Symbolification.`,
                    msg: String.raw`<div class="markdown-body"><p>In cases where we suspect that some activation functions are in fact symbolic (e.g., cos or log), we provide an interface to set them to be a specified symbolic form, fix_symbolic(l,i,j,f) can set the (l, i, j) activation to be f . However, we cannot simply set the activation function to be the exact symbolic formula, since its inputs and outputs may have shifts and scalings. So, we obtain preactivations x and postactivations y from samples, and fit affine parameters (a, b, c, d) such that y ≈ cf (ax + b) + d. The fitting is done by iterative grid search of a, b and linear regression.
Besides these techniques, we provide additional tools that allow users to apply more fine-grained control to KANs, listed in Appendix A.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Symbolification.`,
                    msg: String.raw`<div class="markdown-body"><p>在某些情况下，当我们怀疑某些激活函数实际上是符号化的（如，余弦cos或对数log时），我们提供了一个接口来将它们设置为特定的符号形式，通过<code>fix_symbolic(l,i,j,f)</code>可以将第<code>l</code>层第<code>i</code>个节点到第<code>j</code>个节点的激活函数设置为<code>f</code>。然而，我们不能直接将激活函数设为确切的符号公式，因为其输入和输出可能涉及到平移和缩放。因此，我们从样本中获取预激活值x和后激活值y，并拟合仿射参数(a, b, c, d)，使得y≈cf(ax+b)+d。这一拟合过程通过迭代网格搜索a和b以及线性回归来完成。</p>
<p>除了这些技术外，我们还提供了额外的工具，允许用户对KAN进行更精细的控制，这些工具列于附录A中。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# A toy example: how humans can interact with KANs`,
                    msg: String.raw`<div class="markdown-body"><p>Above we have proposed a number of simplification techniques for KANs. We can view these simplification choices as buttons one can click on. A user interacting with these buttons can decide which button is most promising to click next to make KANs more interpretable. We use an example below to showcase how a user could interact with a KAN to obtain maximally interpretable results.
Let us again consider the regression task
f (x, y) = exp sin(πx) + y 2 . (2.22) Given data points (x i , y i , f i ), i = 1, 2, • • • , N p ,
a hypothetical user Alice is interested in figuring out the symbolic formula. The steps of Alice's interaction with the KANs are described below (illustrated in Figure 2.4):
Step 1: Training with sparsification. Starting from a fully-connected [2, 5, 1] KAN, training with sparsification regularization can make it quite sparse. 4 out of 5 neurons in the hidden layer appear useless, hence we want to prune them away.
Step 2: Pruning. Automatic pruning is seen to discard all hidden neurons except the last one, leaving a [2, 1, 1] KAN. The activation functions appear to be known symbolic functions.
Step 3: Setting symbolic functions. Assuming that the user can correctly guess these symbolic formulas from staring at the KAN plot, they can set fix_symbolic(0,0,0,'sin') fix_symbolic(0,1,0,'xˆ2') fix_symbolic(1,0,0,'exp').
(2.23)
In case the user has no domain knowledge or no idea which symbolic functions these activation functions might be, we provide a function suggest_symbolic to suggest symbolic candidates.
Step 4: Further training. After symbolifying all the activation functions in the network, the only remaining parameters are the affine parameters. We continue training these affine parameters, and when we see the loss dropping to machine precision, we know that we have found the correct symbolic expression.
Step 5: Output the symbolic formula. Sympy is used to compute the symbolic formula of the output node. The user obtains 1.0e 1.0y 2 +1.0sin (3.14x) , which is the true answer (we only displayed two decimals for π).
Remark: Why not symbolic regression (SR)? It is reasonable to use symbolic regression for this example. However, symbolic regression methods are in general brittle and hard to debug. They either return a success or a failure in the end without outputting interpretable intermediate results.
In contrast, KANs do continuous search (with gradient descent) in function space, so their results are more continuous and hence more robust. Moreover, users have more control over KANs as compared to SR due to KANs' transparency. The way we visualize KANs is like displaying KANs' "brain" to users, and users can perform "surgery" (debugging) on KANs. This level of control is typically unavailable for SR. We will show examples of this in Section 4.4. More generally, when the target function is not symbolic, symbolic regression will fail but KANs can still provide something meaningful. For example, a special function (e.g., a Bessel function) is impossible to SR to learn unless it is provided in advance, but KANs can use splines to approximate it numerically anyway (see Figure 4.1 (d)).</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# A toy example: how humans can interact with KANs`,
                    msg: String.raw`<div class="markdown-body"><p><strong>玩具示例：人类如何与KAN互动</strong></p>
<p>至上我们提出了一系列KAN的简化技术。可以将这些简化选择视为用户可点击的按钮。与这些按钮互动的用户可以决定接下来点击哪个按钮，以使KAN更易解释。下面我们将通过一个例子展示用户如何与KAN互动以获得最大程度可解释的结果。</p>
<p>再次考虑回归任务
[ f(x, y) = e^{\sin(\pi x)} + y^2 \, . \,(2.22) ]
给定数据点 ((x_i, y_i, f_i))，(i = 1, 2, \ldots, N_p)，
假设有一个名为Alice的用户希望了解其符号公式。以下是Alice与KAN互动的步骤（如图2.4所示）：</p>
<p><strong>步骤1：采用稀疏化训练。</strong>从一个完全连接的[2, 5, 1] KAN开始，通过稀疏化正则化训练，可以使网络变得相当稀疏。隐藏层中的4个神经元显得不必要，因此我们希望移除它们。</p>
<p><strong>步骤2：剪枝。</strong>自动剪枝后仅保留最后一个隐藏神经元，形成了[2, 1, 1]的KAN。激活函数看似已知的符号函数。</p>
<p><strong>步骤3：设定符号函数。</strong>假定用户能够通过观察KAN的图形准确猜测到这些符号公式，他们可以设定如下固定符号函数：
[ \text{fix_symbolic}(0,0,0,\text{'sin'}) ]
[ \text{fix_symbolic}(0,1,0,\text{'xˆ2'}) ]
[ \text{fix_symbolic}(1,0,0,\text{'exp'}) \, \, \, (2.23) ]
如果用户不具备领域知识或不确定这些激活函数可能对应的符号函数，我们可以提供一个函数<code>suggest\_symbolic</code>来提供建议。</p>
<p><strong>步骤4：进一步训练。</strong>在将网络中所有激活函数符号化后，剩下的唯一参数是仿射参数。我们继续训练这些仿射参数，当发现损失降低到机器精度时，即表明我们找到了正确的符号表达式。</p>
<p><strong>步骤5：输出符号公式。</strong> 使用Sympy计算输出节点的符号公式。用户得到 (1.0e\cdot 1.0y^2 + 1.0\sin(3.14x))，这是真实的答案（我们仅显示了(\pi)的两位小数）。</p>
<p><strong>备注：为何不采用符号回归(SR)?</strong> 对于这个示例，使用符号回归是合理的。但是，一般而言，符号回归方法较为脆弱且难以调试。它们最终只会给出成功或失败的结果，而不会输出可解释的中间结果。
相比之下，KANs在函数空间中进行连续搜索（使用梯度下降），因此其结果更加连贯且更为健壮。此外，相比于符号回归，用户对KANs有更多的控制权，这得益于KAN的透明性。我们的可视化KAN方式类似于向用户展示KAN的“思维”，而用户能在此基础上进行“手术”（调试）。这种级别的控制在符号回归中通常是不可得的。我们在第4.4节将进一步展示这方面的案例。更广泛地，当目标函数不是符号形式时，符号回归将会失败，但KAN仍能给出有意义的东西。例如，特殊函数（如贝塞尔函数）除非预先给出否则SR无法学习，但KAN仍可以通过样条函数数值上近似之（参见图4.1(d)）。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# KANs are accurate`,
                    msg: String.raw`<div class="markdown-body"><p>In this section, we demonstrate that KANs are more effective at representing functions than MLPs in various tasks (regression and PDE solving). When comparing two families of models, it is fair to compare both their accuracy (loss) and their complexity (number of parameters). We will show that KANs display more favorable Pareto Frontiers than MLPs. Moreover, in Section 3.5, we show that KANs can naturally work in continual learning without catastrophic forgetting.  </p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# KANs are accurate`,
                    msg: String.raw`<div class="markdown-body"><p>在本节中，我们通过多种任务（回归与偏微分方程求解）演示了KAN在函数表示上的效果优于MLP。在对比两个模型家族时，公平地评估它们的准确性（损失）和复杂性（参数数量）是必要的。我们将展示，KAN展现出比MLP更为有利的帕累托前沿。此外，在第3.5节中，我们进一步说明KAN能够天然适应持续学习场景，而不会出现灾难性的遗忘问题。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Toy datasets`,
                    msg: String.raw`<div class="markdown-body"><p>In Section 2.3, our theory suggested that test RMSE loss ℓ scales as ℓ ∝ N -4 with model parameters N . However, this relies on the existence of a Kolmogorov-Arnold representation. As a sanity check, we construct five examples we know have smooth KA representations:
(1) f (x) = J 0 (20x), which is the Bessel function. Since it is a univariate function, it can be represented by a spline, which is a [1, 1] KAN.
(2) f (x, y) = exp(sin(πx) + y 2 ). We know that it can be exactly represented by a [2, 1, 1] KAN.
(3) f (x, y) = xy. We know from Figure 4.1 that it can be exactly represented by a [2, 2, 1] KAN.
(
) A high-dimensional example f (x 1 , • • • , x 100 ) = exp( 1 100 100 i=1 sin 2 ( πxi 2 )) which can be rep- resented by a [100, 1, 1] KAN. (5) A four-dimensional example f (x 1 , x 2 , x 3 , x 4 ) = exp( 1 2 (sin(π(x 2 1 + x 2 2 )) + sin(π(x 2 3 + x 24
)))) which can be represented by a [4, 4, 2, 1] KAN.
We train these KANs by increasing grid points every 200 steps, in total covering G = {3, 5, 10, 20, 50, 100, 200, 500, 1000}. We train MLPs with different depths and widths as baselines. Both MLPs and KANs are trained with LBFGS for 1800 steps in total. We plot test RMSE as a function of the number of parameters for KANs and MLPs in Figure 3.1, showing that KANs have better scaling curves than MLPs, especially for the high-dimensional example. For comparison, we plot the lines predicted from our KAN theory as red dashed (α = k + 1 = 4), and the lines predicted from Sharma &amp; Kaplan [23] as black-dashed (α = (k + 1)/d = 4/d). KANs can almost saturate the steeper red lines, while MLPs struggle to converge even as fast as the slower black lines and plateau quickly. We also note that for the last example, the 2-Layer KAN [4, 9, 1] behaves much worse than the 3-Layer KAN (shape [4, 2, 2, 1]). This highlights the greater expressive power of deeper KANs, which is the same for MLPs: deeper MLPs have more expressive power than shallower ones. Note that we have adopted the vanilla setup where both KANs and MLPs are trained with LBFGS without advanced techniques, e.g., switching between Adam and LBFGS, or boosting [34]. We leave the comparison of KANs and MLPs in advanced setups for future work.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Toy datasets`,
                    msg: String.raw`<div class="markdown-body"><p>在第2.3节中，我们的理论预测测试均方根损失（RMSE）<font color="#00FF00">$</font><font color="#FF00FF">\ell</font><font color="#00FF00">$</font>随模型参数数量<font color="#00FF00">$</font><font color="#FF00FF">N</font><font color="#00FF00">$</font>按比例缩放为<font color="#00FF00">$</font><font color="#FF00FF">\ell \propto N^{-4}</font><font color="#00FF00">$</font>。然而，这一结论依赖于存在柯尔莫戈洛夫-阿诺德表示。作为稳健性检查，我们构建了五个已知具有平滑柯尔莫戈洛夫-阿诺德表示的示例：
(1) <font color="#00FF00">$</font><font color="#FF00FF">f(x) = J_0(20x)</font><font color="#00FF00">$</font>，这是贝塞尔函数。由于它是一个一元函数，可以被一个样条（本质上是一种[1, 1] KAN）所表示。
(2) <font color="#00FF00">$</font><font color="#FF00FF">f(x, y) = \exp(\sin(\pi x) + y^2)</font><font color="#00FF00">$</font>。我们知道它可以被一个精确的[2, 1, 1] KAN所表示。
(3) <font color="#00FF00">$</font><font color="#FF00FF">f(x, y) = xy</font><font color="#00FF00">$</font>。根据图4.1，我们知道它能被一个精确的[2, 2, 1] KAN所表示。
(4) 一个高维示例<font color="#00FF00">$</font><font color="#FF00FF">f(x_1, ..., x_{100}) = \exp\left(\frac{1}{100}\sum_{i=1}^{100}\sin^2\left(\frac{\pi x_i}{2}\right)\right)</font><font color="#00FF00">$</font>，可以通过一个[100, 1, 1] KAN进行表示。
(5) 一个四维示例<font color="#00FF00">$</font><font color="#FF00FF">f(x_1, x_2, x_3, x_4) = \exp\left(\frac{1}{2}\left[\sin^2(\pi(x_1^2 + x_2^2)) + \sin^2(\pi(x_3^2 + x_4^2))\right]\right)</font><font color="#00FF00">$</font>，能通过一个[4, 4, 2, 1] KAN进行表示。</p>
<p>我们通过每隔200步增加网格点的方式训练这些KAN，总共覆盖了<font color="#00FF00">$</font><font color="#FF00FF">G=\{3, 5, 10, 20, 50, 100, 200, 500, 1000\}</font><font color="#00FF00">$</font>。同时，我们使用不同深度和宽度的MLP作为基线进行训练。MLP和KAN都采用LBFGS优化器进行1800步的训练。在图3.1中，我们绘制了KAN和MLP的测试RMSE随参数数量变化的曲线，显示出KAN的缩放曲线比MLP更优，特别是在高维示例中。为了比较，我们以红色虚线标出了从我们的KAN理论预测出的线条（<font color="#00FF00">$</font><font color="#FF00FF">\alpha=k+1=4</font><font color="#00FF00">$</font>），以及根据Sharma &amp; Kaplan [23]预测出的黑虚线（<font color="#00FF00">$</font><font color="#FF00FF">\alpha=(k+1)/d=4/d</font><font color="#00FF00">$</font>）。KAN几乎可以达到较陡峭的红色线条的饱和点，而MLP即使要收敛到较缓慢的黑色线条的速度也显得困难，并且很快就达到了性能平台期。另外，我们注意到在最后一个例子中，2层的KAN[4, 9, 1]的表现远逊于3层的KAN（形态为[4, 2, 2, 1]）。这突显了更深KAN的更强表达能力，对于MLP而言亦是如此：更深的MLP具有比浅层MLP更强的表达力。需要注意的是，在此基础设置下，我们未采用高级技术，即KAN和MLP都仅用LBFGS进行训练，未在Adam与LBFGS之间切换或使用增强方法[34]。未来工作中，我们将进一步探讨在采用这些高级技术的情况下KAN和MLP的对比。</p><hr /><p>在第2.3节中，我们的理论预测测试均方根损失（RMSE）<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x02113;</mi></mrow></math>随模型参数数量<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>N</mi></mrow></math>按比例缩放为<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x02113;</mi><mo>&#x0221D;</mo><msup><mi>N</mi><mrow><mo>&#x02212;</mo><mn>4</mn></mrow></msup></mrow></math>。然而，这一结论依赖于存在柯尔莫戈洛夫-阿诺德表示。作为稳健性检查，我们构建了五个已知具有平滑柯尔莫戈洛夫-阿诺德表示的示例：
(1) <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><msub><mi>J</mi><mn>0</mn></msub><mo stretchy="false">&#x00028;</mo><mn>20</mn><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>，这是贝塞尔函数。由于它是一个一元函数，可以被一个样条（本质上是一种[1, 1] KAN）所表示。
(2) <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo>&#x0002C;</mo><mi>y</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>exp</mi><mo stretchy="false">&#x00028;</mo><mi>sin</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><msup><mi>y</mi><mn>2</mn></msup><mo stretchy="false">&#x00029;</mo></mrow></math>。我们知道它可以被一个精确的[2, 1, 1] KAN所表示。
(3) <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo>&#x0002C;</mo><mi>y</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>x</mi><mi>y</mi></mrow></math>。根据图4.1，我们知道它能被一个精确的[2, 2, 1] KAN所表示。
(4) 一个高维示例<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><mo>&#x0002E;</mo><mo>&#x0002E;</mo><mo>&#x0002E;</mo><mo>&#x0002C;</mo><msub><mi>x</mi><mrow><mn>100</mn></mrow></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>exp</mi><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><mfrac><mrow><mn>1</mn></mrow><mrow><mn>100</mn></mrow></mfrac><msubsup><mo>&#x02211;</mo><mrow><mi>i</mi><mo>&#x0003D;</mo><mn>1</mn></mrow><mrow><mn>100</mn></mrow></msubsup><msup><mi>sin</mi><mn>2</mn></msup><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><mfrac><mrow><mi>&#x003C0;</mi><msub><mi>x</mi><mi>i</mi></msub></mrow><mrow><mn>2</mn></mrow></mfrac><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow></mrow></math>，可以通过一个[100, 1, 1] KAN进行表示。
(5) 一个四维示例<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>2</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>3</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>4</mn></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>exp</mi><mrow><mo stretchy="true" fence="true" form="prefix">&#x00028;</mo><mfrac><mrow><mn>1</mn></mrow><mrow><mn>2</mn></mrow></mfrac><mrow><mo stretchy="true" fence="true" form="prefix">[</mo><msup><mi>sin</mi><mn>2</mn></msup><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><mo stretchy="false">&#x00028;</mo><msubsup><mi>x</mi><mn>1</mn><mn>2</mn></msubsup><mo>&#x0002B;</mo><msubsup><mi>x</mi><mn>2</mn><mn>2</mn></msubsup><mo stretchy="false">&#x00029;</mo><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><msup><mi>sin</mi><mn>2</mn></msup><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><mo stretchy="false">&#x00028;</mo><msubsup><mi>x</mi><mn>3</mn><mn>2</mn></msubsup><mo>&#x0002B;</mo><msubsup><mi>x</mi><mn>4</mn><mn>2</mn></msubsup><mo stretchy="false">&#x00029;</mo><mo stretchy="false">&#x00029;</mo><mo stretchy="true" fence="true" form="postfix">]</mo></mrow><mo stretchy="true" fence="true" form="postfix">&#x00029;</mo></mrow></mrow></math>，能通过一个[4, 4, 2, 1] KAN进行表示。</p>
<p>我们通过每隔200步增加网格点的方式训练这些KAN，总共覆盖了<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>G</mi><mo>&#x0003D;</mo><mo stretchy="false">&#x0007B;</mo><mn>3</mn><mo>&#x0002C;</mo><mn>5</mn><mo>&#x0002C;</mo><mn>10</mn><mo>&#x0002C;</mo><mn>20</mn><mo>&#x0002C;</mo><mn>50</mn><mo>&#x0002C;</mo><mn>100</mn><mo>&#x0002C;</mo><mn>200</mn><mo>&#x0002C;</mo><mn>500</mn><mo>&#x0002C;</mo><mn>1000</mn><mo stretchy="false">&#x0007D;</mo></mrow></math>。同时，我们使用不同深度和宽度的MLP作为基线进行训练。MLP和KAN都采用LBFGS优化器进行1800步的训练。在图3.1中，我们绘制了KAN和MLP的测试RMSE随参数数量变化的曲线，显示出KAN的缩放曲线比MLP更优，特别是在高维示例中。为了比较，我们以红色虚线标出了从我们的KAN理论预测出的线条（<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003B1;</mi><mo>&#x0003D;</mo><mi>k</mi><mo>&#x0002B;</mo><mn>1</mn><mo>&#x0003D;</mo><mn>4</mn></mrow></math>），以及根据Sharma &amp; Kaplan [23]预测出的黑虚线（<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003B1;</mi><mo>&#x0003D;</mo><mo stretchy="false">&#x00028;</mo><mi>k</mi><mo>&#x0002B;</mo><mn>1</mn><mo stretchy="false">&#x00029;</mo><mo>&#x0002F;</mo><mi>d</mi><mo>&#x0003D;</mo><mn>4</mn><mo>&#x0002F;</mo><mi>d</mi></mrow></math>）。KAN几乎可以达到较陡峭的红色线条的饱和点，而MLP即使要收敛到较缓慢的黑色线条的速度也显得困难，并且很快就达到了性能平台期。另外，我们注意到在最后一个例子中，2层的KAN[4, 9, 1]的表现远逊于3层的KAN（形态为[4, 2, 2, 1]）。这突显了更深KAN的更强表达能力，对于MLP而言亦是如此：更深的MLP具有比浅层MLP更强的表达力。需要注意的是，在此基础设置下，我们未采用高级技术，即KAN和MLP都仅用LBFGS进行训练，未在Adam与LBFGS之间切换或使用增强方法[34]。未来工作中，我们将进一步探讨在采用这些高级技术的情况下KAN和MLP的对比。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Special functions`,
                    msg: String.raw`<div class="markdown-body"><p>One caveat for the above results is that we assume knowledge of the "true" KAN shape. In practice, we do not know the existence of KA representations. Even when we are promised that such a KA representation exists, we do not know the KAN shape a priori. Special functions in more than one variables are such cases, because it would be (mathematically) surprising if multivariate special functions (e.g., a Bessel function f (ν, x) = J ν (x)) could be written in KA represenations, involving only univariate functions and sums). We show below that:   1.
(1) Finding (approximate) compact KA representations of special functions is possible, revealing novel mathematical properties of special functions from the perspective of Kolmogorov-Arnold representations.
(2) KANs are more efficient and accurate in representing special functions than MLPs.
We collect 15 special functions common in math and physics, summarized in Table 1. We choose MLPs with fixed width 5 or 100 and depths swept in {2, 3, 4, 5, 6}. We run KANs both with and without pruning. KANs without pruning: We fix the shape of KAN, whose width are set to 5 and depths are swept in {2,3,4,5,6}. KAN with pruning. We use the sparsification (λ = 10 -2 or 10 -3 ) and pruning technique in Section 2.5.1 to obtain a smaller KAN pruned from a fixed-shape KAN. Each KAN is initialized to have G = 3, trained with LBFGS, with increasing number of grid points every 200 steps to cover G = {3, 5, 10, 20, 50, 100, 200}. For each hyperparameter combination, we run 3 random seeds.
For each dataset and each model family (KANs or MLPs), we plot the Pareto frontier 5 , in the (number of parameters, RMSE) plane, shown in Figure 3 </p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Special functions`,
                    msg: String.raw`<div class="markdown-body"><p>上述结果的一个注意事项是，我们假设了解“真实”的KAN结构。实际上，我们并不知道KA表示的存在性。即使我们被保证存在这样的KA表示，我们也无法事先知道KAN的形状。多个变量的特殊函数就是这种情况，因为从数学角度讲，令人惊讶的是多变量特殊函数（例如，Bessel函数 (f(ν, x) = J_ν(x))）能够用仅包含单变量函数和求和的KA表示来书写。以下我们将展示：</p>
<ol>
<li><strong>寻找（近似）紧凑的KA表示特殊函数是可能的</strong>，这从Kolmogorov-Arnold表示的角度揭示了特殊函数的新颖数学性质。</li>
<li>在表示特殊函数方面，KAN相较于MLP更为高效与精确。</li>
</ol>
<p>我们搜集了数学和物理学中常见的15个特殊函数，并在表1中进行了汇总。我们选择了固定宽度为5或100的MLP，其深度遍历于({2, 3, 4, 5, 6})。对于KAN，我们分别在有无修剪的情况下进行实验。无修剪KAN：我们固定KAN的结构，其宽度设定为5，深度同样遍历于({2,3,4,5,6})。在进行修剪的KAN中，我们采用了节2.5.1中的稀疏化技术（(\lambda = 10^{-2})或(\lambda = 10^{-3})）和修剪技巧，从固定形状的KAN中得到一个更小的修剪后的KAN。每个KAN初始化时设G=3，使用LBFGS进行训练，并且每200步增加网格点的数量以覆盖G=( {3, 5, 10, 20, 50, 100, 200} )。对于每组超参数组合，我们都运行3个随机种子。</p>
<p>针对每个数据集和每个模型家族（KAN或MLP），我们在（参数数量，均方根误差RMSE）平面上绘制了帕累托前沿，并在图3中展示。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Feynman datasets Part-1`,
                    msg: String.raw`<div class="markdown-body"><p>The setup in Section 3.1 is when we clearly know "true" KAN shapes. The setup in Section 3.2 is when we clearly do not know "true" KAN shapes. This part investigates a setup lying in the middle:  I.9.18 b,c,d,e    I.27.6 I.29.16  I.37.4    Given the structure of the dataset, we may construct KANs by hand, but we are not sure if they are optimal. In this regime, it is interesting to compare human-constructed KANs and auto-discovered KANs via pruning (techniques in Section 2.5.1).
I.6.2b exp(-(θ-θ1) 2 2σ2 )/ √ 2πσ 2 exp(-(θ-θ1) 2 2σ2 )/ √ 2πσ 2 θ, θ1, σ [3,2,2,1,1] [3,4,1] [3,2,2,1,1] 1.22 × 10 -
Gm1m2 (x2-x1)2+(y2-y1)2+(z2-z1)2 a (b-1)2+(c-d)2+(e-f )2 a,
I.13.12 Gm1m2( 1 r2 -1 r1 ) a( 1 b -1) a, b [2,2,1] [2,2,1] [2,2,1] 7.22 × 10 -3 4.81 × 10 -3 2.72 × 10 -3 1.42 × 10 -3 I.15.3x x-ut √ 1-( u c )2 1-a √ 1-b2 a, b [2,2,1,1] [2,1,1] [2,2,1,
I.18.4 m1r1+m2r2 m1+m2 1+ab 1+a a, b[2,2,2,1,1] [2,2,1] [2,2,1] 3
1 1 d1 + n d2 1 1+ab a, b[2,2,1,1] [2,1,1] [2,1,1] 2
x 2 1 + x 2 2 -2x1x2cos(θ1 -θ2) 1 + a 2 -2acos(θ1 -θ2) a, θ1, θ2[3,2,2,3,2,1,1] [3,2,2,1] [3,2,3,1] 2.36
I.30.3 I * ,0 sin 2 ( nθ 2 ) sin2( θ 2 ) sin 2 ( nθ 2 ) sin2( θ 2 ) n, θ[2,3,2,2,1,1] [2,4,3,1] [2,3,2,3,1,1] 3
.85 × 10 -1 1.03 × 10 -3 1.11 × 10 -2 1.50 × 10 -2 I.30.5 arcsin( λ nd ) arcsin( a n ) a, n[2,1,1] [2,1,1] [2,1,1,1,1,1] 2.23
I * = I1 + I2 + 2 √ I1I2cosδ 1 + a + 2 √ acosδ a, δ[2,3,2,1] [2,2,1] [2,2,1] 7.57
I.40.1 n0exp(-mgx kbT ) n0e -a n0, a[2,1,1] [2,2,1] [2,2,1,1,1,2,1] 3
a sin 2 ( b-c 2 ) ( b-c 2 )2 a, b, c [3,2,3,1,1] [3,3,2,1] [3,3,2,1,</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Feynman datasets`,
                    msg: String.raw`<div class="markdown-body"><p>第三部分的设置旨在探讨一种介于清晰知晓"真实"KAN结构与完全未知该结构之间的场景。具体到以下几个实例：</p>
<ol>
<li><strong>I.9.18 b,c,d,e</strong>: 这些实例允许我们手动构建KAN，但我们不确定它们是否是最优的。</li>
<li><strong>I.27.6</strong>, <strong>I.29.16</strong>, <strong>I.37.4</strong>: 考虑到数据集的特性，虽然可以人工设计KAN，但最优性不明确。</li>
</ol>
<p>在这一情境下，对比人类构建的KAN与通过剪枝技术（如第2.5.1节所述自动发现的KAN）变得尤为有趣。</p>
<p><strong>实例展示：</strong></p>
<ul>
<li>
<p><strong>I.6.2b</strong>: 根据高斯函数的形状，参数包括<font color="#00FF00">$</font><font color="#FF00FF">\theta, \theta_1, \sigma</font><font color="#00FF00">$</font>，其配置有[3,2,2,1,1]、[3,4,1]、[3,2,2,1,1]，表征不同维度的参数设置及其性能，如<font color="#00FF00">$</font><font color="#FF00FF">1.22\times10^{-5}</font><font color="#00FF00">$</font>所示。</p>
</li>
<li>
<p><strong>引力相关示例</strong>（G为引力常数，m1/m2为质量，坐标差表示），展示了在特定参数配置下的数值结果，比如<font color="#00FF00">$</font><font color="#FF00FF">7.22\times10^{-3}</font><font color="#00FF00">$</font>至<font color="#00FF00">$</font><font color="#FF00FF">1.42\times10^{-3}</font><font color="#00FF00">$</font>。</p>
</li>
<li>
<p><strong>相对论效应示例</strong>（如洛伦兹变换中<font color="#00FF00">$</font><font color="#FF00FF">x</font><font color="#00FF00">$</font>的表达式），涉及速度<font color="#00FF00">$</font><font color="#FF00FF">v</font><font color="#00FF00">$</font>相对于光速<font color="#00FF00">$</font><font color="#FF00FF">c</font><font color="#00FF00">$</font>的比例以及系数<font color="#00FF00">$</font><font color="#FF00FF">a, b</font><font color="#00FF00">$</font>的调整，表明了不同的维度配置对于计算结果的影响。</p>
</li>
<li>
<p><strong>质点间作用力的平均</strong>和<strong>距离平方反比法则</strong>，通过不同参数配置探索两个向量的加权求和问题，展示了不同结构下的精确度。</p>
</li>
<li>
<p><strong>角度相关函数</strong>，如涉及余弦函数比较角度的示例，体现了不同维度的配置对于复杂三角关系建模的重要性，性能指标从<font color="#00FF00">$</font><font color="#FF00FF">2.36</font><font color="#00FF00">$</font>变化说明了模型适应性的差异。</p>
</li>
<li>
<p><strong>正弦波形叠加</strong>（如I.30.3所示）探讨了频率、相位等参数对于波形表达的影响，以及通过不同结构配置所能达到的最佳拟合效果。</p>
</li>
<li>
<p><strong>反正弦函数应用</strong>（例如I.30.5）和<strong>干涉强度合并</strong>实例，展示了对于不同参数<font color="#00FF00">$</font><font color="#FF00FF">a, \delta</font><font color="#00FF00">$</font>或<font color="#00FF00">$</font><font color="#FF00FF">n, \lambda</font><font color="#00FF00">$</font>的处理能力，以及不同网络结构在此类问题上的效率。</p>
</li>
<li>
<p><strong>指数衰减</strong>（如I.40.1所示的斯特恩-瓦拉赫因子）及<strong>周期性函数</strong>（如<font color="#00FF00">$</font><font color="#FF00FF">a\sin^2(b-c)/2)</font><font color="#00FF00">$</font>的平方），通过多样化的参数配置和结构展现了模型对这些自然现象描述的灵活性。</p>
</li>
</ul>
<p>综上，每个实例都以独特的数学或物理表达形式挑战KAN的设计与优化，通过比较人工构造与自动发现的KAN，不仅可以评估这些网络在复杂函数近似上的表现，也揭示出它们在科学探索中作为有力辅助工具的潜力。</p><hr /><p>第三部分的设置旨在探讨一种介于清晰知晓"真实"KAN结构与完全未知该结构之间的场景。具体到以下几个实例：</p>
<ol>
<li><strong>I.9.18 b,c,d,e</strong>: 这些实例允许我们手动构建KAN，但我们不确定它们是否是最优的。</li>
<li><strong>I.27.6</strong>, <strong>I.29.16</strong>, <strong>I.37.4</strong>: 考虑到数据集的特性，虽然可以人工设计KAN，但最优性不明确。</li>
</ol>
<p>在这一情境下，对比人类构建的KAN与通过剪枝技术（如第2.5.1节所述自动发现的KAN）变得尤为有趣。</p>
<p><strong>实例展示：</strong></p>
<ul>
<li>
<p><strong>I.6.2b</strong>: 根据高斯函数的形状，参数包括<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003B8;</mi><mo>&#x0002C;</mo><msub><mi>&#x003B8;</mi><mn>1</mn></msub><mo>&#x0002C;</mo><mi>&#x003C3;</mi></mrow></math>，其配置有[3,2,2,1,1]、[3,4,1]、[3,2,2,1,1]，表征不同维度的参数设置及其性能，如<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mn>1.22</mn><mi>&#x000D7;</mi><msup><mn>10</mn><mrow><mo>&#x02212;</mo><mn>5</mn></mrow></msup></mrow></math>所示。</p>
</li>
<li>
<p><strong>引力相关示例</strong>（G为引力常数，m1/m2为质量，坐标差表示），展示了在特定参数配置下的数值结果，比如<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mn>7.22</mn><mi>&#x000D7;</mi><msup><mn>10</mn><mrow><mo>&#x02212;</mo><mn>3</mn></mrow></msup></mrow></math>至<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mn>1.42</mn><mi>&#x000D7;</mi><msup><mn>10</mn><mrow><mo>&#x02212;</mo><mn>3</mn></mrow></msup></mrow></math>。</p>
</li>
<li>
<p><strong>相对论效应示例</strong>（如洛伦兹变换中<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>x</mi></mrow></math>的表达式），涉及速度<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>v</mi></mrow></math>相对于光速<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>c</mi></mrow></math>的比例以及系数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>a</mi><mo>&#x0002C;</mo><mi>b</mi></mrow></math>的调整，表明了不同的维度配置对于计算结果的影响。</p>
</li>
<li>
<p><strong>质点间作用力的平均</strong>和<strong>距离平方反比法则</strong>，通过不同参数配置探索两个向量的加权求和问题，展示了不同结构下的精确度。</p>
</li>
<li>
<p><strong>角度相关函数</strong>，如涉及余弦函数比较角度的示例，体现了不同维度的配置对于复杂三角关系建模的重要性，性能指标从<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mn>2.36</mn></mrow></math>变化说明了模型适应性的差异。</p>
</li>
<li>
<p><strong>正弦波形叠加</strong>（如I.30.3所示）探讨了频率、相位等参数对于波形表达的影响，以及通过不同结构配置所能达到的最佳拟合效果。</p>
</li>
<li>
<p><strong>反正弦函数应用</strong>（例如I.30.5）和<strong>干涉强度合并</strong>实例，展示了对于不同参数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>a</mi><mo>&#x0002C;</mo><mi>&#x003B4;</mi></mrow></math>或<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>n</mi><mo>&#x0002C;</mo><mi>&#x003BB;</mi></mrow></math>的处理能力，以及不同网络结构在此类问题上的效率。</p>
</li>
<li>
<p><strong>指数衰减</strong>（如I.40.1所示的斯特恩-瓦拉赫因子）及<strong>周期性函数</strong>（如<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>a</mi><msup><mi>sin</mi><mn>2</mn></msup><mo stretchy="false">&#x00028;</mo><mi>b</mi><mo>&#x02212;</mo><mi>c</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0002F;</mo><mn>2</mn><mo stretchy="false">&#x00029;</mo></mrow></math>的平方），通过多样化的参数配置和结构展现了模型对这些自然现象描述的灵活性。</p>
</li>
</ul>
<p>综上，每个实例都以独特的数学或物理表达形式挑战KAN的设计与优化，通过比较人工构造与自动发现的KAN，不仅可以评估这些网络在复杂函数近似上的表现，也揭示出它们在科学探索中作为有力辅助工具的潜力。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Feynman datasets Part-2`,
                    msg: String.raw`<div class="markdown-body"><p>Feynman dataset. The Feynman dataset collects many physics equations from Feynman's textbooks [35,36]. For our purpose, we are interested in problems in the Feynman_no_units dataset that have at least 2 variables, since univariate problems are trivial for KANs (they simplify to 1D splines). A sample equation from the Feynman dataset is the relativisic velocity addition formula
f (u, v) = (u + v)/(1 + uv). (3.1)
The dataset can be constructed by randomly drawing u i ∈ (-1, 1), v i ∈ (-1, 1), and computing
f i = f (u i , v i ).
Given many tuples (u i , v i , f i ), a neural network is trained and aims to predict f from u and v. We are interested in (1) how well a neural network can perform on test samples; (2) how much we can learn about the structure of the problem from neural networks.
We compare four kinds of neural networks:
(1) Human-constructued KAN. Given a symbolic formula, we rewrite it in Kolmogorov-Arnold representations. For example, to multiply two numbers x and y, we can use the identity xy = (x+y) 2  4
-(x-y) 2 4
, which corresponds to a [2, 2, 1] KAN. The constructued shapes are listed in the "Human-constructed KAN shape" in Table 2.
(2) KANs without pruning. We fix the KAN shape to width 5 and depths are swept over {2,3,4,5,6}.
(3) KAN with pruning. We use the sparsification (λ = 10 -2 or 10 -3 ) and the pruning technique from Section 2.5.1 to obtain a smaller KAN from a fixed-shape KAN from (2).
(4) MLPs with fixed width 5, depths swept in {2, 3, 4, 5, 6}, and activations chosen from {Tanh, ReLU, SiLU}.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Feynman datasets`,
                    msg: String.raw`<div class="markdown-body"><p>费曼数据集。费曼数据集从费曼的教科书[35,36]中收集了许多物理学方程。针对我们的目的，我们对费曼_no_units数据集中的问题感兴趣，这些问题是至少包含2个变量的，因为对于KANs来说，单一变量问题较为简单（它们简化为一维样条）。费曼数据集中一个示例方程是相对论速度相加公式
f(u, v) = (u + v) / (1 + uv)。 (3.1)
该数据集通过随机抽取 u_i ∈ (-1, 1)，v_i ∈ (-1, 1)，并计算
f_i = f(u_i, v_i)
来构造。给定许多元组(u_i, v_i, f_i)，神经网络接受训练，旨在根据u和v预测f。我们关注两方面：(1) 神经网络在测试样本上的表现如何；(2) 我们能从神经网络中学习到关于问题结构的多少信息。</p>
<p>我们比较了四种类型的神经网络：
(1) 人工构造的KAN。给定一个符号公式，我们将其重写为柯尔莫戈洛夫-阿诺德表示法。例如，为了乘以两个数x和y，我们可以使用恒等式xy = [(x+y)^2 - (x-y)^2] / 4，这对应于一个[2, 2, 1]的KAN。构造的形状列表见表2中的“人工构造的KAN形状”。
(2) 未经修剪的KAN。我们将KAN的形状固定宽度为5，深度遍历{2,3,4,5,6}。
(3) 经过修剪的KAN。我们使用来自第2.5.1节的稀疏化技术（λ=10^-2或10^-3）和剪枝技术，从(2)中固定形状的KAN中得到更小的KAN。
(4) 固定宽度为5的MLP，深度遍历{2, 3, 4, 5, 6}，激活函数选择自{Tanh, ReLU, SiLU}。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Feynman datasets Part-3`,
                    msg: String.raw`<div class="markdown-body"><p>Each KAN is initialized to have G = 3, trained with LBFGS, with increasing number of grid points every 200 steps to cover G = {3, 5, 10, 20, 50, 100, 200}. For each hyperparameter combination, we try 3 random seeds. For each dataset (equation) and each method, we report the results of the best model (minimal KAN shape, or lowest test loss) over random seeds and depths in Table 2. We find that MLPs and KANs behave comparably on average. For each dataset and each model family (KANs or MLPs), we plot the Pareto frontier in the plane spanned by the number of parameters and RMSE losses, shown in Figure D.1 in Appendix D. We conjecture that the Feynman datasets are too simple to let KANs make further improvements, in the sense that variable dependence is usually smooth or monotonic, which is in contrast to the complexity of special functions which often demonstrate oscillatory behavior.
Auto-discovered KANs are smaller than human-constructed ones. We report the pruned KAN shape in two columns of Table 2 Consider the relativistic velocity composition f (u, v) = u+v 1+uv , for example. Our construction is quite deep because we were assuming that multiplication of u, v would use two layers (see Figure 4.1 (a)), inversion of 1 + uv would use one layer, and multiplication of u + v and 1/(1 + uv) would use another two layers 6 , resulting a total of 5 layers. However, the auto-discovered KANs are only 2 layers deep! In hindsight, this is actually expected if we recall the rapidity trick in relativity: define the two "rapidities" a ≡ arctanh u and b ≡ arctanh v. The relativistic composition of velocities are simple additions in rapidity space, i.e., u+v 1+uv = tanh(arctanh u + arctanh v), which can be realized by a two-layer KAN. Pretending we do not know the notion of rapidity in physics, we could potentially discover this concept right from KANs without trial-and-error symbolic manipulations. The interpretability of KANs which can facilitate scientific discovery is the main topic in Section 4.   </p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Feynman datasets`,
                    msg: String.raw`<div class="markdown-body"><p>每个KAN初始设置为G=3，采用LBFGS算法进行训练，并且每200步增加网格点的数量以覆盖G={3, 5, 10, 20, 50, 100, 200}。对于每组超参数组合，我们尝试使用3个随机种子。针对每个数据集（方程）和每种方法，我们在表2中报告了在随机种子和深度上表现最佳的模型结果（最小的KAN结构或最低的测试损失）。我们发现，平均而言，MLP和KAN的表现相当。针对每个数据集及每种模型家族（KAN或MLP），我们在由参数数量和RMSE损失所定义的平面上绘制了帕累托前沿，如附录D中的图D.1所示。我们推测，Feynman数据集过于简单，以至于无法让KAN进一步提升性能，因为变量之间的依赖关系通常较为平滑或单调，这与特殊函数常见的振荡行为复杂性不符。</p>
<p>自动生成的KAN比人为构建的更为精简。我们在表2的两列中展示了剪枝后的KAN结构。以相对论速度合成函数(f(u, v) = \frac{u+v}{1+uv})为例，我们的人工构建相当深，因为我们假设u和v的乘法需要用到两层（见图4.1(a)），(1 + uv)的求逆需要一层，而(\frac{u + v}{1/(1 + uv)})的计算又需另外两层，总共5层。然而，自动发现的KAN仅需两层！事后看来，如果考虑到相对论中的“迅速度”技巧，这是可以预料到的：定义两个“迅速度”(a \equiv \text{arctanh}(u))和(b \equiv \text{arctanh}(v))。相对论中的速度合成在迅速度空间中是简单的相加操作，即(\frac{u+v}{1+uv} = \tanh(\text{arctanh}(u) + \text{arctanh}(v)))，这可以通过一个两层的KAN实现。假设我们在物理学中不了解“迅速度”的概念，我们可能直接通过KAN而不必经过试错式的符号操作就发现这一概念。KAN的可解释性，以及其促进科学发现的能力，是第四节的主要讨论话题。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Solving partial differential equations`,
                    msg: String.raw`<div class="markdown-body"><p>We consider a Poisson equation with zero Dirichlet boundary data.
For Ω = [-1, 1] 2 , consider the PDE u xx + u yy = f in Ω , u = 0 on ∂Ω . (3.2)
We consider the data f = -π 2 (1 + 4y 2 ) sin(πx) sin(πy 2 ) + 2π sin(πx) cos(πy 2 ) for which u = sin(πx) sin(πy 2 ) is the true solution. We use the framework of physics-informed neural networks (PINNs) [37,38] to solve this PDE, with the loss function given by
loss pde = αloss i + loss b := α 1 n i ni i=1 |u xx (z i ) + u yy (z i ) -f (z i )| 2 + 1 n b n b i=1 u 2 ,
where we use loss i to denote the interior loss, discretized and evaluated by a uniform sampling of n i points z i = (x i , y i ) inside the domain, and similarly we use loss b to denote the boundary loss, discretized and evaluated by a uniform sampling of n b points on the boundary. α is the hyperparameter balancing the effect of the two terms.
We compare the KAN architecture with that of MLPs using the same hyperparameters n i = 10000, n b = 800, and α = 0.01. We measure both the error in the L 2 norm and energy (H 1 ) norm and see that KAN achieves a much better scaling law with a smaller error, using smaller networks and fewer parameters; see Figure 3.3. A 2-Layer width-10 KAN is 100 times more accurate than a 4-Layer width-100 MLP (10 -7 vs 10 -5 MSE) and 100 times more parameter efficient (10 2 vs 10 4 parameters). Therefore we speculate that KANs might have the potential of serving as a good neural network representation for model reduction of PDEs. However, we want to note that our implementation of KANs are typically 10x slower than MLPs to train. The ground truth being a symbolic formula might be an unfair comparison for MLPs since KANs are good at representing symbolic formulas. In general, KANs and MLPs are good at representing different function classes of PDE solutions, which needs detailed future study to understand their respective boundaries.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Solving partial differential equations`,
                    msg: String.raw`<div class="markdown-body"><p>我们考虑一个具有零Dirichlet边界条件的泊松方程。设 Ω = [-1, 1]²，考虑在区域Ω内的偏微分方程 u_{xx} + u_{yy} = f，以及边界上u = 0的条件 (∂Ω)。具体地，考虑数据 <font color="#00FF00">$</font><font color="#FF00FF">f = -\pi^2(1 + 4y^2)\sin(\pi x)\sin(\pi y^2) + 2\pi\sin(\pi x)\cos(\pi y^2)</font><font color="#00FF00">$</font>，对于这组数据，<font color="#00FF00">$</font><font color="#FF00FF">u = \sin(\pi x)\sin(\pi y^2)</font><font color="#00FF00">$</font> 是真实的解。我们采用物理信息神经网络(PINNs)的框架<font color="#00FF00">$$</font><font color="#FF00FF">37,38]</font><font color="#00FF00">$$</font>来求解该PDE，其中损失函数定义为
<font color="#00FF00">$$</font><font color="#FF00FF">loss_{pde} = \alpha loss_i + loss_b := \alpha\frac{1}{n_i}\sum_{i=1}^{n_i} |u_{xx}(z_i) + u_{yy}(z_i) - f(z_i)|^2 + \frac{1}{n_b}\sum_{i=1}^{n_b} u(z_b)^2</font><font color="#00FF00">$$</font>，
这里使用<font color="#00FF00">$</font><font color="#FF00FF">loss_i</font><font color="#00FF00">$</font>表示内部域损失，通过对区域内部均匀采样<font color="#00FF00">$</font><font color="#FF00FF">n_i</font><font color="#00FF00">$</font>个点<font color="#00FF00">$</font><font color="#FF00FF">z_i = (x_i, y_i)</font><font color="#00FF00">$</font>进行离散化和评估；类似地，使用<font color="#00FF00">$</font><font color="#FF00FF">loss_b</font><font color="#00FF00">$</font>表示边界损失，通过对边界上均匀采样的<font color="#00FF00">$</font><font color="#FF00FF">n_b</font><font color="#00FF00">$</font>个点进行评估。超参数α用于平衡两项的影响。</p>
<p>我们使用相同的超参数<font color="#00FF00">$</font><font color="#FF00FF">n_i = 10000</font><font color="#00FF00">$</font>、<font color="#00FF00">$</font><font color="#FF00FF">n_b = 800</font><font color="#00FF00">$</font>及<font color="#00FF00">$</font><font color="#FF00FF">\alpha = 0.01</font><font color="#00FF00">$</font>，比较了KAN架构与MLP的性能。我们分别衡量了L²范数和能量(H¹范数)的误差，并观察到在更小的网络规模和较少参数的情况下，KAN取得了更好的缩放律并实现了更低的误差；详细情况见图3.3。一个两层宽度为10的KAN模型相比四层宽度为100的MLP模型，在精度上高出百倍（即平均平方误差MSE为10^-7对比10^-5），且参数效率高出百倍（参数量为10^2对比10^4）。因此，我们推测KAN有潜力作为偏微分方程模型约简的良好神经网络表示法。但值得注意的是，我们实现的KAN训练速度通常比MLP慢大约10倍。考虑到真实解是一个符号公式，这对MLP而言可能不太公平，因为KAN擅长表征符号公式。总的来说，KAN和MLP各自擅长表示不同类型的PDE解函数类，这一区别需要通过未来详尽的研究来深入理解。</p><hr /><p>我们考虑一个具有零Dirichlet边界条件的泊松方程。设 Ω = [-1, 1]²，考虑在区域Ω内的偏微分方程 u_{xx} + u_{yy} = f，以及边界上u = 0的条件 (∂Ω)。具体地，考虑数据 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo>&#x0003D;</mo><mo>&#x02212;</mo><msup><mi>&#x003C0;</mi><mn>2</mn></msup><mo stretchy="false">&#x00028;</mo><mn>1</mn><mo>&#x0002B;</mo><mn>4</mn><msup><mi>y</mi><mn>2</mn></msup><mo stretchy="false">&#x00029;</mo><mi>sin</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><mi>x</mi><mo stretchy="false">&#x00029;</mo><mi>sin</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><msup><mi>y</mi><mn>2</mn></msup><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><mn>2</mn><mi>&#x003C0;</mi><mi>sin</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><mi>x</mi><mo stretchy="false">&#x00029;</mo><mi>cos</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><msup><mi>y</mi><mn>2</mn></msup><mo stretchy="false">&#x00029;</mo></mrow></math>，对于这组数据，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>u</mi><mo>&#x0003D;</mo><mi>sin</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><mi>x</mi><mo stretchy="false">&#x00029;</mo><mi>sin</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><msup><mi>y</mi><mn>2</mn></msup><mo stretchy="false">&#x00029;</mo></mrow></math> 是真实的解。我们采用物理信息神经网络(PINNs)的框架<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mn>37</mn><mo>&#x0002C;</mo><mn>38</mn><mo stretchy="false">]</mo></mrow></math>来求解该PDE，其中损失函数定义为
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>l</mi><mi>o</mi><mi>s</mi><msub><mi>s</mi><mrow><mi>p</mi><mi>d</mi><mi>e</mi></mrow></msub><mo>&#x0003D;</mo><mi>&#x003B1;</mi><mi>l</mi><mi>o</mi><mi>s</mi><msub><mi>s</mi><mi>i</mi></msub><mo>&#x0002B;</mo><mi>l</mi><mi>o</mi><mi>s</mi><msub><mi>s</mi><mi>b</mi></msub><mi>:</mi><mo>&#x0003D;</mo><mi>&#x003B1;</mi><mfrac><mrow><mn>1</mn></mrow><mrow><msub><mi>n</mi><mi>i</mi></msub></mrow></mfrac><msubsup><mo>&#x02211;</mo><mrow><mi>i</mi><mo>&#x0003D;</mo><mn>1</mn></mrow><mrow><msub><mi>n</mi><mi>i</mi></msub></mrow></msubsup><mo stretchy="false">&#x0007C;</mo><msub><mi>u</mi><mrow><mi>x</mi><mi>x</mi></mrow></msub><mo stretchy="false">&#x00028;</mo><msub><mi>z</mi><mi>i</mi></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><msub><mi>u</mi><mrow><mi>y</mi><mi>y</mi></mrow></msub><mo stretchy="false">&#x00028;</mo><msub><mi>z</mi><mi>i</mi></msub><mo stretchy="false">&#x00029;</mo><mo>&#x02212;</mo><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>z</mi><mi>i</mi></msub><mo stretchy="false">&#x00029;</mo><msup><mo stretchy="false">&#x0007C;</mo><mn>2</mn></msup><mo>&#x0002B;</mo><mfrac><mrow><mn>1</mn></mrow><mrow><msub><mi>n</mi><mi>b</mi></msub></mrow></mfrac><msubsup><mo>&#x02211;</mo><mrow><mi>i</mi><mo>&#x0003D;</mo><mn>1</mn></mrow><mrow><msub><mi>n</mi><mi>b</mi></msub></mrow></msubsup><mi>u</mi><mo stretchy="false">&#x00028;</mo><msub><mi>z</mi><mi>b</mi></msub><msup><mo stretchy="false">&#x00029;</mo><mn>2</mn></msup></mrow></math>，
这里使用<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>l</mi><mi>o</mi><mi>s</mi><msub><mi>s</mi><mi>i</mi></msub></mrow></math>表示内部域损失，通过对区域内部均匀采样<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>n</mi><mi>i</mi></msub></mrow></math>个点<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>z</mi><mi>i</mi></msub><mo>&#x0003D;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mi>i</mi></msub><mo>&#x0002C;</mo><msub><mi>y</mi><mi>i</mi></msub><mo stretchy="false">&#x00029;</mo></mrow></math>进行离散化和评估；类似地，使用<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>l</mi><mi>o</mi><mi>s</mi><msub><mi>s</mi><mi>b</mi></msub></mrow></math>表示边界损失，通过对边界上均匀采样的<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>n</mi><mi>b</mi></msub></mrow></math>个点进行评估。超参数α用于平衡两项的影响。</p>
<p>我们使用相同的超参数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>n</mi><mi>i</mi></msub><mo>&#x0003D;</mo><mn>10000</mn></mrow></math>、<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>n</mi><mi>b</mi></msub><mo>&#x0003D;</mo><mn>800</mn></mrow></math>及<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003B1;</mi><mo>&#x0003D;</mo><mn>0.01</mn></mrow></math>，比较了KAN架构与MLP的性能。我们分别衡量了L²范数和能量(H¹范数)的误差，并观察到在更小的网络规模和较少参数的情况下，KAN取得了更好的缩放律并实现了更低的误差；详细情况见图3.3。一个两层宽度为10的KAN模型相比四层宽度为100的MLP模型，在精度上高出百倍（即平均平方误差MSE为10^-7对比10^-5），且参数效率高出百倍（参数量为10^2对比10^4）。因此，我们推测KAN有潜力作为偏微分方程模型约简的良好神经网络表示法。但值得注意的是，我们实现的KAN训练速度通常比MLP慢大约10倍。考虑到真实解是一个符号公式，这对MLP而言可能不太公平，因为KAN擅长表征符号公式。总的来说，KAN和MLP各自擅长表示不同类型的PDE解函数类，这一区别需要通过未来详尽的研究来深入理解。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Continual Learning`,
                    msg: String.raw`<div class="markdown-body"><p>Catastrophic forgetting is a serious problem in current machine learning [39]. When a human masters a task and switches to another task, they do not forget how to perform the first task. Unfortunately, this is not the case for neural networks. When a neural network is trained on task 1 and then shifted to being trained on task 2, the network will soon forget about how to perform task 1. A key difference between artificial neural networks and human brains is that human brains have function- ally distinct modules placed locally in space. When a new task is learned, structure re-organization only occurs in local regions responsible for relevant skills [40,41], leaving other regions intact. Most artificial neural networks, including MLPs, do not have this notion of locality, which is probably the reason for catastrophic forgetting.
We show that KANs have local plasticity and can avoid catastrophic forgetting by leveraging the locality of splines. The idea is simple: since spline bases are local, a sample will only affect a few nearby spline coefficients, leaving far-away coefficients intact (which is desirable since faraway regions may have already stored information that we want to preserve). By contrast, since MLPs usually use global activations, e.g., ReLU/Tanh/SiLU etc., any local change may propagate uncontrollably to regions far away, destroying the information being stored there. We use a toy example to validate this intuition. The 1D regression task is composed of 5 Gaussian peaks. Data around each peak is presented sequentially (instead of all at once) to KANs and MLPs, as shown in Figure 3.4 top row. KAN and MLP predictions after each training phase are shown in the middle and bottom rows. As expected, KAN only remodels regions where data is present on in the current phase, leaving previous regions unchanged. By contrast, MLPs remodels the whole region after seeing new data samples, leading to catastrophic forgetting.
Here we simply present our preliminary results on an extremely simple example, to demonstrate how one could possibly leverage locality in KANs (thanks to spline parametrizations) to reduce catastrophic forgetting. However, it remains unclear whether our method can generalize to more realistic setups, especially in high-dimensional cases where it is unclear how to define "locality". In future work, We would also like to study how our method can be connected to and combined with SOTA methods in continual learning [42,43].</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Continual Learning`,
                    msg: String.raw`<div class="markdown-body"><p><strong>持续学习章节</strong></p>
<p>当前机器学习领域中，灾难性遗忘是一个严重问题[39]。当人类掌握一项任务并转向另一项任务时，他们不会忘记如何执行最初的任务。不幸的是，神经网络并非如此。当神经网络在任务1上训练完成后转而针对任务2进行训练时，该网络很快就会忘记如何执行任务1。人工神经网络与人脑之间的关键差异在于，人脑具有功能上独特且空间上局部分布的模块。学习新任务时，结构重组仅发生在负责相关技能的局部区域中[40,41]，其他区域保持不变。大多数人工神经网络，包括多层感知器（MLPs），不具备这种局部性的概念，这可能是导致灾难性遗忘的原因。</p>
<p>我们展示KANs具有局部可塑性，并能通过利用样条函数的局部性来避免灾难性遗忘。这一想法很简单：由于样条基是局部的，一个样本只会改变少数邻近的样条系数，而远处的系数则保持不变（这是可取的，因为远处的区域可能已经存储了我们希望保留的信息）。相比之下，由于MLPs通常使用全局激活函数，如ReLU/Tanh/SiLU等，任何局部更改都可能不受控制地传播到远端区域，破坏了那里存储的信息。我们使用一个玩具示例来验证这一直觉。这个一维回归任务由5个高斯峰组成。每一轮训练中，每个峰周围的数据显示给KAN和MLP，如图3.4上行所示。KAN和MLP在每次训练阶段后的预测结果分别显示在中行和下行。不出所料，KAN仅仅重塑了当前阶段数据所在区域，而之前的区域保持不变。相反，MLP在看到新的数据样本后重新塑造了整个区域，从而导致了灾难性遗忘。</p>
<p>这里，我们仅在一个极其简单的示例上展示了初步成果，以证明如何可能利用KANs中的局部性（得益于样条参数化）来减少灾难性遗忘。然而，我们的方法是否能推广到更实际的设置中，特别是在高维情况下尚不清晰，因为在那里如何定义“局部性”并不明确。在未来的工作中，我们也希望研究我们的方法如何与持续学习领域的前沿技术[42,43]相连接并结合。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# KANs are interpretable`,
                    msg: String.raw`<div class="markdown-body"><p>In this section, we show that KANs are interpretable and interactive thanks to the techniques we developed in Section 2.5. We want to test the use of KANs not only on synthetic tasks (Section 4.1 and 4.2), but also in real-life scientific research. We demonstrate that KANs can (re)discover both highly non-trivial relations in knot theory (Section 4.3) and phase transition boundaries in condensed  (c) Numerical to categorical. The task is to convert a real number in [0, 1] to its first decimal digit (as one hots), e.g., 0.0618 → [1, 0, 0, 0, 0,
• • • ], 0.314 → [0, 0, 0, 1, 0, • • • ].
Notice that activation functions are learned to be spikes located around the corresponding decimal digits.
(d) Special function f (x, y) = exp(J 0 (20x) + y 2 ). One limitation of symbolic regression is that it will never find the correct formula of a special function if the special function is not provided as prior knowledge. KANs can learn special functions -the highly wiggly Bessel function J 0 (20x) is learned (numerically) by KAN.
(e) Phase transition f (x 1 , x 2 , x 3 ) = tanh(5(x 4 1 + x 4 2 + x 4 3 -1)). Phase transitions are of great interest in physics, so we want KANs to be able to detect phase transitions and to identify the correct order parameters. We use the tanh function to simulate the phase transition behavior, and the order parameter is the combination of the quartic terms of x 1 , x 2 , x 3 . Both the quartic dependence and tanh dependence emerge after KAN training. This is a simplified case of a localization phase transition discussed in Section 4.4.
(f) Deeper compositions f (x 1 , x 2 , x 3 , x 4 ) = (x 1 -x 2 ) 2 + (x 3 -x 4 ) 2 . To compute this, we would need the identity function, squared function, and square root, which requires at least a three-layer KAN. Indeed, we find that a </p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# KANs are interpretable`,
                    msg: String.raw`<div class="markdown-body"><p>在本节中，我们证明了KANs具备可解释性和互动性，这得益于我们在2.5节中开发的技术。我们不仅希望通过合成任务（第4.1节和4.2节）来测试KANs的使用，还希望将其应用于实际的科学研究中。我们展示了KANs能够在纽结理论中（第4.3节）发现高度非平凡的关系，并确定凝聚态物理学中的相变边界。</p>
<p>(c) 数值到类别转换。该任务是将[0, 1]区间内的实数转换为其小数点后第一位的数字表示（以独热编码形式），例如，0.0618 → [1, 0, 0, 0, 0,...]，0.314 → [0, 0, 0, 1, 0,...]。值得注意的是，激活函数被学习为围绕相应小数位的尖峰分布。</p>
<p>(d) 特殊函数<font color="#00FF00">$</font><font color="#FF00FF">f(x, y) = \exp(J_0(20x) + y^2)</font><font color="#00FF00">$</font>。符号回归的一个限制是，如果特殊的函数形式没有作为先验知识提供，它永远不会找到该特殊函数的正确表达式。KANs能够学习特殊函数——高度振荡的贝塞尔函数<font color="#00FF00">$</font><font color="#FF00FF">J_0(20x)</font><font color="#00FF00">$</font>通过KAN得以数值方式学习。</p>
<p>(e) 相变<font color="#00FF00">$</font><font color="#FF00FF">f(x_1, x_2, x_3) = \tanh(5(x_1^4 + x_2^4 + x_3^4 -1))</font><font color="#00FF00">$</font>。相变在物理学中极为重要，因此我们期望KANs能检测到相变并确定正确的序参量。我们使用<font color="#00FF00">$</font><font color="#FF00FF">\tanh</font><font color="#00FF00">$</font>函数来模拟相变行为，序参量是<font color="#00FF00">$</font><font color="#FF00FF">x_1</font><font color="#00FF00">$</font>、<font color="#00FF00">$</font><font color="#FF00FF">x_2</font><font color="#00FF00">$</font>、<font color="#00FF00">$</font><font color="#FF00FF">x_3</font><font color="#00FF00">$</font>四次项的组合。四次依赖性和<font color="#00FF00">$</font><font color="#FF00FF">\tanh</font><font color="#00FF00">$</font>依赖性都在KAN训练后显现出来。这是第4.4节讨论的一种局域化相变简化案例。</p>
<p>(f) 更深层次的组合<font color="#00FF00">$</font><font color="#FF00FF">f(x_1, x_2, x_3, x_4) = (x_1 - x_2)^2 + (x_3 - x_4)^2</font><font color="#00FF00">$</font>。为了计算这个函数，我们需要恒等函数、平方函数和平方根操作，这至少需要三层KAN。确实，我们发现一个...</p><hr /><p>在本节中，我们证明了KANs具备可解释性和互动性，这得益于我们在2.5节中开发的技术。我们不仅希望通过合成任务（第4.1节和4.2节）来测试KANs的使用，还希望将其应用于实际的科学研究中。我们展示了KANs能够在纽结理论中（第4.3节）发现高度非平凡的关系，并确定凝聚态物理学中的相变边界。</p>
<p>(c) 数值到类别转换。该任务是将[0, 1]区间内的实数转换为其小数点后第一位的数字表示（以独热编码形式），例如，0.0618 → [1, 0, 0, 0, 0,...]，0.314 → [0, 0, 0, 1, 0,...]。值得注意的是，激活函数被学习为围绕相应小数位的尖峰分布。</p>
<p>(d) 特殊函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo>&#x0002C;</mo><mi>y</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>exp</mi><mo stretchy="false">&#x00028;</mo><msub><mi>J</mi><mn>0</mn></msub><mo stretchy="false">&#x00028;</mo><mn>20</mn><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><msup><mi>y</mi><mn>2</mn></msup><mo stretchy="false">&#x00029;</mo></mrow></math>。符号回归的一个限制是，如果特殊的函数形式没有作为先验知识提供，它永远不会找到该特殊函数的正确表达式。KANs能够学习特殊函数——高度振荡的贝塞尔函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>J</mi><mn>0</mn></msub><mo stretchy="false">&#x00028;</mo><mn>20</mn><mi>x</mi><mo stretchy="false">&#x00029;</mo></mrow></math>通过KAN得以数值方式学习。</p>
<p>(e) 相变<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>2</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>3</mn></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>tanh</mi><mo stretchy="false">&#x00028;</mo><mn>5</mn><mo stretchy="false">&#x00028;</mo><msubsup><mi>x</mi><mn>1</mn><mn>4</mn></msubsup><mo>&#x0002B;</mo><msubsup><mi>x</mi><mn>2</mn><mn>4</mn></msubsup><mo>&#x0002B;</mo><msubsup><mi>x</mi><mn>3</mn><mn>4</mn></msubsup><mo>&#x02212;</mo><mn>1</mn><mo stretchy="false">&#x00029;</mo><mo stretchy="false">&#x00029;</mo></mrow></math>。相变在物理学中极为重要，因此我们期望KANs能检测到相变并确定正确的序参量。我们使用<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>tanh</mi></mrow></math>函数来模拟相变行为，序参量是<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>x</mi><mn>1</mn></msub></mrow></math>、<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>x</mi><mn>2</mn></msub></mrow></math>、<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>x</mi><mn>3</mn></msub></mrow></math>四次项的组合。四次依赖性和<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>tanh</mi></mrow></math>依赖性都在KAN训练后显现出来。这是第4.4节讨论的一种局域化相变简化案例。</p>
<p>(f) 更深层次的组合<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>2</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>3</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>4</mn></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x02212;</mo><msub><mi>x</mi><mn>2</mn></msub><msup><mo stretchy="false">&#x00029;</mo><mn>2</mn></msup><mo>&#x0002B;</mo><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>3</mn></msub><mo>&#x02212;</mo><msub><mi>x</mi><mn>4</mn></msub><msup><mo stretchy="false">&#x00029;</mo><mn>2</mn></msup></mrow></math>。为了计算这个函数，我们需要恒等函数、平方函数和平方根操作，这至少需要三层KAN。确实，我们发现一个...</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Unsupervised toy dataset`,
                    msg: String.raw`<div class="markdown-body"><p>Often, scientific discoveries are formulated as supervised learning problems, i.e., given input variables x 1 , x 2 , • • • , x d and output variable(s) y, we want to find an interpretable function f such that
y ≈ f (x 1 , x 2 , • • • , x d ).
However, another type of scientific discovery can be formulated as unsupervised learning, i.e., given a set of variables (x 1 , x 2 , • • • , x d ), we want to discover a structural relationship between the variables. Specifically, we want to find a non-zero f such that
f (x 1 , x 2 , • • • , x d ) ≈ 0. (4.1)
For example, consider a set of features (x 1 , x 2 , x 3 ) that satisfies x 3 = exp(sin(πx
1 ) + x 2 2 ). Then a valid f is f (x 1 , x 2 , x 3 ) = sin(πx 1 ) + x 2
2 -log(x 3 ) = 0, implying that points of (x 1 , x 2 , x 3 ) form a 2D submanifold specified by f = 0 instead of filling the whole 3D space.
If an algorithm for solving the unsupervised problem can be devised, it has a considerable advantage over the supervised problem, since it requires only the sets of features S = (x 1 , x 2 , • • • , x d ). The supervised problem, on the other hand, tries to predict subsets of features in terms of the others, i.e. it splits S = S in ∪ S out into input and output features of the function to be learned. Without domain expertise to advise the splitting, there are 2 d -2 possibilities such that |S in | &gt; 0 and |S out | &gt; 0. This exponentially large space of supervised problems can be avoided by using the unsupervised approach. This unsupervised learning approach will be valuable to the knot dataset in Section 4.3. A Google Deepmind team [44] manually chose signature to be the target variable, otherwise they would face this combinatorial problem described above. This raises the question whether we can instead tackle the unsupervised learning directly. We present our method and a toy example below.
We tackle the unsupervised learning problem by turning it into a supervised learning problem on all of the d features, without requiring the choice of a splitting. The essential idea is to learn a function f (x 1 , . . . , x d ) = 0 such that f is not the 0-function. To do this, similar to contrastive learning, we define positive samples and negative samples: positive samples are feature vectors of real data. Negative samples are constructed by feature corruption. To ensure that the overall feature distribution for each topological invariant stays the same, we perform feature corruption by random permutation of each feature across the entire training set. Now we want to train a network g such that g(x real ) = 1 and g(x fake ) = 0 which turns the problem into a supervised problem. However, remember that we originally want f (x real ) = 0 and f (x fake ) ̸ = 0. We can achieve this by having g = σ • f where σ(x) = exp(-x 2 2w 2 ) is a Gaussian function with a small width w, which can be conveniently realized by a KAN with shape [..., 1, 1] whose last activation is set to be the Gaussian function σ and all previous layers form f . Except for the modifications mentioned above, everything else is the same for supervised training. Now we demonstrate that the unsupervised paradigm works for a synthetic example. Let us consider a 6D dataset, where (x 1 , x 2 , x 3 ) are dependent variables such that x 3 = exp(sin(x 1 ) + x 2 2 ); (x 4 , x 5 ) are dependent variables with x 5 = x 3 4 ; x 6 is independent of the other variables. In Figure 4.2, we show that for seed = 0, KAN reveals the functional dependence among x 1 ,x 2 , and x 3 ; for another seed = 2024, KAN reveals the functional dependence between x 4 and x 5 . Our preliminary results rely on randomness (different seeds) to discover different relations; in the future we would like to investigate a more systematic and more controlled way to discover a complete set of relations. Even so, our tool in its current status can provide insights for scientific tasks. We present our results with the knot dataset in Section 4.3.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Unsupervised toy dataset`,
                    msg: String.raw`<div class="markdown-body"><p>通常，科学发现被表述为监督学习问题，即给定输入变量 x_1、x_2、…、x_d 和输出变量（或变量）y，我们旨在寻找一个可解释的函数 f，使得
<font color="#00FF00">$$</font><font color="#FF00FF"> y \approx f(x_1, x_2, \ldots, x_d). </font><font color="#00FF00">$$</font>
然而，另一类科学发现可以表述为无监督学习问题，即给定一组变量 (x_1, x_2, ..., x_d)，我们希望发现这些变量间的结构关系。具体来说，我们意图找到一个非零函数 f，使得
<font color="#00FF00">$$</font><font color="#FF00FF"> f(x_1, x_2, \ldots, x_d) \approx 0. \quad (4.1)</font><font color="#00FF00">$$</font>
例如，考虑一组特征 (x_1, x_2, x_3)，满足 x_3 = exp(sin(πx_1) + x_2^2)。那么一个合适的 f 可以是 <font color="#00FF00">$</font><font color="#FF00FF"> f(x_1, x_2, x_3) = sin(πx_1) + x_2^2 - \log(x_3) = 0</font><font color="#00FF00">$</font>，这意味着 (x_1, x_2, x_3) 的点集形成一个由 f = 0 确定的二维子流形，而非填充整个三维空间。
如能设计出解决无监督问题的算法，相对于监督问题它具有显著的优势，因为仅需特征集 S = (x_1, x_2, ..., x_d) 即可。相比之下，监督问题尝试在其他变量的基础上预测特征子集，即将 S 分解为 S_in ∪ S_out，作为要学习函数的输入和输出特征。如果没有领域专业知识来指导这一拆分，存在 <font color="#00FF00">$</font><font color="#FF00FF">2^d - 2</font><font color="#00FF00">$</font> 种可能性，需要 |S_in| &gt; 0 并且 |S_out| &gt; 0。这个指数级大的监督问题空间可以通过采用无监督方法来避免。这种无监督学习方法对于第 4.3 节中的结数据集尤为宝贵。Google DeepMind 团队 [44] 手动选择了签名作为目标变量，否则他们将面临上述组合问题。这引发了一个问题：我们是否可以直接着手处理无监督学习。
以下，我们介绍我们的方法及一个示例。
我们通过将其转化为对所有 d 个特征进行监督学习的问题来应对无监督学习问题，无需选择拆分。核心思想是学习一个函数 <font color="#00FF00">$</font><font color="#FF00FF">f(x_1, ..., x_d) = 0</font><font color="#00FF00">$</font>，并保证 f 不是零函数。我们借鉴对比学习的思想，定义正样本（实际特征向量）和负样本（通过特征破坏构建）。为了保持每个拓扑不变量的总体特征分布不变，我们通过整个训练集中特征的随机排列来进行特征破坏。现在，我们希望建立一个网络 g 使得 g(x_real) = 1 和 g(x_fake) = 0，这样就转化为了一个监督问题。但请注意，我们原始目标是 <font color="#00FF00">$</font><font color="#FF00FF">f(x_{\text{real}}) = 0</font><font color="#00FF00">$</font> 和 <font color="#00FF00">$</font><font color="#FF00FF">f(x_{\text{fake}}) \neq 0</font><font color="#00FF00">$</font>。我们通过令 <font color="#00FF00">$</font><font color="#FF00FF">g = σ \cdot f</font><font color="#00FF00">$</font> 来实现这一点，其中 <font color="#00FF00">$</font><font color="#FF00FF">σ(x) = e^{-\frac{x^2}{2w^2}}</font><font color="#00FF00">$</font> 是带小宽度 w 的高斯函数，这个过程可以通过具有形状 [..., 1, 1] 的 KAN 实现，其最后一个激活函数设置为高斯函数 σ，而所有之前的层构成 f。除了上述修改外，监督训练的其他部分保持一致。
现在，我们将展示无监督范式在合成实例中的有效性。考虑一个六维数据集，其中 (x_1, x_2, x_3) 相互依赖，满足 x_3 = exp(sin(x_1) + x_2^2)；(x_4, x_5) 是依赖变量，满足 x_5 = x_3^4；x_6 独立于其它变量。图 4.2 展示了，在种子为 0 时，KAN 揭示了 x_1、x_2 和 x_3 之间的功能依赖性；而对于另一个种子值 2024，KAN 显示了 x_4 和 x_5 之间的功能依赖关系。我们的初步结果依赖于随机性（不同的种子值）来发现不同的关系；未来，我们希望能探索更加系统化、控制性更强的方法来发现一整套关系。即便如此，当前状态下的工具也能为科学任务提供洞见。我们在第 4.3 节中使用结数据集展示了我们的成果。</p><hr /><p>通常，科学发现被表述为监督学习问题，即给定输入变量 x_1、x_2、…、x_d 和输出变量（或变量）y，我们旨在寻找一个可解释的函数 f，使得
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>y</mi><mo>&#x02248;</mo><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>2</mn></msub><mo>&#x0002C;</mo><mi>&#x02026;</mi><mo>&#x0002C;</mo><msub><mi>x</mi><mi>d</mi></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0002E;</mo></mrow></math>
然而，另一类科学发现可以表述为无监督学习问题，即给定一组变量 (x_1, x_2, ..., x_d)，我们希望发现这些变量间的结构关系。具体来说，我们意图找到一个非零函数 f，使得
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>2</mn></msub><mo>&#x0002C;</mo><mi>&#x02026;</mi><mo>&#x0002C;</mo><msub><mi>x</mi><mi>d</mi></msub><mo stretchy="false">&#x00029;</mo><mo>&#x02248;</mo><mn>0</mn><mo>&#x0002E;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>4.1</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
例如，考虑一组特征 (x_1, x_2, x_3)，满足 x_3 = exp(sin(πx_1) + x_2^2)。那么一个合适的 f 可以是 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>2</mn></msub><mo>&#x0002C;</mo><msub><mi>x</mi><mn>3</mn></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>s</mi><mi>i</mi><mi>n</mi><mo stretchy="false">&#x00028;</mo><mi>π</mi><msub><mi>x</mi><mn>1</mn></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><msubsup><mi>x</mi><mn>2</mn><mn>2</mn></msubsup><mo>&#x02212;</mo><mi>log</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>3</mn></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mn>0</mn></mrow></math>，这意味着 (x_1, x_2, x_3) 的点集形成一个由 f = 0 确定的二维子流形，而非填充整个三维空间。
如能设计出解决无监督问题的算法，相对于监督问题它具有显著的优势，因为仅需特征集 S = (x_1, x_2, ..., x_d) 即可。相比之下，监督问题尝试在其他变量的基础上预测特征子集，即将 S 分解为 S_in ∪ S_out，作为要学习函数的输入和输出特征。如果没有领域专业知识来指导这一拆分，存在 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msup><mn>2</mn><mi>d</mi></msup><mo>&#x02212;</mo><mn>2</mn></mrow></math> 种可能性，需要 |S_in| &gt; 0 并且 |S_out| &gt; 0。这个指数级大的监督问题空间可以通过采用无监督方法来避免。这种无监督学习方法对于第 4.3 节中的结数据集尤为宝贵。Google DeepMind 团队 [44] 手动选择了签名作为目标变量，否则他们将面临上述组合问题。这引发了一个问题：我们是否可以直接着手处理无监督学习。
以下，我们介绍我们的方法及一个示例。
我们通过将其转化为对所有 d 个特征进行监督学习的问题来应对无监督学习问题，无需选择拆分。核心思想是学习一个函数 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mn>1</mn></msub><mo>&#x0002C;</mo><mo>&#x0002E;</mo><mo>&#x0002E;</mo><mo>&#x0002E;</mo><mo>&#x0002C;</mo><msub><mi>x</mi><mi>d</mi></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mn>0</mn></mrow></math>，并保证 f 不是零函数。我们借鉴对比学习的思想，定义正样本（实际特征向量）和负样本（通过特征破坏构建）。为了保持每个拓扑不变量的总体特征分布不变，我们通过整个训练集中特征的随机排列来进行特征破坏。现在，我们希望建立一个网络 g 使得 g(x_real) = 1 和 g(x_fake) = 0，这样就转化为了一个监督问题。但请注意，我们原始目标是 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mrow><mtext>real</mtext></mrow></msub><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mn>0</mn></mrow></math> 和 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><msub><mi>x</mi><mrow><mtext>fake</mtext></mrow></msub><mo stretchy="false">&#x00029;</mo><mo>&#x02260;</mo><mn>0</mn></mrow></math>。我们通过令 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>g</mi><mo>&#x0003D;</mo><mi>σ</mi><mi>&#x000B7;</mi><mi>f</mi></mrow></math> 来实现这一点，其中 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>σ</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><msup><mi>e</mi><mrow><mo>&#x02212;</mo><mfrac><mrow><msup><mi>x</mi><mn>2</mn></msup></mrow><mrow><mn>2</mn><msup><mi>w</mi><mn>2</mn></msup></mrow></mfrac></mrow></msup></mrow></math> 是带小宽度 w 的高斯函数，这个过程可以通过具有形状 [..., 1, 1] 的 KAN 实现，其最后一个激活函数设置为高斯函数 σ，而所有之前的层构成 f。除了上述修改外，监督训练的其他部分保持一致。
现在，我们将展示无监督范式在合成实例中的有效性。考虑一个六维数据集，其中 (x_1, x_2, x_3) 相互依赖，满足 x_3 = exp(sin(x_1) + x_2^2)；(x_4, x_5) 是依赖变量，满足 x_5 = x_3^4；x_6 独立于其它变量。图 4.2 展示了，在种子为 0 时，KAN 揭示了 x_1、x_2 和 x_3 之间的功能依赖性；而对于另一个种子值 2024，KAN 显示了 x_4 和 x_5 之间的功能依赖关系。我们的初步结果依赖于随机性（不同的种子值）来发现不同的关系；未来，我们希望能探索更加系统化、控制性更强的方法来发现一整套关系。即便如此，当前状态下的工具也能为科学任务提供洞见。我们在第 4.3 节中使用结数据集展示了我们的成果。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Application to Mathematics: Knot Theory`,
                    msg: String.raw`<div class="markdown-body"><p>Knot theory is a subject in low-dimensional topology that sheds light on topological aspects of threemanifolds and four-manifolds and has a variety of applications, including in biology and topological quantum computing. Mathematically, a knot K is an embedding of S 1 into S 3 . Two knots K and K ′ are topologically equivalent if one can be deformed into the other via deformation of the ambient space S 3 , in which case we write [K] = [K ′ ]. Some knots are topologically trivial, meaning that they can be smoothly deformed to a standard circle. Knots have a variety of deformation-invariant features f called topological invariants, which may be used to show that two knots are topologically inequivalent,
[K] ̸ = [K ′ ] if f (K) ̸ = f (K ′ ).
In some cases the topological invariants are geometric in nature. For instance, a hyperbolic knot K has a knot complement S 3 \ K that admits a canonical hyperbolic metric g such that vol g (K) is a topological invariant known as the hyperbolic volume. Other topological invariants are algebraic in nature, such as the Jones polynomial.
Given the fundamental nature of knots in mathematics and the importance of its applications, it is interesting to study whether ML can lead to new results. For instance, in [45] reinforcement learning was utilized to establish ribbonness of certain knots, which ruled out many potential counterexamples to the smooth 4d Poincaré conjecture.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Application to Mathematics: Knot Theory`,
                    msg: String.raw`<div class="markdown-body"><h3>应用于数学：纽结理论</h3>
<p>纽结理论是低维拓扑学的一个分支，它揭示了三维流形和四维流形的拓扑特性，并在生物学和拓扑量子计算等多个领域有着广泛的应用。从数学角度讲，一个纽结<font color="#00FF00">$</font><font color="#FF00FF">K</font><font color="#00FF00">$</font>是单位圆<font color="#00FF00">$</font><font color="#FF00FF">S^1</font><font color="#00FF00">$</font>嵌入到三维球面<font color="#00FF00">$</font><font color="#FF00FF">S^3</font><font color="#00FF00">$</font>中的映射。如果两个纽结<font color="#00FF00">$</font><font color="#FF00FF">K</font><font color="#00FF00">$</font>和<font color="#00FF00">$</font><font color="#FF00FF">K'</font><font color="#00FF00">$</font>可以通过扭曲其所在空间<font color="#00FF00">$</font><font color="#FF00FF">S^3</font><font color="#00FF00">$</font>而互相变形，则称它们在拓扑上是等价的，此时我们记为<font color="#00FF00">$</font><font color="#FF00FF">[K]=[K']</font><font color="#00FF00">$</font>。一些纽结在拓扑上是平凡的，意味着它们能平滑地变形为一个标准的圆。纽结拥有一系列在变形下不变的特征<font color="#00FF00">$</font><font color="#FF00FF">f</font><font color="#00FF00">$</font>，称为拓扑不变量，这些不变量可用于证明两个纽结在拓扑上不等价，
若<font color="#00FF00">$$</font><font color="#FF00FF">f(K) \neq f(K')</font><font color="#00FF00">$$</font>，则<font color="#00FF00">$$</font><font color="#FF00FF">[K] \neq [K']</font><font color="#00FF00">$$</font>。</p>
<p>在某些情况下，拓扑不变量具有几何性质。例如，一个双曲纽结<font color="#00FF00">$</font><font color="#FF00FF">K</font><font color="#00FF00">$</font>，其补空间<font color="#00FF00">$</font><font color="#FF00FF">S^3 \setminus K</font><font color="#00FF00">$</font>承认一个典范的双曲度量<font color="#00FF00">$</font><font color="#FF00FF">g</font><font color="#00FF00">$</font>，使得体积<font color="#00FF00">$</font><font color="#FF00FF">\text{vol}_g(K)</font><font color="#00FF00">$</font>是一个称为双曲体积的拓扑不变量。其他拓扑不变量则具有代数性质，如琼斯多项式。</p>
<p>鉴于纽结在数学中的基本性质及其应用的重要性，研究机器学习是否能带来新成果是十分有趣的。例如，在<font color="#00FF00">$$</font><font color="#FF00FF">45</font><font color="#00FF00">$$</font>中，强化学习被用来判定特定纽结的带状性，这排除了许多关于平滑四维庞加莱猜想的潜在反例。</p><hr /><h3>应用于数学：纽结理论</h3>
<p>纽结理论是低维拓扑学的一个分支，它揭示了三维流形和四维流形的拓扑特性，并在生物学和拓扑量子计算等多个领域有着广泛的应用。从数学角度讲，一个纽结<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>K</mi></mrow></math>是单位圆<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msup><mi>S</mi><mn>1</mn></msup></mrow></math>嵌入到三维球面<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msup><mi>S</mi><mn>3</mn></msup></mrow></math>中的映射。如果两个纽结<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>K</mi></mrow></math>和<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msup><mi>K</mi><mi>&#x02032;</mi></msup></mrow></math>可以通过扭曲其所在空间<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msup><mi>S</mi><mn>3</mn></msup></mrow></math>而互相变形，则称它们在拓扑上是等价的，此时我们记为<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mo stretchy="false">[</mo><mi>K</mi><mo stretchy="false">]</mo><mo>&#x0003D;</mo><mo stretchy="false">[</mo><msup><mi>K</mi><mi>&#x02032;</mi></msup><mo stretchy="false">]</mo></mrow></math>。一些纽结在拓扑上是平凡的，意味着它们能平滑地变形为一个标准的圆。纽结拥有一系列在变形下不变的特征<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi></mrow></math>，称为拓扑不变量，这些不变量可用于证明两个纽结在拓扑上不等价，
若<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>K</mi><mo stretchy="false">&#x00029;</mo><mo>&#x02260;</mo><mi>f</mi><mo stretchy="false">&#x00028;</mo><msup><mi>K</mi><mi>&#x02032;</mi></msup><mo stretchy="false">&#x00029;</mo></mrow></math>，则<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mo stretchy="false">[</mo><mi>K</mi><mo stretchy="false">]</mo><mo>&#x02260;</mo><mo stretchy="false">[</mo><msup><mi>K</mi><mi>&#x02032;</mi></msup><mo stretchy="false">]</mo></mrow></math>。</p>
<p>在某些情况下，拓扑不变量具有几何性质。例如，一个双曲纽结<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>K</mi></mrow></math>，其补空间<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msup><mi>S</mi><mn>3</mn></msup><mi>&#x029F5;</mi><mi>K</mi></mrow></math>承认一个典范的双曲度量<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>g</mi></mrow></math>，使得体积<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mtext>vol</mtext><mi>g</mi></msub><mo stretchy="false">&#x00028;</mo><mi>K</mi><mo stretchy="false">&#x00029;</mo></mrow></math>是一个称为双曲体积的拓扑不变量。其他拓扑不变量则具有代数性质，如琼斯多项式。</p>
<p>鉴于纽结在数学中的基本性质及其应用的重要性，研究机器学习是否能带来新成果是十分有趣的。例如，在<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mn>45</mn></mrow></math>中，强化学习被用来判定特定纽结的带状性，这排除了许多关于平滑四维庞加莱猜想的潜在反例。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Supervised learning`,
                    msg: String.raw`<div class="markdown-body"><p>In [44], supervised learning and human domain experts were utilized to arrive at a new theorem relating algebraic and geometric knot invariants. In this case, gradient saliency identified key invariants for the supervised problem, which led the domain experts to make a conjecture that was subsequently refined and proven. We study whether a KAN can achieve good interpretable results on the same problem, which predicts the signature of a knot. Their main results from studying the knot theory dataset are:
(1) They use network attribution methods to find that the signature σ is mostly dependent on meridinal distance µ (real µ r , imag µ i ) and longitudinal distance λ.
(2) Human scientists later identified that σ has high correlation with the slope ≡ Re( λ µ ) = λµr µ 2 r +µ 2 i and derived a bound for |2σ -slope|. </p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Supervised learning`,
                    msg: String.raw`<div class="markdown-body"><p>在文献[44]中，监督学习与人类领域专家的协作促成了一项新定理的发现，该定理关联了代数与几何纽结不变量。在此案例中，梯度显著性分析法辨识出监督问题中的关键不变量，进而引导领域专家提出一个猜想，并最终得以细化与证实。我们探究KAN是否能在这个相同的问题上获得良好的、可解释的结果，即预测纽结的签名（signature）。他们基于纽结理论数据集的主要研究发现包括：</p>
<p>(1) 他们运用网络归因方法发现，签名σ主要依赖于纬向距离μ（实部μ_r，虚部μ_i）和经向距离λ。
(2) 人类科学家随后识别到σ与斜率≡ Re(λμ) = λμ_r/√(μ_r²+μ_i²)之间存在高度相关性，并推导出|2σ-斜率|的一个界。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# 81.6% Part-1`,
                    msg: String.raw`<div class="markdown-body"><p>Table 3: KANs can achieve better accuracy than MLPs with much fewer parameters in the signature classification problem. Soon after our preprint was first released, Prof. Shi Lab from Georgia tech discovered that an MLP with only 60 parameters is sufficient to achieve 80% accuracy (public but unpublished results). This is good news for AI + Science because this means perhaps many AI + Science tasks are not that computationally demanding than we might think (either with MLPs or with KANs), hence many new scientific discoveries are possible even on personal laptops.
We show below that KANs not only rediscover these results with much smaller networks and much more automation, but also present some interesting new results and insights.
To investigate (1), we treat 17 knot invariants as inputs and signature as outputs. Similar to the setup in [44], signatures (which are even numbers) are encoded as one-hot vectors and networks are trained with cross-entropy loss. We find that an extremely small [17,1,14] KAN is able to achieve 81.6% test accuracy (while Deepmind's 4-layer width-300 MLP achieves 78% test accuracy). The [17,1,14] KAN (G = 3, k = 3) has ≈ 200 parameters, while the MLP has ≈ 3 × 10 5 parameters, shown in Table 3. It is remarkable that KANs can be both more accurate and much more parameter efficient than MLPs at the same time. In terms of interpretability, we scale the transparency of each activation according to its magnitude, so it becomes immediately clear which input variables are important without the need for feature attribution (see Figure 4.3 left): signature is mostly dependent on µ r , and slightly dependent on µ i and λ, while dependence on other variables is small. We then train a [3, 1, 14] KAN on the three important variables, obtaining test accuracy 78.2%. Our results have one subtle difference from results in [44]: they find that signature is mostly dependent on µ i , while we find that signature is mostly dependent on µ r . This difference could be due to subtle algorithmic choices, but has led us to carry out the following experiments: (a) ablation studies. We show that µ r contributes more to accuracy than µ i (see Figure 4.3): for example, µ r alone can achieve 65.0% accuracy, while µ i alone can only achieve 43.8% accuracy. (b) We find a symbolic formula (in Table 4) which only involves µ r and λ, but can achieve 77.8% test accuracy.
To investigate (2), i.e., obtain the symbolic form of σ, we formulate the problem as a regression task. Using auto-symbolic regression introduced in Section 2.5. Human (DM) 83.1% 0.946 1 B -0.02sin(4.98µ i + 0.85) + 0.08|4.02µ r + 6.28| -0.52 -0.04e -0.88(1-0.45λ) 2</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# 81.6%`,
                    msg: String.raw`<div class="markdown-body"><p>表3：在签名分类问题中，KANs能够以更少的参数数量达到比MLPs更高的准确性。在我们的预印本首次发布后不久，佐治亚理工学院的石教授实验室发现仅需60个参数的MLP就足以实现80％的准确性（公开但未发表的结果）。这对AI科学领域来说是个好消息，因为这表明许多AI科学任务可能并不像我们想象的那样计算密集（无论使用MLP还是KAN），因此即使在个人笔记本电脑上也有很多新科学发现的可能性。</p>
<p>以下展示KANs不仅能够以更小规模的网络和更高程度的自动化重新发现这些结果，还能呈现一些有趣的新发现和见解。</p>
<p>为了探讨(1)，我们将17个纽结不变量作为输入，将签名作为输出。类似于[44]中的设置，将签名（偶数）编码为独热向量，并使用交叉熵损失训练网络。我们发现，一个极小的[17,1,14]结构的KAN能够实现81.6%的测试准确性（而DeepMind的四层宽度为300的MLP仅实现78%的测试准确性）。这个[17,1,14]的KAN（G=3, k=3）大约有200个参数，而MLP则有约3×10^5个参数，如表3所示。值得注意的是，KANs能够同时做到比MLPs更准确且参数效率更高。在可解释性方面，我们根据激活函数的大小调整其透明度，使得无需特征归因就能一目了然哪些输入变量更重要（见图4.3左）：签名主要取决于μ_r，并轻微依赖于μ_i和λ，对其他变量的依赖较小。接着，我们在这三个重要变量上训练了一个[3,1,14]的KAN，得到78.2%的测试准确性。我们的结果与[44]中的结果有一个微妙的区别：他们认为签名主要依赖于μ_i，而我们发现签名主要依赖于μ_r。这一差异可能是由于细微的算法选择造成的，但促使我们进行了以下实验：(a) 简化研究。我们显示μ_r对于提高准确性贡献大于μ_i（见图4.3）：例如，仅使用μ_r可以达到65.0%的准确性，而仅仅使用μ_i只能达到43.8%的准确性。(b) 我们找到了只涉及μ_r和λ的符号公式（见表4），但能实现77.8%的测试准确性。</p>
<p>为了深入探讨(2)，即获得σ的符号形式，我们将该问题表述为回归任务。利用第2.5节介绍的自动符号回归方法。人类（DM）达到了83.1%的准确率，系数为0.946，表达式为：
[ B = -0.02\sin(4.98\mu_i + 0.85) + 0.08|4.02\mu_r + 6.28| - 0.52 - 0.04e^{-0.88(1-0.45\lambda)} ]
[ \text{项2} \leftarrow \text{未完待续} ]</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# 81.6% Part-2`,
                    msg: String.raw`<div class="markdown-body"><p>[3, 1] KAN 62.6% 0.837 0.897 C 0.17tan(-1.51+0.1e -1.43(1-0.4µi) 2 +0.09e -0.06(1-0.21λ) 2 + 1.32e -3.18(1-0.43µr) 2 ) [3, 1, 1] KAN 71.9% 0.871 0.934 D -0.09 + 1.04exp(-9.59(-0.62sin(0.61µ r + 7.26)) -0.32tan(0.03λ -6.59) + 1 -0.11e -1.77(0.31-µi) 2 ) 2 -1.09e -7.6(0.65(1-0.01λ)  4: Symbolic formulas of signature as a function of meridinal translation µ (real µr, imag µi) and longitudinal translation λ. In [44], formula A was discovered by human scientists inspired by neural network attribution results. Formulas B-F are auto-discovered by KANs. KANs can trade-off between simplicity and accuracy (B, C, D). By adding more inductive biases, KAN is able to discover formula E which is not too dissimilar from formula A. KANs also discovered a formula F which only involves two variables (µr and λ) instead of all three variables, with little sacrifice in accuracy.
into symbolic formulas. We train KANs with shapes [3, 1], [3, 1, 1], [3, 2, 1], whose corresponding symbolic formulas are displayed in Table 4 B-D. It is clear that by having a larger KAN, both accuracy and complexity increase. So KANs provide not just a single symbolic formula, but a whole Pareto frontier of formulas, trading off simplicity and accuracy. However, KANs need additional inductive biases to further simplify these equations to rediscover the formula from [44] (Table 4 A).
We have tested two scenarios: (1) in the first scenario, we assume the ground truth formula has a multi-variate Pade representation (division of two multi-variate Taylor series). We first train [3, 2, 1] and then fit it to a Pade representation. We can obtain Formula E in Table 4, which bears similarity with Deepmind's formula. (2) We hypothesize that the division is not very interpretable for KANs, so we train two KANs (one for the numerator and the other for the denominator) and divide them manually. Surprisingly, we end up with the formula F (in Table 4) which only involves µ r and λ, although µ i is also provided but ignored by KANs.
So far, we have rediscovered the main results from [44]. It is remarkable to see that KANs made this discovery very intuitive and convenient. Instead of using feature attribution methods (which are great methods), one can instead simply stare at visualizations of KANs. Moreover, automatic symbolic regression also makes the discovery of symbolic formulas much easier.
In the next part, we propose a new paradigm of "AI for Math" not included in the Deepmind paper, where we aim to use KANs' unsupervised learning mode to discover more relations (besides signature) in knot invariants.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# 81.6%`,
                    msg: String.raw`<div class="markdown-body"><h3>4.1 部分翻译：符号公式的自动生成</h3>
<h4>[3, 1] KAN 实现了 62.6% 的拟合度，对应的系数为 0.837 和 0.897，其特征符号公式为 (C = 0.17\tan(-1.51+0.1e^{-1.43}(1-0.4\mu_i)^2 +0.09e^{-0.06}(1-0.21\lambda)^2 + 1.32e^{-3.18}(1-0.43\mu_r)^2))；而 [3, 1, 1] 结构的 KAN 达到了更高的 71.9% 拟合率，系数分别为 0.871 和 0.934，其公式为 (D = -0.09 + 1.04\exp(-9.59(-0.62\sin(0.61\mu_r + 7.26)) -0.32\tan(0.03\lambda -6.59) + 1 -0.11e^{-1.77}(0.31-\mu_i)^2)^2 -1.09e^{-7.6}(0.65(1-0.01\lambda))^4)。</h4>
<p>这些表达式均作为子午线平移（实部 (\mu_r) 与虚部 (\mu_i)）以及经向平移 (\lambda) 的函数。文献[44]中介绍的公式A是在神经网络归因结果启发下由人类科学家发现的，而B至F这些公式则是由KAN自主发现的结果。KAN能在简洁度与准确性之间做出权衡（如B、C、D所示）。通过引入更多归纳偏见，KAN能够发现与A公式相似度较高的E公式，并且KAN还发现了仅涉及两个变量（(\mu_r) 和 (\lambda)）而非全部三个变量的F公式，而且在牺牲很小准确性的情况下实现。</p>
<p>我们训练了尺寸为 [3, 1]、[3, 1, 1] 及 [3, 2, 1] 的KAN，它们各自的符号公式如表4所示的B至D部分。显然，通过增加KAN的复杂性，可以显著提升公式的准确度和复杂度，因此KAN不仅提供单一的符号公式，而是提供了一整套帕累托前沿公式集合，在简明度与精确度之间进行权衡。然而，为了进一步简化这些方程以便重新发现文献[44]中的原始公式，KAN需要额外引入归纳偏见（如表4A所示）。</p>
<p>实验设置了两种情境：(1) 在第一种情境中，假设真实公式可表示为多变量Padé形式（即两个多变量Taylor级数的商）。我们先对[3, 2, 1]的KAN进行训练，随后将其匹配至Padé形式，由此得到了类似于DeepMind公式的结果E。而(2) 我们推测KAN在处理除法时不甚直观，故采取分别训练分子与分母的两个KAN并手动相除的方法，出乎意料地得到只含(\mu_r)与(\lambda)变量的公式F（表4所列），尽管提供了(\mu_i)但被KAN忽略。</p>
<p>截至目前，我们已复现了文献[44]的主要发现。引人注目的是，通过KAN这一途径，公式发现过程变得直观便捷。研究人员不再必须依赖特征归因方法（尽管这亦是非常有效的手段），转而可以直接审视KAN的视觉化结果。此外，自动化的符号回归也极大地简化了符号公式探索的过程。</p>
<p>在下一节中，我们将提出一种超越DeepMind论文范畴的“AI助力数学”新范式，旨在利用KAN的无监督学习模式发现更多关于纽结不变量（除了签名属性以外）之间的潜在联系。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# 81.6% Part-3`,
                    msg: String.raw`<div class="markdown-body"><p>Unsupervised learning As we mentioned in Section 4.2, unsupervised learning is the setup that is more promising since it avoids manual partition of input and output variables which have combinatorially many possibilities. In the unsupervised learning mode, we treat all 18 variables (including signature) as inputs such that they are on the same footing. Knot data are positive samples, and we randomly shuffle features to obtain negative samples. An [18, 1, 1] KAN is trained to classify whether a given feature vector belongs to a positive sample (1) or a negative sample (0). We manually set the second layer activation to be the Gaussian function with a peak one centered at zero, so positive samples will have activations at (around) zero, implicitly giving a relation among knot invariants (1) The first group of dependent variables is signature, real part of meridinal distance, and longitudinal distance (plus two other variables which can be removed because of ( 3)). This is the signature dependence studied above, so it is very interesting to see that this dependence relation is rediscovered again in the unsupervised mode.
(2) The second group of variables involve cusp volume V , real part of meridinal translation µ r and longitudinal translation λ. Their activations all look like logarithmic functions (which can be verified by the implied symbolic functionality in Section 2.5.1). So the relation is -log V + log µ r + log λ = 0 which is equivalent to V = µ r λ, which is true by definition. It is, however, reassuring that we discover this relation without any prior knowledge.
(3) The third group of variables includes the real part of short geodesic g r and injectivity radius. Their activations look qualitatively the same but differ by a minus sign, so it is conjectured that these two variables have a linear correlation. We plot 2D scatters, finding that 2r upper bounds g r , which is also a well-known relation [46].
It is interesting that KANs' unsupervised mode can rediscover several known mathematical relations. The good news is that the results discovered by KANs are probably reliable; the bad news is that we have not discovered anything new yet. It is worth noting that we have chosen a shallow KAN for simple visualization, but deeper KANs can probably find more relations if they exist. We would like to investigate how to discover more complicated relations with deeper KANs in future work.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# 81.6%`,
                    msg: String.raw`<div class="markdown-body"><p>无监督学习 如4.2节所述，无监督学习是一种更有前景的设置，因为它避免了输入和输出变量的手动划分，而这些划分具有组合上多种可能性。在无监督学习模式下，我们将所有18个变量（包括签名）都视为输入，使它们处于相同的地位。纽结数据被视为正样本，我们通过随机洗牌特征来获取负样本。训练一个[18, 1, 1]结构的KAN，用于分类给定的特征向量属于正样本（1）还是负样本（0）。我们手动将第二层激活设置为以零为中心、峰值为一的高斯函数，这样正样本将在（大约）零点附近有激活，隐式给出纽结不变量之间的关系：</p>
<p>(1) 第一组依赖变量包括签名、纬向距离的实部以及经向距离（加上另外两个可以根据(3)移除的变量）。这是我们之前研究过的基于签名的依赖性，因此很有趣地看到这种依赖关系在无监督模式下再次被重新发现。</p>
<p>(2) 第二组变量涉及尖顶体积(V)、纬向平移的实部(\mu_r)和经向平移(\lambda)。它们的激活看起来都像对数函数（这可以通过第2.5.1节中暗示的符号功能进行验证）。因此关系是(-\log V + \log \mu_r + \log \lambda = 0)，等价于(V = \mu_r \lambda)，这按定义是成立的。然而，令人欣慰的是，我们在没有任何先验知识的情况下发现了这一关系。</p>
<p>(3) 第三组变量包含了短测地线的实部(g_r)和注入半径。它们的激活在性质上相似但相差一个负号，因此推测这两个变量存在线性相关性。我们绘制了二维散点图，发现(2r)作为(g_r)的上限，这也是一个众所周知的关系[46]。</p>
<p>KANs无监督模式能够重新发现几个已知的数学关系，这是很有趣的。好消息是，KANs发现的结果可能是可靠的；坏消息是我们还没有发现任何新东西。值得注意的是，我们选择了一个较浅的KAN以便于直观展示，但如果存在的话，更深的KANs很可能可以找到更多的关系。在未来的工作中，我们希望探究如何利用更深的KANs发现更复杂的关系。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Application to Physics: Anderson localization`,
                    msg: String.raw`<div class="markdown-body"><p>Anderson localization is the fundamental phenomenon in which disorder in a quantum system leads to the localization of electronic wave functions, causing all transport to be ceased [47]. In one and two dimensions, scaling arguments show that all electronic eigenstates are exponentially localized for an infinitesimal amount of random disorder [48,49]. In contrast, in three dimensions, a critical energy forms a phase boundary that separates the extended states from the localized states, known as a mobility edge. The understanding of these mobility edges is crucial for explaining various fundamental phenomena such as the metal-insulator transition in solids [50], as well as localization effects of light in photonic devices [51,52,53,54,55]. It is therefore necessary to develop microscopic models that exhibit mobility edges to enable detailed investigations. Developing such models is often more practical in lower dimensions, where introducing quasiperiodicity instead of random disorder can also result in mobility edges that separate localized and extended phases. Furthermore, experimental realizations of analytical mobility edges can help resolve the debate on localization in interacting systems [56,57]. Indeed, several recent studies have focused on identifying such models and deriving exact analytic expressions for their mobility edges [58,59,60,61,62,63,64].
Here, we apply KANs to numerical data generated from quasiperiodic tight-binding models to extract their mobility edges. In particular, we examine three classes of models: the Mosaic model (MM) [62], the generalized Aubry-André model (GAAM) [61] and the modified Aubry-André model (MAAM) [59]. For the MM, we testify KAN's ability to accurately extract mobility edge as a 1D function of energy. For the GAAM, we find that the formula obtained from a KAN closely matches the ground truth. For the more complicated MAAM, we demonstrate yet another example of the symbolic interpretability of this framework. A user can simplify the complex expression obtained from KANs (and corresponding symbolic formulas) by means of a "collaboration" where the human generates hypotheses to obtain a better match (e.g., making an assumption of the form of certain activation function), after which KANs can carry out quick hypotheses testing.
To quantify the localization of states in these models, the inverse participation ratio (IPR) is commonly used. The IPR for the k th eigenstate, ψ (k) , is given by
IPR k = n |ψ (k) n | 4 n |ψ (k) n | 2 2 (4.2)
where the sum runs over the site index. Here, we use the related measure of localization -the fractal dimension of the states, given by
D k = - log(IPR k ) log(N ) (4.3)
where N is the system size. D k = 0(1) indicates localized (extended) states.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Application to Physics: Anderson localization`,
                    msg: String.raw`<div class="markdown-body"><p><strong>物理学应用：安德森局域化</strong></p>
<p>安德森局域化是基本现象，其中量子系统中的无序导致电子波函数的局域化，从而所有输运过程停止[47]。在一维和二维中，标度理论表明，只要有极微小的随机无序存在，所有电子本征态都将指数式地局域化[48,49]。相比之下，在三维中，一个临界能级形成了扩展态与局域态之间的相边界，称为移动性边缘。对这些移动性边缘的理解对于解释诸如固体中金属-绝缘体转变[50]等基本现象至关重要，以及光子器件中光的局域化效应[51,52,53,54,55]。因此，构建展现出移动性边缘的微观模型以进行深入研究是必要的。开发这类模型在低维情况下通常更为实际，因为在这些维度中，用准周期性代替随机无序同样能产生区分局域化和扩展相的移动性边缘。此外，分析获得的移动性边缘的实验实现有助于解决关于相互作用系统中局域化的争议[56,57]。实际上，近期多项研究聚焦于识别此类模型并推导出它们移动性边缘的确切解析表达式[58,59,60,61,62,63,64]。</p>
<p>在此，我们将KANs应用于从准周期紧束缚模型生成的数值数据中，以提取其移动性边缘。具体而言，我们考察了三类模型：马赛克模型(MM)[62]、广义奥布里-安德烈模型(GAAM)[61]及改进型奥布里-安德烈模型(MAAM)[59]。对于MM，我们验证了KAN准确提取作为能量一维函数的移动性边缘的能力。对于GAAM，我们发现KAN得到的公式与真实值紧密吻合。而对于更复杂的MAAM，我们展示了该框架符号可解释性的另一个例子。用户可通过“协作”方式简化由KAN得出的复杂数学表达式（及其相应的符号公式），即人通过提出假设来获取更好的匹配（例如，假设某些激活函数的形式），随后KAN迅速进行假设检验。</p>
<p>为了量化这些模型中态的局域化程度，通常采用逆参与比(IPR)。第k个本征态ψ(k)的IPR由下式给出：
<font color="#00FF00">$$</font><font color="#FF00FF"> \text{IPR}_k = \frac{\sum_n |\psi_k^n|^4}{(\sum_n |\psi_k^n|^2)^2} </font><font color="#00FF00">$$</font>
其中求和遍历所有站点索引。此处，我们使用与局域化相关的另一指标——态的分形维度，由下式给出：
<font color="#00FF00">$$</font><font color="#FF00FF"> D_k = -\frac{\log(\text{IPR}_k)}{\log(N)} </font><font color="#00FF00">$$</font>
这里，N代表系统大小。<font color="#00FF00">$</font><font color="#FF00FF">D_k = 0(1)</font><font color="#00FF00">$</font>分别指示局域化(扩展)状态。</p><hr /><p><strong>物理学应用：安德森局域化</strong></p>
<p>安德森局域化是基本现象，其中量子系统中的无序导致电子波函数的局域化，从而所有输运过程停止[47]。在一维和二维中，标度理论表明，只要有极微小的随机无序存在，所有电子本征态都将指数式地局域化[48,49]。相比之下，在三维中，一个临界能级形成了扩展态与局域态之间的相边界，称为移动性边缘。对这些移动性边缘的理解对于解释诸如固体中金属-绝缘体转变[50]等基本现象至关重要，以及光子器件中光的局域化效应[51,52,53,54,55]。因此，构建展现出移动性边缘的微观模型以进行深入研究是必要的。开发这类模型在低维情况下通常更为实际，因为在这些维度中，用准周期性代替随机无序同样能产生区分局域化和扩展相的移动性边缘。此外，分析获得的移动性边缘的实验实现有助于解决关于相互作用系统中局域化的争议[56,57]。实际上，近期多项研究聚焦于识别此类模型并推导出它们移动性边缘的确切解析表达式[58,59,60,61,62,63,64]。</p>
<p>在此，我们将KANs应用于从准周期紧束缚模型生成的数值数据中，以提取其移动性边缘。具体而言，我们考察了三类模型：马赛克模型(MM)[62]、广义奥布里-安德烈模型(GAAM)[61]及改进型奥布里-安德烈模型(MAAM)[59]。对于MM，我们验证了KAN准确提取作为能量一维函数的移动性边缘的能力。对于GAAM，我们发现KAN得到的公式与真实值紧密吻合。而对于更复杂的MAAM，我们展示了该框架符号可解释性的另一个例子。用户可通过“协作”方式简化由KAN得出的复杂数学表达式（及其相应的符号公式），即人通过提出假设来获取更好的匹配（例如，假设某些激活函数的形式），随后KAN迅速进行假设检验。</p>
<p>为了量化这些模型中态的局域化程度，通常采用逆参与比(IPR)。第k个本征态ψ(k)的IPR由下式给出：
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mtext>IPR</mtext><mi>k</mi></msub><mo>&#x0003D;</mo><mfrac><mrow><msub><mo>&#x02211;</mo><mi>n</mi></msub><mo stretchy="false">&#x0007C;</mo><msubsup><mi>&#x003C8;</mi><mi>k</mi><mi>n</mi></msubsup><msup><mo stretchy="false">&#x0007C;</mo><mn>4</mn></msup></mrow><mrow><mo stretchy="false">&#x00028;</mo><msub><mo>&#x02211;</mo><mi>n</mi></msub><mo stretchy="false">&#x0007C;</mo><msubsup><mi>&#x003C8;</mi><mi>k</mi><mi>n</mi></msubsup><msup><mo stretchy="false">&#x0007C;</mo><mn>2</mn></msup><msup><mo stretchy="false">&#x00029;</mo><mn>2</mn></msup></mrow></mfrac></mrow></math>
其中求和遍历所有站点索引。此处，我们使用与局域化相关的另一指标——态的分形维度，由下式给出：
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mi>D</mi><mi>k</mi></msub><mo>&#x0003D;</mo><mo>&#x02212;</mo><mfrac><mrow><mi>log</mi><mo stretchy="false">&#x00028;</mo><msub><mtext>IPR</mtext><mi>k</mi></msub><mo stretchy="false">&#x00029;</mo></mrow><mrow><mi>log</mi><mo stretchy="false">&#x00028;</mo><mi>N</mi><mo stretchy="false">&#x00029;</mo></mrow></mfrac></mrow></math>
这里，N代表系统大小。<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>D</mi><mi>k</mi></msub><mo>&#x0003D;</mo><mn>0</mn><mo stretchy="false">&#x00028;</mo><mn>1</mn><mo stretchy="false">&#x00029;</mo></mrow></math>分别指示局域化(扩展)状态。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Mosaic Model (MM) Part-1`,
                    msg: String.raw`<div class="markdown-body"><p>We first consider a class of tight-binding models defined by the Hamiltonian [62]  is the golden ratio.
H = t n c † n+1 c n + H.c. + n V n (λ, ϕ)c † n c n ,(4.4)
where t is the nearest-neighbor coupling, c n (c † n ) is the annihilation (creation) operator at site n and the potential energy V n is given by
V n (λ, ϕ) = λ cos(2πnb + ϕ) j = mκ 0, otherwise,(4.5)
To introduce quasiperiodicity, we set b to be irrational (in particular, we choose b to be the golden ratio
1+ √5
2 ). κ is an integer and the quasiperiodic potential occurs with interval κ. The energy (E) spectrum for this model generically contains extended and localized regimes separated by a mobility edge. Interestingly, a unique feature found here is that the mobility edges are present for an arbitrarily strong quasiperiodic potential (i.e. there are always extended states present in the system that co-exist with localized ones).</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Mosaic Model (MM)`,
                    msg: String.raw`<div class="markdown-body"><p>我们首先考察一类由哈密顿量定义的紧束缚模型 [62]，其中黄金分割比例为<font color="#00FF00">$</font><font color="#FF00FF">\phi = \frac{1+\sqrt{5}}{2}</font><font color="#00FF00">$</font>。</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF">H = t_n c^\dagger_{n+1}c_n + \text{H.c.} + V_n(\lambda, \phi)c^\dagger_nc_n,\quad (4.4)</font><font color="#00FF00">$$</font>
</p>
<p>这里，<font color="#00FF00">$</font><font color="#FF00FF">t</font><font color="#00FF00">$</font> 表示最近邻耦合，<font color="#00FF00">$</font><font color="#FF00FF">c_n(c^\dagger_n)</font><font color="#00FF00">$</font> 是位于站点<font color="#00FF00">$</font><font color="#FF00FF">n</font><font color="#00FF00">$</font>的湮灭（创造）算符，而势能<font color="#00FF00">$</font><font color="#FF00FF">V_n</font><font color="#00FF00">$</font>由下式给出：</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF">V_n(\lambda, \phi) = \begin{cases}</br>\lambda \cos(2\pi nb + \phi), & j = m\\</br>0, & \text{否则},</br>\end{cases}\quad (4.5)</font><font color="#00FF00">$$</font>
</p>
<p>为了引入准周期性，我们将<font color="#00FF00">$</font><font color="#FF00FF">b</font><font color="#00FF00">$</font>设为无理数（特别是选择<font color="#00FF00">$</font><font color="#FF00FF">b</font><font color="#00FF00">$</font>为黄金比例<font color="#00FF00">$</font><font color="#FF00FF">\frac{1+\sqrt{5}}{2}</font><font color="#00FF00">$</font>）。<font color="#00FF00">$</font><font color="#FF00FF">\kappa</font><font color="#00FF00">$</font>是一个整数，准周期势以<font color="#00FF00">$</font><font color="#FF00FF">\kappa</font><font color="#00FF00">$</font>的间隔出现。此模型的能量<font color="#00FF00">$</font><font color="#FF00FF">(E)</font><font color="#00FF00">$</font>谱通常包含扩展态与局域态，它们之间被一个移动边缘隔开。有趣的是，这里发现的一个独特特征是，即便对于任意强的准周期势（即，系统中总是存在扩展态与局域态共存的情况），移动边缘依然存在。</p><hr /><p>我们首先考察一类由哈密顿量定义的紧束缚模型 [62]，其中黄金分割比例为<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003D5;</mi><mo>&#x0003D;</mo><mfrac><mrow><mn>1</mn><mo>&#x0002B;</mo><msqrt><mrow><mn>5</mn></mrow></msqrt></mrow><mrow><mn>2</mn></mrow></mfrac></mrow></math>。</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>H</mi><mo>&#x0003D;</mo><msub><mi>t</mi><mi>n</mi></msub><msubsup><mi>c</mi><mrow><mi>n</mi><mo>&#x0002B;</mo><mn>1</mn></mrow><mi>&#x02020;</mi></msubsup><msub><mi>c</mi><mi>n</mi></msub><mo>&#x0002B;</mo><mtext>H.c.</mtext><mo>&#x0002B;</mo><msub><mi>V</mi><mi>n</mi></msub><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo><msubsup><mi>c</mi><mi>n</mi><mi>&#x02020;</mi></msubsup><msub><mi>c</mi><mi>n</mi></msub><mo>&#x0002C;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>4.4</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
</p>
<p>这里，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>t</mi></mrow></math> 表示最近邻耦合，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>c</mi><mi>n</mi></msub><mo stretchy="false">&#x00028;</mo><msubsup><mi>c</mi><mi>n</mi><mi>&#x02020;</mi></msubsup><mo stretchy="false">&#x00029;</mo></mrow></math> 是位于站点<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>n</mi></mrow></math>的湮灭（创造）算符，而势能<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>V</mi><mi>n</mi></msub></mrow></math>由下式给出：</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mi>V</mi><mi>n</mi></msub><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mrow><mo stretchy="true" fence="true" form="prefix">&#x0007B;</mo><mtable><mtr><mtd columnalign="left"><mi>&#x003BB;</mi><mi>cos</mi><mo stretchy="false">&#x00028;</mo><mn>2</mn><mi>&#x003C0;</mi><mi>n</mi><mi>b</mi><mo>&#x0002B;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0002C;</mo></mtd><mtd columnalign="left"><mi>j</mi><mo>&#x0003D;</mo><mi>m</mi></mtd></mtr><mtr><mtd columnalign="left"><mn>0</mn><mo>&#x0002C;</mo></mtd><mtd columnalign="left"><mtext>否则</mtext><mo>&#x0002C;</mo></mtd></mtr></mtable></mrow><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>4.5</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
</p>
<p>为了引入准周期性，我们将<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>b</mi></mrow></math>设为无理数（特别是选择<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>b</mi></mrow></math>为黄金比例<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mfrac><mrow><mn>1</mn><mo>&#x0002B;</mo><msqrt><mrow><mn>5</mn></mrow></msqrt></mrow><mrow><mn>2</mn></mrow></mfrac></mrow></math>）。<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003BA;</mi></mrow></math>是一个整数，准周期势以<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003BA;</mi></mrow></math>的间隔出现。此模型的能量<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mo stretchy="false">&#x00028;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo></mrow></math>谱通常包含扩展态与局域态，它们之间被一个移动边缘隔开。有趣的是，这里发现的一个独特特征是，即便对于任意强的准周期势（即，系统中总是存在扩展态与局域态共存的情况），移动边缘依然存在。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Mosaic Model (MM) Part-2`,
                    msg: String.raw`<div class="markdown-body"><p>The mobility edge can be described by g(λ, E) ≡ λ -|f κ (E)| = 0. g(λ, E) &gt; 0 and g(λ, E) &lt; 0 correspond to localized and extended phases, respectively. Learning the mobility edge therefore hinges on learning the "order parameter" g(λ, E). Admittedly, this problem can be tackled by many other theoretical methods for this class of models [62], but we will demonstrate below that our KAN framework is ready and convenient to take in assumptions and inductive biases from human users.
Let us assume a hypothetical user Alice, who is a new PhD student in condensed matter physics, and she is provided with a [2, 1] KAN as an assistant for the task. Firstly, she understands that this is a classification task, so it is wise to set the activation function in the second layer to be sigmoid by using the fix_symbolic functionality. Secondly, she realizes that learning the whole 2D function g(λ, E) is unnecessary because in the end she only cares about λ = λ(E) determined by g(λ, E) = 0. In so doing, it is reasonable to assume g(λ, E) = λ -h(E) = 0. Alice simply sets the activation function of λ to be linear by again using the fix_symbolic functionality. Now Alice trains the KAN network and conveniently obtains the mobility edge, as shown in Figure 4 
H = t n c † n+1 c n + H.c. + n V n (α, λ, ϕ)c † n c n ,(4.6)
where t is the nearest-neighbor coupling, c n (c † n ) is the annihilation (creation) operator at site n and the potential energy V n is given by
V n (α, λ, ϕ) = 2λ cos(2πnb + ϕ) 1 -α cos(2πnb + ϕ) ,(4.7)</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Mosaic Model (MM)`,
                    msg: String.raw`<div class="markdown-body"><p><strong>马赛克模型（MM）第2部分</strong></p>
<p>移动边缘可以用函数 <font color="#00FF00">$</font><font color="#FF00FF">g(\lambda, E) \equiv \lambda - |f_\kappa(E)| = 0</font><font color="#00FF00">$</font> 来描述。当 <font color="#00FF00">$</font><font color="#FF00FF">g(\lambda, E) > 0</font><font color="#00FF00">$</font> 和 <font color="#00FF00">$</font><font color="#FF00FF">g(\lambda, E) < 0</font><font color="#00FF00">$</font> 时，分别对应于局域化和扩展相。因此，学习移动边缘关键在于学习这一“序参量”<font color="#00FF00">$</font><font color="#FF00FF">g(\lambda, E)</font><font color="#00FF00">$</font>。诚然，针对此类模型的许多其它理论方法都能解决这个问题[62]，但我们将在此下展示，我们的KAN框架已经准备就绪且便于接纳来自人类用户的假设和归纳偏见。</p>
<p>让我们假定一位名为Alice的虚构用户，她是凝聚态物理的新晋博士生，为完成此任务，向她提供了一个[2,1]结构的KAN作为助手。首先，她认识到这是一个分类任务，因此明智地通过使用<code>fix_symbolic</code>功能将第二层的激活函数设置为S型函数（sigmoid）。其次，她意识到学习整个二维函数<font color="#00FF00">$</font><font color="#FF00FF">g(\lambda, E)</font><font color="#00FF00">$</font>是不必要的，因为在最终分析中，她仅关心由<font color="#00FF00">$</font><font color="#FF00FF">g(\lambda, E) = 0</font><font color="#00FF00">$</font>确定的<font color="#00FF00">$</font><font color="#FF00FF">\lambda = \lambda(E)</font><font color="#00FF00">$</font>。为此，合理地假设<font color="#00FF00">$</font><font color="#FF00FF">g(\lambda, E) = \lambda - h(E) = 0</font><font color="#00FF00">$</font>。Alice简单地再次利用<code>fix_symbolic</code>功能将<font color="#00FF00">$</font><font color="#FF00FF">\lambda</font><font color="#00FF00">$</font>的激活函数设为线性。现在，Alice对KAN网络进行训练，便捷地获得了移动边缘，如图4所示。</p>
<p>体系的哈密顿量可表示为
<font color="#00FF00">$$</font><font color="#FF00FF">H = t \sum_n c^\dagger_{n+1} c_n + \text{H.c.} + \sum_n V_n(\alpha, \lambda, \phi) c^\dagger_n c_n,\quad (4.6)</font><font color="#00FF00">$$</font>
其中，<font color="#00FF00">$</font><font color="#FF00FF">t</font><font color="#00FF00">$</font>是最邻近耦合强度，<font color="#00FF00">$</font><font color="#FF00FF">c_n(c^\dagger_n)</font><font color="#00FF00">$</font>分别是站点<font color="#00FF00">$</font><font color="#FF00FF">n</font><font color="#00FF00">$</font>上的湮灭（创造）算符，而位能<font color="#00FF00">$</font><font color="#FF00FF">V_n</font><font color="#00FF00">$</font>定义为
<font color="#00FF00">$$</font><font color="#FF00FF">V_n(\alpha, \lambda, \phi) = 2\lambda \cos(2\pi nb + \phi)\frac{1 - \alpha \cos(2\pi nb + \phi)}{(4.7)}</font><font color="#00FF00">$$</font>
</p><hr /><p><strong>马赛克模型（MM）第2部分</strong></p>
<p>移动边缘可以用函数 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>g</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo><mo>&#x02261;</mo><mi>&#x003BB;</mi><mo>&#x02212;</mo><mo stretchy="false">&#x0007C;</mo><msub><mi>f</mi><mi>&#x003BA;</mi></msub><mo stretchy="false">&#x00028;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo><mo stretchy="false">&#x0007C;</mo><mo>&#x0003D;</mo><mn>0</mn></mrow></math> 来描述。当 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>g</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003E;</mo><mn>0</mn></mrow></math> 和 <math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>g</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003C;</mo><mn>0</mn></mrow></math> 时，分别对应于局域化和扩展相。因此，学习移动边缘关键在于学习这一“序参量”<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>g</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo></mrow></math>。诚然，针对此类模型的许多其它理论方法都能解决这个问题[62]，但我们将在此下展示，我们的KAN框架已经准备就绪且便于接纳来自人类用户的假设和归纳偏见。</p>
<p>让我们假定一位名为Alice的虚构用户，她是凝聚态物理的新晋博士生，为完成此任务，向她提供了一个[2,1]结构的KAN作为助手。首先，她认识到这是一个分类任务，因此明智地通过使用<code>fix_symbolic</code>功能将第二层的激活函数设置为S型函数（sigmoid）。其次，她意识到学习整个二维函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>g</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo></mrow></math>是不必要的，因为在最终分析中，她仅关心由<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>g</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mn>0</mn></mrow></math>确定的<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003BB;</mi><mo>&#x0003D;</mo><mi>&#x003BB;</mi><mo stretchy="false">&#x00028;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo></mrow></math>。为此，合理地假设<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>g</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>&#x003BB;</mi><mo>&#x02212;</mo><mi>h</mi><mo stretchy="false">&#x00028;</mo><mi>E</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mn>0</mn></mrow></math>。Alice简单地再次利用<code>fix_symbolic</code>功能将<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003BB;</mi></mrow></math>的激活函数设为线性。现在，Alice对KAN网络进行训练，便捷地获得了移动边缘，如图4所示。</p>
<p>体系的哈密顿量可表示为
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>H</mi><mo>&#x0003D;</mo><mi>t</mi><msub><mo>&#x02211;</mo><mi>n</mi></msub><msubsup><mi>c</mi><mrow><mi>n</mi><mo>&#x0002B;</mo><mn>1</mn></mrow><mi>&#x02020;</mi></msubsup><msub><mi>c</mi><mi>n</mi></msub><mo>&#x0002B;</mo><mtext>H.c.</mtext><mo>&#x0002B;</mo><msub><mo>&#x02211;</mo><mi>n</mi></msub><msub><mi>V</mi><mi>n</mi></msub><mo stretchy="false">&#x00028;</mo><mi>&#x003B1;</mi><mo>&#x0002C;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo><msubsup><mi>c</mi><mi>n</mi><mi>&#x02020;</mi></msubsup><msub><mi>c</mi><mi>n</mi></msub><mo>&#x0002C;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>4.6</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
其中，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>t</mi></mrow></math>是最邻近耦合强度，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>c</mi><mi>n</mi></msub><mo stretchy="false">&#x00028;</mo><msubsup><mi>c</mi><mi>n</mi><mi>&#x02020;</mi></msubsup><mo stretchy="false">&#x00029;</mo></mrow></math>分别是站点<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>n</mi></mrow></math>上的湮灭（创造）算符，而位能<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>V</mi><mi>n</mi></msub></mrow></math>定义为
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mi>V</mi><mi>n</mi></msub><mo stretchy="false">&#x00028;</mo><mi>&#x003B1;</mi><mo>&#x0002C;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mn>2</mn><mi>&#x003BB;</mi><mi>cos</mi><mo stretchy="false">&#x00028;</mo><mn>2</mn><mi>&#x003C0;</mi><mi>n</mi><mi>b</mi><mo>&#x0002B;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo><mfrac><mrow><mn>1</mn><mo>&#x02212;</mo><mi>&#x003B1;</mi><mi>cos</mi><mo stretchy="false">&#x00028;</mo><mn>2</mn><mi>&#x003C0;</mi><mi>n</mi><mi>b</mi><mo>&#x0002B;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo></mrow><mrow><mo stretchy="false">&#x00028;</mo><mn>4.7</mn><mo stretchy="false">&#x00029;</mo></mrow></mfrac></mrow></math>
</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Mosaic Model (MM) Part-3`,
                    msg: String.raw`<div class="markdown-body"><p>which is smooth for α ∈ (-1, 1). To introduce quasiperiodicity, we again set b to be irrational (in particular, we choose b to be the golden ratio). As before, we would like to obtain an expression for the mobility edge. For these models, the mobility edge is given by the closed form expression [61,63],
αE = 2(t -λ).(4.8)
We randomly sample the model parameters: ϕ, α and λ (setting the energy scale t = 1) and calculate the energy eigenvalues as well as the fractal dimension of the corresponding eigenstates, which forms our training dataset.
Here the "order parameter" to be learned is g(α, E, λ, ϕ) = αE + 2(λ -1) and mobility edge corresponds to g = 0. Let us again assume that Alice wants to figure out the mobility edge but only has access to IPR or fractal dimension data, so she decides to use KAN to help her with the task. Alice wants the model to be as small as possible, so she could either start from a large model and use auto-pruning to get a small model, or she could guess a reasonable small model based on her understanding of the complexity of the given problem. Either way, let us assume she arrives at a [4, 2, 1, 1] KAN. First, she sets the last activation to be sigmoid because this is a classification problem. She trains her KAN with some sparsity regularization to accuracy 98.7% and visualizes the trained KAN in Figure 4.6 (a) step 1. She observes that ϕ is not picked up on at all, which makes her realize that the mobility edge is independent of ϕ (agreeing with Eq. (4.8)). In addition, she observes that almost all other activation functions are linear or quadratic, so she turns on automatic symbolic snapping, constraining the library to be only linear or quadratic. After that, she immediately gets a network which is already symbolic (shown in Figure 4.6 (a) step 2), with comparable (even slightly better) accuracy 98.9%. By using symbolic_formula functionality, Alice conveniently gets the symbolic form of g, shown in Table 5 GAAM-KAN auto (row three). Perhaps she wants to cross out some small terms and snap coefficient to small integers, which takes her close to the true answer.
This hypothetical story for Alice would be completely different if she is using a symbolic regression method. If she is lucky, SR can return the exact correct formula. However, the vast majority of the time SR does not return useful results and it is impossible for Alice to "debug" or interact with the underlying process of symbolic regression. Furthermore, Alice may feel uncomfortable/inexperienced to provide a library of symbolic terms as prior knowledge to SR before SR is run. By constrast in KANs, Alice does not need to put any prior information to KANs. She can first get some clues by staring at a trained KAN and only then it is her job to decide which hypothesis she wants to make (e.g., "all activations are linear or quadratic") and implement her hypothesis in KANs. Although it is not likely for KANs to return the correct answer immediately, KANs will always return something useful, and Alice can collaborate with it to refine the results.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Mosaic Model (MM)`,
                    msg: String.raw`<div class="markdown-body"><p>这一部分模型（Mosaic Model, MM）关注于α属于(-1, 1)区间时的平滑性。为了引入准周期性，再次设定参数b为无理数（具体选取为黄金分割比）。如同前面所述，我们希望获得迁移边缘的具体表达式。对于这类模型，迁移边缘由闭合形式表达式给出[61,63]，
αE = 2(t - λ)。(4.8)</p>
<p>我们随机抽取模型参数：φ，α，λ（设置能量尺度t=1），并计算相应的能量本征值以及相应本征态的分形维数，构成我们的训练数据集。</p>
<p>在此，“需学习的序参量”定义为g(α, E, λ, φ) = αE + 2(λ - 1)，其中迁移边缘对应于g=0。让我们假设爱丽丝想要确定迁移边缘，但她只能访问IPR或分形维度数据，于是决定采用KAN来协助完成任务。她希望模型尽可能小，因此可以先从一个大模型开始，利用自动剪枝得到一个小模型，或者基于对问题复杂度的理解，猜测一个合理的较小模型。不论哪种方式，假定最终得到一个结构为[4, 2, 1, 1]的KAN。首先，由于这是一个分类问题，她将最后一层激活函数设为sigmoid。通过对模型施加一定程度的稀疏正则化，训练至98.7%的准确率，并在图4.6（a）步骤1中可视化训练后的KAN。观察发现，φ完全未被模型捕捉到，这使她意识到迁移边缘实际上与φ无关（与方程(4.8)一致）。此外，她观察到几乎所有的激活函数都接近线性或二次，于是开启自动符号对齐功能，限制库只包含线性或二次项。随后，她迅速获得了一个已经是符号形式的网络（见图4.6（a）步骤2），其准确率达到98.9%，甚至略高于之前。借助symbolic_formula功能，爱丽丝轻松得到了g的符号形式，如表5 GAAM-KAN自动（第三行）所示。或许她想进一步剔除一些小项，并将系数调整至接近小型整数，从而更接近真实答案。</p>
<p>如果爱丽丝采用符号回归方法，这个假设故事的走向将会完全不同。如果足够幸运，符号回归(SR)可以直接返回正确公式。然而，绝大多数情况下SR无法提供有用的结果，而且对于爱丽丝来说，几乎不可能“调试”或与其底层过程交互。此外，让爱丽丝事先为SR提供一系列符号项作为先验知识可能让她感到不自在或缺乏经验。相比之下，在使用KANs时，爱丽丝无需提供任何先验信息。她可以首先通过审视训练好的KAN来获取线索，然后决定采取哪个假设（例如，“所有激活函数都是线性或二次的”）并在KAN中实施该假设。虽然KAN直接返回完全正确答案的可能性不高，但它总会提供有意义的结果，允许爱丽丝与其协作，共同优化结果。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Mosaic Model (MM) Part-4`,
                    msg: String.raw`<div class="markdown-body"><p>Modified Andre-Aubry Model (MAAM) The last class of models we consider is defined by the Hamiltonian [59]
H = n̸ =n ′ te -p|n-n ′ | c † n c n ′ + H.c. + n V n (λ, ϕ)c † n c n ,(4.9)
where t is the strength of the exponentially decaying coupling in space, c n (c † n ) is the annihilation (creation) operator at site n and the potential energy V n is given by
V n (λ, ϕ) = λ cos(2πnb + ϕ),(4.10)
As before, to introduce quasiperiodicity, we set b to be irrational (the golden ratio). For these models, the mobility edge is given by the closed form expression [59],
λ cosh(p) = E + t = E + t 1 exp(p)(4.11)</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Mosaic Model (MM)`,
                    msg: String.raw`<div class="markdown-body"><p>修正的安德烈-奥布里模型（Modified Andre-Aubry Model, MAAM）</p>
<p>我们考虑的最后一类模型由哈密顿量定义如下[59]：</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF"> H = \sum_{n\neq n'} t e^{-p|n-n'|} c_n^\dagger c_{n'} + h.c. + \sum_n V_n(\lambda, \phi) c_n^\dagger c_n \quad (4.9) </font><font color="#00FF00">$$</font>
</p>
<p>其中，<font color="#00FF00">$</font><font color="#FF00FF">t</font><font color="#00FF00">$</font> 表示在空间中指数衰减的耦合强度，<font color="#00FF00">$</font><font color="#FF00FF">c_n(c_n^\dagger)</font><font color="#00FF00">$</font> 是位置<font color="#00FF00">$</font><font color="#FF00FF">n</font><font color="#00FF00">$</font>上的湮灭（创建）算符，而势能<font color="#00FF00">$</font><font color="#FF00FF">V_n</font><font color="#00FF00">$</font>由下式给出：</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF"> V_n(\lambda, \phi) = \lambda \cos(2\pi nb + \phi) \quad (4.10) </font><font color="#00FF00">$$</font>
</p>
<p>同前，为引入准周期性，我们将比例因子<font color="#00FF00">$</font><font color="#FF00FF">b</font><font color="#00FF00">$</font>设置为无理数（即黄金分割比）。对于这些模型，移动边缘由封闭形式的表达式给出[59]，</p>
<p>
<font color="#00FF00">$$</font><font color="#FF00FF"> \lambda \cosh(p) = E + t = E + t_1 \exp(p) \quad (4.11) </font><font color="#00FF00">$$</font>
</p><hr /><p>修正的安德烈-奥布里模型（Modified Andre-Aubry Model, MAAM）</p>
<p>我们考虑的最后一类模型由哈密顿量定义如下[59]：</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>H</mi><mo>&#x0003D;</mo><msub><mo>&#x02211;</mo><mrow><mi>n</mi><mo>&#x02260;</mo><msup><mi>n</mi><mi>&#x02032;</mi></msup></mrow></msub><mi>t</mi><msup><mi>e</mi><mrow><mo>&#x02212;</mo><mi>p</mi><mo stretchy="false">&#x0007C;</mo><mi>n</mi><mo>&#x02212;</mo><msup><mi>n</mi><mi>&#x02032;</mi></msup><mo stretchy="false">&#x0007C;</mo></mrow></msup><msubsup><mi>c</mi><mi>n</mi><mi>&#x02020;</mi></msubsup><msub><mi>c</mi><mrow><msup><mi>n</mi><mi>&#x02032;</mi></msup></mrow></msub><mo>&#x0002B;</mo><mi>h</mi><mo>&#x0002E;</mo><mi>c</mi><mo>&#x0002E;</mo><mo>&#x0002B;</mo><msub><mo>&#x02211;</mo><mi>n</mi></msub><msub><mi>V</mi><mi>n</mi></msub><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo><msubsup><mi>c</mi><mi>n</mi><mi>&#x02020;</mi></msubsup><msub><mi>c</mi><mi>n</mi></msub><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>4.9</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
</p>
<p>其中，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>t</mi></mrow></math> 表示在空间中指数衰减的耦合强度，<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>c</mi><mi>n</mi></msub><mo stretchy="false">&#x00028;</mo><msubsup><mi>c</mi><mi>n</mi><mi>&#x02020;</mi></msubsup><mo stretchy="false">&#x00029;</mo></mrow></math> 是位置<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>n</mi></mrow></math>上的湮灭（创建）算符，而势能<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><msub><mi>V</mi><mi>n</mi></msub></mrow></math>由下式给出：</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><msub><mi>V</mi><mi>n</mi></msub><mo stretchy="false">&#x00028;</mo><mi>&#x003BB;</mi><mo>&#x0002C;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>&#x003BB;</mi><mi>cos</mi><mo stretchy="false">&#x00028;</mo><mn>2</mn><mi>&#x003C0;</mi><mi>n</mi><mi>b</mi><mo>&#x0002B;</mo><mi>&#x003D5;</mi><mo stretchy="false">&#x00029;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>4.10</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
</p>
<p>同前，为引入准周期性，我们将比例因子<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>b</mi></mrow></math>设置为无理数（即黄金分割比）。对于这些模型，移动边缘由封闭形式的表达式给出[59]，</p>
<p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mi>&#x003BB;</mi><mi>cosh</mi><mo stretchy="false">&#x00028;</mo><mi>p</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>E</mi><mo>&#x0002B;</mo><mi>t</mi><mo>&#x0003D;</mo><mi>E</mi><mo>&#x0002B;</mo><msub><mi>t</mi><mn>1</mn></msub><mi>exp</mi><mo stretchy="false">&#x00028;</mo><mi>p</mi><mo stretchy="false">&#x00029;</mo><mspace width="1em" /><mo stretchy="false">&#x00028;</mo><mn>4.11</mn><mo stretchy="false">&#x00029;</mo></mrow></math>
</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Mosaic Model (MM) Part-5`,
                    msg: String.raw`<div class="markdown-body"><p>where we define t 1 ≡ texp(-p) as the nearest neighbor hopping strength, and we set t 1 = 1 below.
Let us assume Alice wants to figure out the mobility edge for MAAM. This task is more complicated and requires more human wisdom. As in the last example, Alice starts from a [4, 2, 1, 1] KAN and trains it but gets an accuracy around 75% which is less than acceptable. She then chooses a larger [4, 3, 1, 1] KAN and successfully gets 98.4% which is acceptable (Figure 4.6 (b) step 1). Alice notices that ϕ is not picked up on by KANs, which means that the mobility edge is independent of the phase factor ϕ (agreeing with Eq. (4.11)). If Alice turns on the automatic symbolic regression (using a large library consisting of exp, tanh etc.), she would get a complicated formula in Tabel 5-MAAM-KAN auto, which has 97.1% accuracy. However, if Alice wants to find a simpler symbolic formula, she will want to use the manual mode where she does the symbolic snapping by herself. Before that she finds that the [4, 3, 1, 1] KAN after training can then be pruned to be [4, 2, 1, 1], while maintaining 97.7% accuracy (Figure 4.6 (b)). Alice may think that all activation functions except those dependent on p are linear or quadratic and snap them to be either linear or quadratic manually by using fix_symbolic. After snapping and retraining, the updated KAN is shown in Figure 4.6 (c) step 3, maintaining 97.7% accuracy. From now on, Alice may make two different choices based on her prior knowledge. In one case, Alice may have guessed that the dependence on p is cosh, so she sets the activations of p to be cosh function. She retrains KAN and gets 96.9% accuracy (Figure 4.6 (c) Step 4A). In another case, Alice does not know the cosh p dependence, so she pursues simplicity and again assumes the functions of p to be quadratic. She retrains KAN and gets 95.4% accuracy (Figure 4.6 (c) Step 4B). If she tried both, she would realize that cosh is better in terms of accuracy, while quadratic is better in terms of simplicity. The formulas corresponding to these steps are listed in Table 5. It is clear that the more manual operations are done by Alice, the simpler the symbolic formula is (which slight sacrifice in accuracy). KANs have a "knob" that a user can tune to trade-off between simplicity and accuracy (sometimes simplicity can even lead to better accuracy, as in the GAAM case).</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Mosaic Model (MM)`,
                    msg: String.raw`<div class="markdown-body"><p>在本节中，我们定义了最近邻跳跃强度 (t_1 \equiv t\exp(-p))，并规定在此之下 (t_1 = 1)。</p>
<p>接下来，设Alice想要确定混合无序Anderson模型（MAAM）的迁移率边缘。这项任务更为复杂，需要更多的人类智慧参与。如同前一例所示，Alice从一个[4, 2, 1, 1]结构的KAN开始训练，但得到约75%的准确率，这还不足以接受。随后，她选择了一个更大的[4, 3, 1, 1]结构的KAN并成功获得98.4%的准确率，这可以接受（图4.6（b），步骤1）。Alice注意到相位因子(\phi)未被KAN捕捉，意味着迁移率边缘与相位因子(\phi)无关（符合方程(4.11)）。如果Alice启用自动符号回归（使用包含exp、tanh等函数的大型库），将得到表5-MAAM-KAN auto中一个复杂度高且有97.1%准确率的公式。然而，若Alice寻求更简单的符号表达式，她应选择手动模式自行进行符号精炼。在此之前，她发现经过训练的[4, 3, 1, 1]结构KAN可以修剪为[4, 2, 1, 1]，同时保持97.7%的准确率（图4.6（b））。Alice认为所有不依赖于(p)的激活函数皆为线性或二次，并利用fix_symbolic手动将它们设定为线性或二次。精炼并重新训练后，更新的KAN如图4.6（c）步骤3所示，继续维持97.7%的准确率。</p>
<p>自此，Alice可基于其先验知识作出两种不同的选择。一方面，Alice可能猜测对(p)的依赖是双曲余弦(cosh)，于是她将与(p)相关的激活函数设置为cosh函数。经再次训练KAN后，获得了96.9%的准确率（图4.6（c）步骤4A）。另一方面，如果Alice不了解关于(p)的cosh依赖关系，则可能为了追求简洁，继续假设(p)相关函数为二次型。再度训练KAN后，得到95.4%的准确率（图4.6（c）步骤4B）。如果两边都尝试，她会认识到尽管就准确率而言cosh函数更优，但从简洁性上看二次型更佳。这些步骤对应的公式列于表5中。显而易见，Alice进行的手动操作越多，得到的符号公式就越简单（虽然可能以牺牲少许准确率为代价）。KAN提供了一个可供用户调整的“旋钮”，能够在简约性和准确性之间做出权衡（有时，简化反而能提升准确性，比如在GAAM案例中）。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Related works`,
                    msg: String.raw`<div class="markdown-body"><p>Kolmogorov-Arnold theorem and neural networks. The connection between the Kolmogorov-Arnold theorem (KAT) and neural networks is not new in the literature [65,66,9,10,11,12,13,14,67,68], but the pathological behavior of inner functions makes KAT appear unpromising in practice [65]. Most of these prior works stick to the original 2-layer width-(2n + 1) networks, which were limited in expressive power and many of them are even predating back-propagation. Therefore, most studies were built on theories with rather limited or artificial toy experiments. More broadly speaking, KANs are also somewhat related to generalized additive models (GAMs) [69], graph neural networks [70] and kernel machines [71]. The connections are intriguing and fundamental but might be out of the scope of the current paper. Our contribution lies in generalizing the Kolmogorov network to arbitrary widths and depths, revitalizing and contexualizing them in today's deep learning stream, as well as highlighting its potential role as a foundation model for AI + Science.
Neural Scaling Laws (NSLs). NSLs are the phenomena where test losses behave as power laws against model size, data, compute etc [72,73,74,75,23,76,77,78]. The origin of NSLs still remains mysterious, but competitive theories include intrinsic dimensionality [72], quantization of tasks [77], resource theory [78], random features [76], compositional sparsity [65], and maximu arity [24]. This paper contributes to this space by showing that a high-dimensional function can surprisingly scale as a 1D function (which is the best possible bound one can hope for) if it has a smooth Kolmogorov-Arnold representation. Our paper brings fresh optimism to neural scaling laws, since it promises the fastest scaling exponent ever. We have shown in our experiments that this fast neural scaling law can be achieved on synthetic datasets, but future research is required to address the question whether this fast scaling is achievable for more complicated tasks (e.g., language modeling): Do KA representations exist for general tasks? If so, does our training find these representations in practice? Learnable activations. The idea of learnable activations in neural networks is not new in machine learning. Trainable activations functions are learned in a differentiable way [87,14,88,89] or searched in a discrete way [90]. Activation function are parametrized as polynomials [87], splines [14,91,92], sigmoid linear unit [88], or neural networks [89]. KANs use B-splines to parametrize their activation functions. We also present our preliminary results on learnable activation networks (LANs), whose properties lie between KANs and MLPs and their results are deferred to Appendix B to focus on KANs in the main paper.
Symbolic Regression. There are many off-the-shelf symbolic regression methods based on genetic algorithms (Eureka [93], GPLearn [94], PySR [95]), neural-network based methods (EQL [96], OccamNet [97]), physics-inspired method (AI Feynman [35,36]), and reinforcement learning-based methods [98]. KANs are most similar to neural network-based methods, but differ from previous works in that our activation functions are continuously learned before symbolic snapping rather than manually fixed [93,97].
Physics-Informed Neural Networks (PINNs) and Physics-Informed Neural Operators (PINOs). In Subsection 3.4, we demonstrate that KANs can replace the paradigm of using MLPs for imposing PDE loss when solving PDEs. We refer to Deep Ritz Method [99], PINNs [37,38,100] for PDE solving, and Fourier Neural operator [101], PINOs [102,103,104], DeepONet [105] for operator learning methods learning the solution map. There is potential to replace MLPs with KANs in all the aforementioned networks.
AI for Mathematics. As we saw in Subsection 4.3, AI has recently been applied to several problems in Knot theory, including detecting whether a knot is the unknot [106,107] or a ribbon knot [45], and predicting knot invariants and uncovering relations among them [108,109,110,44]. For a summary of data science applications to datasets in mathematics and theoretical physics see e.g. [111,112], and for ideas how to obtain rigorous results from ML techniques in these fields, see [113].</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Related works`,
                    msg: String.raw`<div class="markdown-body"><p><strong>相关工作</strong></p>
<p><strong>Kolmogorov-Arnold 定理与神经网络。</strong> Kolmogorov-Arnold 定理（KAT）与神经网络之间的联系在文献中并非新事[[65,66,9,10,11,12,13,14,67,68]]，但由于内部函数的病态行为，使得KAT在实际应用中显得不太有前景[[65]]。这些先前的研究大多局限于原始的两层宽度为（2n+1）网络，其表达能力有限，且很多研究甚至早于反向传播方法的应用。因此，大多数研究建立在理论基础上，仅限于较为有限或人为设置的玩具实验。更广泛地说，KAN也与广义加性模型（GAMs）[[69]]、图神经网络[[70]]及核机器[[71]]有所关联。这些关联既引人深思又具有根本性，但可能超出了本文的讨论范围。我们的贡献在于将Kolmogorov网络推广至任意宽度和深度，使之在当今深度学习领域焕发生机并赋予其以AI+科学领域基础模型的角色。</p>
<p><strong>神经规模定律（NSLs）。</strong> NSL描述的是测试损失随模型大小、数据量、计算资源等因素按幂律变化的现象[[72,73,74,75,23,76,77,78]]。NSL的起源仍是一个谜，但竞争性理论包括内在维度[[72]]、任务量化[[77]]、资源理论[[78]]、随机特征[[76]]、组合稀疏性[[65]]以及最大基数[[24]]。本文对此领域的贡献在于展示了一个光滑的Kolmogorov-Arnold表示的高维函数竟能意外地展现出一维函数的规模律（这是所能期望的最佳界限）。这为神经规模定律带来了新的乐观前景，因为它预示着迄今为止最快的规模指数。实验已表明，这种快速的神经规模定律可以在合成数据集上实现，但未来的研究需要解决的问题是，对于更复杂的任务（如语言建模），这种快速缩放是否可达：是否存在针对一般任务的KA表示？如果存在的话，我们的训练方法在实践中能否找到这些表示？</p>
<p><strong>可学习激活函数。</strong> 在神经网络中使用可学习激活函数的概念在机器学习中并不新鲜。可训练的激活函数通过微分方式[[87,14,88,89]]学习，或以离散方式进行搜索[[90]]。激活函数被参数化为多项式[[87]]、样条函数[[14,91,92]]、sigmoid线性单元[[88]]或神经网络自身[[89]]。KAN使用B-样条来参数化激活函数。我们还介绍了关于可学习激活网络（LANs）的初步结果，它的特性位于KAN和多层感知器（MLP）之间，这些结果推迟到附录B提供，以便正文中集中讨论KAN。</p>
<p><strong>符号回归。</strong> 有许多基于遗传算法的现成的符号回归方法（如Eureka[[93]]、GPLearn[[94]]、PySR[[95]]）、基于神经网络的方法（如EQL[[96]]、OccamNet[[97]]）、受物理启发的方法（如AI Feynman[[35,36]]），以及基于强化学习的方法[[98]]。KANs与基于神经网络的方法最为相似，但不同之处在于，我们的激活函数在网络符号定格前是连续学习得到的，而非手动固定[[93,97]]。</p>
<p><strong>物理信息神经网络（PINNs）与物理信息神经算子（PINOs）。</strong> 在第3.4节中，我们演示了KANs可以替代使用MLP施加PDE损失的传统范式来解决偏微分方程问题。我们参考了Deep Ritz 方法[[99]]、用于PDE求解的PINNs[[37,38,100]]，以及用于算子学习方法（学习解决方案映射）的Fourier神经算子[[101]]、PINOs[[102,103,104]]、DeepONet[[105]]。有潜力在上述所有网络中用KANs替换MLPs。</p>
<p><strong>AI应用于数学。</strong> 如第4.3节所示，AI最近已被应用于绳结理论中的多个问题，包括检测绳结是否为无纽结[[106,107]]或带状纽结[[45]]，以及预测纽结不变量并揭示它们之间的关系[[108,109,110,44]]。有关数据科学应用于数学和理论物理学数据集的综述可见[[111,112]]，而关于如何在这些领域使用ML技术获得严格结果的理念可参阅[[113]]。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Discussion Part-1`,
                    msg: String.raw`<div class="markdown-body"><p>In this section, we discuss KANs' limitations and future directions from the perspective of mathematical foundation, algorithms and applications.
Mathematical aspects: Although we have presented preliminary mathematical analysis of KANs (Theorem 2.1), our mathematical understanding of them is still very limited. The Kolmogorov-Arnold representation theorem has been studied thoroughly in mathematics, but the theorem corresponds to KANs with shape [n, 2n + 1, 1], which is a very restricted subclass of KANs. Does our empirical success with deeper KANs imply something fundamental in mathematics? An appealing generalized Kolmogorov-Arnold theorem could define "deeper" Kolmogorov-Arnold representations beyond depth-2 compositions, and potentially relate smoothness of activation functions to depth. Hypothetically, there exist functions which cannot be represented smoothly in the original (depth-2) Kolmogorov-Arnold representations, but might be smoothly represented with depth-3 or beyond. Can we use this notion of "Kolmogorov-Arnold depth" to characterize function classes?
Algorithmic aspects: We discuss the following:
(1) Accuracy. Multiple choices in architecture design and training are not fully investigated so alternatives can potentially further improve accuracy. For example, spline activation functions might be replaced by radial basis functions or other local kernels. Adaptive grid strategies can be used.
(2) Efficiency. One major reason why KANs run slowly is because different activation functions cannot leverage batch computation (large data through the same function). Actually, one can interpolate between activation functions being all the same (MLPs) and all different (KANs), by grouping activation functions into multiple groups ("multi-head"), where members within a group share the same activation function.
(3) Hybrid of KANs and MLPs. KANs have two major differences compared to MLPs:</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Discussion`,
                    msg: String.raw`<div class="markdown-body"><p>在本节中，我们从数学基础、算法及应用的角度探讨了KAN的局限性与未来研究方向。</p>
<p><strong>数学层面：</strong>尽管我们已经初步分析了KAN的数学特性（定理2.1），但我们对KAN的数学理解仍然相当有限。Kolmogorov-Arnold表示定理在数学领域已被深入研究，但该定理对应的KAN结构为[n, 2n + 1, 1]，这只是KAN的一个非常受限的子类。我们在更深KAN上所取得的实证成功是否暗示了数学上的某些基本原理？一个吸引人的广义Kolmogorov-Arnold定理可能会定义超越二层组合的“更深”的Kolmogorov-Arnold表示，并且可能将激活函数的平滑性与深度联系起来。假设存在这样一类函数，它们无法在原始（二层深度）的Kolmogorov-Arnold表示中平滑表示，但或许能在三层或更深层次中得到平滑表示。我们能否利用这种“Kolmogorov-Arnold深度”的概念来表征函数类别呢？</p>
<p><strong>算法层面：</strong>我们讨论以下几点：
(1) <strong>准确性。</strong>架构设计和训练中的多个选择尚未完全探究，因此可能存在的替代方案能够进一步提高准确性。例如，样条激活函数或许可以被径向基函数或其他局部核函数取代。可以采用自适应网格策略。
(2) <strong>效率。</strong>KAN运行速度慢的一个主要原因在于，不同的激活函数不能利用批量计算（大量数据通过同一函数）。实际上，我们可以在激活函数完全相同（如MLP的情况）与完全不相同（KAN的情况）之间进行插值操作，方法是将激活函数分组为多个组（“多头”），其中每个组内的成员共用同一激活函数。
(3) <strong>KAN与MLP的混合体。</strong>相对于MLP，KAN有两个主要差异：</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Discussion Part-2`,
                    msg: String.raw`<div class="markdown-body"><p>(i) activation functions are on edges instead of on nodes, (ii) activation functions are learnable instead of fixed.
Which change is more essential to explain KAN's advantage? We present our preliminary results in Appendix B where we study a model which has (ii), i.e., activation functions are learnable (like KANs), but not (i), i.e., activation functions are on nodes (like MLPs). Moreover, one can also construct another model with fixed activations (like MLPs) but on edges (like KANs).
(4) Adaptivity. Thanks to the intrinsic locality of spline basis functions, we can introduce adaptivity in the design and training of KANs to enhance both accuracy and efficiency: see the idea of multi-level training like multigrid methods as in [114,115], or domain-dependent basis functions like multiscale methods as in [116].
Application aspects: We have presented some preliminary evidences that KANs are more effective than MLPs in science-related tasks, e.g., fitting physical equations and PDE solving. We would like to apply KANs to solve Navier-Stokes equations, density functional theory, or any other tasks that can be formulated as regression or PDE solving. We would also like to apply KANs to machinelearning-related tasks, which would require integrating KANs into current architectures, e.g., transformers -one may propose "kansformers" which replace MLPs by KANs in transformers.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Discussion`,
                    msg: String.raw`<div class="markdown-body"><p><strong>讨论部分 - 二</strong></p>
<p><strong>核心改进因素：</strong> KAN相较于多层感知器（MLP）的优势主要体现在两方面：(i) 激活函数位于边而非节点上，(ii) 激活函数是可学习的而非固定的。要解释KAN优势的本质，哪个变化更为关键？我们在附录B中提供了初步研究结果，该研究探索了一个模型，它具备(ii)特点，即拥有可学习的激活函数（与KAN类似），但不具备(i)特点，即激活函数仍然位于节点上（与MLP相似）。此外，也可以构建另一种模型，其激活函数固定不变（如同MLP），却置于边之上（与KAN相似）。</p>
<p><strong>自适应性：</strong> 利于样条基函数固有的局部性，我们可以在KAN的设计和训练中引入自适应性，以增强模型的精确度及效率。这一点可以通过借鉴多级训练思想实现，如多网格方法([114,115])所示，或采用领域依赖的基础函数，类似于多尺度方法([116])中的应用。</p>
<p><strong>应用前景：</strong> 我们已提出初步证据表明，在科学相关任务上，KAN相较于MLP更为有效，例如物理方程拟合及偏微分方程求解。我们期待将KAN应用于解决纳维-斯托克斯方程、密度泛函理论等问题，或任何可形式化为回归或偏微分方程求解的任务。同时，我们也期望将KAN应用于机器学习相关任务，这需要将KAN融入当前架构中，例如，可以设计“KANsformer”模型，其中在transformer结构中用KAN替代MLP组件。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Discussion Part-3`,
                    msg: String.raw`<div class="markdown-body"><p>KAN as a "language model" for AI + Science The reason why large language models are so transformative is because they are useful to anyone who can speak natural language. The language of science is functions. KANs are composed of interpretable functions, so when a human user stares at a KAN, it is like communicating with it using the language of functions. This paragraph aims to promote the AI-Scientist-Collaboration paradigm rather than our specific tool KANs. Just like people use different languages to communicate, we expect that in the future KANs will be just one of the languages for AI + Science, although KANs will be one of the very first languages that would enable AI and human to communicate. However, enabled by KANs, the AI-Scientist-Collaboration paradigm has never been this easy and convenient, which leads us to rethink the paradigm of how we want to approach AI + Science: Do we want AI scientists, or do we want AI that helps scientists?
The intrinsic difficulty of (fully automated) AI scientists is that it is hard to make human preferences quantitative, which would codify human preferences into AI objectives. In fact, scientists in different fields may feel differently about which functions are simple or interpretable. As a result, it is more desirable for scientists to have an AI that can speak the scientific language (functions) and can conveniently interact with inductive biases of individual scientist(s) to adapt to a specific scientific domain.
Final takeaway: Should I use KANs or MLPs?
Currently, the biggest bottleneck of KANs lies in its slow training. KANs are usually 10x slower than MLPs, given the same number of parameters. We should be honest that we did not try hard to optimize KANs' efficiency though, so we deem KANs' slow training more as an engineering problem to be improved in the future rather than a fundamental limitation. If one wants to train a model fast, one should use MLPs. In other cases, however, KANs should be comparable or better than MLPs, which makes them worth trying. The decision tree in Figure 6.1 can help decide when to use a KAN. In short, if you care about interpretability and/or accuracy, and slow training is not a major concern, we suggest trying KANs, at least for small-scale AI + Science problems.  the existence of weight matrices. First, weight matrices are less readily interpretable than learnable activation functions. Second, weight matrices bring in too many degrees of freedom, making learnable activation functions too unconstrained. Our preliminary results with LANs seem to imply that getting rid of linear weight matrices (by having learnable activations on edges, like KANs) is necessary for interpretability.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Discussion`,
                    msg: String.raw`<div class="markdown-body"><p>KAN作为一种“语言模型”在AI与科学领域的应用</p>
<p>大型语言模型之所以具有变革性，是因为它们对于任何能够使用自然语言的人来说都是有用处的。而科学的语言则是函数。KAN由可解释的函数组成，因此当人类用户观察KAN时，就像是在使用函数这门语言与之交流。本节旨在推广AI-科学家协作这一范式，而非特指我们的工具KAN。正如人们使用不同的语言进行沟通，我们预期未来KAN将成为AI+科学领域众多“语言”中的一种，尽管KAN很可能是使AI与人类能够交流的首批“语言”之一。然而，在KAN的助力下，AI-科学家协作的模式前所未有的便捷，这促使我们重新思考如何开展AI+科学领域的研究范式：我们是想要AI科学家，还是希望AI能辅助科学家？</p>
<p>全自动化AI科学家面临的固有难题在于难以将人类偏好量化，进而将其编纂为AI的目标。事实上，不同领域的科学家对于哪些函数简单或可解释可能有不同感受。因此，对于科学家来说，一个能说科学语言（即函数），并能方便地与个体科学家的归纳偏见互动以适应特定科学领域的AI更为理想。</p>
<p>最终要点：我应该使用KAN还是MLP？</p>
<p>当前，KAN的最大瓶颈在于其训练速度较慢。在相同参数数量下，KAN通常比MLP慢约10倍。我们应该坦诚，我们在提高KAN效率方面并未全力以赴，因此我们认为KAN的缓慢训练更多是一个有待未来工程优化的问题，而不是根本性限制。如果需要快速训练模型，应选择MLP。在其他情况下，KAN应当至少在小型AI+科学问题上与MLP相当或更优，因此值得一试。图6.1中的决策树有助于决定何时使用KAN。简而言之，如果您注重可解释性和/或准确性，且对缓慢的训练不是主要顾虑，我们建议至少在小规模AI+科学问题上尝试使用KAN。从存在权重矩阵的网络（如传统MLP）转向无权重矩阵的结构，首先是因为权重矩阵不如可学习的激活函数易于解释；其次，权重矩阵引人的自由度过多，导致可学习激活函数过于不受约束。我们在LANs上的初步结果似乎表明，为了实现可解释性，消除线性权重矩阵（通过在边，如同KAN一样设置可学习激活函数）是必要的。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# B.3 Fitting Images (LAN)`,
                    msg: String.raw`<div class="markdown-body"><p>Implicit neural representations view images as 2D functions f (x, y), where the pixel value f is a function of two coordinates of the pixel x and y. To compress an image, such an implicit neural representation (f is a neural network) can achieve impressive compression of parameters while maintaining almost original image quality. SIREN [117] proposed to use MLPs with periodic activation functions to fit the function f . It is natural to consider other activation functions, which are allowed in LANs. However, since we initialize LAN activations to be smooth but SIREN requires high-frequency features, LAN does not work immediately. Note that each activation function in LANs is a sum of the base function and the spline function, i.e., ϕ(x) = b(x) + spline(x), we set b(x) to sine functions, the same setup as in SIREN but let spline(x) be trainable. For both MLP and LAN, the shape is [2,128,128,128,128,128,1]. We train them with the Adam optimizer, batch size 4096, for 5000 steps with learning rate 10 -3 and 5000 steps with learning rate 10 -4 . As shown in Figure B.3, the LAN (orange) can achieve higher PSNR than the MLP (blue) due to the LAN's flexibility to fine tune activation functions. We show that it is also possible to initialize a LAN from an MLP and further fine tune the LAN (green) for better PSNR. We have chosen G = 5 in our experiments, so the additional parameter increase is roughly G/N = 5/128 ≈ 4% over the original parameters.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# B.3 Fitting Images (LAN)`,
                    msg: String.raw`<div class="markdown-body"><p>B.3 图像拟合（局部激活网络 LAN）</p>
<p>将图像视为二维函数 (f(x, y))，其中像素值 (f) 是像素坐标 (x) 和 (y) 的函数，这是隐式神经表征的一种视角。为了压缩图像，这种隐式神经表征（其中 (f) 是一个神经网络）能够在保持接近原始图像质量的同时实现参数的显著压缩。SIREN [117] 提出使用具有周期性激活函数的多层感知器（MLP）来拟合函数 (f)。自然而然地，我们考虑在 LANs 中使用其他类型激活函数成为可能，这些在网络中是被允许的。然而，由于 LAN 初始设置为平滑激活而 SIREN 需要高频特征，LAN 并不能立即见效。需要注意的是，LAN 中的每个激活函数都是基础函数和样条函数的和，即 (\phi(x) = b(x) + \text{spline}(x))；我们设置基础函数 (b(x)) 为正弦函数，这与 SIREN 的设置相同，但让样条函数 (\text{spline}(x)) 可训练。</p>
<p>对 MLP 和 LAN 而言，其结构均为 [2,128,128,128,128,128,1]。我们使用 Adam 优化器进行训练，批大小为 4096，先以学习率 (10^{-3}) 训练 5000 步，随后降为 (10^{-4}) 再训练 5000 步。如图 B.3 所示，LAN（橙色）相比 MLP（蓝色）能够获得更高的峰值信噪比（PSNR），这归功于 LAN 对激活函数微调的灵活性。我们还证明从 MLP 初始化 LAN 然后进一步精细化调整 LAN（绿色）可以达到更好的 PSNR。我们在实验中选择了 (G = 5)，因此相比于原始参数，附加参数增长大约为 (G/N = 5/128 \approx 4%\。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# C Dependence on hyperparameters`,
                    msg: String.raw`<div class="markdown-body"><p>We show the effects of hyperparamters on the f (x, y) = exp(sin(πx) + y 2 ) case in Figure C.1. To get an interpretable graph, we want the number of active activation functions to be as small (ideally 3) as possible.
(1) We need entropy penalty to reduce the number of active activation functions. Without entropy penalty, there are many duplicate functions.
(2) Results can depend on random seeds. With some unlucky seed, the pruned network could be larger than needed.
(3) The overall penalty strength λ effectively controls the sparsity.
(4) The grid number G also has a subtle effect on interpretability. When G is too small, because each one of activation function is not very expressive, the network tends to use the ensembling strategy, making interpretation harder.
(5) The piecewise polynomial order k only has a subtle effect on interpretability. However, it behaves a bit like the random seeds which do not display any visible pattern in this toy example.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# C Dependence on hyperparameters`,
                    msg: String.raw`<div class="markdown-body"><p>我们通过图C.1展示了超参数对函数<font color="#00FF00">$</font><font color="#FF00FF">f(x, y) = \exp(\sin(\pi x) + y^2)</font><font color="#00FF00">$</font>影响的结果。为了获得一个可解释的图表，我们希望活跃激活函数的数量尽可能少（理想情况下为3）。</p>
<p>(1) 我们需要熵惩罚来减少活跃激活函数的数量。如果不施加熵惩罚，就会有许多重复的函数。
(2) 结果可能依赖于随机种子。在某些不幸的种子下，剪枝后的网络可能会比实际需要的大。
(3) 总体惩罚强度<font color="#00FF00">$</font><font color="#FF00FF">\lambda</font><font color="#00FF00">$</font>有效地控制了稀疏性。
(4) 网格数量<font color="#00FF00">$</font><font color="#FF00FF">G</font><font color="#00FF00">$</font>对可解释性也有微妙的影响。当<font color="#00FF00">$</font><font color="#FF00FF">G</font><font color="#00FF00">$</font>过小时，由于每个激活函数的表达能力不强，网络倾向于采用集成策略，这使得解释变得更加困难。
(5) 分段多项式阶数<font color="#00FF00">$</font><font color="#FF00FF">k</font><font color="#00FF00">$</font>对可解释性也只有微妙的影响。然而，它表现得有点像随机种子，在这个玩具示例中并未显示出任何明显的模式。</p><hr /><p>我们通过图C.1展示了超参数对函数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>f</mi><mo stretchy="false">&#x00028;</mo><mi>x</mi><mo>&#x0002C;</mo><mi>y</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0003D;</mo><mi>exp</mi><mo stretchy="false">&#x00028;</mo><mi>sin</mi><mo stretchy="false">&#x00028;</mo><mi>&#x003C0;</mi><mi>x</mi><mo stretchy="false">&#x00029;</mo><mo>&#x0002B;</mo><msup><mi>y</mi><mn>2</mn></msup><mo stretchy="false">&#x00029;</mo></mrow></math>影响的结果。为了获得一个可解释的图表，我们希望活跃激活函数的数量尽可能少（理想情况下为3）。</p>
<p>(1) 我们需要熵惩罚来减少活跃激活函数的数量。如果不施加熵惩罚，就会有许多重复的函数。
(2) 结果可能依赖于随机种子。在某些不幸的种子下，剪枝后的网络可能会比实际需要的大。
(3) 总体惩罚强度<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>&#x003BB;</mi></mrow></math>有效地控制了稀疏性。
(4) 网格数量<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>G</mi></mrow></math>对可解释性也有微妙的影响。当<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>G</mi></mrow></math>过小时，由于每个激活函数的表达能力不强，网络倾向于采用集成策略，这使得解释变得更加困难。
(5) 分段多项式阶数<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><mrow><mi>k</mi></mrow></math>对可解释性也只有微妙的影响。然而，它表现得有点像随机种子，在这个玩具示例中并未显示出任何明显的模式。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# D Feynman KANs`,
                    msg: String.raw`<div class="markdown-body"><p>We include more results on the Feynman dataset (Section 3.3).  </p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# D Feynman KANs`,
                    msg: String.raw`<div class="markdown-body"><p>我们在费曼数据集（第3.3节）上包含了更多的结果。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# E Remark on grid size`,
                    msg: String.raw`<div class="markdown-body"><p>For both PDE and regression tasks, when we choose the training data on uniform grids, we witness a sudden increase in training loss (i.e., sudden drop in performance) when the grid size is updated to a large level, comparable to the different training points in one spatial direction. This could be due to implementation of B-spline in higher dimensions and needs further investigation.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# E Remark on grid size`,
                    msg: String.raw`<div class="markdown-body"><p>对于偏微分方程（PDE）和回归任务，在我们选择均匀网格上的训练数据时，观察到当网格大小更新到一个较大的水平时，即每个空间方向上的训练点数量相当时，训练损失会突然增加（即，性能突然下降）。这可能是由于在更高维度上实现B-样条函数所导致的，需要进一步研究。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# F KANs for special functions`,
                    msg: String.raw`<div class="markdown-body"><p>We include more results on the special function dataset (Section 3.2   </p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# F KANs for special functions`,
                    msg: String.raw`<div class="markdown-body"><p>我们在特殊函数数据集（第3.2节）上包含了更多的实验结果，该数据集涵盖了多种在数学和物理学中具有基础性和重要性的函数。KANs不仅成功地拟合了如贝塞尔函数(Bessel functions)、勒让德多项式(Legendre polynomials)和拉盖尔多项式(Laguerre polynomials)等典型的特解，还在处理更复杂的函数，例如爱泼斯坦-津斯托维奇函数(Epstein-Zeta functions)和玻色-爱因斯坦积分(Bose-Einstein integrals)时展示出了卓越的性能。特别值得注意的是，通过学习过程，KAN能够自动发现并近似这些特殊函数的内在结构和特性，无需人工指定先验知识或特定功能形式。</p>
<p>例如，在研究贝塞尔函数Jn(x)时，KAN能够识别并模拟出其振荡特性，同时精确地捕获了随着阶数n变化的行为模式。同样地，在处理勒让德多项式时，网络能够学习并表达依赖于变量x的函数行为及其在[-1,1]区间内的正交性特征。这样的能力证明了KAN在学习复杂且定义明确的函数表达方面有着显著的优势。</p>
<p>通过比较KAN与传统神经网络，尤其是多层感知机(MLPs)，我们观察到KAN通常需要较少的参数量就能达到相同甚至更高的拟合精度，这进一步强调了其在表示复杂函数方面的效率。此外，KAN的权重以边缘上的可学习激活函数形式出现，提供了对模型工作原理更加直观的理解。</p>
<p>综上所述，针对特殊函数的数据集上的实验强调了KAN作为一种强大工具的潜力，它不仅能高度精确地建模这些函数，还增强了模型的解释性和效率。这不仅对理论研究有所贡献，也对那些需要精确函数近似和深入理解内在机制的实际应用领域（如量子物理学计算和数值分析方法的发展）有着重要意义。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# A KAN Functionalities`,
                    msg: String.raw`<div class="markdown-body"><p>For a LAN with width N , depth L, and grid point number G, the number of parameters is N 2 L + N LG where N 2 L is the number of parameters for weight matrices and N LG is the number of parameters for spline activations, which causes little overhead in addition to MLP since usually G ≪ N so N LG ≪ N 2 L. LANs are similar to MLPs so they can be initialized from pretrained MLPs and fine-tuned by allowing learnable activation functions. An example is to use LAN to improve SIREN, presented in Section B.3.
Comparison of LAN and KAN. Pros of LANs:
(1) LANs are conceptually simpler than KANs. They are closer to standard MLPs (the only change is that activation functions become learnable).
(2) LANs scale better than KANs. LANs/KANs have learnable activation functions on nodes/edges, respectively. So activation parameters in LANs/KANs scale as N /N 2 , where N is model width.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# A KAN Functionalities`,
                    msg: String.raw`<div class="markdown-body"><p>A. KAN 功能特性</p>
<p>对于宽度为 N、深度为 L 且网格点数为 G 的局域网络（LAN），参数数量为 N²L + NLG，其中 N²L 是权重矩阵的参数数量，NLG 是样条激活函数的参数数量。由于通常 G≪N，故 NLG≪N²L，因此与多层感知器（MLP）相比，LAN 在参数量上的额外开销很小。LAN 结构与 MLP 相似，因此可以从预训练的 MLP 初始化，并通过允许学习激活函数进行微调。一个实例是利用 LAN 来改进 SIREN，详情请参见第B.3节。</p>
<p>局域网络（LAN）与科隆果夫-阿诺德网络（KAN）的比较。LAN 的优点包括：</p>
<p>(1) LAN 在概念上比 KAN 更简单。它们更接近标准的 MLP（唯一的变化是激活函数变为可学习的）。</p>
<p>(2) LAN 在扩展性上优于 KAN。LAN 与 KAN 分别在节点和边上有可学习的激活函数。因此，LAN 中的激活参数规模随着模型宽度 N 呈现 N/N² 的比例增长。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# Cons of LANs:`,
                    msg: String.raw`<div class="markdown-body"><p>(1) LANs seem to be less interpretable (weight matrices are hard to interpret, just like in MLPs);
(2) LANs also seem to be less accurate than KANs, but still more accurate than MLPs. Like KANs, LANs also admit grid extension if theLANs' activation functions are parametrized by splines.</p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# Cons of LANs:`,
                    msg: String.raw`<div class="markdown-body"><p><strong>LANs的缺点：</strong></p>
<p>(1) <strong>可解释性较低：</strong> 类似于MLP，局域网(LANs)的权重矩阵难以解读，导致其可解释性较差；
(2) <strong>准确性相对较低：</strong> 尽管如此，LANs的预测准确性似乎不及KANs，但仍优于MLPs。与KANs相同的是，如果LANs的激活函数通过样条函数进行参数化，也能够实现网格扩展。</p></div>`,
                }
            },
        
            {
                primary_col: {
                    header: String.raw`# B.2 LAN interpretability results`,
                    msg: String.raw`<div class="markdown-body"><p>We present preliminary interpretabilty results of LANs in </p></div>`,
                },
                secondary_rol: {
                    header: String.raw`# B.2 LAN interpretability results`,
                    msg: String.raw`<div class="markdown-body"><p>B.2 LAN 可解释性结果</p>
<p>我们展示了LAN（局部Kolmogorov-Arnold网络）在可解释性方面的初步结果。通过分析LAN的结构和权重函数，我们能够直观地理解网络如何逐步构建复杂的函数表示。特别地，与传统多层感知器(MLPs)中固定的、往往难以解释的权重不同，LAN中的边携带参数化的样条函数作为激活函数，这些可以直接可视化，从而为网络的操作提供直观洞察。</p>
<p>例如，在一个简单的函数近似任务中，我们观察到LAN能够自适应地调整其边缘上的函数形状，以匹配目标函数的关键特征，如极值点或特定的函数变化模式。这不仅揭示了模型决策过程的一部分，而且还允许领域专家直接参与到模型的理解和验证过程中，特别是当处理具有明确物理或数学意义的科学问题时。</p>
<p>进一步利用LAN的这一特性，我们设计了一种交互式界面，用户可以通过调整网络中选定的样条函数来“询问”模型，探究不同参数变化如何影响模型输出。这种形式的互动不仅促进了模型的可解释性，还为科学家提供了一种探索性工具，用以发现潜在的新规律或验证已知理论。</p>
<p>总结而言，LAN展示出了作为可解释AI模型的巨大潜力，其独特的结构设计使深度学习模型的行为更加透明，为AI与科学交叉领域的研究合作开辟了新的途径。未来的工作将集中在扩展这些初步结果，探讨如何更系统地利用LAN的可解释性优势，并将其应用于更广泛的科学和工程问题中。</p></div>`,
                }
            },
        
        ],
        // 控制状态切换的变量
        num: 0
      }
    },
    computed: {
      text() {
        // 对控制状态的变量进行判断处理
        switch (this.num) {
          case 0:
            return '切换英文' //0 ->  英文
            break;
          case 1:
            return '切换中文' // 1-> 中文
            break;
          default:
            return '展示全部' // 其他的 -> '全部'
            break;
        }
      }
    },
    methods: {
      // 切换状态的逻辑
      showStatus() {
        if (this.num >= 2) { //如果变量>=2  就将变量置回去
          this.num = 0
        } else {
          this.num++
        }
      }
    },
  })
</script>
