Jaward posted an update
Reading the KAN paper, I kept noticing physicists casually making fun of the shortcomings of MLPs, and of neural nets as a whole:

- "The philosophy here is close to the mindset of physicists, who often care more about typical cases rather than worst cases" lol this went hard on NNs

- "Finite grid size can approximate the function well with a residue rate independent of the dimension, hence beating curse of dimensionality!" haha.

- "Neural scaling laws are the phenomenon where test loss decreases with more model parameters"

- "Our approach, which assumes the existence of smooth Kolmogorov Arnold representations, decomposes the high-dimensional function into several 1D functions"

Key Differences With MLPs:
- Activation Functions: Unlike MLPs that use fixed activation functions at the nodes, KANs utilize learnable activation functions located on the edges between nodes.
- Weight Parameters: In KANs, traditional linear weight matrices are absent. Instead, each weight parameter is replaced by a learnable univariate function, specifically a spline.
- Summation Nodes: Nodes in KANs perform simple summation of incoming signals without applying non-linear transformations (see the layer sketch below).
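
To make the architecture concrete, here is a minimal sketch of a KAN-style layer. The paper parameterizes edge functions with B-splines (plus a SiLU base branch); this sketch swaps in a fixed Gaussian RBF basis with learnable per-edge coefficients as a simpler stand-in, and the class name `KANLayer` and all hyperparameters are illustrative, not from the paper's code:

```python
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """Sketch of a KAN-style layer: one learnable univariate function per
    edge, plain summation at each output node. Uses a Gaussian RBF basis
    as a stand-in for the paper's B-splines."""
    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        # Fixed 1D grid of basis-function centers, shared by all edges.
        self.register_buffer("centers", torch.linspace(*grid_range, num_basis))
        self.width = (grid_range[1] - grid_range[0]) / (num_basis - 1)
        # Learnable coefficients: one set of basis weights per edge (i -> j),
        # replacing what would be a single scalar weight in an MLP.
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_basis))

    def forward(self, x):  # x: (batch, in_dim)
        # Evaluate every basis function at every input coordinate.
        phi = torch.exp(-((x[..., None] - self.centers) / self.width) ** 2)
        # Each edge applies its own univariate function; nodes just sum:
        #   out[b, j] = sum_i sum_k coef[j, i, k] * phi[b, i, k]
        return torch.einsum("bik,jik->bj", phi, self.coef)

layer = KANLayer(in_dim=3, out_dim=2)
print(layer(torch.randn(4, 3)).shape)  # torch.Size([4, 2])
```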

Advantages Over MLPs:
- Accuracy: KANs can match or beat much larger MLPs at smaller network sizes in tasks like data fitting and solving partial differential equations (PDEs).
- Interpretability: Because each learned edge function is a 1D curve that can be plotted or matched against a symbolic formula, KANs are more interpretable than MLPs (a toy plotting example follows below).
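
As a toy illustration of that interpretability, each edge of the illustrative `KANLayer` sketch above is just a 1D function you can evaluate on a grid and plot, which has no analogue in a dense weight matrix:

```python
import torch
import matplotlib.pyplot as plt

# Plot the univariate function currently sitting on edge (input 0 -> output 1)
# of the `layer` built above; after training, this is the learned activation.
xs = torch.linspace(-2, 2, 200)
phi = torch.exp(-((xs[:, None] - layer.centers) / layer.width) ** 2)
edge_fn = phi @ layer.coef[1, 0]  # values of phi_{1,0} on the grid
plt.plot(xs, edge_fn.detach())
plt.title("Edge function phi_{1,0}")
plt.show()
```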

Technical Innovations:
- Learnable Edges: Placing learnable functions on network edges is a novel approach to network design, providing greater flexibility in modeling complex relationships in data.
- No Linear Weights: Eliminating linear weight matrices moves all learning into the univariate edge functions; since a small KAN can match a much larger MLP, total parameter counts can shrink even though each edge carries several spline coefficients (see the comparison below).
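
A quick, admittedly rough way to see that trade-off with the illustrative `KANLayer` above: layer for layer, a KAN holds more coefficients than an MLP of the same shape, so the paper's parameter savings come from the network needing far fewer nodes overall:

```python
import torch.nn as nn

mlp = nn.Linear(3, 2)
kan = KANLayer(3, 2, num_basis=8)  # sketch class defined above
print(sum(p.numel() for p in mlp.parameters()))  # 8  = 3*2 weights + 2 biases
print(sum(p.numel() for p in kan.parameters()))  # 48 = 2*3 edges * 8 coefficients
```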

Applications and Practical Use:
- Scientific Collaboration: KANs have been used as collaborative tools in scientific settings, helping researchers discover or rediscover mathematical and physical laws.