Understanding Kolmogorov-Arnold Networks (KANs) and Their Application in Variational Autoencoders

shashank Jain · Published in GoPenAI · 5 min read · Jun 28, 2024


Today, we’ll be diving into Kolmogorov-Arnold Networks, or KANs for short.
We’re going to explore how KANs can potentially revolutionize the way we build
and understand neural networks, especially when it comes to Variational
Autoencoders (VAEs).

Let’s start with the basics. What exactly are Kolmogorov-Arnold Networks?

KANs are based on a mathematical result called the Kolmogorov-Arnold
representation theorem. The gist of it is this: any continuous function of
multiple variables can be written as a finite composition of continuous
functions of a single variable, combined using nothing more than addition.
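
In symbols, the theorem says that any continuous function f of n variables can be written as

f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)

where each \Phi_q and \phi_{q,p} is a continuous function of a single variable, and the only truly multivariate operation left is addition.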

Now, why does this matter for neural networks? Well, think about it. Neural
networks are all about approximating complex functions. If we can represent any
function using simpler, one-dimensional functions, we might be able to create
more efficient and powerful neural networks. It’s like breaking down a complex
problem into smaller, more manageable pieces.
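
To make that concrete, here is a minimal sketch of a KAN-style layer in PyTorch. This is my own illustration of the idea, not the article’s code and not the pykan library: real KANs typically use learnable B-splines on each edge, while this toy version uses fixed Gaussian bumps with learnable coefficients. The key point is the same either way: every edge carries its own learnable one-dimensional function instead of a single scalar weight.

```python
import torch
import torch.nn as nn

class NaiveKANLayer(nn.Module):
    """Illustrative KAN-style layer (a sketch, not pykan).

    In an MLP, each edge holds one scalar weight. Here each edge holds
    a small learnable 1-D function, built as a weighted sum of fixed
    Gaussian basis functions on a grid.
    """
    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        # Fixed grid of basis-function centers, shared by all edges.
        self.register_buffer(
            "centers", torch.linspace(grid_range[0], grid_range[1], num_basis)
        )
        self.width = (grid_range[1] - grid_range[0]) / num_basis
        # One coefficient per (output, input, basis) triple: these
        # coefficients ARE the learnable 1-D edge functions.
        self.coeffs = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_basis))

    def forward(self, x):  # x: (batch, in_dim)
        # Evaluate every Gaussian basis function at every input value.
        z = (x.unsqueeze(-1) - self.centers) / self.width  # (batch, in, basis)
        phi = torch.exp(-z ** 2)
        # Each edge function is sum_b coeffs[o, i, b] * basis_b(x_i);
        # the layer output then sums the edge functions over inputs i.
        return torch.einsum("bik,oik->bo", phi, self.coeffs)

layer = NaiveKANLayer(in_dim=4, out_dim=3)
out = layer(torch.randn(16, 4))
print(out.shape)  # torch.Size([16, 3])
```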

But here’s where it gets really interesting. We can implement these
one-dimensional functions using splines and piecewise polynomials. Let me break
that down for you.

Splines are like the Swiss Army knives of function approximation. They’re
smooth, flexible, and incredibly useful. Imagine you’re trying to draw a complex
curve. Instead of using one complicated function, you use several simpler
functions that connect smoothly. That’s essentially what a spline does.
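
As a quick illustration (my own example, not from the article), SciPy’s make_interp_spline builds exactly this kind of object: cubic polynomial pieces joined so that the curve and its first two derivatives stay continuous at the knots.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# A "complex" curve we want to approximate from a handful of points.
x = np.linspace(0, 2 * np.pi, 12)         # knot locations
y = np.sin(x) * np.exp(-0.2 * x)          # target values at the knots

# A cubic spline: piecewise cubic polynomials joined smoothly
# (continuous first and second derivatives at every knot).
spline = make_interp_spline(x, y, k=3)

# Evaluate the spline densely and compare it to the true curve.
x_fine = np.linspace(0, 2 * np.pi, 200)
true = np.sin(x_fine) * np.exp(-0.2 * x_fine)
print(f"max deviation from the true curve: {np.max(np.abs(spline(x_fine) - true)):.4f}")
```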



