Radial Basis Function Networks are a type of local learning method that combines ideas from a number of approaches in machine learning.

This is another approach to learning to approximate a function. The learned hypothesis is a global approximation (a linear combination of localised approximations), similar to distance-weighted regression, except that training is "eager" rather than "lazy".

# Learned Hypothesis

The learned hypothesis has the form:

(1)
\begin{align} f(x) = w_0 + \sum^k_{u = 1} w_u K_u(d(x_u,x)) \end{align}

where $x_u$ is an instance from $X$ and the kernel function $K_u()$ decreases as the distance $d(x_u, x)$ increases. The number of hidden units $k$ is a user-provided constant that specifies how many kernel functions are included.

## Kernel Function

It's common to choose a Gaussian kernel:

(2)
\begin{align} K_u(d(x_u,x)) = e^{-\frac{d^2(x_u,x)}{2\sigma_u^2}} \end{align}

Such a network can approximate any function with arbitrarily small error, provided $k$ is sufficiently large and the kernel widths $\sigma_u^2$ can be individually specified.
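As a minimal sketch of evaluating this hypothesis (assuming NumPy, Euclidean distance, and the Gaussian kernel above; `rbf_predict` and its parameter names are hypothetical, not from any library):

```python
import numpy as np

def rbf_predict(x, centres, sigmas, w0, weights):
    """Evaluate f(x) = w0 + sum_u w_u * exp(-d^2(x_u, x) / (2 * sigma_u^2)).

    centres: (k, n) array of kernel centres x_u
    sigmas:  (k,) array of kernel widths sigma_u
    weights: (k,) array of output weights w_u
    """
    d2 = np.sum((centres - x) ** 2, axis=1)   # squared Euclidean distance to each centre
    k = np.exp(-d2 / (2.0 * sigmas ** 2))     # Gaussian kernel activations K_u
    return w0 + weights @ k                    # linear combination plus bias w_0
```

Because each kernel decays with distance, an input far from every centre yields activations near zero, so the prediction falls back to the bias $w_0$.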

# Training Radial Basis Function Networks

1. Choose the structure:

• Set $k$ (the number of hidden units)
• Set $x_u$ and $\sigma_u^2$ for each hidden unit $u$

2. Train the output layer:

• Train the upper-level function to set the weights $w_u$
• First choose the variance (and perhaps the mean) for each $K_u$
• Then hold each $K_u$ fixed and train the linear output layer – efficient methods exist to fit a linear function
• Fit the data to minimise squared error (as in linear models)

3. Figure out which instances to use as the centre of each kernel function:

• Scatter them uniformly throughout the instance space
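The training procedure above can be sketched as follows (assuming NumPy, uniformly scattered centres, a shared width $\sigma$ for all kernels, and least-squares fitting of the output weights; `train_rbf` is a hypothetical helper, not a library API):

```python
import numpy as np

def train_rbf(X, y, k, sigma, rng=None):
    """Eager RBF training: fix the centres and widths first, then solve a
    linear least-squares problem for the output weights w_0..w_k."""
    rng = rng or np.random.default_rng(0)
    # Step 1/3: scatter k centres uniformly throughout the instance space.
    lo, hi = X.min(axis=0), X.max(axis=0)
    centres = rng.uniform(lo, hi, size=(k, X.shape[1]))
    # Step 2: with the kernels held fixed, build the matrix of kernel
    # activations (plus a bias column for w_0) ...
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    Phi = np.hstack([np.ones((X.shape[0], 1)),
                     np.exp(-d2 / (2.0 * sigma ** 2))])
    # ... and fit the linear output layer to minimise squared error.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, w
```

Because the kernels are fixed before the weights are fit, the second step is an ordinary linear regression, which is what makes this eager approach efficient.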
page revision: 4, last edited: 16 Apr 2012 14:30