Radial Basis Function Networks

Radial Basis Function Networks are a type of local learning method that combines ideas from a number of approaches in machine learning.

This is another approach to learning to approximate a function: a global approximation built as a linear combination of localised approximations. It is similar to distance-weighted regression, except that it is "eager" rather than "lazy".



Learned Hypothesis

The hypothesis has the form of:

\begin{align} f(x) = w_0 + \sum^k_{u = 1} w_u K_u(d(x_u,x)) \end{align}

where $x_u$ is an instance from $X$ and the kernel function $K_u()$ decreases as the distance $d(x_u, x)$ increases. $k$, the number of hidden units, is a user-provided constant that specifies the number of kernel functions to be included.

Kernel Function

It is common to choose a Gaussian kernel function:

\begin{align} K_u(d(x_u,x)) = e^{-\frac{d^2(x_u,x)}{2\sigma_u^2}} \end{align}

We can use this to approximate any function with arbitrarily small error, provided $k$ is sufficiently large and the kernel widths $\sigma_u^2$ can be individually specified.
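A minimal sketch of evaluating this hypothesis, assuming Euclidean distance for $d()$ and Gaussian kernels (the function name and parameter layout are illustrative, not from the original notes):

```python
import numpy as np

def rbf_predict(x, centers, widths, weights, w0):
    """Evaluate f(x) = w0 + sum_u w_u * exp(-d^2(x_u, x) / (2 * sigma_u^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances d^2(x_u, x)
    k = np.exp(-d2 / (2.0 * widths ** 2))     # Gaussian kernel activations K_u
    return w0 + weights @ k                   # linear combination plus bias w_0

# A query at a kernel centre gets activation K_u = 1 from that unit:
y = rbf_predict(np.array([0.0, 0.0]),
                centers=np.array([[0.0, 0.0]]),
                widths=np.array([1.0]),
                weights=np.array([2.0]),
                w0=1.0)  # 1.0 + 2.0 * 1.0 = 3.0
```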

Training Radial Basis Function Networks


  1. Set $k$, the number of hidden units
  2. Set $x_u$ and $\sigma_u^2$ for each hidden unit $u$
    • Choose the variance (and perhaps the mean) for each $K_u$, e.g. by scattering the centres uniformly throughout the instance space, or by centring each kernel function on a subset of the training instances
  3. Train the upper-level function to set the weights $w_u$
    • Hold each $K_u$ fixed and train the linear output layer; efficient methods exist for fitting a linear function
    • Fit the data to minimise squared error (as in linear models)
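The two-stage procedure above can be sketched as follows, assuming Gaussian kernels with a shared width, centres drawn as a random subset of the training instances, and the output layer fit by ordinary least squares (the function name and defaults are illustrative):

```python
import numpy as np

def train_rbf(X, y, k, sigma, seed=0):
    """Two-stage RBF training sketch:
    1) fix the centres x_u (a random subset of training points) and a shared width sigma,
    2) hold the kernels fixed and fit the linear output layer by least squares."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    # Design matrix: a column of ones for w_0, then one Gaussian activation per hidden unit.
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    Phi = np.hstack([np.ones((len(X), 1)), np.exp(-d2 / (2.0 * sigma ** 2))])
    # Minimise squared error over the weights, as in linear models.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w  # w[0] is w_0, w[1:] are the hidden-unit weights

# Usage: fit a smooth 1-D target with 10 hidden units.
X = np.linspace(0.0, 3.0, 30).reshape(-1, 1)
y = np.sin(X).ravel()
centers, w = train_rbf(X, y, k=10, sigma=0.5)
```

Because the kernels are held fixed in stage 2, the output layer is a plain linear least-squares problem, which is what makes this training "eager" but still efficient.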