Conventional methods of supervised learning inevitably face the problem of local minima; evidence is presented that second-order methods, such as conjugate-gradient and quasi-Newton techniques, are particularly susceptible to becoming trapped in sub-optimal solutions. A new technique, expanded range approximation (ERA), is presented which, by applying a homotopy to the range of the target outputs, allows supervised learning methods to find a global minimum of the error function in almost every case. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
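The abstract's central idea — a homotopy on the range of the target outputs — can be sketched roughly as follows: train first against targets shrunk toward their mean, then gradually expand them back to the full range. This is only an illustrative sketch, not the paper's implementation; the toy network, data, learning rate, and homotopy schedule below are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative; not from the paper).
X = rng.uniform(-1.0, 1.0, size=(200, 1))
t = np.sin(3.0 * X).ravel()
t_mean = t.mean()

# Tiny one-hidden-layer tanh network trained by batch gradient descent.
H = 8
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden activations, shape (N, H)
    return h, h @ W2 + b2           # network outputs, shape (N,)

def grad_step(X, y, lr):
    """One full-batch gradient step on mean squared error."""
    global W1, b1, W2, b2
    h, out = forward(X)
    err = out - y                   # dE/d(out) up to a constant factor
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h**2)   # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# ERA-style homotopy on the target range: lam moves 0 -> 1, expanding
# the targets from their mean out to their true values.
for lam in np.linspace(0.0, 1.0, 21):
    y = t_mean + lam * (t - t_mean)         # range-shrunk targets
    for _ in range(500):
        grad_step(X, y, lr=0.1)

mse = np.mean((forward(X)[1] - t) ** 2)
```

At `lam = 0` the network only has to learn a constant (a trivially convex problem for the output layer); each subsequent value of `lam` starts from the previous solution, so the error surface is deformed gradually rather than confronted all at once.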