A project has recently been initiated that uses radial basis function neural networks (RBFNs) to emulate classical Hermite interpolation. The gist of Hermite interpolation is that the first derivative f'(x) of the function being approximated, in addition to the function value f(x), is used to construct the interpolating polynomial. Such a polynomial has a higher degree than the usual Lagrange interpolating polynomial, but it is also expected to be a more accurate approximation. In the current project we have devised a way to interpolate the derivatives f'(x) using a superposition of classical interpolatory RBFNs. This approach removes the need to determine the RBFN weights subject to a derivative condition; instead, the derivative condition determines the coefficients in the superposition of the RBFNs. An error analysis shows that the superposed network is expected to be more accurate than the underlying classical RBFNs by several orders of magnitude, and this has been confirmed by several numerical examples.
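
For concreteness, the following is a minimal sketch of the kind of classical interpolatory RBFN that such a superposition would build on. It assumes Gaussian kernels, a direct linear solve for the weights, and an illustrative shape parameter; the project's superposition scheme and derivative conditions are not reproduced here.

```python
import numpy as np

def rbf_interpolate(x_nodes, f_vals, eps=2.0):
    """Fit a classical interpolatory RBFN with Gaussian kernels.

    The weights w_j solve the collocation system
        sum_j w_j * exp(-(eps*(x_i - x_j))**2) = f(x_i),
    so the network reproduces f exactly at the nodes.
    """
    r = x_nodes[:, None] - x_nodes[None, :]
    A = np.exp(-(eps * r) ** 2)           # interpolation matrix
    w = np.linalg.solve(A, f_vals)

    def s(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        phi = np.exp(-(eps * (x[:, None] - x_nodes[None, :])) ** 2)
        return phi @ w                     # network output sum_j w_j*phi_j(x)

    return s

# Example: interpolate f(x) = sin(pi*x) on equispaced nodes in [0, 1].
x_nodes = np.linspace(0.0, 1.0, 9)
f = lambda x: np.sin(np.pi * x)
s = rbf_interpolate(x_nodes, f(x_nodes))

# Maximum error on a fine evaluation grid.
x_test = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(s(x_test) - f(x_test)))
```

A Hermite-type scheme would additionally enforce conditions on f'(x); in the approach described above, those conditions fix the coefficients of a superposition of networks like `s` rather than entering the linear system for the weights directly.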