One-hidden-layer feedforward neural networks are functions parameterized by many real-valued parameters. As the number of parameters grows, such networks can approximate a wide class of functions (the universal approximation property), and the essentially optimal order of the approximation bound was derived in 1996. Motivated by numerical experiments indicating that neural networks whose parameters are subject to stochastic perturbations outperform ordinary neural networks, we study the approximation property of neural networks with stochastic perturbations. In this paper, we derive a quantitative order for the variance of the stochastic perturbations under which the essentially optimal approximation order is still achieved.
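As a minimal sketch of the setting (the precise perturbation model below is an assumption for illustration, not stated in this abstract), a one-hidden-layer network with activation $\sigma$ and $n$ hidden units takes the form
\[
f_n(x) \;=\; \sum_{i=1}^{n} c_i \,\sigma\!\big(\langle a_i, x\rangle + b_i\big),
\]
and its stochastically perturbed counterpart replaces each parameter $\theta \in \{a_i, b_i, c_i\}$ by $\theta + \varepsilon_\theta$, where the $\varepsilon_\theta$ are independent noise variables with variance $\sigma_n^2$. The question addressed here is how $\sigma_n^2$ may scale with $n$ while the perturbed network retains the essentially optimal approximation order.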