Stylized animation is often admired for its innovative and daring visual creativity. Owing to its strong visual impact and stark color contrast, the woodcut style has been applied in animation and comics. However, traditional woodcut printing, hand drawing, and previous computer-aided methods have yet to address the issues of dwindling design inspiration, lengthy production times, and complex adjustment procedures. To tackle these challenges, we propose a novel network framework, the Woodcut-style Design Assistant Network (WDANet). Notably, our research is the first to utilize diffusion models to streamline the woodcut-style design process. We curate the Woodcut-62 dataset, which features works from 62 renowned historical artists, to train WDANet to learn the aesthetic nuances of woodcut prints and to offer users a wealth of design references. WDANet integrates text features and woodcut-style image features within a denoising network, allowing users to input or slightly modify a text description to quickly generate accurate, high-quality woodcut-style designs, saving time and offering flexibility. Quantitative and qualitative analyses, together with user studies, show that WDANet outperforms the current state of the art in generating woodcut-style images and confirm its value as a design aid.