Human emotion recognition from facial expressions in images is an active area of research in computer vision and machine learning. A facial expression typically conveys a mix of basic emotions, each with a different intensity, so measuring these intensities from facial expression images is a challenging task. Previous studies have addressed this task with label-distribution learning (LDL), which assigns each instance a distribution over class intensities to describe the mix of emotions explicitly. In this paper, we propose an LDL framework that predicts the intensities of six basic human emotions, i.e., happiness, sadness, anger, fear, surprise, and disgust, by using a novel convolutional (Conv) layer called the visibility convolutional layer (VCL). The VCL preserves the main advantage of traditional Conv layers, namely the use of filters to extract features, while reducing the number of learnable parameters and extracting strong texture features. Our LDL framework, which we call VCNN-ELDL, combines the features extracted by the VCLs with those extracted by traditional Conv layers to predict a discrete emotion distribution. We evaluate VCNN-ELDL on the widely used s-JAFFE and s-BU-3DFE datasets. The results show that our framework effectively learns the distribution of emotions from face images, outperforming state-of-the-art LDL methods.
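For concreteness, the sketch below illustrates the output stage of a typical LDL framework of this kind: a head that maps fused feature vectors to a discrete distribution over the six basic emotions, trained with a KL-divergence objective, which is a common LDL formulation. This is a minimal sketch under assumed conventions, not the authors' exact design; in particular, the VCL itself is the paper's contribution and is not reproduced here, so the fused input features, the `EmotionLDLHead` name, and the chosen loss are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionLDLHead(nn.Module):
    """Maps fused conv features to a distribution over six basic emotions.

    Hypothetical head for illustration; the paper's VCL feature extractor
    is not reproduced here.
    """

    def __init__(self, in_features: int, num_emotions: int = 6):
        super().__init__()
        self.fc = nn.Linear(in_features, num_emotions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # log-softmax yields log-probabilities of a valid label
        # distribution (non-negative intensities summing to one).
        return F.log_softmax(self.fc(x), dim=-1)

def ldl_loss(log_pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # KL divergence between the ground-truth label distribution and the
    # predicted one -- a standard LDL training objective.
    return F.kl_div(log_pred, target, reduction="batchmean")

# Example: a batch of 8 fused feature vectors, standing in for the
# concatenated VCL + traditional Conv features described in the abstract.
head = EmotionLDLHead(in_features=512)
features = torch.randn(8, 512)
target = torch.rand(8, 6)
target = target / target.sum(dim=1, keepdim=True)  # normalize to distributions
loss = ldl_loss(head(features), target)
```

At evaluation time, the predicted and ground-truth distributions would typically be compared with distribution-level measures (e.g., KL divergence or cosine similarity), as is customary in the LDL literature.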