Major contributions
A graph convolutional network using positions and edges (PEGCN) is proposed to address the aforementioned problems.
First, to address the problem that traditional GNNs ignore the relative positional relations between words, positional encoding is added to the word embeddings so that the network can learn the positional relationships between words. Second, to remedy the insufficient use of edge features in GNNs, the adjacency matrix is raised in dimension and normalized to extract multi-dimensional continuous edge features. Finally, the advantages of large-scale pre-trained models are exploited, and experiments demonstrate that using such models benefits transductive learning. The key contributions of this work are as follows:
(1) We propose the PEGCN model, which addresses the neglect of textual positional information in graph neural networks by incorporating positional information into the input word embeddings (see the first sketch after this list);
(2) The new model can incorporate multi-dimensional edge features, overcoming the limitation that traditional GNNs can only process one-dimensional edge features, and thereby makes full use of both node and edge features in the graph (see the second sketch after this list);
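As a rough illustration of the positional-encoding step referred to in (1), the following minimal NumPy sketch adds position encodings to word embeddings before they are fed to the graph convolution layers. It assumes a sinusoidal scheme in the style of the Transformer; the paper may instead use a learned encoding, and the names `sinusoidal_position_encoding`, `seq_len`, and `dim` are hypothetical.

```python
import numpy as np

def sinusoidal_position_encoding(seq_len, dim):
    """Build a (seq_len, dim) matrix of sinusoidal position encodings."""
    positions = np.arange(seq_len)[:, None]                               # (seq_len, 1)
    div_terms = np.exp(-np.log(10000.0) * (np.arange(0, dim, 2) / dim))   # (dim/2,)
    enc = np.zeros((seq_len, dim))
    enc[:, 0::2] = np.sin(positions * div_terms)   # even dimensions
    enc[:, 1::2] = np.cos(positions * div_terms)   # odd dimensions
    return enc

# Illustrative usage: inject position information into the word embeddings
# that will serve as node inputs to the graph convolution layers.
seq_len, dim = 128, 300
word_embeddings = np.random.randn(seq_len, dim)    # placeholder embeddings
node_inputs = word_embeddings + sinusoidal_position_encoding(seq_len, dim)
```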
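The edge-feature treatment referred to in (2), raising the adjacency matrix to a multi-channel tensor and normalizing it, could be realized along the following lines. This is only a sketch under the assumption that each edge-feature channel is normalized symmetrically as in a standard GCN; the helper `normalize_edge_tensor` is hypothetical and not the authors' implementation.

```python
import numpy as np

def normalize_edge_tensor(A):
    """Symmetrically normalize each channel of an (N, N, C) edge-feature tensor,
    mirroring the usual GCN normalization D^{-1/2} (A + I) D^{-1/2} per channel."""
    N, _, C = A.shape
    out = np.zeros_like(A, dtype=float)
    for c in range(C):
        A_c = A[:, :, c] + np.eye(N)                       # add self-loops per channel
        deg = A_c.sum(axis=1)                               # channel-wise node degree
        d_inv_sqrt = np.power(np.clip(deg, 1e-12, None), -0.5)
        D_inv_sqrt = np.diag(d_inv_sqrt)
        out[:, :, c] = D_inv_sqrt @ A_c @ D_inv_sqrt
    return out

# Illustrative example: 4 nodes with 3 continuous edge-feature channels.
A = np.random.rand(4, 4, 3)
A = (A + A.transpose(1, 0, 2)) / 2                          # symmetrize the edge tensor
A_hat = normalize_edge_tensor(A)
```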