Recently, deep semi-supervised graph embedding learning has drawn much attention for its appealing performance on data with a pre-specified graph structure, which may be predefined or empirically constructed from the given data samples. However, pre-specified graphs often contain considerable noisy or inaccurate connections and can be huge for large datasets. Most existing embedding algorithms simply take the graph as fixed throughout training; they are therefore easily misled by inaccurate graph edges and may incur a large model size. In this paper, we address these issues by proposing a novel deep semi-supervised algorithm for simultaneous graph embedding and node classification that performs dynamic graph learning in the hidden-layer space of a neural network. In particular, we construct an anchor graph that summarizes the whole dataset using the hidden-layer features of a consistency-constrained network. The anchor graph is used to sample node neighborhood context, which is then presented together with node labels as contextual information to train an embedding network. The outputs of the consistency network and the embedding network are concatenated and passed through a softmax function to perform node classification. The two networks are optimized jointly on both labeled and unlabeled data to minimize a single semi-supervised objective comprising a cross-entropy loss, a consistency loss, and an embedding loss. Extensive experiments on popular image and text datasets show that the proposed method improves upon existing graph embedding and node classification methods and outperforms many state-of-the-art approaches on both types of datasets.
- semi-supervised learning
- Image/Text Classification
- Graph Embedding
- Dynamic Anchor Graph
- Semi-supervised learning
- Dynamic Anchor Graph Embedding
- Grid-structured/graph-structured data
- Image/text classification
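The anchor-graph summarization described in the abstract can be illustrated with a minimal sketch. The function name, the Gaussian kernel, and the parameters `s` and `sigma` below are illustrative assumptions, not the paper's exact formulation: each sample is connected to its `s` nearest anchors with kernel weights, and the rows are normalized so that each node's anchor affinities sum to one.

```python
import numpy as np

def build_anchor_graph(X, anchors, s=2, sigma=1.0):
    """Hedged sketch of anchor-graph construction: connect each sample
    to its s nearest anchors with Gaussian weights; row-normalize."""
    # squared Euclidean distances between every sample and every anchor
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    Z = np.zeros_like(d2)
    for i, row in enumerate(d2):
        nn = np.argsort(row)[:s]                  # s closest anchors
        w = np.exp(-row[nn] / (2 * sigma ** 2))   # Gaussian kernel weights
        Z[i, nn] = w / w.sum()                    # normalize row to sum to 1
    return Z

# Toy example: 6 samples in 2-D, 3 anchors (here simply reusing samples
# as anchors; in practice anchors could come from k-means on hidden features).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
anchors = X[:3]
Z = build_anchor_graph(X, anchors)
```

In the proposed method the anchors would be computed from the hidden-layer features of the consistency network and refreshed as training proceeds, which is what makes the graph dynamic rather than fixed.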