Improving the Robustness of GraphSAINT via Stability Training
[Abstract] The field of Graph Neural Networks (GNNs) has developed rapidly in recent years owing to their strong representation capabilities for data in non-Euclidean spaces, such as graphs. However, as dataset sizes continue to grow, sampling is commonly introduced to make GNNs scalable, which causes instability during training. For example, when the Graph SAmpling based INductive learning meThod (GraphSAINT) is applied to link prediction, training may fail to converge with a probability of roughly 0.1 to 0.4. This paper proposes improved GraphSAINT variants that introduce two normalization techniques and one GNN trick into the traditional GraphSAINT to resolve the training-stability problem and obtain more robust training results. The improved GraphSAINTs eliminate the instability during training and improve the robustness of the traditional model. In addition, they accelerate the convergence of the training procedure and generally achieve higher prediction accuracy than the traditional GraphSAINT. We validate the improved methods with experiments on the citation dataset of the Open Graph Benchmark (OGB).
[Subject Classification] Environmental Engineering
[Keywords] Graph Neural Networks (GNNs); Training Stability; Normalization Techniques; GNN Tricks; Link Prediction
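The abstract does not name the two normalization techniques or the GNN trick, so the following is only a minimal sketch of where such stabilizers slot into a GraphSAINT-style link-prediction pipeline. It assumes PyTorch Geometric, BatchNorm layers and a residual connection as stand-ins for the paper's additions, and the small Cora graph as a stand-in for the OGB citation dataset.

import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import GraphSAINTRandomWalkSampler
from torch_geometric.nn import GCNConv
from torch_geometric.utils import negative_sampling

class NormalizedGCNEncoder(torch.nn.Module):
    # Two GCN layers with BatchNorm after each convolution (assumed normalization)
    # and a residual connection on the second layer (assumed "GNN trick").
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.bn1 = torch.nn.BatchNorm1d(hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.bn2 = torch.nn.BatchNorm1d(hidden_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.bn1(self.conv1(x, edge_index)))
        return self.bn2(self.conv2(h, edge_index)) + h

def link_pred_loss(z, pos_edge_index, num_nodes):
    # Dot-product decoder with on-the-fly negative sampling.
    neg_edge_index = negative_sampling(pos_edge_index, num_nodes=num_nodes,
                                       num_neg_samples=pos_edge_index.size(1))
    pos_score = (z[pos_edge_index[0]] * z[pos_edge_index[1]]).sum(dim=-1)
    neg_score = (z[neg_edge_index[0]] * z[neg_edge_index[1]]).sum(dim=-1)
    return (F.binary_cross_entropy_with_logits(pos_score, torch.ones_like(pos_score)) +
            F.binary_cross_entropy_with_logits(neg_score, torch.zeros_like(neg_score)))

dataset = Planetoid(root="data/Cora", name="Cora")  # small stand-in for the OGB citation graph
data = dataset[0]
loader = GraphSAINTRandomWalkSampler(data, batch_size=2000, walk_length=2, num_steps=5)

model = NormalizedGCNEncoder(dataset.num_features, 64)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train()
for epoch in range(30):
    for batch in loader:  # each batch is a GraphSAINT-sampled subgraph
        optimizer.zero_grad()
        z = model(batch.x, batch.edge_index)
        loss = link_pred_loss(z, batch.edge_index, batch.num_nodes)
        loss.backward()
        optimizer.step()

Normalizing the node embeddings after each convolution keeps their scale comparable across the randomly sampled subgraphs, which is one plausible route to the more stable convergence the abstract reports; the paper's actual choice of normalizations and trick may differ.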