GNN-as-Judge: Unleashing the Power of LLMs for Few-Shot Graph Learning
ArXiv ML • Published: Apr 13, 2026 04:00
Analyzed: Apr 13, 2026 04:10 • ArXiv ML Analysis
This framework combines the semantic understanding of Large Language Models (LLMs) with the structural modeling of Graph Neural Networks (GNNs). Through a collaborative pseudo-labeling strategy, it addresses data scarcity in text-attributed graphs, improving few-shot semi-supervised learning and enabling more resource-efficient graph applications.
Key Takeaways
- Proposes a collaborative scheme in which LLMs and GNNs jointly generate reliable pseudo-labels.
- Addresses data scarcity in few-shot semi-supervised learning on text-attributed graphs.
- Introduces a weakly-supervised fine-tuning algorithm that distills knowledge while filtering out label noise.
Reference / Citation
"Specifically, GNN-as-Judge introduces a collaborative pseudo-labeling strategy that first identifies the most influenced unlabeled nodes from labeled nodes, then exploits both the agreement and disagreement patterns between LLMs and GNNs to generate reliable labels."
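The quoted strategy can be illustrated with a minimal sketch. This is not the paper's exact algorithm; the function name, confidence thresholds, and the disagreement tie-breaking rule below are all illustrative assumptions. The idea shown is only the agreement/disagreement pattern: accept a pseudo-label when both models agree, and under disagreement trust one model only when it is confident and the other is not.

```python
def collaborative_pseudo_labels(llm_preds, gnn_preds,
                                agree_conf=0.5, solo_conf=0.9):
    """Sketch of agreement/disagreement-based pseudo-labeling.

    llm_preds / gnn_preds: {node: (label, confidence)} for unlabeled nodes.
    Returns {node: pseudo_label} for nodes deemed reliable.
    Thresholds are illustrative, not from the paper.
    """
    pseudo = {}
    # Only nodes scored by both models can be cross-checked.
    for node in llm_preds.keys() & gnn_preds.keys():
        l_lab, l_conf = llm_preds[node]
        g_lab, g_conf = gnn_preds[node]
        if l_lab == g_lab:
            # Agreement: accept if at least one model is reasonably confident.
            if max(l_conf, g_conf) >= agree_conf:
                pseudo[node] = l_lab
        else:
            # Disagreement: accept only a strongly confident prediction
            # when the other model is clearly unsure; otherwise discard.
            if l_conf >= solo_conf and g_conf < agree_conf:
                pseudo[node] = l_lab
            elif g_conf >= solo_conf and l_conf < agree_conf:
                pseudo[node] = g_lab
    return pseudo
```

Discarded nodes simply stay unlabeled, which is one common way such schemes filter label noise before the downstream fine-tuning step.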