Graph neural networks (GNNs) have emerged as powerful surrogates for mesh-based computational fluid dynamics, but training them on high-resolution unstructured meshes with hundreds of thousands of nodes remains prohibitively expensive. We study a coarse-to-fine curriculum that accelerates convergence by first training on very coarse meshes and then progressively introducing medium and high resolutions (up to 3 × 10^5 nodes). Unlike multiscale GNN architectures, the model itself is unchanged; only the fidelity of the training data varies over time. We achieve generalization accuracy comparable to training exclusively on the high-resolution meshes while reducing total wall-clock time by up to 50%. Furthermore, on datasets where the model lacks the capacity to learn the underlying physics, the curriculum enables it to break through training plateaus.
Link to publication · Link to code and data
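As an illustration, here is a minimal sketch of the coarse-to-fine schedule described above. The stage names, epoch counts, and helper functions (`make_loader`, `train_one_epoch`) are hypothetical placeholders, not the paper's actual training code; the point is only that the model is fixed while the data fidelity rises stage by stage.

```python
# Sketch of a coarse-to-fine training curriculum. The GNN itself is fixed;
# only the mesh resolution of the training data changes between stages.
# Stage names, epoch counts, and helper signatures are illustrative.

CURRICULUM = [
    ("coarse", 50),  # cheap epochs on very coarse meshes
    ("medium", 30),  # intermediate fidelity
    ("fine", 20),    # full resolution (up to ~3e5 nodes), introduced last
]

def train_with_curriculum(model, make_loader, train_one_epoch):
    """Train `model` stage by stage, raising only the data fidelity.

    make_loader(resolution)        -> iterable of mesh graphs at that fidelity
    train_one_epoch(model, loader) -> performs one optimization pass
    """
    for resolution, epochs in CURRICULUM:
        loader = make_loader(resolution)  # swap the data, keep the same model
        for _ in range(epochs):
            train_one_epoch(model, loader)
    return model
```

Because only the loader changes between stages, most of the total epochs run on cheap coarse meshes, which is where the wall-clock savings come from.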