Accurate 3D neuron segmentation from electron microscopy (EM) volumes is crucial for neuroscience research. However, complex neuron morphology often leads to over-merge and over-segmentation errors. Recent methods use 3D CNNs to predict a 3D affinity map with improved accuracy, but they face two challenges: high computational cost and limited input size, which hinder practical deployment on large-scale EM volumes. To address these challenges, we propose a novel method that leverages lightweight 2D CNNs for efficient neuron segmentation. Our method employs a 2D Y-shaped network to generate two embedding maps from adjacent 2D sections, which are then converted into an affinity map by measuring their embedding distance. While the 2D network better captures pixel dependencies within sections thanks to its larger input size, it overlooks inter-section dependencies. To overcome this, we introduce a cross-dimension affinity distillation (CAD) strategy that transfers inter-section dependency knowledge from a 3D teacher network to the 2D student network by enforcing consistency between their output affinity maps. Additionally, we design a feature grafting interaction (FGI) module that enhances knowledge transfer by grafting embedding maps from the 2D student onto those from the 3D teacher. Extensive experiments on multiple EM neuron segmentation datasets, including a newly built dataset of our own, demonstrate that our method outperforms state-of-the-art methods with only 1/20 of the inference latency.
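The two core operations described above can be sketched as follows: converting a pair of per-pixel embedding maps from adjacent sections into an inter-section affinity map, and the CAD consistency term between student and teacher affinities. The abstract does not specify the exact formulations, so the Gaussian distance kernel and the mean-squared-error consistency loss below are illustrative assumptions, as are all function names.

```python
import numpy as np

def affinity_from_embeddings(emb_a, emb_b):
    """Convert two embedding maps of shape (C, H, W), one per adjacent
    section, into an inter-section affinity map of shape (H, W).

    A Gaussian kernel over the per-pixel Euclidean embedding distance
    is assumed here for illustration: identical embeddings give
    affinity 1 (same neuron), distant embeddings give affinity near 0.
    """
    dist_sq = np.sum((emb_a - emb_b) ** 2, axis=0)  # (H, W)
    return np.exp(-dist_sq)                          # values in (0, 1]

def cad_loss(student_aff, teacher_aff):
    """Cross-dimension affinity distillation: penalize disagreement
    between the 2D student's and the 3D teacher's affinity maps.
    An MSE consistency term is assumed here."""
    return float(np.mean((student_aff - teacher_aff) ** 2))

# Toy example: identical embeddings yield affinity 1 everywhere,
# so distillation against an all-ones teacher map costs nothing.
rng = np.random.default_rng(0)
emb = rng.normal(size=(16, 4, 4))       # C=16 channels, 4x4 pixels
aff = affinity_from_embeddings(emb, emb)
print(aff.shape, aff.max(), cad_loss(aff, np.ones((4, 4))))
```

In the actual method, `emb_a` and `emb_b` would come from the two branches of the Y-shaped student network, and `teacher_aff` from the 3D teacher's predicted affinity volume.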