Grasping densely stacked objects can cause collisions and failures, degrading the performance of robotic arms. In this paper, we propose GP-Net, a novel lightweight generative convolutional neural network with grasp priority, to solve multi-object grasping tasks in densely stacked environments. Specifically, a calibrated global context (CGC) module is devised to model the global context and capture long-range dependencies, yielding salient feature representations. A grasp priority prediction (GPP) module is designed to assign high grasp priorities to top-level objects, leading to better grasp performance. Moreover, a new loss function is proposed that effectively guides the network to focus on high-priority objects. Extensive experiments on several challenging benchmarks, including REGRAD and VMRD, demonstrate the superiority of the proposed GP-Net over representative state-of-the-art methods. We also evaluated our model in a real-world environment and obtained an average success rate of 83.3%, demonstrating that GP-Net generalizes well to real-world settings as well. The source code will be made publicly available.
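To illustrate the general idea behind a priority-aware loss, the following is a minimal sketch, assuming the priority module outputs a per-sample priority score in [0, 1] for each grasp candidate (function and parameter names here are hypothetical, not taken from the paper): errors on high-priority (top-level) objects receive a larger weight, so the network is pushed to fit them first.

```python
def priority_weighted_loss(pred, target, priority, alpha=2.0):
    """Mean squared error where each term is scaled by (1 + alpha * priority).

    pred, target, priority: equal-length sequences of floats; `priority`
    values lie in [0, 1], with 1 meaning a top-level (unoccluded) object.
    `alpha` is a hypothetical hyperparameter controlling how strongly
    high-priority objects dominate the loss.
    """
    assert len(pred) == len(target) == len(priority)
    total = 0.0
    for p, t, pr in zip(pred, target, priority):
        weight = 1.0 + alpha * pr  # top-level objects (pr close to 1) weigh more
        total += weight * (p - t) ** 2
    return total / len(pred)
```

With this weighting, an identical prediction error costs more on a top-level object than on a buried one, which is one simple way to encode the "focus on high-priority objects" behavior the abstract describes; the actual GP-Net loss may differ in form.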