Lighting conditions during night-time driving are poor and targets are easily occluded, which makes it difficult for detection algorithms to accurately determine the edges and shapes of targets. In addition, vehicle motion blurs the captured objects, making target feature extraction difficult. To address these issues, this paper proposes an improved object detection algorithm for night-time driving based on YOLOv8n. Firstly, the DCN module is introduced into the C2f module to form the DCN_CSP2 module, which replaces the C2f module in the Backbone; this enables the algorithm to capture the shape and edge information of targets more accurately, strengthens feature extraction, and reduces the computational burden. Secondly, the DWConv module is introduced into the Neck to reduce the number of model parameters while maintaining detection performance, improving computational efficiency and making the model lightweight. Additionally, to address the tendency of the original NMS algorithm to discard important targets when they are partially occluded, a decay function based on the overlap between each candidate detection box and the reference box is introduced; rather than being suppressed outright, overlapping candidates have their scores decayed, so important targets are more likely to be retained and to participate in the subsequent NMS process, thereby improving the detection performance of the model. Experimental results show that, compared with YOLOv8n, the improved algorithm achieves a 6.2% increase in mAP@50 and a 5.7% increase in mAP@50:95 on the rmsw_5k_night dataset while reducing both the computational burden and the number of model parameters, striking a balance between model lightness and high performance. The improved algorithm effectively enhances the detection capability for night-time targets and lays a solid foundation for deployment on terminal devices.
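The score-decay NMS variant described above can be sketched as follows. The abstract does not specify the exact decay form, so a linear decay proportional to the overlap (as in linear Soft-NMS) is assumed here; the function names and thresholds are illustrative, not the paper's implementation:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def soft_nms(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    """Score-decay NMS sketch: instead of discarding every candidate that
    overlaps the current reference box, decay its score by (1 - IoU).
    Partially occluded targets keep a reduced score and can still be
    retained, whereas hard NMS would drop them outright."""
    scores = list(scores)
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest-scoring remaining box
        if scores[best] < score_thresh:
            continue                 # decayed below threshold: drop
        keep.append(best)
        for i in order:
            o = iou(boxes[best], boxes[i])
            if o > iou_thresh:
                scores[i] *= (1.0 - o)   # linear decay, not hard suppression
        order.sort(key=lambda i: scores[i], reverse=True)
    return keep
```

With two heavily overlapping boxes, hard NMS would keep only the higher-scoring one; here the second box's score is merely decayed, so an occluded target can survive to later stages of the pipeline.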