To address the challenges of low offloading success rates and inefficient data transmission in the Internet of Vehicles (IoV), this paper proposes a multi-layer distributed dynamic offloading strategy for IoV edge computing tasks based on multi-agent deep reinforcement learning. First, a multi-layer distributed IoV edge computing system model is designed by integrating software-defined networking (SDN) and mobile edge computing (MEC). This model enables collaborative scheduling optimization across layers, better meeting the needs of dynamic allocation of mobile vehicle resources and real-time task processing. Then, taking into account both the offloading success rate and the data transmission rate of vehicle computing tasks, a multi-agent deep reinforcement learning algorithm framework is proposed. Through collaborative learning among agents, the framework enables the vehicle edge system to autonomously select the optimal task offloading decision. In addition, an action-space search optimization and a prioritized experience replay mechanism are introduced to further improve the effectiveness of action-space exploration and the stability and accuracy of offloading decisions. Finally, building on this framework and these optimization mechanisms, a multi-layer distributed vehicle task offloading decision optimization algorithm is proposed. According to the current network status and task size, the algorithm ensures that vehicles complete computing task offloading with minimal transmission time and a high offloading success rate. Simulation results show that, compared with existing offloading methods, the proposed method improves the computing task offloading success rate by 5% to 20% and the data transmission efficiency by 17.8% on average.
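The abstract names prioritized experience replay as one of the mechanisms stabilizing the offloading decision. The paper's exact priority definition is not given here, so the following is only a minimal sketch of proportional prioritized replay under assumed details: priorities are derived from absolute TD error, the exponent `alpha` and the buffer class itself are hypothetical names, and the transition format is illustrative.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch,
    not the paper's implementation)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # how strongly priorities bias sampling (assumed value)
        self.buffer = []          # stored (state, action, reward, next_state) tuples
        self.priorities = []      # one sampling priority per stored transition
        self.pos = 0              # next write position (ring buffer)

    def add(self, transition, td_error=1.0):
        # Larger TD error -> larger priority; epsilon keeps every priority nonzero.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Transitions with higher priority are replayed more often
        # (sampling with replacement, proportional to priority).
        return random.choices(self.buffer, weights=self.priorities, k=batch_size)


if __name__ == "__main__":
    buf = PrioritizedReplayBuffer(capacity=4)
    for i in range(6):  # overfilling exercises the ring-buffer overwrite
        buf.add((f"s{i}", 0, 1.0, f"s{i + 1}"), td_error=i + 1)
    batch = buf.sample(3)
    print(len(buf.buffer), len(batch))
```

In a full agent, the priority of each stored transition would be refreshed with the latest TD error after every learning step, which the sketch omits for brevity.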