Integration of Adaptive Reinforcement Learning Models into the Control of Agricultural Drones

Authors

  • Yu. M. Rodinkov Vinnytsia National Technical University
  • A. Yu. Savitsky Vinnytsia National Technical University

DOI:

https://doi.org/10.31649/1997-9266-2025-182-5-187-191

Keywords:

unmanned aerial vehicles, agricultural spraying, simulation, adaptive control, product loss

Abstract

This paper presents an integrated approach to adaptive agricultural spraying with unmanned aerial vehicles (UAVs) based on reinforcement learning (RL), in particular the Proximal Policy Optimization (PPO) algorithm. The study focuses on the practical implementation of mathematical models in simulation and in onboard control systems. It demonstrates how spray coverage error, chemical loss, and stochastic wind models can be formalized into a reward function and incorporated into RL agent training. The PPO algorithm was implemented with the Stable-Baselines3 library in the AirSim simulator. The agent was trained on a composite input state vector that includes position, wind velocity, crop density, and coverage maps. Training was carried out in stages, starting with low-wind conditions and gradually progressing to gusty-wind scenarios. The resulting policy was exported to ONNX format and optimized for real-time execution with TensorRT on an NVIDIA Jetson Nano, enabling efficient inference onboard the drone. The developed solution was tested in simulation environments (AirSim, Gazebo) and on the PX4 SITL platform. A series of experiments was conducted with simulated wind intensities ranging from 2 to 14 m/s, and the proposed RL-based adaptive spraying strategy was compared with traditional fixed-parameter control methods. The results showed a reduction in average coverage error of up to 30 % and a 28 % decrease in chemical losses, confirming the agent’s ability to adapt in real time. A key feature of this approach is its end-to-end practicality: for the first time, a complete development pipeline is presented, from mathematical modeling and training to onboard deployment and real-world validation. The article includes screenshots of the training process, the simulated environments, error convergence curves, and the Gazebo GUI, providing transparency and reproducibility for future researchers. This work contributes to the advancement of autonomous precision agriculture systems and lays the groundwork for deploying self-learning UAVs in dynamic field environments.
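The abstract describes the full pipeline but does not reproduce the authors' code. The following Python sketch is only an illustration of that pipeline, not the implementation from the paper: it assumes a hypothetical Gymnasium-style environment, SprayEnv, whose reward combines a coverage-error term and a chemical-loss penalty under randomized wind, trains a Stable-Baselines3 PPO agent in two wind stages (calm, then gusty), and exports the learned policy to ONNX. All class, variable, and file names are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' code): hypothetical spraying
# environment + staged PPO training + ONNX export of the learned policy.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
import torch as th
from stable_baselines3 import PPO


class SprayEnv(gym.Env):
    """Hypothetical env: obs = [x, y, z, wind_x, wind_y, crop_density, coverage]."""

    def __init__(self, wind_speed_max: float = 2.0, max_steps: int = 200):
        super().__init__()
        self.wind_speed_max = wind_speed_max  # raised between training stages
        self.max_steps = max_steps
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(7,), dtype=np.float32)
        # action = [spray flow rate, heading correction], normalized to [-1, 1]
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.steps = 0
        self.state = self.np_random.uniform(-1.0, 1.0, size=7).astype(np.float32)
        self.state[6] = 0.0  # coverage starts at zero
        return self.state, {}

    def step(self, action):
        self.steps += 1
        wind = self.np_random.uniform(-self.wind_speed_max, self.wind_speed_max, size=2)
        # Toy reward: penalize deviation from full coverage (worsened by wind)
        # plus a chemical-loss term proportional to the commanded flow rate.
        coverage_error = abs(1.0 - self.state[6]) + 0.05 * float(np.linalg.norm(wind))
        flow = max(0.0, float(action[0]) + 1.0) / 2.0
        reward = -(coverage_error + 0.5 * flow)
        self.state[6] = float(np.clip(self.state[6] + 0.1 * flow, 0.0, 1.0))
        terminated = bool(self.state[6] >= 1.0)
        truncated = self.steps >= self.max_steps
        return self.state, reward, terminated, truncated, {}


# Staged training: calm wind first, then continue with gusty wind.
model = PPO("MlpPolicy", SprayEnv(wind_speed_max=2.0), verbose=0)
model.learn(total_timesteps=20_000)
model.set_env(SprayEnv(wind_speed_max=14.0))
model.learn(total_timesteps=20_000, reset_num_timesteps=False)


class OnnxablePolicy(th.nn.Module):
    """Wraps the SB3 actor-critic policy so the ONNX exporter can trace it."""

    def __init__(self, policy):
        super().__init__()
        self.policy = policy

    def forward(self, observation: th.Tensor):
        # Returns (actions, values, log_prob); only actions are used onboard.
        return self.policy(observation, deterministic=True)


dummy_obs = th.randn(1, 7)
th.onnx.export(OnnxablePolicy(model.policy), dummy_obs, "spray_policy.onnx",
               opset_version=17, input_names=["obs"])
```

A script along these lines would produce spray_policy.onnx, which, as described in the abstract, would then be converted to a TensorRT engine for onboard inference on the Jetson Nano; the actual system would additionally feed the coverage-map and crop-density inputs from the drone's sensors rather than from random initialization.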

Author Biographies

Yu. M. Rodinkov, Vinnytsia National Technical University

Post-Graduate Student of the Chair of Information Radio-electronic Technologies and Systems

A. Yu. Savitsky, Vinnytsia National Technical University

Cand. Sc. (Eng.), Associate Professor, Associate Professor of the Chair of Information Radio-electronic Technologies and Systems

References

A. Pretto, A. Bevilacqua, E. Menegatti, and E. Pagello, “Cooperative robotics and autonomous vehicles in precision agriculture,” Robotics, MDPI, no. 8(4), 2019. https://doi.org/10.3390/robotics8040096.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed., MIT Press, 2018, 552 p.

C. Zhang and J. M. Kovacs, “The application of small unmanned aerial systems for precision agriculture: a review,” Precision Agriculture, no. 13, pp. 693-712, 2012. https://doi.org/10.1007/s11119-012-9274-5.

J. Hu, T. Wang, J. C. Yang, Y. B. Lan, S. L. Lv, and Y. L. Zhang, “WSN-assisted UAV trajectory adjustment for pesticide drift control,” Sensors, MDPI, no. 20(19): 5473, 2020. https://doi.org/10.3390/s20195473.

Z. Y. Hao, X. Z. Li, C. Meng, W. Yang, and M. Z. Li, “Adaptive spraying decision system for UAV based on RL,” International Journal of Agricultural and Biological Engineering, no. 15(4), pp. 16-26, 2022. https://doi.org/10.25165/j.ijabe.20221504.6929.

C. Kang, B. Park, and J. Choi, “Scheduling PID attitude and position control for quadrotor UAVs under external disturbances,” Sensors, MDPI, no. 22(1): 150, 2022. https://doi.org/10.3390/s22010150.


Published

2025-10-31

How to Cite

[1]
Y. M. Rodinkov and A. Y. Savitsky, “Integration of Adaptive Reinforcement Learning Models into the Control of Agricultural Drones”, Вісник ВПІ, no. 5, pp. 187–191, Oct. 2025.

Issue

Section

Radioelectronics and radioelectronic equipment manufacturing
