i-hate-drones

A piece of software that provides a workflow for training RL-based drone agents to fly autonomously in a simulated environment.

Problem setting

In the developed setting, a quadcopter already possesses a classical PID controller, which can be used to follow a precomputed trajectory. However, when obstacles emerge on that trajectory, recomputing an obstacle-free path is generally expensive.

In an effort to avoid "classical" recomputation and to be more agile (i.e. have smaller reaction times), the PID controller is augmented with an RL-derived corrector. The corrector adjusts the RPM commands generated by the PID controller so that the drone maneuvers in an obstacle-free manner.
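
Conceptually, the corrector adds a learned residual on top of the nominal PID command. A minimal sketch of that composition (the function name, the MAX_RPM constant, and the Stable Baselines-style predict() interface are illustrative assumptions, not this package's actual API):

import numpy as np

MAX_RPM = 21_000  # hypothetical actuator limit; the real bound comes from the drone model

def corrected_rpm(pid_rpm, observation, policy):
    # The policy outputs a small per-motor residual that is added to the
    # nominal PID command instead of replacing it outright.
    correction, _ = policy.predict(observation, deterministic=True)
    return np.clip(pid_rpm + correction, 0.0, MAX_RPM)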

Software stack

This package is a wrapper around the following software tools, which deliver the horsepower for the RL workflow:

Simulation is handled by the Bullet engine (an open-source physics engine).

gym-pybullet-drones provides an implementation of the PID controller for the Bitcraze Crazyflie quadcopter, an interface to Bullet, and a baseline gymnasium API. It is worth mentioning that there is no explicit dependency on gym-pybullet-drones as a Python package: only a minimal subset of its functionality was ported directly from its source code, and that subset was somewhat refactored along the way.
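
The ported environments follow the standard gymnasium reset/step contract. An orientation-only sketch (ObstacleAviary is a placeholder class name, not necessarily what this package exports):

# "ObstacleAviary" is hypothetical; substitute the environment class
# actually defined in this package. The loop itself is the standard
# gymnasium API.
env = ObstacleAviary()
obs, info = env.reset(seed=42)
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # stand-in for PID + RL corrector
    obs, reward, terminated, truncated, info = env.step(action)
env.close()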

Build

Make sure you have Python 3.9 or newer available on your system.

  • Clone the project and change directory into it.
  • Create a virtual environment with Python 3.9+, e.g.:
$ python3.9 -m venv venv
  • Activate the virtual environment:
$ source venv/bin/activate
  • Install the dependencies and the package itself:
$ pip install .
  • Export the path to the assets directory:
$ export ASSETS_DIR=$(pwd)/assets
  • Run demo script:
$ python demo/pid_flight.py
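
For convenience, the full sequence in one block (the clone URL is inferred from the repository name):

$ git clone https://github.com/hidal00p/i-hate-drones.git
$ cd i-hate-drones
$ python3.9 -m venv venv
$ source venv/bin/activate
$ pip install .
$ export ASSETS_DIR=$(pwd)/assets
$ python demo/pid_flight.py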

Usage

To learn how to use the project, run:

$ python bee_rl/main.py --help

This will show the settings that are available for configuration.

If more fine-grained configuration is desired, e.g.

  • a custom MLP definition,
  • custom callbacks for the Stable Baselines engine,
  • etc...

then there is currently no way around diving a bit into the code. To tailor training more closely to your needs, see the implementation of TrainingEngine in bee_rl/training_engine.py; a sketch of this kind of customization follows below.
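
A hedged sketch of both customizations, assuming Stable-Baselines3. How TrainingEngine actually accepts models and callbacks is not documented here, so the wiring is an assumption; the Stable-Baselines3 calls themselves are real:

from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import BaseCallback

class CollisionLogger(BaseCallback):
    # Hypothetical callback: count steps whose info dict reports a
    # collision (the "collision" key is an assumption, not this package's API).
    def _on_step(self) -> bool:
        for info in self.locals.get("infos", []):
            if info.get("collision"):
                self.logger.record("rollout/collision", 1)
        return True  # returning False would stop training early

# Custom MLP: two hidden layers of 128 units via SB3's policy_kwargs.
model = PPO(
    "MlpPolicy",
    env,  # a gymnasium-compatible drone environment from this package
    policy_kwargs=dict(net_arch=[128, 128]),
)
model.learn(total_timesteps=100_000, callback=CollisionLogger())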

