A framework for developing and evaluating adaptive handover algorithms using deep reinforcement learning.
Note: The code is currently being revised. A new version will be available soon.
HandoverOptimDRL is a framework for developing and evaluating adaptive handover algorithms using deep reinforcement learning, specifically proximal policy optimization (PPO). It provides tools and environments to simulate the 3GPP handover protocol and to train and evaluate a PPO-based handover protocol.
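For context, the 3GPP baseline that such a simulator models is typically driven by measurement events such as A3 ("neighbor becomes offset better than serving"). The following is a minimal, self-contained sketch of an A3-style decision rule with hysteresis and a simplified time-to-trigger; the function name and parameters are illustrative and are not part of the HandoverOptimDRL API:

```python
def a3_event_handover(serving_rsrp, neighbor_rsrp, hysteresis_db=3.0, ttt_steps=4):
    """Simplified 3GPP A3-event handover decision (illustrative sketch).

    Triggers a handover once the neighbor cell's RSRP exceeds the serving
    cell's RSRP by `hysteresis_db` for `ttt_steps` consecutive measurements
    (a crude stand-in for the time-to-trigger timer). Returns the step
    index at which the handover fires, or None if it never does.
    """
    streak = 0
    for t, (s, n) in enumerate(zip(serving_rsrp, neighbor_rsrp)):
        if n > s + hysteresis_db:
            streak += 1
            if streak >= ttt_steps:
                return t
        else:
            streak = 0  # condition broken: time-to-trigger restarts
    return None
```

A learned PPO policy replaces this fixed rule with a decision conditioned on the measurement history, which is what allows it to adapt the trigger behavior to the scenario.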
This repository contains the source code, data sets, and trained PPO model for the paper "A Deep Reinforcement Learning-based Approach for Adaptive Handover Protocols" (see the reference below).
To install the HandoverOptimDRL package, follow these steps:
- Clone the repository: `git clone https://github.com/kit-cel/HandoverOptimDRL`
- Navigate to the project directory: `cd HandoverOptimDRL`
- Install the package: `python -m pip install .` — or, to install it in editable (develop) mode: `python -m pip install -e .`
You are now ready to use the HandoverOptimDRL framework for your projects.
You can validate the PPO-based and 3GPP handover protocols using the `run.py` script:
- Validate the PPO-based protocol: `python run.py validate_ppo`
- Validate the 3GPP protocol: `python run.py validate_3gpp`
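Conceptually, validating a learned handover policy means rolling it out in a handover MDP and scoring the resulting decisions. The toy environment below illustrates one plausible formulation (state = serving/neighbor RSRP, action = stay or hand over, reward = link quality minus a handover cost that penalizes ping-pong); the class, its dynamics, and the cost value are invented for illustration and do not reflect HandoverOptimDRL's actual environment:

```python
import random


class ToyHandoverEnv:
    """Toy handover MDP (illustrative only, not the HandoverOptimDRL env).

    State:  (serving RSRP, neighbor RSRP) in dBm.
    Action: 0 = stay on the serving cell, 1 = hand over to the neighbor.
    Reward: post-action serving RSRP, minus a fixed handover cost that
            discourages ping-pong handovers.
    """

    HANDOVER_COST = 5.0  # assumed penalty per handover, in dB-equivalent units

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.serving = -90.0
        self.neighbor = -95.0
        return (self.serving, self.neighbor)

    def step(self, action):
        if action == 1:  # hand over: the neighbor becomes the serving cell
            self.serving, self.neighbor = self.neighbor, self.serving
        # Random-walk channel dynamics, a crude stand-in for fading/mobility
        self.serving += self.rng.gauss(0.0, 1.0)
        self.neighbor += self.rng.gauss(0.0, 1.0)
        reward = self.serving - (self.HANDOVER_COST if action == 1 else 0.0)
        return (self.serving, self.neighbor), reward
```

A PPO agent trained on such an environment learns when the expected RSRP gain outweighs the handover cost, which is the trade-off the validation runs above evaluate against the 3GPP baseline.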
If you use HandoverOptimDRL in your research, please cite the accompanying paper:
@inproceedings{handoveroptimdrl,
  title={A Deep Reinforcement Learning-based Approach for Adaptive Handover Protocols},
  author={Johannes Voigt and Peter J. Gu and Peter M. Rost},
  year={2024},
  organization={KIT - Karlsruhe Institute of Technology},
}
This project is licensed under the MIT License. See the LICENSE file for details.