LunarLander-v2 (OpenAI Gym) example, using Rust, tch-rs, and Bevy
- A Rust rewrite of OpenAI Gym's LunarLander-v2 environment, replacing the original Python implementation: the rapier2d physics engine stands in for Box2D, and the Bevy game engine replaces Pygame.
- An agent that solves the lander, based on the solution from the DeepLearning.AI & Stanford University course, using tch-rs (PyTorch bindings for Rust) in place of TensorFlow. The agent uses a Double Deep Q-Network, following "Deep Reinforcement Learning with Double Q-learning" (van Hasselt et al., 2015).
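The core idea of Double DQN is to let the online network *select* the next action while the target network *evaluates* it, which reduces the overestimation bias of plain Q-learning. A minimal sketch of that target computation, in plain Rust without tch-rs; the function names and toy Q-values below are illustrative, not taken from this repository:

```rust
// Index of the largest Q-value (greedy action selection).
fn argmax(xs: &[f64]) -> usize {
    xs.iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

/// Double DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)),
/// or just r when the episode terminated at s'.
fn double_dqn_target(
    reward: f64,
    gamma: f64,
    done: bool,
    q_online_next: &[f64],
    q_target_next: &[f64],
) -> f64 {
    if done {
        return reward;
    }
    let a = argmax(q_online_next); // online net picks the action
    reward + gamma * q_target_next[a] // target net scores it
}

fn main() {
    let q_online = [1.0, 3.0, 2.0, 0.5]; // online net prefers action 1
    let q_target = [0.8, 2.5, 2.9, 0.1]; // target net's value for action 1 is 2.5
    let y = double_dqn_target(1.0, 0.99, false, &q_online, &q_target);
    println!("target = {y:.3}"); // 1.0 + 0.99 * 2.5 = 3.475
}
```

Note that plain DQN would instead take `q_target_next.max()` (here 2.9), which is where the overestimation creeps in.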
You need the Rust toolchain (cargo); then run:
cargo run --release
There is also a human-controller mode for the game; to enable it, set the human_controller boolean to true in main.rs:
- let human_controller = false;
+ let human_controller = true;
- Version 2.3.0: Train the model with Double Deep Q-Network, Dueling Deep Q-Network, prioritized experience replay, and Noisy Networks.
- Version 1.1.0: Lunar Lander environment creation and DQN implementation, with replay buffers, soft updates, and other Deep Q-Network fundamentals.
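The soft updates mentioned above are typically Polyak averaging: the target network's weights drift slowly toward the online network's. A hypothetical plain-Rust sketch over flat parameter slices (with tch-rs the same rule would be applied tensor by tensor; the function name here is not from the repository):

```rust
/// Polyak soft update: theta_target <- tau * theta_online + (1 - tau) * theta_target.
/// Small tau (e.g. 0.001) keeps the target network slowly tracking the online one.
fn soft_update(target: &mut [f64], online: &[f64], tau: f64) {
    for (t, o) in target.iter_mut().zip(online.iter()) {
        *t = tau * o + (1.0 - tau) * *t;
    }
}

fn main() {
    let online = [1.0, 2.0];
    let mut target = [0.0, 0.0];
    soft_update(&mut target, &online, 0.1);
    println!("{target:?}"); // [0.1, 0.2]
}
```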