Project 2: Continuous Control

Introduction

For this project, the Reacher environment will be used. The GIF below shows the untrained agents.

Untrained Agent


In this environment, a double-jointed arm can move to target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible.

The observation space consists of 33 variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector should be a number between -1 and 1.
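To make these spaces concrete, here is a minimal interaction sketch using the unityagents package installed in Getting Started below. The environment file name is an assumed example; it depends on your operating system (see Step 2):

    import numpy as np
    from unityagents import UnityEnvironment

    # Assumed file name; use the environment file downloaded in Step 2 below.
    env = UnityEnvironment(file_name='Reacher.app')
    brain_name = env.brain_names[0]
    brain = env.brains[brain_name]

    env_info = env.reset(train_mode=False)[brain_name]
    state = env_info.vector_observations[0]        # 33-dimensional observation
    action_size = brain.vector_action_space_size   # 4 joint torques

    # One random step: every action entry must lie in [-1, 1].
    action = np.clip(np.random.randn(action_size), -1, 1)
    env_info = env.step(action)[brain_name]
    reward = env_info.rewards[0]                   # +0.1 while the hand is in the goal
    done = env_info.local_done[0]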

Distributed Training

For this project, there are two separate versions of the Unity environment provided:

  • The first version contains a single agent.
  • The second version contains 20 identical agents, each with its own copy of the environment.

The second version is useful for algorithms like PPO, A3C, and D4PG that use multiple (non-interacting, parallel) copies of the same agent to distribute the task of gathering experience.
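Reusing env, brain_name, and action_size from the sketch above, the 20-agent version is stepped with one action row per arm, and the rewards come back as a list of 20 values:

    env_info = env.reset(train_mode=True)[brain_name]
    num_agents = len(env_info.agents)              # 20 in the second version
    # One action row per agent, each entry clipped to [-1, 1].
    actions = np.clip(np.random.randn(num_agents, action_size), -1, 1)
    env_info = env.step(actions)[brain_name]       # advances all 20 arms at once
    rewards = env_info.rewards                     # list of 20 per-agent rewards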

Solving the Environment

Option 1: Solve the First Version

The task is episodic, and in order to solve the environment, the agent must get an average score of +30 over 100 consecutive episodes.

Option 2: Solve the Second Version

The barrier for solving the second version of the environment is slightly different, to take into account the presence of many agents. In particular, the agents must get an average score of +30 (over 100 consecutive episodes, and over all agents). Specifically,

  • After each episode, we add up the rewards that each agent received (without discounting), to get a score for each agent. This yields 20 (potentially different) scores. We then take the average of these 20 scores.
  • This yields an average score for each episode (where the average is over all 20 agents).

The environment is considered solved when the average (over 100 episodes) of those average scores is at least +30.
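As a minimal sketch of that bookkeeping (run_episode is a hypothetical helper that returns the 20 undiscounted per-agent returns for one episode):

    from collections import deque
    import numpy as np

    scores_window = deque(maxlen=100)              # last 100 episode averages

    for episode in range(1, 1001):
        agent_scores = run_episode()               # hypothetical: 20 undiscounted returns
        episode_score = np.mean(agent_scores)      # average over all 20 agents
        scores_window.append(episode_score)
        # Solved: the mean of the last 100 episode averages reaches +30.
        if len(scores_window) == 100 and np.mean(scores_window) >= 30.0:
            print(f'Environment solved in {episode} episodes.')
            break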


Getting Started

Dependencies

By following these instructions, you will install PyTorch, the ML-Agents toolkit, and a few more Python packages required to run this project.

Step 1

To set up your python environment to run the code in this repository, follow the instructions below.

  1. Create (and activate) a new environment with Python 3.6.

    • Linux or Mac:
    conda create --name drlnd python=3.6 
    
    source activate drlnd 
    
  2. Follow the instructions in the OpenAI Gym repository (openai/gym on GitHub) to perform a minimal install of OpenAI Gym.

    • Install the box2d environment group by following the instructions here.

  3. Clone the repository (if you haven't already!), and navigate to the python/ folder. Then, install several dependencies.

    git clone https://github.com/chungpuonn/Multiple-Agents---Continuous-Control.git 
    
    cd Multiple-Agents---Continuous-Control/python 
    
    pip install . 
    
    pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org torch===0.4.0 torchvision===0.2.0 tensorflow==1.7.1 -f https://download.pytorch.org/whl/torch_stable.html
    
  4. Create an IPython kernel for the drlnd environment.

    python -m ipykernel install --user --name drlnd --display-name "drlnd" 
    

Step 2

  1. Download the environment from one of the links below. You need only select the environment that matches your operating system:

    (For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.

    (For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link (version 1) or this link (version 2) to obtain the "headless" version of the environment. You will not be able to watch the agent without enabling a virtual screen, but you will be able to train the agent. (To watch the agent, you should follow the instructions to enable a virtual screen, and then download the environment for the Linux operating system above.)

  2. Place the file in the DRLND GitHub repository, in the p2_continuous-control/ folder, and unzip (or decompress) the file.
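Once unzipped, the environment is loaded by pointing UnityEnvironment at the extracted file. The exact file name depends on your operating system; the names below are assumptions for macOS and 64-bit Linux:

    from unityagents import UnityEnvironment

    # macOS (assumed name):
    env = UnityEnvironment(file_name='p2_continuous-control/Reacher.app')
    # 64-bit Linux (assumed name):
    # env = UnityEnvironment(file_name='p2_continuous-control/Reacher_Linux/Reacher.x86_64')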

Instructions

How to run the code

source activate drlnd 
jupyter-notebook p2_continuous-control/Continuous_Control.ipynb

Open the Jupyter notebook Continuous_Control.ipynb, and then follow the instructions inside to train and deploy a smart agent!

Note: Before running code in a notebook, change the kernel to match the drlnd environment by using the drop-down Kernel menu.

Jupyter Notebook Kernel


Follow the instructions in Continuous_Control.ipynb to get started with training your own agent!

  • Note: Option 2 is used in this project to solve the second version of the Reacher environment, which consists of 20 double-jointed robot-arm agents.

Result

  • View the Project Report for a detailed account of the algorithm and implementation.

Trained Agent
