For this project, you will train an agent to navigate (and collect bananas!) in a large, square world.
A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of your agent is to collect as many yellow bananas as possible while avoiding blue bananas.
The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. Given this information, the agent has to learn how to best select actions. Four discrete actions are available, corresponding to:
- `0` - move forward
- `1` - move backward
- `2` - turn left
- `3` - turn right
The task is episodic, and in order to solve the environment, your agent must get an average score of +13 over 100 consecutive episodes.
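To make the success criterion concrete, here is a minimal sketch of that check, assuming episode scores are appended to a sliding window of the 100 most recent scores (the `scores_window` and `record_episode` names are illustrative, not from the repository):

```python
import numpy as np
from collections import deque

scores_window = deque(maxlen=100)  # keeps only the 100 most recent episode scores

def record_episode(score):
    """Append an episode score and report whether the environment is solved."""
    scores_window.append(score)
    # solved once the average over 100 consecutive episodes reaches +13
    return len(scores_window) == 100 and np.mean(scores_window) >= 13.0
```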
---
Download the environment from one of the links below. You need only select the environment that matches your operating system:
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here
(For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.
(For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the environment.
---
Place the file in the DRLND GitHub repository, in the `p1_navigation/` folder, and then write the correct path in the argument for creating the environment in the notebook `Navigation_soln.ipynb`:
```python
env = UnityEnvironment(file_name="Banana_Windows_x86_64/Banana.exe")
```

The repository contains the following files:
- `dqn_agent.py`: code for the agent used in the environment
- `model.py`: code containing the Q-Network used as the function approximator by the agent
- `dqn.pth`: saved model weights for the original DQN model
- `ddqn.pth`: saved model weights for the Double DQN model
- `Navigation.ipynb`: notebook to explore the Unity environment
- `Navigation_soln.ipynb`: notebook containing the solution
- `Navigation_Pixels.ipynb`: notebook containing the code for the pixel-action problem (see below)
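With the environment created as above, you can inspect the state and action spaces and step through an episode with random actions. This is a sketch assuming the `unityagents` Python API used in the DRLND repository:

```python
import numpy as np

brain_name = env.brain_names[0]          # the default brain controls the agent
brain = env.brains[brain_name]

env_info = env.reset(train_mode=False)[brain_name]
state = env_info.vector_observations[0]
print("State size:", len(state))                       # 37
print("Action size:", brain.vector_action_space_size)  # 4

score = 0
while True:
    action = np.random.randint(brain.vector_action_space_size)  # random action
    env_info = env.step(action)[brain_name]
    score += env_info.rewards[0]         # +1 yellow banana, -1 blue banana
    if env_info.local_done[0]:           # episode finished
        break
print("Episode score:", score)
```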
Follow the instructions in `Navigation_soln.ipynb` to get started with training your own agent!
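For orientation, the core training loop typically looks like the sketch below. It continues from the environment setup above; the `Agent` constructor arguments follow the `qnetwork` and `update_type` parameters mentioned in this README, while the episode count and epsilon schedule are illustrative assumptions rather than the notebook's exact values:

```python
from collections import deque
import numpy as np
import torch

from dqn_agent import Agent
from model import QNetwork

# update_type='double_dqn' selects the Double DQN update instead
agent = Agent(state_size=37, action_size=4, seed=0,
              qnetwork=QNetwork, update_type='dqn')

scores_window = deque(maxlen=100)
eps = 1.0                                    # epsilon-greedy exploration rate
for i_episode in range(1, 2001):
    env_info = env.reset(train_mode=True)[brain_name]
    state = env_info.vector_observations[0]
    score = 0
    while True:
        action = agent.act(state, eps)
        env_info = env.step(action)[brain_name]
        next_state = env_info.vector_observations[0]
        reward, done = env_info.rewards[0], env_info.local_done[0]
        agent.step(state, action, reward, next_state, done)  # store and learn
        state = next_state
        score += reward
        if done:
            break
    scores_window.append(score)
    eps = max(0.01, 0.995 * eps)             # decay exploration over time
    if np.mean(scores_window) >= 13.0:
        print(f"Solved in {i_episode} episodes")
        torch.save(agent.qnetwork_local.state_dict(), 'dqn.pth')
        break
```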
To watch a trained smart agent, follow the instructions below:
- DQN: The original DQN algorithm can be run by loading the trained model from the checkpoint `dqn.pth`. Please make sure that, while defining the agent, you have set the parameter `qnetwork` to `QNetwork` and the parameter `update_type` to `dqn`.
- Double DQN: The Double DQN algorithm can be run by loading the trained model from the checkpoint `ddqn.pth`. Please make sure that, while defining the agent, you have set the parameter `qnetwork` to `QNetwork` and the parameter `update_type` to `double_dqn`.
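A minimal sketch of this, assuming the agent exposes its online network as `qnetwork_local` (as in the standard DRLND solution code) and that `env` and `brain_name` are set up as above:

```python
import torch

# load the trained weights ('ddqn.pth' for the Double DQN checkpoint)
agent.qnetwork_local.load_state_dict(torch.load('dqn.pth'))

env_info = env.reset(train_mode=False)[brain_name]  # watch at normal speed
state = env_info.vector_observations[0]
score = 0
while True:
    action = agent.act(state, eps=0.0)              # act greedily, no exploration
    env_info = env.step(action)[brain_name]
    state = env_info.vector_observations[0]
    score += env_info.rewards[0]
    if env_info.local_done[0]:
        break
print("Episode score:", score)
```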
The plots below show the score per episode over all episodes. The environment was solved in 359 episodes using the Double DQN method.
| Double DQN | DQN |
|---|---|
| ![]() | ![]() |
After you have successfully completed the project, if you're looking for an additional challenge, you have come to the right place! In the project, your agent learned from information such as its velocity, along with ray-based perception of objects around its forward direction. A more challenging task would be to learn directly from pixels!
To solve this harder task, you'll need to download a new Unity environment. This environment is almost identical to the project environment, where the only difference is that the state is an 84 x 84 RGB image, corresponding to the agent's first-person view. (Note: Udacity students should not submit a project with this new environment.)
You need only select the environment that matches your operating system:
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here
Then, place the file in the p1_navigation/ folder in the DRLND GitHub repository, and unzip (or decompress) the file. Next, open Navigation_Pixels.ipynb and follow the instructions to learn how to use the Python API to control the agent.
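As a starting point, the raw frame can be pulled from `env_info.visual_observations` and converted to a PyTorch tensor. This is a hedged sketch of one possible preprocessing step, assuming the frame arrives as a `(1, 84, 84, 3)` array of floats in `[0, 1]`; it is not the notebook's exact code:

```python
import numpy as np
import torch

env_info = env.reset(train_mode=True)[brain_name]
frame = env_info.visual_observations[0]     # assumed shape: (1, 84, 84, 3)
frame = np.squeeze(frame, axis=0)           # -> (84, 84, 3)
state = torch.from_numpy(frame).float().permute(2, 0, 1)  # -> (3, 84, 84) for a CNN
print(state.shape)                          # torch.Size([3, 84, 84])
```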
(For AWS) If you'd like to train the agent on AWS, you must follow the instructions to set up X Server, and then download the environment for the Linux operating system above.