DQN-VIZ: A Deep Q-Network Visualization System
Abstract
DQN and its variants are among the primary deep reinforcement learning algorithms for discrete action
spaces. Applying these algorithms to a particular task can be difficult, and given the number of components involved,
debugging such implementations requires considerable time and effort. Our objective is to develop a library that, in addition to
providing implementations of several popular DQN variants, offers a support system to aid in
analyzing, recording, and debugging while applying deep reinforcement learning to the problem at hand.
Keywords: Deep Reinforcement Learning, DQN, Double DQN, Dueling DQN, Recording
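Since the keywords name DQN and Double DQN, a minimal sketch of the one-step bootstrap targets these algorithms use may help orient the reader. This is an illustrative example with made-up Q-values, not code from the library described in this paper; all array names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.99
rewards = np.array([1.0, 0.0, -1.0])     # batch of 3 transitions
done = np.array([False, False, True])    # episode-termination flags

# Hypothetical Q-values for the next states (3 states, 4 actions).
q_online_next = rng.normal(size=(3, 4))  # online network Q(s', .)
q_target_next = rng.normal(size=(3, 4))  # target network Q(s', .)

# DQN: bootstrap with the target network's own maximum.
dqn_target = rewards + gamma * (~done) * q_target_next.max(axis=1)

# Double DQN: select the action with the online network, then evaluate it
# with the target network, which reduces overestimation bias.
best_actions = q_online_next.argmax(axis=1)
ddqn_target = rewards + gamma * (~done) * q_target_next[np.arange(3), best_actions]
```

For terminal transitions the bootstrap term vanishes, so both targets reduce to the reward alone; the Double DQN target is never larger than the DQN target for the same transition, since the selected action's value cannot exceed the maximum.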