Contributing to simple_rl

Thanks for your interest! I've put together a quick guide for contributing to the library below.

As of August 2018, the standard pipeline for contributing is as follows:

  • Please follow package and coding conventions (see below).
  • If you add something substantive, please do the following:
    • Run the basic testing script and ensure all tests pass in both Python 2 and Python 3.
    • If you decide your contribution would benefit from a new example/test, please write a quick example file to put in the examples directory (a rough sketch of such a file follows this list).
  • Issue a pull request to the main branch, which I will review as quickly as I can. If I haven't approved the request in three days, feel free to email me.
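
For instance, a quick example file might look like the sketch below. The agent, task, and experiment names (QLearningAgent, RandomAgent, GridWorldMDP, run_agents_on_mdp) follow the library's README; treat this as a rough template rather than a finished example, and swap in whatever your contribution actually adds.

```python
# simple_example.py: a hypothetical example file for the examples directory.

# Python imports.
# (none needed here)

# Other imports.
from simple_rl.agents import QLearningAgent, RandomAgent
from simple_rl.tasks import GridWorldMDP
from simple_rl.run_experiments import run_agents_on_mdp

def main():
    # Make a small grid world task.
    mdp = GridWorldMDP(width=4, height=3, init_loc=(1, 1), goal_locs=[(4, 3)])

    # Make the agents.
    ql_agent = QLearningAgent(actions=mdp.get_actions())
    rand_agent = RandomAgent(actions=mdp.get_actions())

    # Run the experiment.
    run_agents_on_mdp([ql_agent, rand_agent], mdp, instances=3, episodes=100, steps=20)

if __name__ == "__main__":
    main()
```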

Library Standards

Please ensure:

  • Your code is compatible with both Python 2 and Python 3.
  • If you add any deep learning code, please use PyTorch; the library is moving toward PyTorch as its standard deep learning framework.
  • I encourage the use of pylint (https://www.pylint.org/).
  • Please include a brief log message for all commits (ex: use -m "message").
  • Files are all named in lower case with underscores between words, unless the file contains a class.
  • Class files are named in PascalCase (all words capitalized), with the last word being "Class" (ex: QLearningAgentClass.py); a sketch follows this list.
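
As a hypothetical illustration of the two naming rules: a new agent class named MyAgent would go in a file called MyAgentClass.py, which might begin like this. The Agent base class import mirrors how the existing agent files are laid out; that layout (and the made-up agent itself) is an assumption, so adapt it to your actual contribution.

```python
# MyAgentClass.py: hypothetical class file; PascalCase file name ending in "Class".

# Python imports.
from __future__ import print_function
import random

# Other imports.
from simple_rl.agents.AgentClass import Agent

class MyAgent(Agent):
    ''' A made-up agent that exists only to illustrate the conventions. '''

    def __init__(self, actions, name="my_agent"):
        Agent.__init__(self, name=name, actions=actions)

    def act(self, state, reward):
        # Works under both Python 2 and Python 3.
        return random.choice(self.actions)
```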

Coding Conventions

Please:

  • Indent with spaces.
  • Use spaces after list items and around algebraic operators (ex: ["a", "b", 5 + 6]).
  • Doc-strings follow the (now deprecated, sadly) Google doc-string format. Please use this until this contribution guide says otherwise.
  • Separate standard Python imports from non-Python imports at the top of each file, with the Python imports appearing first (a sketch follows this list).
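
A short, made-up helper written to these conventions might look like the following; it shows space indentation, spacing in lists and around operators, a doc-string in the Google format, and the import ordering described above.

```python
# Python imports.
from __future__ import print_function

# Other imports.
import numpy as np

def average_values(value_dict):
    '''
    Args:
        value_dict (dict): Maps each state to a list of numeric values.

    Returns:
        (float): The mean over all values, or 0.0 if there are none.

    Summary:
        A made-up helper used only to illustrate indentation, spacing,
        doc-string style, and import ordering.
    '''
    all_values = []
    for values in value_dict.values():
        all_values += values

    if len(all_values) == 0:
        return 0.0

    return float(np.mean(all_values))

if __name__ == "__main__":
    # Spaces after list items and around algebraic operators.
    example_values = {"s1": [1.0, 2.0], "s2": [3.0, 4 + 1]}
    print(average_values(example_values))
```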

Things to Work On

If you'd like to help add on to the library, here are the key ways I'm hoping to extend its current features:

  • Planning: Finish MCTS [Coulom 2006], implement RTDP [Barto et al. 1995].
  • Deep RL: Write a DQN [Mnih et al. 2015] in PyTorch, and possibly other deep RL agents (some kind of policy gradient method).
  • Efficiency: Convert most defaultdict/dict uses to numpy.
  • Reproducibility: The new reproduce feature is limited in scope -- I'd love for someone to extend it to work with OO-MDPs, Planning, MarkovGames, POMDPs, and beyond.
  • Docs: Write a nice tutorial and give thorough documentation.
  • Visuals: Unify MDP visualization.
  • Misc: Additional testing.

Best, Dave Abel ([email protected])