As this bundle of data analysis code grew into a medium-sized collection of Python scripts, it became a perfect candidate for a little Clean Code exercise. The following steps were applied to turn it into a clean, easy-to-use Python application.
>> Go straight to the clean Python application & user instructions >>
Some steps to create a nice digital working environment for a clean start:
- initialise a git repository for version control and link it to a remote URL
- create and/or activate your virtual environment for this project
- move working files like `*.ipynb` out of this repository, or at least create a `.gitignore` file to hide certain files like `notes.txt` from git version control
- draw a diagram to plan the code structure
- tea break ☕
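The setup steps above can be sketched as a few shell commands (the remote URL is a placeholder, and the `.gitignore` entries match the working files mentioned above):

```shell
# Initialise git for version control and link a remote (placeholder URL)
git init
git remote add origin https://example.com/your/repo.git

# Create and activate a virtual environment for this project
python3 -m venv .venv
source .venv/bin/activate

# Hide working files like notebooks and notes from version control
cat > .gitignore <<'EOF'
*.ipynb
notes.txt
.venv/
EOF
```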
Now it's about time to dig into this code collection and start cleaning!
- delete unused imports
- restructure and refactor code as planned
- create python classes and modules to include all previously scattered processing functionality
- get rid of redundant / duplicate code
- make all function names snake_case (as PEP 8 recommends)
- rename variables to reveal their intention
- delete unnecessary functionality, like the processing of `global max`, `global min`, `global mean` and `global average` values in the example code
- delete unnecessary columns from the dataframe, like the `algo`, `dm tools`, `target` and `size` columns in the example code
- more tea ☕
- write unit tests
- solve bugs 🐛
- add a `main.py` script to bundle all processing functionality
- add a command line interface for easy handling
- solve more bugs 🐛
- add documentation to functions and classes
- write a `requirements.txt` file to collect the Python libraries used
- write basic user instructions in the `README.md` file of your git repository
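The refactoring bullets above — gathering scattered processing functions into a class, using snake_case names, adding docstrings, and covering the result with unit tests — could look roughly like this. All names (`DataProcessor`, `drop_columns`, `column_mean`) are illustrative, not taken from the original scripts:

```python
from statistics import mean
import unittest


class DataProcessor:
    """Bundles the previously scattered processing helpers in one place.

    Class and method names here are hypothetical examples.
    """

    def __init__(self, rows):
        # rows: list of dicts, e.g. [{"value": 1.5, "algo": "x"}, ...]
        self.rows = rows

    def drop_columns(self, names):
        """Remove columns (dict keys) the analysis no longer needs."""
        self.rows = [
            {key: val for key, val in row.items() if key not in names}
            for row in self.rows
        ]
        return self.rows

    def column_mean(self, name):
        """Mean of a single column; replaces a removed 'global mean' helper."""
        return mean(row[name] for row in self.rows)


class DataProcessorTest(unittest.TestCase):
    """A minimal unit test, as suggested in the checklist above."""

    def test_column_mean_after_dropping_columns(self):
        proc = DataProcessor([{"value": 1.0, "algo": "x"},
                              {"value": 3.0, "algo": "y"}])
        proc.drop_columns(["algo"])
        self.assertEqual(proc.column_mean("value"), 2.0)


if __name__ == "__main__":
    unittest.main()
```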
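For the `main.py` and command line interface steps, a small `argparse` sketch shows the idea; the argument names (`input`, `--column`) are assumptions for illustration, not the original script's flags:

```python
import argparse


def build_parser():
    """Build the command line interface for main.py (flag names are examples)."""
    parser = argparse.ArgumentParser(
        description="Run the data processing pipeline.")
    parser.add_argument("input", help="path to the input data file")
    parser.add_argument("--column", default="value",
                        help="column to compute statistics for")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"Processing {args.input}, column {args.column}")
```

With this in place, a user can run e.g. `python main.py data.csv --column size` without reading the source first.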