ATOM

Automated Tool for Optimized Modeling

A Python package for fast exploration of machine learning pipelines



📜 Overview

Author: Mavs · Email: [email protected] · Documentation · Slack


General Information
Repository: Project Status: Active · Conda Recipe · License: MIT · Downloads
Release: pdm-managed · PyPI version · Conda Version · DOI
Compatibility: Python 3.10 | 3.11 · Conda Platforms
Build status: Build and release · Azure Pipelines · codecov
Code analysis: Linting and tests · PEP8 · Imports: isort · ruff · mypy



💡 Introduction

During the exploration phase of a machine learning project, a data scientist tries to find the optimal pipeline for their specific use case. This usually involves applying standard data cleaning steps, creating or selecting useful features, trying out different models, etc. Testing multiple pipelines requires many lines of code, and writing it all in the same notebook often makes it long and cluttered. On the other hand, using multiple notebooks makes it harder to compare the results and to keep an overview. On top of that, refactoring the code for every test can be quite time-consuming. How many times have you repeated the same pre-processing steps on a raw dataset? How many times have you copy-and-pasted code from an old repository to reuse it in a new use case?

ATOM is here to help solve these common issues. The package acts as a wrapper around the whole machine learning pipeline, helping the data scientist rapidly find a good model for their problem. Avoid endless imports and documentation lookups. Avoid rewriting the same code over and over again. With just a few lines of code, it's possible to perform basic data cleaning steps, select relevant features and compare the performance of multiple models on a given dataset, providing quick insights into which pipeline performs best for the task at hand.

Example steps taken by ATOM's pipeline (sketched in code after the list):

  1. Data Cleaning
    • Handle missing values
    • Encode categorical features
    • Detect and remove outliers
    • Balance the training set
  2. Feature engineering
    • Create new non-linear features
    • Select the most promising features
  3. Train and validate multiple models
    • Apply hyperparameter tuning
    • Fit the models on the training set
    • Evaluate the results on the test set
  4. Analyze the results
    • Get the scores on various metrics
    • Make plots to compare the model performances
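
To make these steps concrete, here is a minimal sketch of how the four stages translate into calls on an atom instance. The impute, encode and run calls are the same ones shown in the Usage section below; prune, balance, feature_generation, feature_selection and the chosen strategies are assumptions based on ATOM's documentation, so check the API reference for the exact names and parameters of your release.

import pandas as pd
from atom import ATOMClassifier

# Load the Australian Weather dataset (same data as the Usage section below)
X = pd.read_csv("https://raw.githubusercontent.com/tvdboom/ATOM/master/examples/datasets/weatherAUS.csv")
atom = ATOMClassifier(X, y="RainTomorrow", n_rows=1000, verbose=1)

# 1. Data cleaning
atom.impute(strat_num="median", strat_cat="most_frequent")  # handle missing values
atom.encode(strategy="target", max_onehot=8)                # encode categorical features
atom.prune(strategy="zscore")                               # detect and remove outliers (assumed)
atom.balance(strategy="smote")                              # balance the training set (assumed)

# 2. Feature engineering
atom.feature_generation(strategy="dfs")                     # create new non-linear features (assumed)
atom.feature_selection(strategy="pca", n_features=10)       # select the most promising features (assumed)

# 3. Train and validate multiple models (with hyperparameter tuning)
atom.run(models=["LDA", "AdaB"], metric="auc", n_trials=10)

# 4. Analyze the results
atom.results                                                # scores on the test set
atom.plot_roc()                                             # compare model performance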



[Diagram: ATOM's pipeline]



❗ Why you should use ATOM



🛠️ Installation

Install ATOM's newest release easily via pip:

$ pip install -U atom-ml

or via conda:

$ conda install -c conda-forge atom-ml



⚡ Usage

Colab Binder

ATOM contains a variety of classes and functions to perform data cleaning, feature engineering, model training, plotting and much more. The easiest way to use everything ATOM has to offer is through one of the main classes: ATOMClassifier for classification tasks and ATOMRegressor for regression tasks.

Let's walk you through an example. Click on the Colab or Binder badge at the top of this section to run this example yourself.

Make the necessary imports and load the data.

import pandas as pd
from atom import ATOMClassifier

# Load the Australian Weather dataset
X = pd.read_csv("https://raw.githubusercontent.com/tvdboom/ATOM/master/examples/datasets/weatherAUS.csv")
X.head()

Initialize the ATOMClassifier or ATOMRegressor class. These two classes are convenient wrappers for the whole machine learning pipeline. Contrary to sklearn's API, they are initialized by providing the data you want to manipulate.

atom = ATOMClassifier(X, y="RainTomorrow", n_rows=1000, verbose=2)

Data transformations are applied through atom's methods. For example, calling the impute method initializes an Imputer instance, fits it on the training set and transforms the whole dataset. The transformations are applied immediately after calling the method (no separate fit and transform calls needed).

atom.impute(strat_num="median", strat_cat="most_frequent")
atom.encode(strategy="target", max_onehot=8)

Similarly, models are trained and evaluated using the run method. Here, we fit both a LinearDiscriminantAnalysis and an AdaBoost model, and apply hyperparameter tuning.

atom.run(models=["LDA", "AdaB"], metric="auc", n_trials=10)

And lastly, analyze the results.

atom.results

atom.plot_roc()
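
For scores on several metrics at once, ATOM also exposes an evaluate method and a winner shortcut to the best-performing model. The snippet below is a sketch assuming those names from ATOM's documentation; they may differ per release.

atom.evaluate()  # scores of every trained model on a set of common metrics (assumed)
atom.winner      # shortcut to the best-performing model (assumed)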



Documentation

Relevant links
⭐ About: Learn more about the package.
🚀 Getting started: New to ATOM? Here's how to get you started!
👨‍💻 User guide: How to use ATOM and its features.
🎛️ API Reference: The detailed reference for ATOM's API.
📋 Examples: Example notebooks show you what can be done and how.
📒 Changelog: What are the new features in the latest release?
❔ FAQ: Get answers to frequently asked questions.
🔧 Contributing: Do you want to contribute to the project? Read this before creating a PR.
🌳 Dependencies: Which other packages does ATOM depend on?
📃 License: Copyright and permissions under the MIT license.