Update
Yuduo Wu committed Feb 6, 2019
1 parent 61f72dc commit 7c4d2d8
Showing 4 changed files with 55 additions and 2 deletions.
57 changes: 55 additions & 2 deletions README.md

- Build modular, reusable, strongly typed machine learning workflows.

### NNI - Neural Network Intelligence ([Microsoft](https://www.microsoft.com/en-us/))

> NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning (AutoML) experiments. The tool dispatches and runs trial jobs generated by tuning algorithms to search the best neural architecture and/or hyper-parameters in different environments like local machine, remote servers and cloud.
| [__homepage__](https://nni.readthedocs.io/en/latest/index.html) | [__github__](https://github.com/Microsoft/nni) |

#### Architecture:

<p align="center"><img src="images/microsoft-nni-arch.png" width="90%"/></p>

#### Components:

- **Experiment**: An experiment is a single search task, for example finding the best hyperparameters of a model or the best neural network architecture. It consists of trials and AutoML algorithms.

- **Search Space**: The feasible region for tuning the model, for example the value range of each hyperparameter.

- **Configuration**: A configuration is an instance drawn from the search space, i.e., an assignment of a specific value to each hyperparameter.

- **Trial**: A trial is an individual attempt at applying a new configuration (e.g., a set of hyperparameter values or a specific neural architecture). Trial code should be able to run with the provided configuration.

- **Tuner**: A tuner is an AutoML algorithm that generates the configuration for the next trial, which then runs with it.

- **Assessor**: An assessor analyzes a trial's intermediate results (e.g., accuracy periodically evaluated on a test dataset) to decide whether the trial should be stopped early.

- **Training Platform**: Where trials are executed. Depending on the experiment's configuration, this could be a local machine, remote servers, or a large-scale training platform (e.g., OpenPAI or Kubernetes).
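Taken together, these pieces form a simple loop: the tuner draws a configuration from the search space, a trial evaluates it, and the experiment keeps the best result. A toy random-search sketch of that loop in plain Python (illustrative only, not NNI's actual API; all names and the scoring formula are made up):

```python
import random

# Search Space: the feasible values for each hyperparameter.
search_space = {"lr": [0.001, 0.01, 0.1], "batch_size": [16, 32, 64]}

def tuner(space):
    """Toy Tuner: samples a new Configuration from the Search Space."""
    return {name: random.choice(values) for name, values in space.items()}

def trial(config):
    """Toy Trial: evaluates one Configuration and returns a score.

    A real trial would train a model with this configuration; a
    made-up formula stands in for the evaluation metric here."""
    return 1.0 / (1.0 + abs(config["lr"] - 0.01)) + config["batch_size"] / 100.0

# Experiment: run several trials and keep the best Configuration.
random.seed(0)
best_config, best_score = None, float("-inf")
for _ in range(10):
    config = tuner(search_space)
    score = trial(config)
    if score > best_score:
        best_config, best_score = config, score
print(best_config, round(best_score, 3))
```

In NNI itself, the trial script obtains the sampled configuration from the framework and reports metrics back to it, so the tuner and assessor run outside the trial process.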

### Auto-Keras

> Auto-Keras is an open source software library for automated machine learning (AutoML). It is developed by DATA Lab at Texas A&M University and community contributors. The ultimate goal of AutoML is to provide easily accessible deep learning tools to domain experts with limited data science or machine learning background. Auto-Keras provides functions to automatically search for architecture and hyperparameters of deep learning models.
| [__homepage__](https://autokeras.com/) | [__github__](https://github.com/jhfjhfj1/autokeras) | [__paper__](https://arxiv.org/abs/1806.10282) |

#### Architecture:

<p align="center"><img src="images/autokeras-arch.png" width="60%"/></p>

### MLflow ([Databricks](https://databricks.com/))

> MLflow is an open source platform for managing the end-to-end machine learning lifecycle.
- Just-In-Time (JIT) compilation
- Ahead-Of-Time (AOT) compilation

### Swift for TensorFlow

> Swift for TensorFlow is a new way to develop machine learning models. It gives you the power of TensorFlow directly integrated into the [Swift programming language](https://swift.org/). With Swift, you can write the following imperative code, and Swift automatically turns it into **a single TensorFlow Graph** and runs it with the full performance of TensorFlow Sessions on CPU, GPU and TPU.

<p align="center"><img src="images/swift-compiler.png" width="90%"/></p>

Related project - DLVM (Modern Compiler Infrastructure for Deep Learning Systems):

| [__homepage__](http://dlvm.org/) | [__github__](https://github.com/dlvm-team) | [__paper1__](https://arxiv.org/abs/1711.03016) | [__paper2__](https://openreview.net/forum?id=SJo1PLzCW) | [__paper3__](http://learningsys.org/nips17/assets/papers/paper_23.pdf) |

### JAX - Autograd and XLA ([Google](https://www.google.com/about/))

> JAX is [Autograd](https://github.com/hips/autograd) and
> [XLA](https://www.tensorflow.org/xla), brought together for high-performance machine learning research.
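JAX exposes Autograd-style differentiation through `jax.grad`, which composes with itself for higher-order derivatives, and `jax.jit` for XLA compilation. A small sketch (assuming JAX is installed; the function `f` is just an example):

```python
from jax import grad, jit

def f(x):
    return x ** 3 + 2.0 * x  # f'(x) = 3x^2 + 2, f''(x) = 6x

df = grad(f)          # Autograd-style differentiation
ddf = grad(grad(f))   # grad composes, so derivatives go to any order
fast_df = jit(df)     # the same derivative, compiled with XLA

print(df(2.0), ddf(2.0), fast_df(2.0))
```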

<p align="center"><img src="images/amazon-sagemaker-neo-arch.png" width="90%"/></p>

### Tensor Comprehensions ([Facebook](https://www.facebook.com/))

> Tensor Comprehensions (TC) is a fully-functional C++ library to *automatically* synthesize high-performance machine learning kernels using [Halide](https://github.com/halide/Halide), [ISL](http://isl.gforge.inria.fr/) and NVRTC or LLVM. TC additionally provides basic integration with Caffe2 and PyTorch. We provide more details in our paper on [arXiv](https://arxiv.org/abs/1802.04730).
| [__homepage__](https://facebookresearch.github.io/TensorComprehensions/) | [__github__](https://github.com/facebookresearch/TensorComprehensions) | [__paper__](https://arxiv.org/abs/1802.04730) | [__blog__](https://research.fb.com/announcing-tensor-comprehensions/) |

#### Architecture:

<p align="center"><img src="images/facebook-tensor-comprehensions-arch.png" width="90%"/></p>
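TC kernels are written in an Einstein-notation style; for instance, its matrix-multiply comprehension `C(m,n) +=! A(m,k) * B(k,n)` reduces over the index `k` that appears only on the right-hand side. A rough NumPy analogue of that reduction (an analogy only, not TC code):

```python
import numpy as np

# TC's C(m, n) +=! A(m, k) * B(k, n) sums A[m, k] * B[k, n] over k.
# np.einsum expresses the same contraction:
A = np.random.rand(4, 5)
B = np.random.rand(5, 3)
C = np.einsum("mk,kn->mn", A, B)

assert np.allclose(C, A @ B)  # identical to an ordinary matmul
```

TC's point is that it synthesizes and autotunes a high-performance GPU kernel from such a specification, rather than dispatching to a fixed library routine.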

### Glow - A community-driven approach to AI infrastructure ([Facebook](https://www.facebook.com/))

> Glow is a machine learning compiler that accelerates the performance of deep learning frameworks on different hardware platforms. It enables the ecosystem of hardware developers and researchers to focus on building next gen hardware accelerators that can be supported by deep learning frameworks like PyTorch.
| [ __homepage__](https://facebook.ai/developers/tools/glow) | [__github__](https://github.com/pytorch/glow) |
[__paper__](https://arxiv.org/abs/1805.00907) | [__blog__](https://code.fb.com/ml-applications/glow-a-community-driven-approach-to-ai-infrastructure/) |

#### Architecture:


#### [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629)

#### [Federated Learning: Strategies for Improving Communication Efficiency](https://arxiv.org/abs/1610.05492)

#### [Practical Secure Aggregation for Privacy-Preserving Machine Learning](https://eprint.iacr.org/2017/281.pdf)

#### [Federated Multi-Task Learning](https://arxiv.org/abs/1705.10467)
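The first paper above introduces Federated Averaging (FedAvg): clients train locally on their own data, and the server aggregates their parameters as an average weighted by each client's number of training examples. A minimal sketch of that server-side aggregation step (illustrative only, not taken from any of the papers' codebases):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg server step: data-size-weighted average of client parameters.

    client_weights: list of 1-D parameter vectors, one per client
    client_sizes:   number of local training examples per client
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # (num_clients, num_params)
    coeffs = np.array(client_sizes) / total  # n_k / n
    return coeffs @ stacked                  # sum_k (n_k / n) * w_k

# Two clients: the one with more data pulls the average toward its weights.
w_global = federated_average(
    [np.array([0.0, 0.0]), np.array([1.0, 1.0])],
    client_sizes=[1, 3],
)
# w_global == [0.75, 0.75]
```

The communication-efficiency and secure-aggregation papers that follow refine this step: compressing the updates clients send, and computing the sum without the server seeing any individual client's update.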
Binary file added images/autokeras-arch.png
Binary file added images/facebook-tensor-comprehensions-arch.png
Binary file added images/microsoft-nni-arch.png
