# Getting Started with Seldon Core

There are three steps to using Seldon Core:

  1. Install Seldon Core onto a Kubernetes cluster.
  2. Wrap your components (usually runtime model servers) as Docker containers that respect the internal Seldon microservice API.
  3. Define your runtime service graph as a SeldonDeployment resource, then deploy it and serve predictions.

*(Figure: overview of the three steps)*

## Install Seldon Core

To install seldon-core, follow the installation guide.
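
As a rough illustration, a Helm-based install usually looks like the sketch below. The chart name, repository URL, and namespace are assumptions based on typical Seldon Core releases, so treat the installation guide as the authoritative source for your version.

```bash
# Create a namespace for the Seldon Core operator
# (the name "seldon-system" is a common convention, not a requirement).
kubectl create namespace seldon-system

# Install the seldon-core-operator chart. Chart name and repository URL
# are assumptions for illustration; check the installation guide for the
# exact command matching your Seldon Core version.
helm install seldon-core seldon-core-operator \
  --repo https://storage.googleapis.com/seldon-charts \
  --namespace seldon-system
```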

## Wrap Your Model

The components you want to run in production need to be wrapped as Docker containers that respect the Seldon microservice API. You can create models that serve predictions, routers that decide where requests go (such as A/B tests), combiners that combine responses, and transformers that provide generic transformations of requests and/or responses.
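
To give a feel for what a model component looks like before wrapping, here is a minimal Python sketch in the style of Seldon's Python wrapper, where a model is a plain class exposing a predict method. The class and file names (MyModel, MyModel.py) are placeholders, and the exact method signature may vary between wrapper versions.

```python
# MyModel.py (hypothetical name): the Python wrapper loads the class named
# in the MODEL_NAME environment variable and serves its predict method.
class MyModel:
    def __init__(self):
        # Load model artifacts (weights, preprocessing state) once at startup.
        self.ready = True

    def predict(self, X, features_names=None):
        # X is the request payload (e.g. a numpy array of features).
        # Return predictions in the same array conventions; here we simply
        # echo the input as a stand-in for real inference.
        return X
```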

To let users easily wrap machine learning components built with different languages and toolkits, we provide wrappers that build a Docker container from your code that can be run inside seldon-core. Our current recommended tool is Red Hat's Source-to-Image. Wrapping your models is discussed here.
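
Concretely, a Source-to-Image build typically boils down to one command plus a small environment file telling the builder which class to serve. The builder image name and tag below are illustrative and version dependent; the variable names follow the Python wrapper documentation, with example values.

```bash
# .s2i/environment (example values):
#   MODEL_NAME=MyModel
#   API_TYPE=REST
#   SERVICE_TYPE=MODEL
#   PERSISTENCE=0

# Build a Docker image from the current directory using a Seldon s2i builder
# image (image name and tag are assumptions; pick the one matching your
# language and Seldon Core version), then push it somewhere your cluster
# can pull from.
s2i build . seldonio/seldon-core-s2i-python3:<version> my-repo/my-model:0.1
```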

## Define Runtime Service Graph

To run your machine learning graph on Kubernetes, you need to define how the components you created in the last step fit together to form a service graph. This is defined inside a SeldonDeployment Kubernetes custom resource. A guide to constructing this custom resource service graph is provided.

*(Figure: example runtime service graph)*
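
The sketch below shows the general shape of a single-model SeldonDeployment. The apiVersion, names, and image are placeholders based on older Seldon Core examples; consult the service graph guide for the schema matching your installed version.

```yaml
# A minimal, illustrative SeldonDeployment with one predictor serving
# one model container. All names and the image are placeholders.
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: my-model
spec:
  name: my-model
  predictors:
  - name: default
    replicas: 1
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: my-repo/my-model:0.1   # image built in the wrapping step
    graph:
      name: classifier   # must match the container name above
      type: MODEL
      endpoint:
        type: REST
```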

## Deploy and Serve Predictions

You can use kubectl to deploy your ML service like any other Kubernetes resource. This is discussed here.
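
For instance, assuming the resource above is saved as my-model.yaml (a hypothetical filename), deployment follows the standard kubectl workflow:

```bash
# Apply the SeldonDeployment like any other Kubernetes manifest.
kubectl apply -f my-model.yaml

# Check the custom resource and its pods as they come up.
kubectl get seldondeployments
kubectl get pods
```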

## Worked Examples