Dog Breed Classification 🐶

A demo that uses a ResNet model to predict a dog's breed from a photo, serving the containerized model in a microservice architecture with either TensorFlow Serving or Seldon Core.
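For reference, the prediction pipeline behind the UI roughly follows the standard Keras ResNet flow. The snippet below is a minimal sketch assuming a Keras ResNet50-style model and its standard preprocessing; it is not the project's exact training or export code.

```python
# Sketch of a ResNet-style breed prediction, assuming a Keras ResNet50 setup;
# the project's actual model, weights, and preprocessing may differ.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")  # stand-in for the trained breed model

# Load the photo at ResNet's expected 224x224 input size and preprocess it.
img = image.load_img("dog.jpg", target_size=(224, 224))
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# decode_predictions maps class indices to human-readable labels.
print(decode_predictions(model.predict(batch), top=3)[0])
```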

Getting Started

There are two ways you can run this project:

  • Docker
  • Kubernetes

Start by cloning the repository:

git clone https://github.com/data-max-hq/dog-breed-classification-ml.git

Run the project with Docker:

Prerequisites:

  • Docker
  • Docker Compose
  1. Open the project directory in a terminal and install the requirements:
    make requirements
  2. Train the model (only needed once):
    make local-train
  3. Deploy the model using either:
  • TensorFlow Serving
    make compose-tfserve
  • Seldon Core
    make compose-seldon
  4. Open the Streamlit UI at http://localhost:8502. Enjoy predicting 🪄 (you can also query the serving endpoint directly; see the sketch after this list)
  5. Stop the Docker containers:

    docker compose down
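Once the TensorFlow Serving container is up, you can also send prediction requests to its REST API directly instead of going through the UI. The sketch below is a minimal example assuming the REST port is published at 8501 (TensorFlow Serving's default) and the model is exported under the name `model`; check docker-compose.yaml and the export step for the actual port and model name.

```python
# query_tfserve.py - minimal sketch of a direct request to TensorFlow Serving.
# The port (8501) and model name ("model") are assumptions; adjust them to
# match docker-compose.yaml and the model export step.
import json

import numpy as np
import requests
from PIL import Image

# Load a photo and resize it to ResNet's expected 224x224 input.
image = Image.open("dog.jpg").convert("RGB").resize((224, 224))
batch = np.expand_dims(np.asarray(image, dtype=np.float32), axis=0)

# TensorFlow Serving's REST predict endpoint: /v1/models/<name>:predict
response = requests.post(
    "http://localhost:8501/v1/models/model:predict",
    data=json.dumps({"instances": batch.tolist()}),
)
response.raise_for_status()

predictions = np.asarray(response.json()["predictions"][0])
print("predicted class index:", int(predictions.argmax()))
```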

Run the project with Kubernetes:

Prerequisites:

  • Docker
  • Helm
  • Helmfile
  • Minikube
  1. Deploy the model using either:
  • TensorFlow Serving
    1. Create a Kubernetes cluster (minikube):
      make start-tfserve
    2. Build the images:
      make build-tfserve
    3. Load the images into minikube:
      make load-tfserve
    4. Install Kubeflow:
      make install-kubeflow
      # Wait until all Kubeflow pods are running
    5. Expose the Kubeflow port so you can access the Kubeflow dashboard at http://localhost:8080 (optional):
      make port-kubeflow
    6. Deploy TensorFlow Serving, Ambassador, and Streamlit:
      make helm-tfserve
    7. Apply the mapping resources:
      make deploy-tfserve
    8. Expose the Emissary-ingress port:
      make port-emissary
  • Seldon Core
    1. Create a Kubernetes cluster (minikube):
      make start-seldon
    2. Build the images:
      make build-seldon
    3. Load the images into minikube:
      make load-seldon
    4. Install Kubeflow:
      make install-kubeflow
      # Wait until all Kubeflow pods are running
    5. Expose the Kubeflow port so you can access the Kubeflow dashboard at http://localhost:8080 (optional):
      make port-kubeflow
    6. Deploy Seldon Core, Ambassador, and Streamlit:
      make helm-seldon
    7. Deploy the Seldon application and apply the mapping resources:
      make deploy-seldon
    8. Expose the Emissary-ingress port:
      make port
  2. Open the Streamlit UI at http://localhost:8080/streamlit. Enjoy predicting 🪄 (you can also call the Seldon endpoint directly; see the sketch after this list)
  3. Delete the cluster:
    make delete
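With the Seldon deployment running behind Emissary-ingress, you can also send a prediction request straight to the Seldon REST endpoint. The snippet below is a minimal sketch assuming the standard Seldon Core path scheme, /seldon/&lt;namespace&gt;/&lt;deployment&gt;/api/v1.0/predictions, on the exposed ingress port; the namespace ("default") and deployment name ("dog-breed") are placeholders, so check the SeldonDeployment manifest for the real values.

```python
# query_seldon.py - minimal sketch of a direct request to the Seldon endpoint.
# The namespace ("default") and deployment name ("dog-breed") are placeholders;
# check the SeldonDeployment manifest for the actual values.
import numpy as np
import requests
from PIL import Image

# Preprocess the photo the same way as in the TensorFlow Serving sketch above.
image = Image.open("dog.jpg").convert("RGB").resize((224, 224))
batch = np.expand_dims(np.asarray(image, dtype=np.float32), axis=0)

# Seldon Core's REST protocol: POST /seldon/<namespace>/<deployment>/api/v1.0/predictions
response = requests.post(
    "http://localhost:8080/seldon/default/dog-breed/api/v1.0/predictions",
    json={"data": {"ndarray": batch.tolist()}},
)
response.raise_for_status()

scores = np.asarray(response.json()["data"]["ndarray"][0])
print("predicted class index:", int(scores.argmax()))
```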
