This project illustrates how to deploy an Eclipse Vert.x application on Kubernetes and OpenShift v3.
You first need a Kubernetes cluster. To run this example locally, you can use minikube.
Once installed, just run:
minikube start
To check that your cluster is started, run:
$ kubectl get all
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   5s
To ease the deployment, we are going to link your local Docker client to the Docker daemon running in minikube. This is not possible with an external cluster; in that case, you would need to push your images to a Docker image registry. To enable this link, use:
eval $(minikube docker-env)
NOTE: The syntax may differ depending on your terminal. Run minikube docker-env to get the right syntax.
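For reference, on a Unix shell the output typically looks like the following (the IP address and paths will differ on your machine):

$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/you/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)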
Now, if you run docker ps, you should see a set of running containers - they are Kubernetes containers.
Let’s start with a simple application exposing an HTTP endpoint. The code is in the vertx-kubernetes-applications/vertx-kubernetes-greeting-application directory. The application is quite simple:
package io.vertx.example.kubernetes.greeting;

import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

public class GreetingApplication {

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Router router = Router.router(vertx);

    router.get("/").handler(rc -> rc.response().end("Hello"));
    router.get("/:name").handler(rc -> rc.response().end("Hello " + rc.pathParam("name")));

    vertx.createHttpServer()
      .requestHandler(router)
      .listen(8080);
  }
}
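Before containerizing it, you can run the main class directly (for instance from your IDE) and check the two routes; a quick local smoke test, assuming the server listens on localhost:8080:

$ curl http://localhost:8080/
Hello
$ curl http://localhost:8080/vert.x
Hello vert.x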
As we want to deploy this application as a container, there is no need to build a fat jar. So, just prepare your application using:
mvn dependency:copy-dependencies compile
It just copies the dependencies and compiles the sources.
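After it completes, the compiled classes end up in target/classes and the dependencies (vertx-core, vertx-web and their transitive dependencies) in target/dependency - exactly the two directories the Dockerfile below copies:

$ ls target
classes  dependency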
At the root of the application directory, there is a Dockerfile:
FROM fabric8/java-alpine-openjdk8-jre
EXPOSE 8080
# Copy dependencies
COPY target/dependency/* /deployment/libs/
# Copy classes
COPY target/classes /deployment/classes
ENV JAVA_APP_DIR=/deployment
ENV JAVA_LIB_DIR=/deployment/libs
ENV JAVA_CLASSPATH=${JAVA_APP_DIR}/classes:${JAVA_LIB_DIR}/*
ENV JAVA_MAIN_CLASS="io.vertx.example.kubernetes.greeting.GreetingApplication"
This Dockerfile inherits from fabric8/java-alpine-openjdk8-jre, an image providing Java 8 configured for containers. More details about this image here. The application is run as a plain java -cp … io.vertx.example.kubernetes.greeting.GreetingApplication.
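In other words, the container ends up launching something roughly equivalent to the command below (the base image's launcher script also adds JVM tuning flags; the quoting keeps the classpath wildcard for the JVM to expand):

java -cp '/deployment/classes:/deployment/libs/*' io.vertx.example.kubernetes.greeting.GreetingApplication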
Let’s build this Docker image:
docker build -t vertx-kubernetes/vertx-kubernetes-greeting .
Remember, we have connected our local Docker client to the Docker daemon running in minikube. So, the image is built by that daemon and is directly available to the cluster:
$ docker images | grep vertx-kubernetes-greeting
vertx-kubernetes/vertx-kubernetes-greeting latest 1882bbfd05bb 43 seconds ago 83.4MB
Now that the image is ready, it’s time to instantiate it:
kubectl run vertx-greeting --image=vertx-kubernetes/vertx-kubernetes-greeting --image-pull-policy=IfNotPresent --port=8080
kubectl expose deployment vertx-greeting --type=NodePort
# Invoke the service:
curl $(minikube service vertx-greeting --url)
curl $(minikube service vertx-greeting --url)/vert.x
The first line creates a deployment instantiating our image as a pod (containing our container). vertx-greeting is the name. The --image parameter indicates the image to use. --image-pull-policy instructs the deployment not to retrieve the image from a Docker registry if it already knows the image. Finally, --port indicates on which port our application expects requests.
The second line (with expose) creates a service of type NodePort exposing our application endpoint. The last two lines just invoke the service to be sure it’s working.
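You can verify what was created with kubectl; the output below is illustrative (IPs, ports and ages will differ):

$ kubectl get deployment,svc vertx-greeting
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/vertx-greeting   1         1         1            1           1m

NAME                 TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
svc/vertx-greeting   NodePort   10.0.0.42    <none>        8080:31478/TCP   1m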
There are several ways to achieve an update. Let’s look at one approach. First edit the code and rebuild the image:
mvn compile
docker build -t vertx-kubernetes/vertx-kubernetes-greeting .
Then, get the name of the pod, and delete it:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
vertx-greeting-1277436214-1xsqd 1/1 Running 0 2m
$ kubectl delete po/vertx-greeting-1277436214-1xsqd
Because there is a deployment running, it will automatically recreate the pod with the updated image.
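You can check that a replacement pod (with a different suffix) has been scheduled; the name below is illustrative:

$ kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
vertx-greeting-1277436214-pl9x2   1/1       Running   0          15s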
Vert.x Service Discovery provides a bridge for Kubernetes, so a Vert.x application can retrieve services exposed in Kubernetes.
The code of the application is available in the vertx-kubernetes-applications/vertx-kubernetes-client-application directory.
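To give an idea of what such a client does, here is a minimal lookup sketch. It assumes the vertx-service-discovery-bridge-kubernetes dependency is on the classpath (the bridge then imports the Kubernetes services visible to the pod) and that a service named vertx-greeting exists; it is not the project's exact code:

import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;
import io.vertx.core.json.JsonObject;
import io.vertx.servicediscovery.ServiceDiscovery;
import io.vertx.servicediscovery.types.HttpEndpoint;

public class DiscoverySketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // The Kubernetes bridge populates this discovery instance with the cluster's services
    ServiceDiscovery discovery = ServiceDiscovery.create(vertx);
    // Look up an imported service by name ("vertx-greeting" is illustrative)
    HttpEndpoint.getClient(discovery, new JsonObject().put("name", "vertx-greeting"), ar -> {
      if (ar.succeeded()) {
        // The returned client is already configured with the service host and port
        HttpClient client = ar.result();
        client.getNow("/", response -> System.out.println("Got " + response.statusCode()));
      }
    });
  }
}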
Build it using:
mvn dependency:copy-dependencies compile
docker build -t vertx-kubernetes/vertx-kubernetes-client .
kubectl run vertx-client --image=vertx-kubernetes/vertx-kubernetes-client --image-pull-policy=IfNotPresent --port=8080
kubectl expose deployment vertx-client --type=NodePort
# Invoke the service
curl $(minikube service vertx-client --url)
IMPORTANT: This section is about OpenShift v3 - more details on the OpenShift web page.
OpenShift v3 is an enterprise-grade Kubernetes distribution extending the Kubernetes feature set with a service catalog and build support. While it’s possible to use OpenShift as plain Kubernetes (following the previous instructions), in this section we are going to look at OpenShift specifics.
You can use OpenShift Online Starter for free or install minishift - the minikube equivalent for OpenShift.
For minishift, once installed, just run:
minishift start
eval $(minishift oc-env)
The second line adds oc - the OpenShift client - to your $PATH.
Then, log in using:
oc login -u developer -p developer
You can also open the OpenShift dashboard using minishift console. Log in with the same credentials (developer/developer). We are going to use the myproject project; for now, nothing should be displayed in it.
The first approach to deploying an application to OpenShift is to use a Docker build. Unlike with Kubernetes, we are going to delegate the image build to OpenShift. The previous approach would still work, but here we focus on OpenShift features.
The code of the application is located in vertx-openshift-applications/vertx-greeting-application. The project contains a Dockerfile similar to the one used with Kubernetes:
FROM fabric8/java-alpine-openjdk8-jre
EXPOSE 8080
# Copy dependencies
COPY target/dependency/* /deployment/libs/
ENV JAVA_APP_DIR=/deployment
ENV JAVA_LIB_DIR=/deployment/libs
ENV JAVA_CLASSPATH=${JAVA_APP_DIR}/classes:${JAVA_LIB_DIR}/*
ENV JAVA_OPTIONS="-Dvertx.cacheDirBase=/tmp"
ENV JAVA_MAIN_CLASS="io.vertx.example.openshift.greeting.MyGreetingApp"
# Copy classes
COPY target/classes /deployment/classes
To deploy the application, run the following commands from the project directory:
oc new-build --binary --name=vertx-greeting-application -l app=vertx-greeting-application
mvn dependency:copy-dependencies compile
oc start-build vertx-greeting-application --from-dir=. --follow
oc new-app vertx-greeting-application -l app=vertx-greeting-application
oc expose service vertx-greeting-application
The first instruction creates a build in OpenShift named vertx-greeting-application. Then we build the application locally. Alternatively, we could build the application during the Docker build, but it would re-download the dependencies on every build. The third line triggers the build: it uploads the content of the current directory and runs the Docker build in OpenShift. Then, the oc new-app command creates a deployment configuration, a service and a pod from our built image. Finally, the last line exposes the created service externally so we can call it from our computer.
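At this point you can check that the build completed and the application is running; the pod names below are illustrative:

$ oc get pods
NAME                                 READY     STATUS      RESTARTS   AGE
vertx-greeting-application-1-build   0/1       Completed   0          2m
vertx-greeting-application-1-z6z7w   1/1       Running     0          1m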
To invoke the application, you can get the URL using:
$ oc get routes
vertx-greeting-application vertx-greeting-application-myproject.192.168.64.30.nip.io vertx-greeting-application ...
$ curl vertx-greeting-application-myproject.192.168.64.30.nip.io
Alternatively, in the OpenShift dashboard, click on the route URL.
To update the application, just update the code and run:
mvn dependency:copy-dependencies compile
oc start-build vertx-greeting-application --from-dir=. --follow
The update uses a rolling-update strategy.
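If you want to watch the rollout progress, you can use the following command (assuming the deployment configuration is named after the application):

oc rollout status dc/vertx-greeting-application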
The Fabric8 Maven Plugin interacts with OpenShift and Kubernetes to deploy your application. It uses a source-to-image (S2I) build strategy with binary content. Basically, it requires a fat jar as input and sends this fat jar to OpenShift. Then it instructs OpenShift to create a container with this fat jar.
The configuration is done in the pom.xml file of the project (vertx-openshift-applications/vertx-id-generator).
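For reference, such a setup typically looks something like the following sketch in the pom.xml - an openshift profile binding the plugin's resource and build goals; the project's exact configuration may differ:

<profile>
  <id>openshift</id>
  <build>
    <plugins>
      <plugin>
        <groupId>io.fabric8</groupId>
        <artifactId>fabric8-maven-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>resource</goal>
              <goal>build</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>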
mvn fabric8:deploy -Popenshift
This command:
- packages the application as a fat jar
- deploys the application in OpenShift
- creates the deployment config, service, pod and route
You can then use the application. Updating the application just requires re-executing the same command.
This example illustrates how Vert.x clustering can be used on OpenShift. It uses the Infinispan Cluster Manager and a ping (discovery) protocol relying on the Kubernetes API.
- the first node serves HTTP requests and sends a message on the event bus to the second node
- the second node receives messages and replies
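As a sketch of what a node does, here is the replying side: a clustered Vert.x instance using the Infinispan cluster manager, answering event bus messages. The address "greetings" and the handler body are illustrative, not the project's exact code:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.ext.cluster.infinispan.InfinispanClusterManager;

public class ClusteredNodeSketch {
  public static void main(String[] args) {
    // Use the Infinispan cluster manager explicitly; on OpenShift, the discovery
    // (ping) protocol locates the other members through the Kubernetes API
    VertxOptions options = new VertxOptions().setClusterManager(new InfinispanClusterManager());
    Vertx.clusteredVertx(options, ar -> {
      if (ar.succeeded()) {
        Vertx vertx = ar.result();
        // Receive messages on the event bus and reply to the sender
        vertx.eventBus().consumer("greetings", message -> message.reply("Hello " + message.body()));
      }
    });
  }
}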
To allow the cluster manager to use the Kubernetes API, we need to grant some permissions:
oc policy add-role-to-user view -z default -n myproject
Then, the two applications are packaged using the Fabric8 Maven Plugin. Build and deploy them using:
cd vertx-openshift-applications/vertx-clustered-application/clustered-application-service
mvn fabric8:deploy -Popenshift
cd ../clustered-application-http
mvn fabric8:deploy -Popenshift