
AET Docker


This repository contains the Dockerfiles for AET images and an example Docker Swarm manifest that enables setting up a simple AET instance. You may find released versions of the AET Docker images on Docker Hub.

Try AET

The following section describes how to run AET using Docker Swarm. An alternative is installing AET using Helm. See the AET Helm chart repository for more details.

Run local instance using Docker Swarm

Make sure you have a running Docker Swarm instance with at least 4 vCPUs and 8 GB of memory available. Read more in Prerequisites.

Follow these instructions to set up a local AET instance:

  1. Download the latest example-aet-swarm.zip and unzip the files to the folder from which the docker stack will be deployed (from now on we will call it AET_ROOT).

You may run the following script to automate this step:

curl -sS `curl -Ls -o /dev/null -w %{url_effective} https://github.com/malaskowski/aet-docker/releases/latest/download/example-aet-swarm.zip` > aet-swarm.zip \
&& unzip -q aet-swarm.zip && mv example-aet-swarm/* . \
&& rm -d example-aet-swarm && rm aet-swarm.zip

Contents of the AET_ROOT directory should look like:

├── aet-swarm.yml
├── bundles
│   └── aet-lighthouse-extension.jar
├── configs
│   ├── com.cognifide.aet.cleaner.CleanerScheduler-main.cfg
│   ├── com.cognifide.aet.proxy.RestProxyManager.cfg
│   ├── com.cognifide.aet.queues.DefaultJmsConnection.cfg
│   ├── com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg
│   ├── com.cognifide.aet.runner.MessagesManager.cfg
│   ├── com.cognifide.aet.runner.RunnerConfiguration.cfg
│   ├── com.cognifide.aet.vs.mongodb.MongoDBClient.cfg
│   ├── com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg
│   └── com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
├── features
│   └── healthcheck-features.xml
├── secrets
│   └── KARAF_EXAMPLE_SECRET
└── report
  • If you are using docker-machine (otherwise ignore this point), change the aet-swarm.yml volumes section for the karaf service to:
        volumes:
          - /osgi-configs/configs:/aet/configs # when using docker-machine, use mounted folder

You can find older versions in the release section.

  2. From the AET_ROOT run docker stack deploy -c aet-swarm.yml aet.
  3. Wait about 1-2 minutes until the instance is ready to work.

Note that you can always stop the instance by running docker stack rm aet without losing the data (volumes).


When it is ready, you should see the HEALTHY information in the Karaf health check.

You may also check the status of Karaf by executing:

docker ps --format "table {{.Image}}\t{{.Status}}" --filter expose=8181/tcp

When you see the healthy status, it means Karaf is running correctly:

IMAGE                     STATUS
malaskowski/aet_karaf:1.0.0   Up 3 minutes (healthy)

Run sample suite

Simply run:

docker run --rm malaskowski/aet_client

You should see output similar to:

Suite started with correlation id: example-example-example-1611937786395
[16:29:46.578]: COMPARED: [success:   0, total:   0] ::: COLLECTED: [success:   0, total:   1]
Suite processing finished
Report url:
http://localhost/report.html?company=example&project=example&correlationId=example-example-example-1611937786395

Open the URL to see your first AET report! Find out more about the report in the AET Docs.

Read more on how to run your custom suite in the Running AET Suite section.

User Documentation

Docker Images

AET ActiveMq

Hosts Apache ActiveMQ that is used as the communication bus by the AET components.

AET Browsermob

Hosts the BrowserMob proxy that is used by AET to collect status codes and inject headers into requests.

AET Karaf

Hosts the Apache Karaf OSGi application container. It contains all AET modules (bundles): Runner, Workers, Web-API, Datastorage, Executor, Cleaner, and runs them within the OSGi context with all their required dependencies (no internet access is required to provision). The AET application core is located in the /aet/core directory. All custom AET extensions are kept in the /aet/custom directory. Before the Karaf service starts, Docker secrets are exported to environment variables. Read more in the secrets section.

AET Report

Runs the Apache HTTP Server that hosts the AET Report. The AET Report application is placed under /usr/local/apache2/htdocs. Defines a very basic VirtualHost (see aet.conf).

AET Docker Client

The AET Bash client embedded into a Docker image with all its dependencies (jq, curl, xmllint).

AET instance with Docker Swarm

To see the details of what the sample AET Docker Swarm instance contains, read the example-aet-swarm readme.

Notice: this instruction guides you through setting up an AET instance using a single-node swarm cluster. This setup is not recommended for production use!

Prerequisites

  • Docker installed on your host (e.g. Docker for Windows or Docker for Mac).
  • Docker swarm initialized. See the swarm-tutorial: create swarm for detailed instructions.
    • TL;DR: run docker swarm init.
  • Make sure your swarm has at least 4 vCPUs and 8 GB of memory available. Read more in the Minimum requirements section.

Minimum requirements

To run the example AET instance, make sure the machine you run it on has at least:

  • 4 vCPU
  • 8 GB of memory

How to modify Docker resources:

Configuration

OSGi configs

Thanks to the mounted OSGi configs, you may configure the instance via the configuration files in AET_ROOT/configs.

com.cognifide.aet.cleaner.CleanerScheduler-main.cfg Configures the Cleaner schedule. Read more here.

com.cognifide.aet.proxy.RestProxyManager.cfg Configures the proxy server address. AET uses the proxy for some features, such as collecting status codes or modifying request headers. Read more here.

com.cognifide.aet.queues.DefaultJmsConnection.cfg Configures JMS Server connection.

com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg Configures the address for the Reports module. The reportDomain property should point to the external address of the AET Reports service.

com.cognifide.aet.runner.MessagesManager.cfg Configures JMX endpoint of the JMS Server for managing messages via API.

com.cognifide.aet.runner.RunnerConfiguration.cfg Configures AET Runner.

com.cognifide.aet.vs.mongodb.MongoDBClient.cfg Configures the database connection. Additionally, setting allowAutoCreate allows AET to create new databases (no need to create them manually first, including indexes).

com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg Configures the Selenium Grid Hub address. Additionally, it enables configuring capabilities via chromeOptions.

com.cognifide.aet.worker.listeners.WorkersListenersService.cfg Configures the number of AET Workers. Use these properties to scale your AET instance's throughput up and down. Read more below.

Throughput and scaling

AET instance speed depends directly on the number of browsers in the system and its configuration. Let's define TOTAL_NUMBER_OF_BROWSERS as the number of Selenium Grid node instances multiplied by the NODE_MAX_SESSION value set for each node. In the default configuration, there are 6 Selenium Node replicas with a single browser instance available on each node:

  chrome:
...
    environment:
...
      - NODE_MAX_SESSION=1
...
    deploy:
      replicas: 6
...

So, the TOTAL_NUMBER_OF_BROWSERS is 6 (6 replicas x 1 session). That number should be set in the following configs:

  • maxMessagesInCollectorQueue in com.cognifide.aet.runner.RunnerConfiguration.cfg
  • collectorInstancesNo in com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
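
For the default setup this means, illustratively (only the relevant properties are shown; keep the other entries in those files untouched):

  # configs/com.cognifide.aet.runner.RunnerConfiguration.cfg
  maxMessagesInCollectorQueue=6

  # configs/com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
  collectorInstancesNo=6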

Docker secrets

To read secrets from /run/secrets/ on Karaf startup, set the env KARAF_SECRETS_ON_STARTUP=true. This enables scanning the secrets directory for files matching the KARAF_* pattern and exporting them as environment variables. See the Karaf entrypoint for details.

E.g. if the file /run/secrets/KARAF_MY_SECRET is found, its content will be exported to the MY_SECRET environment variable.
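
A minimal sketch of that mechanism (illustrative only; the exact logic lives in the Karaf entrypoint script):

  # export every /run/secrets/KARAF_* file as an env variable without the KARAF_ prefix
  for secret in /run/secrets/KARAF_*; do
    [ -e "$secret" ] || continue              # skip if no secrets match the glob
    name="$(basename "$secret")"              # e.g. KARAF_MY_SECRET
    export "${name#KARAF_}=$(cat "$secret")"  # exports MY_SECRET
  done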

Updating instance

You may update the configuration files directly from your host (unless you use docker-machine, see the workaround below). Karaf should automatically notice changes in the config files.

To update the instance to a newer version

  1. Update aet-swarm.yml and/or the configuration files in the AET_ROOT.
  2. Simply run docker stack deploy -c aet-swarm.yml aet.

docker-machine config changes detection workaround

Please notice that when you are using docker-machine and Docker Tools, Karaf does not automatically detect changes in the config. You will need to restart the Karaf service after applying changes in the configuration files (e.g. by removing the aet_karaf service and running stack deploy again).

Running AET Suite

There are a couple of ways to start an AET Suite.

Docker Client

You may use an image that embeds the AET Bash client together with its dependencies by running:

docker run --rm malaskowski/aet_client

This will run a sample AET Suite. You should see the results in less than 30s.

To run your custom suite, let's say my-suite.xml, located in the current directory, you need to bind-mount it as a volume:

docker run --rm -v "$(pwd)/my-suite.xml:/aet/suite/my-suite.xml" malaskowski/aet_client http://host.docker.internal:8181 /aet/suite/my-suite.xml


The last two arguments are AET Bash client arguments:

  • http://host.docker.internal:8181 URL of the AET instance,
  • /aet/suite/my-suite.xml path to the suite XML file inside the container.

Notice that we use host.docker.internal:8181 here as the address of the AET instance - this works only for Docker for Mac/Windows with a local AET setup (it is also the default value for this argument). In other cases, use the AET server's IP/domain.

One more thing you may want to do is to preserve the redirect.html and xUnit.xml files after the AET Client container ends its execution. Simply bind-mount another volume, e.g.:

docker run --rm -v "$(pwd)/my-suite.xml:/aet/suite/my-suite.xml" -v "$(pwd)/report:/aet/report" malaskowski/aet_client http://host.docker.internal:8181 /aet/suite/my-suite.xml

The results will be saved to the report directory:

.
├── my-suite.xml
├── report
│   ├── redirect.html
│   └── xUnit.xml

Other Clients

To run an AET Suite, simply set endpointDomain to the AET Karaf IP with port 8181, e.g.:

./aet.sh http://localhost:8181 or mvn aet:run -DendpointDomain=http://localhost:8181

Read more about running AET suite here.

Best practices

  1. Control changes in aet-swarm.yml and config files over time! Use a version control system (e.g. Git) to track changes to the AET_ROOT contents.
  2. If you value your data (report results and the history of running suites), remember to back up the MongoDB volume. If you use an external MongoDB, back up its /data/db regularly too!
  3. Provide a machine that meets at least the minimum requirements for your Docker cluster.

Available consoles

Note that if you are using Docker Tools, the address will be your docker-machine IP instead of localhost.

Troubleshooting

Example visualizer

If you want to see what's deployed on your instance, you may use dockersamples/visualizer by running:

docker service create \
  --name=viz \
  --publish=8090:8080/tcp \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer
  • Visualizer console: http://localhost:8090

Note that if you are using Docker Tools, the address will be your docker-machine IP instead of localhost.

Debugging

To debug bundles on Karaf, set the environment variable KARAF_DEBUG=true and expose port 5005 on the karaf service.
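
For example, in aet-swarm.yml this could look like the following sketch (only the relevant keys are shown):

  karaf:
    ...
    environment:
      - KARAF_DEBUG=true
    ports:
      - "5005:5005"

Then attach your IDE's remote debugger to port 5005.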

Logs

You may preview AET logs with docker service logs aet_karaf -f.


Common issues

Error response 500 after sending suite to AET

Make sure you have installed all prerequisites for the script client.


FAQ

How to use external MongoDB

Set the mongoURI property in configs/com.cognifide.aet.vs.mongodb.MongoDBClient.cfg to point to your MongoDB instance URI.
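
For example (the address below is a placeholder for your own instance):

  # configs/com.cognifide.aet.vs.mongodb.MongoDBClient.cfg
  mongoURI=mongodb://my-mongo.example.com:27017
  allowAutoCreate=true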

How to use external Selenium Grid

After you set up the external Selenium Grid, update the seleniumGridUrl property in configs/com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg to the Grid address.
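
For example (a placeholder address; the exact URL depends on your Grid setup):

  # configs/com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg
  seleniumGridUrl=http://my-grid.example.com:4444/wd/hub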

How to set report domain

Set the report-domain property in com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg to point to the domain.
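
For example (a placeholder domain):

  # configs/com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg
  report-domain=http://aet-reports.example.com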

How to expose AET Web API

The AET Web API is hosted by the AET Karaf instance. To avoid CORS errors from the Report application, the AET Web API is exposed by the AET Report Apache Server (ProxyPass). By default it is set to work with Docker cluster managers such as Swarm or Kubernetes and points to http://karaf:8181/api. Use the AET_WEB_API environment variable to change the URL of the AET Web API.
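
For example, in aet-swarm.yml (the URL below is a placeholder for your own setup):

  report:
    ...
    environment:
      - AET_WEB_API=http://my-aet-host:8181/api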

How to enable AET instance to run more tests simultaneously

Notice: those changes will impact your machine resources; be sure to extend the number of CPUs and memory if you scale up the number of browsers.

  1. Spawn more browsers by increasing the number of Selenium Grid nodes or adding sessions to existing nodes. Calculate the new TOTAL_NUMBER_OF_BROWSERS.
  2. Set maxMessagesInCollectorQueue in configs/com.cognifide.aet.runner.RunnerConfiguration.cfg to the new TOTAL_NUMBER_OF_BROWSERS.
  3. Set collectorInstancesNo in configs/com.cognifide.aet.worker.listeners.WorkersListenersService.cfg to the new TOTAL_NUMBER_OF_BROWSERS.
  4. Update the instance (see Updating instance above). A worked example is sketched below.
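
For instance, scaling from the default 6 to 12 browsers (illustrative values only; 12 replicas x 1 session) could look like:

  # aet-swarm.yml
  chrome:
    ...
    deploy:
      replicas: 12

  # configs/com.cognifide.aet.runner.RunnerConfiguration.cfg
  maxMessagesInCollectorQueue=12

  # configs/com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
  collectorInstancesNo=12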

How to use external Selenium Grid nodes

An external Selenium Grid node instance should have:

Check the address of the machine where the AET stack is running. By default, the Selenium Grid Hub should be available on port 4444. Use this IP address when you run the node with the following command (replace {SGRID_IP} with that IP address):

java -Dwebdriver.chrome.driver="<path/to/chromedriver>" -jar <path/to/selenium-server-standalone.jar> -role node -hub http://{SGRID_IP}:4444/grid/register -browser "browserName=chrome,maxInstances=10" -maxSession 10

You should see a message that the node has joined the Selenium Grid. Check it via the Selenium Grid console: http://{SGRID_IP}:4444/grid/console

Read more about setting up your own Grid here:

Is there any other way to run AET than with a Docker Swarm cluster

Yes. The AET system is a group of containers that together form an instance. You need a way to organize them and make them visible to each other in order to have a functional AET instance. This repository contains an example instance setup with Docker Swarm, which is the most basic container cluster manager that comes OOTB with Docker.

For more advanced setups of an AET instance, I'd recommend looking at Kubernetes or OpenShift (including services provided by cloud vendors). In that case, you may find the AET Helm chart helpful.


Building

Prerequisites

  • Docker installed on your host.
  1. Clone this repository.
  2. Build all images using build.sh {tag}. You should see the following images:
    malaskowski/aet_report:{tag}
    malaskowski/aet_karaf:{tag}
    malaskowski/aet_browsermob:{tag}
    malaskowski/aet_activemq:{tag}

Developer environment

To be able to easily deploy AET artifacts on your Docker instance, follow these steps:

  1. Follow the Run local instance using Docker Swarm guide (check the prerequisites first).
  2. In the aet-swarm.yml, under the karaf and report services, there are volumes defined:
  karaf:
    ...
    volumes:
      - ./configs:/aet/custom/configs
      - ./bundles:/aet/custom/bundles
      - ./features:/aet/custom/features

  ...

  report:
    ...
    # volumes: <- not active by default; to develop the report, uncomment before deploying
    #   - ./report:/usr/local/apache2/htdocs
  3. To add custom extensions, add the proper artifacts to the volumes you need:
  • bundles (jar files) go into the bundles directory
  • OSGi feature files go into the features directory
  • the configs directory already contains the default configs
  • report files go into the report directory

To develop the AET application core, add additional volumes to the karaf service:

  karaf:
    ...
    volumes:
      ...
      - ./core-configs:/aet/core/configs
      - ./core-bundles:/aet/core/bundles
      - ./core-features:/aet/core/features

and place the proper AET artifacts in the corresponding core- directories.

If you use the build command with the -Pzip parameter, all needed artifacts will be placed in YOUR_AET_REPOSITORY/zip/target/packages-X.X.X-SNAPSHOT/. You only need to unpack the needed zip archives into the proper directories described in step 3.

  4. To start the instance, just run docker stack deploy -c aet-swarm.yml aet.
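
For example, deploying a custom bundle could look like this (the jar name below is hypothetical):

  # copy your custom bundle into the mounted volume (example file name)
  cp target/aet-my-extension.jar bundles/
  # redeploy the stack (or restart the karaf service) so Karaf picks it up
  docker stack deploy -c aet-swarm.yml aet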