
Deploy with docker-compose

In the following examples you will enable agent-based execution environments. Feel free to disable those parts if you do not intend to use them and prefer SSH-based ones.

Note

As a general rule, you should only deploy and expose the services you need. The OpenTestFactory orchestrator can interact with execution environments via SSH and via agents. If you do not intend to interact with SSH-based execution environments, you can disable this feature. Similarly, if you do not intend to interact with agent-based execution-environments, disable this feature and do not expose the associated services.

Please refer to the “Using the ‘allinone’ image” section for a detailed view on how to use the ‘allinone’ image.

Preparation

The OpenTestFactory orchestrator uses JWT tokens to ensure proper authorization.

It can generate a unique token at initialization time, but this should not be used in a proper production deployment: if the orchestrator restarts, a new token will be generated and the previous one will no longer be valid.

The proper way is to generate your own token(s) and configure the orchestrator with the corresponding public key, so that it can verify the tokens it receives.

The deployment scripts in this guide expect a trusted_key.pub file present in a data directory.

If you already have a public/private key pair you want to use, copy the public key in data/trusted_key.pub.

If you do not have a key pair you want to use, the following commands will generate one for you and put it in the data directory:

mkdir data
openssl genrsa -out data/trusted_key.pem 4096
openssl rsa -pubout -in data/trusted_key.pem -out data/trusted_key.pub

To generate your token(s), you can use opentf-ctl, a Python script, or any JWT token generator of your choice. The token must contain an iss and a sub entry, and may contain additional entries.

Using opentf-ctl:

opentf-ctl generate token using trusted_key.pem

Or using a Python script:

import jwt  # pip install PyJWT[crypto]

ISSUER = 'your company'
USER = 'your name'

with open('data/trusted_key.pem', 'r') as f:
    pem = f.read()

# create a signed token
token = jwt.encode({'iss': ISSUER, 'sub': USER}, pem, algorithm='RS512')
print(token)
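Whichever generator you use, you can sanity-check the claims a token carries with the standard library alone. A minimal sketch: it decodes the payload without verifying the signature, so it is a debugging aid only, not a substitute for validation (the sample token built below is unsigned and purely illustrative):

```python
import base64
import json

def decode_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying its signature.

    Handy for checking that a token carries the 'iss' and 'sub'
    entries the orchestrator expects.
    """
    payload = token.split('.')[1]
    # JWT segments use URL-safe base64 without padding; restore it
    payload += '=' * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a throwaway unsigned token just to demonstrate the helper
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b'=').decode()
claims = base64.urlsafe_b64encode(
    json.dumps({'iss': 'your company', 'sub': 'your name'}).encode()
).rstrip(b'=').decode()
sample = f'{header}.{claims}.'

decoded = decode_claims(sample)
print(decoded['iss'], decoded['sub'])  # → your company your name
```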

Assign this token value to an environment variable:

Bash:

export TOKEN=eyJ0eXAiOiJKV1QiLC...

Windows (cmd):

set TOKEN=eyJ0eXAiOiJKV1QiLC...

Windows (PowerShell):

$Env:TOKEN = "eyJ0eXAiOiJKV1QiLC..."

The preparation steps are now complete. You are ready to deploy the orchestrator.

Agent-aware deployment

This example is for a very simple deployment, with a single service, the orchestrator.

The core endpoints are each exposed on their own port. The eventbus subscription port and the agent channel registration port are also exposed.

GET,POST     {host}:7774/workflows                       # receptionist
GET          {host}:7775/channelhandlers                 # observer
GET          {host}:7775/channels                        # observer
GET          {host}:7775/workflows                       # observer
GET          {host}:7775/workflows/status                # observer
GET          {host}:7775/workflows/{workflow_id}/status  # observer
DELETE       {host}:7776/workflows/{workflow_id}         # killswitch

GET, POST    {host}:38368/subscriptions                  # eventbus endpoints
DELETE       {host}:38368/subscriptions/{subscription_id}
POST         {host}:38368/publications

GET,POST     {host}:24368/agents                         # agentchannel endpoints
DELETE       {host}:24368/agents/{agent_id}
GET,POST,PUT {host}:24368/agents/{agent_id}/files/{file_id}

The docker-compose.yml file is very simple: it contains only one service, the orchestrator.

docker-compose.yml
# docker-compose up -d
# before start create a .env file defining ORCHESTRATOR_VERSION
# or fill it here

version: "3.4"
services:
  orchestrator:
    container_name: orchestrator
    image: opentestfactory/allinone:$ORCHESTRATOR_VERSION
    restart: always
    volumes:
    - type: bind
      source: ./data/trusted_key.pub
      target: /etc/squashtf/squashtf.pub
    ports:
    - "7774:7774"    # receptionist
    - "7775:7775"    # observer
    - "7776:7776"    # killswitch
    - "38368:38368"  # eventbus
    - "24368:24368"  # agent channel

There is one environment variable you should define, ORCHESTRATOR_VERSION.

You can define it in your current command line:

Bash:

export ORCHESTRATOR_VERSION=latest

Windows (cmd):

set ORCHESTRATOR_VERSION=latest

Windows (PowerShell):

$Env:ORCHESTRATOR_VERSION = "latest"

Or you can create a .env file in the same directory you put your docker-compose.yml file, with the following content:

.env
ORCHESTRATOR_VERSION=latest

The data directory, located in the same directory as your docker-compose.yml file, should contain your public key.

To start the orchestrator, run the following command in the docker-compose.yml directory:

docker-compose up -d

You can then run workflows using the following command:

Bash:

curl -X POST \
  --data-binary @workflow.yaml \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-type: application/x-yaml" \
  http://localhost:7774/workflows

Windows (cmd):

curl -X POST ^
  --data-binary @workflow.yaml ^
  -H "Authorization: Bearer %TOKEN%" ^
  -H "Content-type: application/x-yaml" ^
  http://localhost:7774/workflows

Windows (PowerShell):

curl.exe -X POST `
  --data-binary "@workflow.yaml" `
  -H "Authorization: Bearer $Env:TOKEN" `
  -H "Content-type: application/x-yaml" `
  http://localhost:7774/workflows

This will return a workflow ID.

You can check the progress of its execution by using the following command:

Bash:

curl \
  -H "Authorization: Bearer ${TOKEN}" \
  http://localhost:7775/workflows/<workflow_id>

Windows (cmd):

curl ^
  -H "Authorization: Bearer %TOKEN%" ^
  http://localhost:7775/workflows/<workflow_id>

Windows (PowerShell):

curl.exe `
  -H "Authorization: Bearer $Env:TOKEN" `
  http://localhost:7775/workflows/<workflow_id>
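The curl calls above can also be driven from Python with only the standard library. A sketch, assuming the ports from the docker-compose.yml example, a workflow.yaml file in the current directory, and the TOKEN environment variable set earlier (the response format is whatever the orchestrator returns):

```python
import json
import os
import urllib.request

RECEPTIONIST = 'http://localhost:7774'  # workflow submission
OBSERVER = 'http://localhost:7775'      # workflow status

def build_submit_request(workflow: bytes, token: str) -> urllib.request.Request:
    """POST /workflows, equivalent to the curl submission above."""
    return urllib.request.Request(
        f'{RECEPTIONIST}/workflows',
        data=workflow,
        method='POST',
        headers={
            'Authorization': f'Bearer {token}',
            'Content-type': 'application/x-yaml',
        },
    )

def build_status_request(workflow_id: str, token: str) -> urllib.request.Request:
    """GET /workflows/{workflow_id}, equivalent to the curl status check."""
    return urllib.request.Request(
        f'{OBSERVER}/workflows/{workflow_id}',
        headers={'Authorization': f'Bearer {token}'},
    )

if __name__ == '__main__':
    with open('workflow.yaml', 'rb') as f:
        req = build_submit_request(f.read(), os.environ['TOKEN'])
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # the response carries the workflow ID
```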

Agent-aware deployment with quality gate, traefik, and dozzle

This example builds on the previous one. It uses Traefik as a reverse proxy, adds Dozzle as a log viewer, and exposes the quality gate service.

The use of a reverse proxy allows for a much nicer interaction with the orchestrator: you no longer have to specify port numbers.

The eventbus still listens on its standard port, 38368.

POST         {host}/workflows                            # receptionist
GET          {host}/channelhandlers                      # observer
GET          {host}/channels                             # observer
GET          {host}/workflows                            # observer
GET          {host}/workflows/status                     # observer
GET          {host}/workflows/{workflow_id}/status       # observer
DELETE       {host}/workflows/{workflow_id}              # killswitch
GET          {host}/workflows/{workflow_id}/qualitygate  # qualitygate

GET, POST    {host}:38368/subscriptions                  # eventbus endpoints
DELETE       {host}:38368/subscriptions/{subscription_id}
POST         {host}:38368/publications

GET, POST    {host}/agents                               # agentchannel endpoints
DELETE       {host}/agents/{agent_id}
GET,POST,PUT {host}/agents/{agent_id}/files/{file_id}

The Dozzle log viewer is available at:

GET          {host}/logs
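The quality gate endpoint listed above can be queried once a workflow has completed. A minimal stdlib sketch (the host name and workflow ID are placeholders; the verdict payload is whatever the qualitygate service returns):

```python
import urllib.request

def build_qualitygate_request(
    host: str, workflow_id: str, token: str
) -> urllib.request.Request:
    """GET /workflows/{workflow_id}/qualitygate through the reverse proxy."""
    return urllib.request.Request(
        f'http://{host}/workflows/{workflow_id}/qualitygate',
        headers={'Authorization': f'Bearer {token}'},
    )

# Usage (requires a running deployment):
# with urllib.request.urlopen(
#     build_qualitygate_request('localhost', workflow_id, token)
# ) as resp:
#     print(resp.read())
```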

The docker-compose.yml file is now a bit longer. It declares and configures three services: traefik, the orchestrator, and dozzle.

docker-compose.yml
# docker-compose up -d
# before start create a file .env with versions and other variables

# The orchestrator is accessible at http://host

# this "cluster" contains a "Dozzle" service, a lightweight log viewer. It
# filters the containers by label otf-orchestrator but feel free to change that.

version: "3.4"
services:

  traefik:
    container_name: traefik
    image: traefik:$TRAEFIK_VERSION
    command:
    # - "--log.level=DEBUG"
    - "--api.insecure=true"
    - "--providers.docker=true"
    - "--providers.docker.exposedbydefault=false"
    - "--entrypoints.web.address=:80"
    - "--entrypoints.eventbus.address=:38368"
    - "--entrypoints.traefik.address=:8081"

    ports:
    - "80:80"        # HTTP orchestrator exposed
    - "8081:8081"    # HTTP Traefik administration exposed
    - "38368:38368"  # Eventbus entrypoint for external plugins
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

    restart: always

  orchestrator:
    container_name: orchestrator
    image: opentestfactory/allinone:$ORCHESTRATOR_VERSION
    restart: always
    volumes:
    - type: bind
      source: ./data/trusted_key.pub
      target: /etc/squashtf/squashtf.pub
    labels:
    - "otf-orchestrator"
    - "traefik.enable=true"

    ## Routing receptionist service calls over Http
    - "traefik.http.routers.receptionist-http.entrypoints=web"
    - "traefik.http.routers.receptionist-http.rule=PathPrefix(`/workflows`) && Method(`POST`)"
    - "traefik.http.routers.receptionist-http.service=receptionist"            
    - "traefik.http.services.receptionist.loadbalancer.server.port=7774"
    - "traefik.http.services.receptionist.loadbalancer.server.scheme=http"

    ## Routing observer service calls over Http   
    - "traefik.http.routers.observer-http.entrypoints=web"
    - "traefik.http.routers.observer-http.rule=(Path(`/workflows`) || Path(`/channels`) || Path(`/channelhandlers`) || PathPrefix(`/workflows/{[a-z0-9-]+}/status`)) && Method(`GET`)"
    - "traefik.http.routers.observer-http.service=observer"
    - "traefik.http.services.observer.loadbalancer.server.port=7775"
    - "traefik.http.services.observer.loadbalancer.server.scheme=http"

    ## Routers qualitygate config HTTP
    - "traefik.http.routers.qualitygate-http.rule=PathPrefix(`/workflows/{[a-z0-9-]+}/qualitygate`)"
    - "traefik.http.routers.qualitygate-http.service=qualitygate"
    ## Service qualitygate config
    - "traefik.http.services.qualitygate.loadbalancer.server.port=12312"
    - "traefik.http.services.qualitygate.loadbalancer.server.scheme=http"

    ## Routers agents config HTTP
    - "traefik.http.routers.agents-http.rule=PathPrefix(`/agents`)"
    - "traefik.http.routers.agents-http.service=agents"
    ## Service agents config
    - "traefik.http.services.agents.loadbalancer.server.port=24368"
    - "traefik.http.services.agents.loadbalancer.server.scheme=http"

    ## Routers eventbus config HTTP
    - "traefik.http.routers.eventbus-http.rule=PathPrefix(`/subscriptions`) || PathPrefix(`/publications`)"
    - "traefik.http.routers.eventbus-http.service=eventbus"
    ## Service eventbus config
    - "traefik.http.services.eventbus.loadbalancer.server.port=38368"
    - "traefik.http.services.eventbus.loadbalancer.server.scheme=http"

    ## Routers killswitch config HTTP
    - "traefik.http.routers.killswitch-http.rule=PathPrefix(`/workflows/{[a-z0-9-]+}`)"
    - "traefik.http.routers.killswitch-http.service=killswitch"
    ## Service killswitch config
    - "traefik.http.services.killswitch.loadbalancer.server.port=7776"
    - "traefik.http.services.killswitch.loadbalancer.server.scheme=http"

  # lightweight log viewer
  # home : https://github.com/amir20/dozzle
  dozzle:
    container_name: dozzle_log_provider
    image: amir20/dozzle:$DOZZLE_VERSION
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    command:
    - "--base=/logs"
    environment:
    - DOZZLE_FILTER=label=otf-orchestrator
    #    - DOZZLE_USERNAME=admin
    #    - DOZZLE_PASSWORD=admin
    #    - DOZZLE_KEY=key
    labels:
    - "traefik.http.routers.logs.rule=PathPrefix(`/logs`)"
    - "traefik.http.routers.logs.service=dozzle"
    - "traefik.http.services.dozzle.loadbalancer.server.port=8080"
    - "traefik.http.services.dozzle.loadbalancer.server.scheme=http"
    - "traefik.enable=true"

The environment variables to define in your .env file are shown below. You can change the versions as needed.

.env
TRAEFIK_VERSION=v2.2.1
ORCHESTRATOR_VERSION=latest
DOZZLE_VERSION=latest

The data directory should contain your public key.

To start the orchestrator, run the following command in the docker-compose.yml directory:

docker-compose up -d

Next Steps

The orchestrator service you just deployed can be integrated into your CI/CD toolchain to run each time code is pushed to your repository, helping you spot errors and inconsistencies in your code. But this is only the beginning of what you can do with the OpenTestFactory orchestrator. Ready to get started? Here are some helpful resources for taking your next steps with the OpenTestFactory orchestrator: