Deploy with Docker

In the following example, you will enable agent-based execution environments. Feel free to disable those parts if you only intend to use SSH-based execution environments.

Note

As a general rule, you should only deploy and expose the services you need. The OpenTestFactory orchestrator can interact with execution environments via SSH and via agents. If you do not intend to interact with SSH-based execution environments, you can disable this feature. Similarly, if you do not intend to interact with agent-based execution environments, disable this feature and do not expose the associated services.

Please refer to the “Using the ‘allinone’ Image” section for a detailed description of the ‘allinone’ image.

Preparation

The OpenTestFactory orchestrator uses JWT tokens to ensure proper authorization.

It can generate a unique token at initialization time, but this should not be relied upon in a production deployment: if the orchestrator restarts, a new token is generated and the previous one is no longer valid.

The recommended approach is to generate your own token(s) and to configure the orchestrator with your public key, so that it can verify the tokens it receives.

The deployment scripts in this guide expect a trusted_key.pub file present in a data directory.

If you already have a public/private key pair you want to use, copy the public key to data/trusted_key.pub.

If you do not have a key pair you want to use, the following commands will generate one for you and put it in the data directory:

mkdir data
openssl genrsa -out data/trusted_key.pem 4096
openssl rsa -pubout -in data/trusted_key.pem -out data/trusted_key.pub

To generate your token(s), you can use opentf-ctl, a Python script, or any JWT generator of your liking. The token must contain an iss and a sub entry, and may contain additional entries.

Using opentf-ctl:

opentf-ctl generate token using trusted_key.pem

Using a Python script:

import jwt  # Use 'pip install PyJWT[crypto]' to ensure the library is available

ISSUER = 'your company'
USER = 'your name'

with open('data/trusted_key.pem', 'r') as f:
    pem = f.read()

# Create a signed token
token = jwt.encode({'iss': ISSUER, 'sub': USER}, pem, algorithm='RS512')
print(token)
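Whichever generator you use, you can sanity-check the resulting token's claims without the key: a JWT payload is just base64url-encoded JSON. The following stdlib-only sketch decodes the payload for inspection (it does not verify the signature, and the demo token below is hand-built and unsigned):

```python
import base64
import json

def inspect_claims(token: str) -> dict:
    """Decode a JWT payload without verifying its signature (inspection only)."""
    payload = token.split('.')[1]
    payload += '=' * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Demo with a hand-built, unsigned token (real tokens carry a valid signature)
body = base64.urlsafe_b64encode(
    json.dumps({'iss': 'your company', 'sub': 'your name'}).encode()
).rstrip(b'=').decode()
claims = inspect_claims(f'header.{body}.signature')
print(claims)  # {'iss': 'your company', 'sub': 'your name'}
```

If iss or sub is missing here, the orchestrator will reject the token even when the signature is valid.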

Assign this token value to an environment variable:

Bash:

export TOKEN=eyJ0eXAiOiJKV1QiLC...

Windows (cmd):

set TOKEN=eyJ0eXAiOiJKV1QiLC...

PowerShell:

$Env:TOKEN = "eyJ0eXAiOiJKV1QiLC..."

The preparation steps are now complete. You are ready to deploy the orchestrator.

Agent-aware Deployment

This example describes a very simple deployment: a single container running the whole orchestrator.

The core endpoints are each exposed on their own port. The eventbus subscription port and the agent channel registration port are also exposed.

GET,POST     {host}:7774/workflows                       # receptionist
GET          {host}:7775/channelhandlers                 # observer
GET          {host}:7775/channels                        # observer
GET          {host}:7775/namespaces                      # observer
GET          {host}:7775/workflows                       # observer
GET          {host}:7775/workflows/status                # observer
GET          {host}:7775/workflows/{workflow_id}/status  # observer
DELETE       {host}:7776/workflows/{workflow_id}         # killswitch

POST         {host}:7796/workflows/{workflow_id}/insights  # insightcollector
GET          {host}:34537/workflows/{workflow_id}/files/{attachment_id}  # localstore
GET          {host}:12312/workflows/{workflow_id}/qualitygate  # qualitygate
GET,POST     {host}:38368/subscriptions                  # eventbus endpoints
DELETE       {host}:38368/subscriptions/{subscription_id}
POST         {host}:38368/publications

GET,POST     {host}:24368/agents                         # agentchannel endpoints
DELETE       {host}:24368/agents/{agent_id}
GET,POST,PUT {host}:24368/agents/{agent_id}/files/{file_id}

Bash:

docker run -d \
           --name orchestrator \
           -p 7774:7774 \
           -p 7775:7775 \
           -p 7776:7776 \
           -p 7796:7796 \
           -p 12312:12312 \
           -p 24368:24368 \
           -p 38368:38368 \
           -p 34537:34537 \
           -v /path/to/data/trusted_key.pub:/etc/squashtf/trusted_key.pub \
            opentestfactory/allinone:latest

Windows (cmd):

docker run -d ^
           --name orchestrator ^
           -p 7774:7774 ^
           -p 7775:7775 ^
           -p 7776:7776 ^
           -p 7796:7796 ^
           -p 12312:12312 ^
           -p 24368:24368 ^
           -p 38368:38368 ^
           -p 34537:34537 ^
           -v d:\path\to\data\trusted_key.pub:/etc/squashtf/trusted_key.pub ^
            opentestfactory/allinone:latest

PowerShell:

docker run -d `
           --name orchestrator `
           -p 7774:7774 `
           -p 7775:7775 `
           -p 7776:7776 `
           -p 7796:7796 `
           -p 12312:12312 `
           -p 24368:24368 `
           -p 38368:38368 `
           -p 34537:34537 `
           -v d:\path\to\data\trusted_key.pub:/etc/squashtf/trusted_key.pub `
            opentestfactory/allinone:latest

This deployment exposes the following services on the corresponding ports:

  • receptionist (port 7774)
  • observer (port 7775)
  • killswitch (port 7776)
  • insightcollector (port 7796)
  • qualitygate (port 12312)
  • agentchannel (port 24368)
  • eventbus (port 38368)
  • localstore (port 34537)
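If you prefer Docker Compose, the docker run command above maps to a Compose file along the following lines. This is a sketch: the service name is illustrative, and the volume source path assumes the data directory sits next to the Compose file.

```yaml
services:
  orchestrator:
    image: opentestfactory/allinone:latest
    ports:
      - "7774:7774"    # receptionist
      - "7775:7775"    # observer
      - "7776:7776"    # killswitch
      - "7796:7796"    # insightcollector
      - "12312:12312"  # qualitygate
      - "24368:24368"  # agentchannel
      - "38368:38368"  # eventbus
      - "34537:34537"  # localstore
    volumes:
      - ./data/trusted_key.pub:/etc/squashtf/trusted_key.pub
```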

The orchestrator runs until one service fails or ends.
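To check that the deployment is up, you can query the observer's /workflows endpoint with your token. A minimal Python sketch using only the standard library (host and port as in the table above; the commented request requires the orchestrator to be running):

```python
import urllib.request

def observer_request(host: str, token: str, path: str = '/workflows') -> urllib.request.Request:
    """Build an authenticated request for the observer service (port 7775)."""
    return urllib.request.Request(
        f'{host}:7775{path}',
        headers={'Authorization': f'Bearer {token}'},
    )

req = observer_request('http://127.0.0.1', 'eyJ0eXAiOiJKV1QiLC...')
print(req.full_url)  # http://127.0.0.1:7775/workflows

# With the orchestrator running, send the request:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)  # 200 when the token is accepted
```

A 401 response at this point usually means the token was not signed with the private key matching data/trusted_key.pub.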

Registering Agents

You can then register as many agents as you like on your orchestrator instance:

Bash:

opentf-agent --host http://127.0.0.1 --tags linux --token $TOKEN

PowerShell:

opentf-agent --host http://127.0.0.1 --tags windows --token $Env:TOKEN

Windows (cmd):

opentf-agent --host http://127.0.0.1 --tags windows --token %TOKEN%

If your agents are running on other machines, you should adjust the --host parameter (and ensure your orchestrator is reachable from those machines).

Assessing your Deployment

Assuming you have at least one agent registered, you can run a workflow on your orchestrator instance:

opentf-ctl run workflow my_workflow.yaml

If at least one agent runs on Windows, you can use the following my_workflow.yaml file:

my_workflow.yaml
metadata:
  name: Basic Example
variables:
  GREETINGS: hello world
jobs:
  say-hello:
    runs-on: windows
    steps:
    - run: echo %GREETINGS%

And if you have at least one agent running on Linux, you can use the following instead:

my_workflow.yaml
metadata:
  name: Basic Example
variables:
  GREETINGS: hello world
jobs:
  say-hello:
    runs-on: linux
    steps:
    - run: echo $GREETINGS

Next Steps

The orchestrator service you just deployed can be integrated into your CI/CD toolchain, running each time code is pushed to your repository to help you spot errors and inconsistencies in your code. But this is only the beginning of what you can do with the OpenTestFactory orchestrator. Ready to get started? Here are some helpful resources for taking your next steps: