Deploy with docker-compose
The following examples enable agent-based execution environments. Feel free to disable those parts if you prefer SSH-based execution environments and do not intend to use agents.
Note
As a general rule, you should only deploy and expose the services you need. The OpenTestFactory orchestrator can interact with execution environments via SSH and via agents. If you do not intend to interact with SSH-based execution environments, you can disable this feature. Similarly, if you do not intend to interact with agent-based execution environments, disable this feature and do not expose the associated services.
Please refer to the “Using the ‘allinone’ image” section for a detailed view of how to use the ‘allinone’ image.
Preparation
The OpenTestFactory orchestrator uses JWT tokens to ensure proper authorization.
It can generate a unique token at initialization time, but this should not be used in a proper production deployment: if the orchestrator restarts, a new token will be generated and the previous one will no longer be valid.
The proper way is to generate your token(s) yourself and configure the orchestrator with the corresponding public key, so that it can verify that the tokens it receives are valid.
The deployment scripts in this guide expect a `trusted_key.pub` file present in a `data` directory.

If you already have a public/private key pair you want to use, copy the public key to `data/trusted_key.pub`.

If you do not have a key pair, the following commands will generate one for you and put it in the `data` directory:
```shell
mkdir data
openssl genrsa -out data/trusted_key.pem 4096
openssl rsa -pubout -in data/trusted_key.pem -out data/trusted_key.pub
```
To generate your token(s), you can use `opentf-ctl`, a Python script, or any JWT token generator of your liking. The token must have an `iss` and a `sub` entry, and may contain additional entries.
Using `opentf-ctl`:

```shell
opentf-ctl generate token using trusted_key.pem
```
Or using a Python script:

```python
import jwt  # pip install PyJWT[crypto]

ISSUER = 'your company'
USER = 'your name'

with open('data/trusted_key.pem', 'r') as f:
    pem = f.read()

# Create a signed token
token = jwt.encode({'iss': ISSUER, 'sub': USER}, pem, algorithm='RS512')
print(token)
```
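If you want to sanity-check a token locally, you can verify it against the public key, which is the same check the orchestrator performs with `trusted_key.pub`. The following is a self-contained sketch: instead of reading `data/trusted_key.pem`, it generates a throwaway in-memory key pair so it can run anywhere.

```python
import jwt  # pip install PyJWT[crypto]
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Throwaway in-memory key pair (stands in for data/trusted_key.pem / .pub).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
pub = key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

token = jwt.encode({'iss': 'your company', 'sub': 'your name'}, pem, algorithm='RS512')

# Verifying with the public key is equivalent to the orchestrator's check:
# decode() raises if the signature does not match or the token is malformed.
claims = jwt.decode(token, pub, algorithms=['RS512'])
print(claims['iss'], claims['sub'])
```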
Assign this token value to an environment variable:

Bash:

```shell
export TOKEN=eyJ0eXAiOiJKV1QiLC...
```

Windows command prompt:

```shell
set TOKEN=eyJ0eXAiOiJKV1QiLC...
```

PowerShell:

```shell
$Env:TOKEN = "eyJ0eXAiOiJKV1QiLC..."
```
The preparation steps are now complete. You are ready to deploy the orchestrator.
Agent-aware Deployment
This example is for a very simple deployment with a single service, the orchestrator. Each core endpoint is exposed on its own port; the eventbus subscription port and the agent channel registration port are also exposed.
```
GET,POST     {host}:7774/workflows                                       # receptionist
GET          {host}:7775/channelhandlers                                 # observer
GET          {host}:7775/channels                                        # observer
GET          {host}:7775/namespaces                                      # observer
GET          {host}:7775/workflows                                       # observer
GET          {host}:7775/workflows/status                                # observer
GET          {host}:7775/workflows/{workflow_id}/status                  # observer
DELETE       {host}:7776/workflows/{workflow_id}                         # killswitch
POST         {host}:7796/workflows/{workflow_id}/insights                # insightcollector
GET          {host}:34537/workflows/{workflow_id}/files/{attachment_id}  # localstore
GET,POST     {host}:38368/subscriptions                                  # eventbus
DELETE       {host}:38368/subscriptions/{subscription_id}                # eventbus
POST         {host}:38368/publications                                   # eventbus
GET,POST     {host}:24368/agents                                         # agentchannel
DELETE       {host}:24368/agents/{agent_id}                              # agentchannel
GET,POST,PUT {host}:24368/agents/{agent_id}/files/{file_id}              # agentchannel
```
The `docker-compose.yml` file is very simple: it contains only one service, the orchestrator.
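Such a single-service file can be sketched as follows. Note that the `opentestfactory/allinone` image name, the `/app/data` mount point, and the set of published ports are assumptions; adjust them to your registry and setup:

```yaml
version: '3'

services:
  orchestrator:
    image: opentestfactory/allinone:${ORCHESTRATOR_VERSION}
    ports:
      - "7774:7774"    # receptionist
      - "7775:7775"    # observer
      - "7776:7776"    # killswitch
      - "7796:7796"    # insightcollector
      - "34537:34537"  # localstore
      - "38368:38368"  # eventbus
      - "24368:24368"  # agentchannel
    volumes:
      # Contains trusted_key.pub, used to verify incoming JWT tokens.
      - ./data:/app/data
```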
There is one environment variable you should define, `ORCHESTRATOR_VERSION`. You can define it in your current command line:

Bash:

```shell
export ORCHESTRATOR_VERSION=latest
```

Windows command prompt:

```shell
set ORCHESTRATOR_VERSION=latest
```

PowerShell:

```shell
$Env:ORCHESTRATOR_VERSION = "latest"
```
Or you can create a `.env` file in the same directory as your `docker-compose.yml` file, with the following content:

```
ORCHESTRATOR_VERSION=latest
```
The `data` directory, also located in the same directory as your `docker-compose.yml` file, should contain your public key.
To start the orchestrator, run the following command in the directory containing your `docker-compose.yml` file:

```shell
docker-compose up -d
```
You can then run workflows using the following command:

Bash:

```shell
curl -X POST \
     --data-binary @workflow.yaml \
     -H "Authorization: Bearer ${TOKEN}" \
     -H "Content-type: application/x-yaml" \
     http://localhost:7774/workflows
```

Windows command prompt:

```shell
curl -X POST ^
     --data-binary @workflow.yaml ^
     -H "Authorization: Bearer %TOKEN%" ^
     -H "Content-type: application/x-yaml" ^
     http://localhost:7774/workflows
```

PowerShell:

```shell
curl.exe -X POST `
     --data-binary "@workflow.yaml" `
     -H "Authorization: Bearer $Env:TOKEN" `
     -H "Content-type: application/x-yaml" `
     http://localhost:7774/workflows
```
This will return a workflow ID.
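The `workflow.yaml` posted above is a workflow definition file. A minimal sketch is shown below; the job name is illustrative, and the `runs-on` value must match a tag provided by one of your execution environments (check the workflow reference for your orchestrator version):

```yaml
apiVersion: opentestfactory.org/v1
kind: Workflow
metadata:
  name: hello-workflow
jobs:
  hello:
    runs-on: ssh  # must match a tag of one of your execution environments
    steps:
      - run: echo "Hello from the orchestrator"
```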
You can check the progress of its execution by using the following command:
Bash:

```shell
curl \
     -H "Authorization: Bearer ${TOKEN}" \
     http://localhost:7775/workflows/<workflow_id>
```

Windows command prompt:

```shell
curl ^
     -H "Authorization: Bearer %TOKEN%" ^
     http://localhost:7775/workflows/<workflow_id>
```

PowerShell:

```shell
curl.exe `
     -H "Authorization: Bearer $Env:TOKEN" `
     http://localhost:7775/workflows/<workflow_id>
```
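In scripts, a common pattern is to poll the observer until the workflow reaches a terminal state. The sketch below separates the polling loop from the HTTP call so it works with any client; the `DONE` and `FAILED` status names are placeholders, so substitute whatever terminal statuses your observer reports:

```python
import time
from typing import Callable, FrozenSet


def wait_for_workflow(
    fetch_status: Callable[[], str],
    terminal: FrozenSet[str] = frozenset({'DONE', 'FAILED'}),
    interval: float = 5.0,
    timeout: float = 600.0,
) -> str:
    """Poll fetch_status() until it returns a terminal status or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in terminal:
            return status
        time.sleep(interval)
    raise TimeoutError('workflow did not complete in time')
```

In practice, `fetch_status` would wrap an HTTP GET on `http://localhost:7775/workflows/<workflow_id>/status` with the `Authorization: Bearer` header, extracting the status from the response.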
Agent-aware Deployment with Quality Gate, Traefik, and Dozzle
This example builds on the previous one. It uses Traefik (v2) as a reverse proxy, adds Dozzle as a log viewer, and exposes the quality gate service.
The use of a reverse proxy allows for a much nicer interaction with the orchestrator: you no longer have to use ports.
The eventbus still listens on its standard port, `38368`.
```
POST         {host}/workflows                                       # receptionist
GET          {host}/channelhandlers                                 # observer
GET          {host}/channels                                        # observer
GET          {host}/namespaces                                      # observer
GET          {host}/workflows                                       # observer
GET          {host}/workflows/status                                # observer
GET          {host}/workflows/{workflow_id}/status                  # observer
DELETE       {host}/workflows/{workflow_id}                         # killswitch
POST         {host}/workflows/{workflow_id}/insights                # insightcollector
GET          {host}/workflows/{workflow_id}/qualitygate             # qualitygate
GET          {host}/workflows/{workflow_id}/files/{attachment_id}   # localstore
GET,POST     {host}:38368/subscriptions                             # eventbus
DELETE       {host}:38368/subscriptions/{subscription_id}           # eventbus
POST         {host}:38368/publications                              # eventbus
GET,POST     {host}/agents                                          # agentchannel
DELETE       {host}/agents/{agent_id}                               # agentchannel
GET,POST,PUT {host}/agents/{agent_id}/files/{file_id}               # agentchannel
```
The Dozzle log viewer is available at:

```
GET {host}/logs
```
The `docker-compose.yml` file is now a bit longer. It declares and configures three services: traefik, the orchestrator, and dozzle.
Warning
Please note that the Traefik version used here is v2. If you are using a more recent version of Traefik, you will need to adjust the configuration.
For example, v2 rules used matchers such as `PathPrefix`:

```
PathPrefix(`/workflows/{id:[a-z0-9-]+}/status`)
```

In Traefik v3, rules should use `PathRegexp` instead (also note the leading `^` and the missing `{}` braces):

```
PathRegexp(`^/workflows/[a-z0-9-]+/status`)
```

Please refer to the “Traefik v3 Migration Documentation” for more information.
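Such a three-service file can be sketched as follows, heavily abridged. The `opentestfactory/allinone` image name, the mount points, and the router rules are assumptions; map each public endpoint listed above to the matching internal service port in your actual file, and give more specific routers higher Traefik priorities where rules overlap:

```yaml
version: '3'

services:
  traefik:
    image: traefik:${TRAEFIK_VERSION}
    command:
      - --providers.docker=true
      - --providers.docker.exposedByDefault=false
      - --entryPoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  orchestrator:
    image: opentestfactory/allinone:${ORCHESTRATOR_VERSION}
    ports:
      - "38368:38368"  # the eventbus keeps its standard port
    volumes:
      - ./data:/app/data
    labels:
      - traefik.enable=true
      # Route each public path to the matching internal service port.
      - traefik.http.routers.receptionist.rule=Method(`POST`) && Path(`/workflows`)
      - traefik.http.routers.receptionist.service=receptionist
      - traefik.http.services.receptionist.loadbalancer.server.port=7774
      - traefik.http.routers.observer.rule=Method(`GET`) && (PathPrefix(`/workflows`) || PathPrefix(`/channels`) || PathPrefix(`/channelhandlers`) || PathPrefix(`/namespaces`))
      - traefik.http.routers.observer.service=observer
      - traefik.http.services.observer.loadbalancer.server.port=7775
      - traefik.http.routers.killswitch.rule=Method(`DELETE`) && PathPrefix(`/workflows`)
      - traefik.http.routers.killswitch.service=killswitch
      - traefik.http.services.killswitch.loadbalancer.server.port=7776
      - traefik.http.routers.agentchannel.rule=PathPrefix(`/agents`)
      - traefik.http.routers.agentchannel.service=agentchannel
      - traefik.http.services.agentchannel.loadbalancer.server.port=24368
      # Routers for insightcollector, qualitygate, and localstore follow the
      # same pattern, each pointing at its own internal port.

  dozzle:
    image: amir20/dozzle:${DOZZLE_VERSION}
    environment:
      - DOZZLE_BASE=/logs
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      - traefik.enable=true
      - traefik.http.routers.dozzle.rule=PathPrefix(`/logs`)
      - traefik.http.services.dozzle.loadbalancer.server.port=8080
```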
The environment variables you should define in your `.env` file are as below. You can change the versions as needed.
```
TRAEFIK_VERSION=v2.2.1
ORCHESTRATOR_VERSION=latest
DOZZLE_VERSION=latest
```
The `data` directory should contain your public key.
To start the orchestrator, run the following command in the directory containing your `docker-compose.yml` file:

```shell
docker-compose up -d
```
Next Steps
The orchestrator service you just deployed can be integrated into your CI/CD toolchain, running every time code is pushed to your repository to help you spot errors and inconsistencies. But this is only the beginning of what you can do with the OpenTestFactory orchestrator. Ready to get started? Here are some helpful resources for taking your next steps:
- “Using the ‘allinone’ Image” for a detailed view on how to use the ‘allinone’ image
- “Learn OpenTestFactory Orchestrator” for an in-depth tutorial
- “Configuration” to further configure your orchestrator instances
- “Agents” for more information on agents and execution environments
- “`opentf-ctl`” for a tool you can use to explore and interact with orchestrator instances