The ‘allinone’ image¶
Getting started¶
The OpenTestFactory orchestrator is a set of services running together. They may or may not run on the same machine, and they may or may not start at the same time.
The only prerequisite is that the EventBus, the service they use to communicate together, is available when they launch.
To ease the installation of the orchestrator, an ‘allinone’ docker image is provided on hub.docker.com/u/opentestfactory. It contains all core services.
The most current stable image is opentestfactory/allinone:latest.
If you want a specific version, you can use a specific tag such as 2022-05.
To get the latest image, use the following command:
docker pull opentestfactory/allinone:latest
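To pull a specific version instead, use the corresponding tag (the 2022-05 tag mentioned above serves here purely as an illustration):
docker pull opentestfactory/allinone:2022-05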
If you want to use a specific distribution based on the OpenTestFactory image, you can replace opentestfactory/allinone:latest with that distribution's image in the examples below. Such distributions may provide additional configuration parameters, but they should support the ones described here.
An example of such a distribution is squashtest/squash-orchestrator, which comes with additional plugins to integrate with the Squash TM test management system.
Using the ‘allinone’ image¶
You must configure three things when using the ‘allinone’ image:
- Startup
- Trusted Keys
- Plugins
Once you have completed those configuration steps, you can start your orchestrator and it will be ready to use.
Startup¶
By default, the ‘allinone’ image will start the core services and all plugins
it can find in the core and in the /app/plugins
directory (and subdirectories).
Three demonstration plugins that are part of the ‘allinone’ image are disabled by default:
- dummyee
- HelloWorld
- localpublisher
Configuration file¶
You can override those defaults by providing an alternative /app/squashtf.yaml
configuration file.
Here is the default one:
# squashtf.yaml
eventbus: python -m opentf.core.eventbus
services:
- ${{ CORE }}/core
plugins:
- ${{ CORE }}/plugins
- /app/plugins
aggregated:
- cucumber
- cypress
- robotframework
- junit
- postman
- skf
- soapui
disabled:
- dummyee
- HelloWorld
- localpublisher
If you have plugins installed in another location, add this location to the plugins section.
Services in the aggregated list are not started individually; they are expected to be started by an aggregation service.
If you want to enable or disable any of those plugins, remove or add their names in the disabled section.
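As a sketch, here is one way to prepare such an alternative configuration file before mounting it. It mirrors the default file above, with HelloWorld removed from the disabled list so that it will start; the /data/extra-plugins directory is purely hypothetical:
cat > my_squashtf.yaml <<'EOF'
# my_squashtf.yaml -- alternative launcher configuration
eventbus: python -m opentf.core.eventbus
services:
- ${{ CORE }}/core
plugins:
- ${{ CORE }}/plugins
- /app/plugins
- /data/extra-plugins    # hypothetical additional plugins location
aggregated:
- cucumber
- cypress
- robotframework
- junit
- postman
- skf
- soapui
disabled:
- dummyee
- localpublisher         # HelloWorld is no longer listed here, so it starts
EOF
You would then mount this file as /app/squashtf.yaml, as shown in the mounting example below.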
Plugins included in the ‘allinone’ docker image
Here is the list of the plugins included in the ‘allinone’ image.
| Name | Type | Description |
|---|---|---|
| agentchannel | Channel handler | Handle agent-based execution environments |
| inceptionee | Channel handler | Handle the ‘inception’ execution environment |
| sshchannel | Channel handler | Handle SSH-based execution environments |
| actionprovider | Provider | Provide common Workflow functions |
| cucumber | Provider | Handle Cucumber interactions |
| cypress | Provider | Handle Cypress interactions |
| junit | Provider | Handle JUnit interactions |
| postman | Provider | Handle Postman interactions |
| robotframework | Provider | Handle Robot Framework interactions |
| skf | Provider | Handle SKF (Squash Keyword Framework) interactions |
| soapui | Provider | Handle SoapUI interactions |
| allure.collector | Collector | Handle Allure Report generation |
| result.aggregator | Collector | (Allure Report helper service) |
| insightcollector | Collector | Handle execution log generation |
| s3publisher | Publisher | Publish results to an S3 bucket |
| localpublisher | Publisher | Publish results to a local directory |
| interpreter | Parser | Parse reports |
The actionprovider plugin is used by most provider plugins; disabling it may break them.
The default image also includes the QualityGate service, which can be disabled if you have no use for it. It consumes events produced by report parsers.
Here is an example of mounting your configuration file:
docker run -d \
... \
-v /path/to/my_squashtf.yaml:/app/squashtf.yaml \
...
docker run -d ^
... ^
-v d:\path\to\my_squashtf.yaml:/app/squashtf.yaml ^
...
docker run -d `
... `
-v d:\path\to\my_squashtf.yaml:/app/squashtf.yaml `
...
Environment variables¶
You can specify the following environment variables for the reference image:
| Environment variable | Description | Default value |
|---|---|---|
| DEBUG_LEVEL | Logging level for core services and plugins | INFO |
| {service}_DEBUG_LEVEL | Logging level for a specific service or plugin | (unset) |
| OPENTF_AUTHORIZATION_MODE | Enabled authorizers | JWT |
| OPENTF_AUTHORIZATION_POLICY_FILE | Policy file to use for ABAC | (unset) |
| OPENTF_TOKEN_AUTH_FILE | Static token file for ABAC | (unset) |
| OPENTF_TRUSTEDKEYS_AUTH_FILE | Namespaces / trusted keys mapping file for JWT | (unset) |
| OPENTF_ALLURE_ENABLED | Enable Allure reports generation | (unset) |
| OPENTF_BASE_URL | Reverse proxy configuration | (unset) |
| OPENTF_{service}_BASE_URL | Reverse proxy configuration | (unset) |
| OPENTF_REVERSEPROXY | Reverse proxy configuration | (unset) |
| HTTP_PROXY | Proxy to use for HTTP requests | (unset) |
| HTTPS_PROXY | Proxy to use for HTTPS requests | (unset) |
| NO_PROXY | Proxy bypass | (unset) |
| CURL_CA_BUNDLE | Trusted certificates (for self-signed certs) | (unset) |
| PUBLIC_KEY | Trusted key to use to authenticate requests | (unset) |
| SSH_CHANNEL_POOLS | Pools definitions | (unset) |
| SSH_CHANNEL_HOST | Execution environment hostname | (unset) |
| SSH_CHANNEL_PORT | Execution environment access port | (unset) |
| SSH_CHANNEL_USER | Execution environment username | (unset) |
| SSH_CHANNEL_TAGS | Execution environment tags | (unset) |
| SSH_CHANNEL_PASSWORD | Execution environment password | (unset) |
| {provider}_PROVIDER_HOOKS | Provider hooks definitions | (unset) |
| {channel handler}_CHANNEL_HOOKS | Channel handler hooks definitions | (unset) |
| QUALITYGATE_DEFINITIONS | Quality gate definitions | (unset) |
| INTERPRETER_CUSTOM_RULES | Custom interpreter rules definitions | (unset) |
| OBSERVER_RETENTION_POLICY | Observer retention policy | (unset) |
| TRACKERPUBLISHER_INSTANCES | Tracker publisher instances definitions | (unset) |
Advanced launcher environment variables
The following additional environment variables are typically not needed for regular use. They are provided for advanced use cases.
| Environment variable | Description | Default value |
|---|---|---|
| OPENTF_DEBUG (or simply DEBUG) | Enable debug mode for the launcher | (unset) |
| OPENTF_LOGGING_REDIRECT | Target stream for services logs | sys.stderr |
| OPENTF_HEALTHCHECK_DELAY | Launcher watchdog configuration | 60 |
| OPENTF_LAUNCHERMANIFEST | Launcher configuration definition | squashtf.yaml |
| OPENTF_CONTEXT | Configuration context | allinone |
| OPENTF_PLUGINDESCRIPTOR | Plugins descriptors names | plugin.yaml |
| OPENTF_SERVICEDESCRIPTOR | Services descriptors names | service.yaml |
| OPENTF_EVENTBUS_WARMUPDELAY | Event bus readiness checks | 2 |
| OPENTF_EVENTBUS_WARMUPURL | Event bus readiness checks | http://127.0.0.1:38368/subscriptions |
| OPENTF_EVENTBUSCONFIG | Event bus configuration | conf/eventbus.yaml |
You can set the DEBUG
or OPENTF_DEBUG
environment variable to display debug
information in the console for the launcher. It can be useful if you want to
investigate the startup process. Setting the DEBUG
or OPENTF_DEBUG
environment
variable does not enable debug-level logs for the launched services.
If an environment variable named OPENTF_LOGGING_REDIRECT
is specified, its value is
used as the target stream for logs. If it is not set, the default behavior
(targeting sys.stderr
) will apply.
The OPENTF_HEALTHCHECK_DELAY environment variable specifies how often the launcher checks the health of the services and plugins. It defaults to 60 seconds.
The OPENTF_LAUNCHERMANIFEST environment variable specifies the launcher configuration definition file to use, relative to the current launcher’s directory. It defaults to squashtf.yaml.
If the OPENTF_CONTEXT
environment variable is defined, it will override the
context used to start the services and plugins. If the environment variable is
not set, the allinone
context will be used.
The OPENTF_PLUGINDESCRIPTOR
and OPENTF_SERVICEDESCRIPTOR
environment variables
specify the plugins and services descriptors names to use. They default to plugin.yaml
and service.yaml
respectively. Changing those values will break the default allinone
Docker image.
The OPENTF_EVENTBUS_WARMUPDELAY and OPENTF_EVENTBUS_WARMUPURL environment variables specify how long the launcher waits for the event bus to be ready and the URL to use to check its readiness. They default to 2 seconds and http://127.0.0.1:38368/subscriptions.
The OPENTF_EVENTBUSCONFIG environment variable specifies the event bus configuration file to use, relative to the current launcher’s directory. It defaults to conf/eventbus.yaml.
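For instance, here is a sketch overriding two of these launcher settings (the values of 120 and 5 seconds are arbitrary illustrations):
docker run -d \
  --name orchestrator \
  ... \
  -e OPENTF_HEALTHCHECK_DELAY=120 \
  -e OPENTF_EVENTBUS_WARMUPDELAY=5 \
  opentestfactory/allinone:latest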
You can set the {service}_DEBUG_LEVEL (all upper-cased) and DEBUG_LEVEL environment variables to DEBUG to display additional information in the console for the launched services. The default level is INFO. (Please note that setting DEBUG_LEVEL to DEBUG will produce a very large volume of logs.)
The possible values for {service}_DEBUG_LEVEL
and DEBUG_LEVEL
are NOTSET
, DEBUG
,
INFO
, WARNING
, ERROR
, and FATAL
. Those values are from the most verbose, NOTSET
,
which shows all logs, to the least verbose, FATAL
, which only shows fatal errors.
For a given service, if {service}_DEBUG_LEVEL
is not defined then the value of DEBUG_LEVEL
is used (or INFO
if DEBUG_LEVEL
is not defined either).
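For example, here is a sketch that raises the logging level of the observer service only, while keeping the other services quieter (the service and level choices are illustrative):
docker run -d \
  --name orchestrator \
  ... \
  -e DEBUG_LEVEL=WARNING \
  -e OBSERVER_DEBUG_LEVEL=DEBUG \
  opentestfactory/allinone:latest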
The OPENTF_AUTHORIZATION_MODE, OPENTF_AUTHORIZATION_POLICY_FILE, OPENTF_TOKEN_AUTH_FILE, and OPENTF_TRUSTEDKEYS_AUTH_FILE environment variables together allow configuring authentication and authorization. Please refer to “Authenticating” for more information.
If those variables remain unset, the default JWT-based access control mode is used.
The OPENTF_ALLURE_ENABLED environment variable allows enabling Allure report generation. By default, Allure report generation is disabled. If you want to enable it, set this variable to true, yes, or on.
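For instance, a minimal sketch enabling Allure report generation:
docker run -d \
  --name orchestrator \
  ... \
  -e OPENTF_ALLURE_ENABLED=true \
  opentestfactory/allinone:latest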
The OPENTF_BASE_URL
, OPENTF_{service}_BASE_URL
, and OPENTF_REVERSEPROXY
environment
variables allow configuring the orchestrator if it is behind one or more proxies. Please
refer to “Installing behind a reverse proxy” for
more information.
The HTTP_PROXY
, HTTPS_PROXY
, and NO_PROXY
environment variables define the proxy
configuration the orchestrator services must use to access external services. Please note
that you may also need to define the proxy configuration to use in your execution
environment(s).
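For example, here is a sketch of a proxy configuration (the proxy address and bypass list are illustrative):
docker run -d \
  --name orchestrator \
  ... \
  -e HTTP_PROXY=http://proxy.example.com:3128 \
  -e HTTPS_PROXY=http://proxy.example.com:3128 \
  -e NO_PROXY=localhost,127.0.0.1,.example.com \
  opentestfactory/allinone:latest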
The CURL_CA_BUNDLE
environment variable allows defining a certificate authority bundle
if self-signed certificates are used in your work environment.
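Here is a sketch of providing such a bundle, assuming your certificates are gathered in a ca-bundle.crt file (the host and in-container paths are illustrative):
docker run -d \
  --name orchestrator \
  ... \
  -v /path/to/ca-bundle.crt:/etc/ssl/ca-bundle.crt \
  -e CURL_CA_BUNDLE=/etc/ssl/ca-bundle.crt \
  opentestfactory/allinone:latest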
The PUBLIC_KEY
environment variable is an uncomplicated way to provide one trusted key to
the orchestrator. Tokens verified by this public key will have unrestricted access to the
default
namespace. Please refer to
“Providing your own trusted key through the PUBLIC_KEY environment variable”
below for more information.
The SSH_*
environment variables are used to configure the SSH channel plugin. Please
refer to “SSH Channel Configuration” for more information.
The {provider}_PROVIDER_HOOKS
environment variables (all upper-cased), if defined, are
used by the corresponding provider plugins to read their hooks definitions. Please refer to
“Common provider settings” for more
information.
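As a sketch, assuming the variable points to a hooks definition file mounted into the container (the robotframework provider and the file path are illustrative; see “Common provider settings” for the actual file format):
docker run -d \
  --name orchestrator \
  ... \
  -e ROBOTFRAMEWORK_PROVIDER_HOOKS=/app/robotframework_hooks.yaml \
  -v /path/to/robotframework_hooks.yaml:/app/robotframework_hooks.yaml \
  opentestfactory/allinone:latest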
The {channel handler}_CHANNEL_HOOKS
environment variables (all upper-cased), if defined, are
used by the corresponding channel handlers to read their hooks definitions. Please refer to
“Agent Channel plugin” and “SSH Channel plugin”
for more information.
The QUALITYGATE_DEFINITIONS
environment variable, if defined, is used to configure the
quality gate plugin. Please refer to “Quality Gate service”
for more information.
The INTERPRETER_CUSTOM_RULES
environment variable, if defined, is used to configure the
Surefire interpreter plugin. Please refer to “Surefire parser service”
for more information.
The OBSERVER_RETENTION_POLICY
environment variable, if defined, is used to configure the
observer retention policy. Please refer to “Observer service”
for more information.
The TRACKERPUBLISHER_INSTANCES
environment variable, if defined, is used to configure the
tracker publisher plugin. Please refer to “Tracker Publisher plugin”
for more information.
Those environment variables are provided to your orchestrator image in the usual way:
docker run -d \
--name orchestrator \
...
-e PUBLIC_KEY="ssh-rsa AAA..." \
-e DEBUG_LEVEL=DEBUG \
opentestfactory/allinone:latest
docker run -d ^
--name orchestrator ^
...
-e PUBLIC_KEY="ssh-rsa AAA..." ^
-e DEBUG_LEVEL=DEBUG ^
opentestfactory/allinone:latest
docker run -d `
--name orchestrator `
...
-e PUBLIC_KEY="ssh-rsa AAA..." `
-e DEBUG_LEVEL=DEBUG `
opentestfactory/allinone:latest
Ports mapping¶
The orchestrator exposes ports so that clients can access them. Here is the list of ports exposed by the reference image:
- receptionist (port 7774)
- observer (port 7775)
- killswitch (port 7776)
- insightcollector (port 7796)
- eventbus (port 38368)
- qualitygate (port 12312)
- localstore (port 34537)
- agentchannel (port 24368)
The first three ports must always be mapped: they are the orchestrator entry points.
The fourth port, the insightcollector, should be mapped if you want to generate reports with the opentf-ctl generate report ... using command.
The fifth port, the eventbus, should be mapped if you want to deploy additional plugins in other images or locations.
The sixth port, the qualitygate, should be mapped if you want to use the quality gate service.
The seventh port, the localstore, should be mapped if you want to retrieve workflow attachments with the opentf-ctl cp command.
The last port, the agent channel, should be mapped if you want to use agent-based execution environments. It should not be mapped if you do not intend to use such execution environments.
Here is an example mapping the core and agent channel ports:
docker run -d \
--name orchestrator \
-p 7774:7774 \
-p 7775:7775 \
-p 7776:7776 \
-p 24368:24368 \
opentestfactory/allinone:latest
docker run -d ^
--name orchestrator ^
-p 7774:7774 ^
-p 7775:7775 ^
-p 7776:7776 ^
-p 24368:24368 ^
opentestfactory/allinone:latest
docker run -d `
--name orchestrator `
-p 7774:7774 `
-p 7775:7775 `
-p 7776:7776 `
-p 24368:24368 `
opentestfactory/allinone:latest
Trusted Keys¶
To send requests to the exposed services, you need a signed JWT token.
You can let the orchestrator create a JWT token for you (fine for testing, but do not use this in a production environment, as it is recreated whenever you restart the orchestrator) or provide your trusted key(s) and create your own JWT tokens.
Getting the created JWT token¶
A unique JWT token is created only if no trusted key is provided. The orchestrator will generate a temporary private key, create and sign a JWT token using it, and then use the corresponding public key as its only trusted key. The temporary private key is not kept anywhere.
The created JWT token is displayed in the logs of the orchestrator. Look for "Creating temporary JWT token":
docker logs orchestrator 2>&1 \
  | grep --after-context=10 "Creating temporary JWT token"
:: CMD does not display context around a pattern, so the easiest way is
:: to use the `more` command to display the log page per page. The
:: token is typically in the first few pages.
docker logs orchestrator 2>&1 | more
docker logs orchestrator 2>&1 `
  | Select-String -Pattern 'Creating temporary JWT token' -Context 1,10
A different JWT token will be created each time the orchestrator is started. This token has unrestricted access to the default namespace.
Do not use this created JWT token in a production environment.
Providing your own trusted key through the PUBLIC_KEY environment variable¶
If you only intend to use one trusted key, you can provide it through the PUBLIC_KEY environment variable, which then must contain the public key in SSH format. It looks like this:
type-name base64-encoded-ssh-public-key [comment]
Here is an example of passing a public key through the PUBLIC_KEY variable:
docker run ... \
  -e PUBLIC_KEY="ssh-rsa AAA..." \
  ...
docker run ... ^
  -e PUBLIC_KEY="ssh-rsa AAA..." ^
  ...
docker run ... `
  -e PUBLIC_KEY="ssh-rsa AAA..." `
  ...
Tokens verified by this key will have unrestricted access to the default namespace.
Note
If your public key starts with something like -----BEGIN PUBLIC KEY-----, you need to convert it. Assuming your public key is in the mykey.pub file, the following command will convert it to the proper format:
ssh-keygen -i -m PKCS8 -f mykey.pub
Providing your own trusted keys through files¶
If you intend to use multiple trusted keys, you must provide them through the file system.
Your trusted key(s) should be in the orchestrator’s /etc/squashtf directory. The easiest way is to put them in a volume and mount it on /etc/squashtf.
If your public keys are in a trusted_keys directory, here is an example of mounting it (all files in this directory will be available to the orchestrator, so be sure to put your private keys elsewhere):
docker run ... \
  -v /path/to/trusted_keys:/etc/squashtf \
  ...
docker run ... ^
  -v d:\path\to\trusted_keys:/etc/squashtf ^
  ...
docker run ... `
  -v d:\path\to\trusted_keys:/etc/squashtf `
  ...
If you only have one public key, and if you do not want to pass it via an environment variable, you can mount it directly. Here is an example of mounting a single trusted_key.pub public key:
docker run ... \
  -v /path/to/trusted_key.pub:/etc/squashtf/squashtf.pub \
  ...
docker run ... ^
  -v d:\path\to\trusted_key.pub:/etc/squashtf/squashtf.pub ^
  ...
docker run ... `
  -v d:\path\to\trusted_key.pub:/etc/squashtf/squashtf.pub `
  ...
Tokens verified by those keys will have unrestricted access to the
default
namespace, but this can be changed if you enable access control on your instance.
Generating private and public keys¶
If you want to use your own trusted key(s), you can use already generated private and public keys or use the following commands to generate them:
openssl genrsa -out trusted_key.pem 4096
openssl rsa -pubout -in trusted_key.pem -out trusted_key.pub
Creating JWT tokens¶
The orchestrator validates tokens according to its known trusted keys (it will try each key if more than one is supplied until it finds one that validates the token).
It uses the sub
and exp
claims in the payload and rejects tokens that are past
their expiration time if one is specified in the token.
To create JWT tokens from a private key, you can use opentf-ctl
,
a Python script, or any other JWT token creator of your liking. The token must
have an iss
and a sub
entry and may contain additional entries.
opentf-ctl generate token using trusted_key.pem
It will interactively prompt for the needed information.
If you do not have access to a JWT creator tool or cannot install the opentf-ctl
tool, but still have access to Python, you can use the following Python script (be
sure to install the PyJWT[crypto]
library, using for example
pip install PyJWT[crypto]
):
import jwt  # requires PyJWT[crypto]

ISSUER = 'your company'
USER = 'your name'

with open('trusted_key.pem', 'r') as f:
    pem = f.read()
with open('trusted_key.pub', 'r') as f:
    pub = f.read()

# create a signed token
token = jwt.encode({'iss': ISSUER, 'sub': USER}, pem, algorithm='RS512')
print(token)

# verify it
payload = jwt.decode(token, pub, algorithms=['RS512'])
print(payload)
Plugins¶
Most plugins do not need specific configuration, but those that must access external resources do.
Among the core plugins, the SSH channel plugin (and the S3 publisher plugin if you use it) must be configured. Their respective configurations are detailed below.
SSH Channel Configuration¶
If you do not intend to use an SSH-accessible execution environment, but instead only use agent-based execution environments, you do not have to configure the SSH Channel plugin.
If you intend to use just one SSH-accessible execution environment, you can simply specify it through environment variables.
- SSH_CHANNEL_HOST: required, either a hostname or an IP address.
- SSH_CHANNEL_PORT: optional, the port number (22 by default).
- SSH_CHANNEL_USER: required, the user to use to log in to the execution environment.
- SSH_CHANNEL_PASSWORD: required, the corresponding password.
- SSH_CHANNEL_TAGS: required, a comma-separated list of tags this environment can manage.
If you have more than one execution environment you intend to access via SSH, you will have to provide a pools definitions file.
- SSH_CHANNEL_POOLS: optional, a path to the pools definitions.
If SSH_CHANNEL_POOLS
is set, it must point to a YAML file which will look like this:
pools:
  demo:
    - host: demo.example.com
      username: demo
      password: 1234
      tags: [ssh, windows]
  demo2:
    - host: host.example.com
      port: 22
      username: alice
      ssh_host_keys: /data/ssh/known_hosts
      key_filename: /data/ssh/example.pem
      missing_host_key_policy: reject
      tags: [ssh, linux]
    - hosts: [foo.example.com, bar.example.com]
      port: 22
      username: bob
      ssh_host_keys: /data/ssh/known_hosts
      key_filename: /data/ssh/secret.pem
      passphrase: secret
      missing_host_key_policy: auto-add
      tags: [ssh, linux]
Please refer to “SSH Channel Configuration” for more information on pools.
You provide those environment variables to your instance in the usual way:
docker run -d \
--name orchestrator \
...
-e SSH_CHANNEL_HOST=test.example.com \
-e SSH_CHANNEL_USER=jane \
-e SSH_CHANNEL_PASSWORD=secret \
-e SSH_CHANNEL_TAGS=linux,robotframework \
opentestfactory/allinone:latest
docker run -d ^
--name orchestrator ^
...
-e SSH_CHANNEL_HOST=test.example.com ^
-e SSH_CHANNEL_USER=jane ^
-e SSH_CHANNEL_PASSWORD=secret ^
-e SSH_CHANNEL_TAGS=linux,robotframework ^
opentestfactory/allinone:latest
docker run -d `
--name orchestrator `
...
-e SSH_CHANNEL_HOST=test.example.com `
-e SSH_CHANNEL_USER=jane `
-e SSH_CHANNEL_PASSWORD=secret `
-e SSH_CHANNEL_TAGS=linux,robotframework `
opentestfactory/allinone:latest
If you use a pools definitions file, in addition to passing the SSH_CHANNEL_POOLS
environment variable you must provide the pools definitions:
docker run -d \
--name orchestrator \
...
-e SSH_CHANNEL_POOLS=/app/pools.yaml \
-v /path/to/pools.yaml:/app/pools.yaml \
opentestfactory/allinone:latest
docker run -d ^
--name orchestrator ^
...
-e SSH_CHANNEL_POOLS=/app/pools.yaml ^
-v d:\path\to\pools.yaml:/app/pools.yaml ^
opentestfactory/allinone:latest
docker run -d `
--name orchestrator `
...
-e SSH_CHANNEL_POOLS=/app/pools.yaml `
-v d:\path\to\pools.yaml:/app/pools.yaml `
opentestfactory/allinone:latest
If you specify both an execution environment and pools definitions, they will be merged; the specified execution environment may override an existing item in the pool (same host/port).
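Here is a sketch combining both approaches, where the single environment defined via the SSH_CHANNEL_* variables is merged with the mounted pools (values are illustrative):
docker run -d \
  --name orchestrator \
  ... \
  -e SSH_CHANNEL_POOLS=/app/pools.yaml \
  -v /path/to/pools.yaml:/app/pools.yaml \
  -e SSH_CHANNEL_HOST=test.example.com \
  -e SSH_CHANNEL_USER=jane \
  -e SSH_CHANNEL_PASSWORD=secret \
  -e SSH_CHANNEL_TAGS=linux,robotframework \
  opentestfactory/allinone:latest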
S3 Publisher Configuration¶
If you intend to use the S3 Publisher plugin (so that your test results are uploaded to an S3 bucket), you must provide an s3credentials.json configuration file and mount it as /app/s3publisher/s3credentials.json.
It works with any S3-compatible provider (AWS of course, but also Scaleway and others).
You must provide the four following entries: region_name
, endpoint_url
,
aws_access_key_id
, and aws_secret_access_key
.
The s3credentials.json
file you provide will look like this:
{
  "region_name": "fr-par",
  "endpoint_url": "https://s3.fr-par.scw.cloud",
  "aws_access_key_id": "access_key",
  "aws_secret_access_key": "secret_access_key"
}
Here is an example of mounting a my_s3credentials.json configuration file:
docker run ... \
-v /path/to/my_s3credentials.json:/app/s3publisher/s3credentials.json \
...
docker run ... ^
-v d:\path\to\my_s3credentials.json:/app/s3publisher/s3credentials.json ^
...
docker run ... `
-v d:\path\to\my_s3credentials.json:/app/s3publisher/s3credentials.json `
...
Installing behind a reverse proxy¶
If your OpenTestFactory orchestrator is deployed behind a reverse proxy, you can define the following environment variables to adjust the orchestrator’s behavior.
- OPENTF_REVERSEPROXY
- OPENTF_BASE_URL
- OPENTF_{service}_BASE_URL
If none of those variables are defined, the orchestrator will not attempt to guess whether it is behind a proxy or not.
OPENTF_{service}_BASE_URL
takes precedence over OPENTF_BASE_URL
, which takes precedence
over OPENTF_REVERSEPROXY
.
The observer service is the only service that currently makes use of those environment variables. (It is the only service that may return URLs.)
OPENTF_REVERSEPROXY¶
When the orchestrator is running behind a proxy server, it may see the request as coming from that server rather than the real client. Proxies set various headers to track where the request came from.
This environment variable should only be used if the orchestrator is actually behind such a proxy, and should be configured with the number of proxies that are chained in front of it.
Not all proxies set all the headers. Since incoming headers can be faked, you must set how many proxies are setting each header so the orchestrator knows what to trust.
The following headers can be used:
- X-Forwarded-For
- X-Forwarded-Proto
The OPENTF_REVERSEPROXY
environment variable can take the following values: auto
or
a series of up to 2 integers separated by commas.
The integers are x_for
and x_proto
.
Unspecified values are assumed to be 0
.
auto
is equivalent to 1,1
. That is, x_for
is 1
, x_proto
is 1
.
- x_for sets the number of values to trust for the X-Forwarded-For header
- x_proto sets the number of values to trust for the X-Forwarded-Proto header
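For example, a sketch for an orchestrator sitting behind a single proxy that sets both headers (equivalent to auto):
docker run -d \
  --name orchestrator \
  ... \
  -e OPENTF_REVERSEPROXY=1,1 \
  opentestfactory/allinone:latest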
OPENTF_BASE_URL and OPENTF_{service}_BASE_URL¶
When the orchestrator is running behind a proxy server that does not set standard headers, it is possible to configure the orchestrator with a base URL to use in the URLs it provides.
If both OPENTF_BASE_URL
and OPENTF_{service}_BASE_URL
are defined (where {service}
is
the upper-cased service name), the service will use the value specified by
OPENTF_{service}_BASE_URL
.
The base URL specified must provide a protocol (http
or https
), a hostname, possibly
a port, and possibly a prefix. Trailing /
is allowed and ignored.
The following are possible base URLs:
https://orchestrator.example.com:444/prefix
https://example.com
http://1.2.3.4
https://example.com/orchestrator/
The observer service is the only service that currently makes use of those environment variables. (It is the only service that may return URLs.)
Example configuration for the observer service¶
Assuming the orchestrator has been launched as such:
docker run -d \
--name orchestrator \
...
-e OPENTF_BASE_URL=http://example.com \
-e OPENTF_OBSERVER_BASE_URL=http://www.example.com/prefix \
opentestfactory/allinone:latest
docker run -d ^
--name orchestrator ^
...
-e OPENTF_BASE_URL=http://example.com ^
-e OPENTF_OBSERVER_BASE_URL=http://www.example.com/prefix ^
opentestfactory/allinone:latest
docker run -d `
--name orchestrator `
...
-e OPENTF_BASE_URL=http://example.com `
-e OPENTF_OBSERVER_BASE_URL=http://www.example.com/prefix `
opentestfactory/allinone:latest
The observer service will use http://www.example.com/prefix
as its base URL, and
hence the links it returns will be of the form:
http://www.example.com/prefix/workflows/{workflow_id}/status[?page=x&per_page=y]
Deploying¶
You will typically deploy your orchestrator using docker-compose
or Kubernetes. Please
refer to “Deploy with docker-compose” and
“Deploy with Kubernetes” for more information on how to deploy in
such environments.
If you do not have access to such an environment, you can quickly deploy an orchestrator instance using Docker only.
Example¶
The following command starts the orchestrator so that it can use one existing execution environment, with self-generated trusted keys (do not do this in a production setup):
docker run -d \
--name orchestrator \
-p 7774:7774 \
-p 7775:7775 \
-p 7776:7776 \
-e SSH_CHANNEL_HOST=the_environment_ip_or_hostname \
-e SSH_CHANNEL_USER=user \
-e SSH_CHANNEL_PASSWORD=secret \
-e SSH_CHANNEL_TAGS=ssh,linux,robotframework \
opentestfactory/allinone:latest
docker run -d ^
--name orchestrator ^
-p 7774:7774 ^
-p 7775:7775 ^
-p 7776:7776 ^
-e SSH_CHANNEL_HOST=the_environment_ip_or_hostname ^
-e SSH_CHANNEL_USER=user ^
-e SSH_CHANNEL_PASSWORD=secret ^
-e SSH_CHANNEL_TAGS=ssh,linux,robotframework ^
opentestfactory/allinone:latest
docker run -d `
--name orchestrator `
-p 7774:7774 `
-p 7775:7775 `
-p 7776:7776 `
-e SSH_CHANNEL_HOST=the_environment_ip_or_hostname `
-e SSH_CHANNEL_USER=user `
-e SSH_CHANNEL_PASSWORD=secret `
-e SSH_CHANNEL_TAGS=ssh,linux,robotframework `
opentestfactory/allinone:latest
It exposes the following services on the corresponding ports:
- receptionist (port 7774)
- observer (port 7775)
- killswitch (port 7776)
The orchestrator runs until one service fails or ends.
Assessing your deployment¶
Assuming you have deployed and configured a Robot Framework execution environment, you can run the following workflow to ensure everything is OK.
Put this in a robotdemo.yaml
file:
apiVersion: opentestfactory.org/v1alpha1
kind: Workflow
metadata:
  name: RobotFramework Example
variables:
  SERVER: production
jobs:
  keyword-driven:
    runs-on: [ssh, robotframework]
    steps:
      - run: echo $SERVER
      - uses: actions/checkout@v2
        with:
          repository: https://github.com/robotframework/RobotDemo.git
      - run: 'ls -al'
        working-directory: RobotDemo
      - uses: robotframework/robot@v1
        with:
          datasource: RobotDemo/keyword_driven.robot
  data-driven:
    runs-on: [ssh, robotframework]
    name: Data driven tests
    steps:
      - uses: actions/checkout@v2
        with:
          repository: https://github.com/robotframework/RobotDemo.git
      - uses: robotframework/robot@v1
        with:
          datasource: RobotDemo/data_driven.robot
Then run your workflow:
curl -X POST \
--data-binary @robotdemo.yaml \
-H "Authorization: Bearer <yourtoken>" \
-H "Content-type: application/x-yaml" \
http://<ip>:7774/workflows
curl -X POST ^
--data-binary @robotdemo.yaml ^
-H "Authorization: Bearer <yourtoken>" ^
-H "Content-type: application/x-yaml" ^
http://<ip>:7774/workflows
curl.exe -X POST `
--data-binary '@robotdemo.yaml' `
-H "Authorization: Bearer <yourtoken>" `
-H "Content-type: application/x-yaml" `
http://<ip>:7774/workflows
If the installation is OK, the above command should produce something like the following:
{
  "apiVersion": "v1",
  "kind": "Status",
  "metadata": {},
  "code": 201,
  "details": {
    "workflow_id": "a6d0a643-cf7b-4697-b568-1c909fe1a643"
  },
  "message": "Workflow RobotFramework Example accepted (workflow_id=a6d0a643-cf7b-4697-b568-1c909fe1a643).",
  "reason": "Created",
  "status": "Success"
}
The workflow will then be processed by the orchestrator. You can check its progress using the following command (adjusting the workflow ID as per your workflow):
curl -H "Authorization: Bearer <yourtoken>" \
http://<ip>:7775/workflows/a6d0a643-cf7b-4697-b568-1c909fe1a643/status
curl -H "Authorization: Bearer <yourtoken>" ^
http://<ip>:7775/workflows/a6d0a643-cf7b-4697-b568-1c909fe1a643/status
curl.exe -H "Authorization: Bearer <yourtoken>" `
http://<ip>:7775/workflows/a6d0a643-cf7b-4697-b568-1c909fe1a643/status
Doing it the easy way¶
Using curl is a quick and dirty way to run a workflow, suitable for a quick test, but in the long run it is probably easier to use other tools, such as the opentf-tools set of tools.
The following command will start a workflow and display its progress in a human-readable format:
opentf-ctl run workflow robotdemo.yaml --wait
Troubleshooting¶
By default, the logs start at INFO
level. You can configure the orchestrator image
to display more details by defining the following environment variables:
- DEBUG_LEVEL: INFO by default, can be set to DEBUG to get more information
- OPENTF_DEBUG: unset by default, can be set to get more startup information
The following command will start the orchestrator with the maximum level of logs:
docker run -d \
--name orchestrator \
-p 7774:7774 \
-p 7775:7775 \
-p 7776:7776 \
-p 7796:7796 \
-p 38368:38368 \
-p 34537:34537 \
-p 24368:24368 \
-p 12312:12312 \
-e OPENTF_DEBUG=true \
-e DEBUG_LEVEL=DEBUG \
opentestfactory/allinone:latest
docker run -d ^
--name orchestrator ^
-p 7774:7774 ^
-p 7775:7775 ^
-p 7776:7776 ^
-p 7796:7796 ^
-p 38368:38368 ^
-p 34537:34537 ^
-p 24368:24368 ^
-p 12312:12312 ^
-e OPENTF_DEBUG=true ^
-e DEBUG_LEVEL=DEBUG ^
opentestfactory/allinone:latest
docker run -d `
--name orchestrator `
-p 7774:7774 `
-p 7775:7775 `
-p 7776:7776 `
-p 7796:7796 `
-p 38368:38368 `
-p 34537:34537 `
-p 24368:24368 `
-p 12312:12312 `
-e OPENTF_DEBUG=true `
-e DEBUG_LEVEL=DEBUG `
opentestfactory/allinone:latest
If you want to see how a given workflow is progressing, you can use the
opentf-ctl
tool:
opentf-ctl get workflow my_workflow_id --job_depth=5 --step_depth=5
It can display information on running workflows, and for up to one hour after their completion.
Next Steps¶
Here are some helpful resources for taking your next steps with the OpenTestFactory orchestrator:
- “Tools” for information on tools that help you manage and use orchestrator instances
- “Guides” for specific use cases and examples, such as deploying the orchestrator using docker-compose or Kubernetes
- “Configuration” for an in-depth view of configuring the services of the orchestrator