
Overview

This set of documents presents the OpenTestFactory orchestrator specification. A reference implementation of this specification is available. In case of discrepancies, the specification prevails.

It is intended for people wanting to improve and extend the OpenTestFactory orchestrator and for those who want to write their own compliant implementation.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.

Architecture and protocols

The OpenTestFactory orchestrator is a set of services working together, following a publisher/subscriber pattern.

An EventBus serves as an event broker. It receives subscription and publication requests from the OpenTestFactory orchestrator services and plugins and dispatches the received publications accordingly.

The published events must be valid JSON documents. The events that are part of this specification are versioned and defined by schemas.
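By way of illustration, an event could have the general shape sketched below (shown as YAML for readability, the document being JSON on the wire). The version string, kind, and field names are assumptions used for illustration, not the normative schemas.

# Illustrative sketch only: the actual event kinds and fields are
# defined by the versioned schemas of this specification.
apiVersion: opentestfactory.org/v1        # hypothetical version string
kind: ExampleEvent                        # hypothetical event kind
metadata:
  name: my-event
  workflow_id: "1234"                     # hypothetical correlation field
spec:
  message: hello                          # event-specific payload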

All services should use the HTTP/HTTPS protocol for their communications, with the possible exception of exchanges between some channel services and certain execution environment agents, in which case the communication protocol is imposed by the environment.

They are implemented as REST web services.

They are decoupled and respect the “Command Query Responsibility Segregation” principle (CQRS).

Communications are authenticated by signed JWT tokens.

This specification does not impose a technology for the implementation of the various services. Components of different technologies can coexist.

Endpoints

The EventBus provides a set of endpoints which may or may not be exposed:

POST /subscriptions
DELETE /subscriptions/{subscription_id}
GET /subscriptions
POST /publications
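As an illustration, a subscription request sent to POST /subscriptions could resemble the following sketch (rendered as YAML for readability; the request body is a JSON document, and the field names below are assumptions, the normative interface being described in the EventBus section):

# Illustrative sketch only, not the normative subscription schema.
kind: Subscription                          # hypothetical
metadata:
  name: my-plugin-subscription
spec:
  selector:
    matchKind: ExecutionCommand             # hypothetical kind of events to receive
  subscriber:
    endpoint: http://my-plugin:8080/inbox   # where matching publications are delivered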

The OpenTestFactory orchestrator must expose three endpoints:

POST /workflows                                   # the Receptionist endpoint
DELETE /workflows/{workflow_id}                   # the Killswitch endpoint
GET /workflows/{workflow_id}/status               # the Observer endpoint

Only the exposed endpoints are intended to process external requests directly.

Their interfaces are described in the EventBus and Core services sections.
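By way of illustration, a workflow file submitted to the Receptionist endpoint (POST /workflows) could be as small as the following sketch; the exact syntax is covered in the Workflow syntax section, and the keys and values shown here are illustrative assumptions:

# Minimal workflow sketch -- names and values are illustrative.
metadata:
  name: my-first-workflow
jobs:
  hello:
    runs-on: linux                          # assumed execution environment tag
    steps:
      - run: echo "hello from OpenTestFactory"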

Core services and Plugins

The core services handle the Receptionist, Killswitch, and Observer endpoints, as well as workflow orchestration.

The other services that are part of this specification are plugins. They are grouped into four categories: channels, generators, providers, and publishers.

They do not receive external requests and do not communicate directly with each other. When they start, they must subscribe to the EventBus, specifying the events they wish to receive. They must likewise publish the events they produce through the EventBus. They may communicate with external services (for example, a report publisher plugin will send test reports, possibly after processing, to a reporting tool or to a test manager).

The events are self-contained, in the sense that they carry all the information necessary for their processing.

Core services

The orchestrator core services handle workflows.

There can be multiple workflows running at any given time. An implementation can queue or limit the number of workflows it receives or processes simultaneously.

A workflow is a set of jobs. Jobs can be processed simultaneously, as long as they have no dependencies on other jobs. If a job depends on other jobs, it must await their completion before being processed. There are no dependencies between workflows.

Each job is either a generator or a sequence of steps. A generator produces jobs that will eventually result in sequences of steps.

Those sequences of steps run on an execution environment. During the processing of a sequence of steps, no other sequence of steps uses the same execution environment.

Except for the ordering imposed by explicitly specified dependencies, jobs can be processed in any order, and possibly simultaneously if multiple execution environments are available.

For a given sequence of steps, steps are run in order, and a step cannot run before its predecessor has completed.
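As a sketch, such dependencies and step ordering could be expressed as follows in a workflow file; the needs keyword and the other names are assumptions used for illustration, the normative syntax being defined in the Workflow syntax section:

# Illustrative sketch: the report job waits for the tests job; within
# each job, steps run strictly in order.
jobs:
  tests:
    runs-on: linux
    steps:
      - run: ./run-tests.sh                 # first step
      - run: ./collect-results.sh           # runs only after the first step completes
  report:
    needs: tests                            # assumed dependency keyword
    runs-on: linux
    steps:
      - run: ./publish-report.sh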

The orchestrator core is responsible for the proper and timely handling of the corresponding events.

Plugins

There are four defined categories of plugins that match typical needs, but other categories can be developed: a plugin is a service that subscribes to some events and that may publish other events.

Channel plugins

Channel plugins handle the link with execution environments.

When a job is about to be processed, the core services publish an event requesting an execution environment.

Channel plugins can make offers upon receiving this event. Those offers are time-limited. The core services must select at most one such offer and publish execution events referring to this offer.

A channel plugin must not offer an execution environment that is already in use for another job, and must not offer again an execution environment it has already offered while that offer's time limit has not yet been reached.

If a channel plugin receives an execution event referring to an expired offer, it may either publish a rejection event or process the execution event, provided the targeted execution environment is not in use for another job or offer.

Upon job completion, the core services must publish an event releasing the targeted execution environment. The channel plugin handling the execution environment is then free to release or reuse it.
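The whole exchange for one job can be summarized as the following sequence of publications; the event names used below are placeholders for illustration, not the normative kinds defined by the schemas:

# Illustrative event sequence for one job (event names are placeholders).
- from: core services
  event: environment request               # an execution environment is needed
- from: channel plugin
  event: environment offer                 # time-limited offer of a free environment
- from: core services
  event: execution command                 # refers to the selected offer
- from: channel plugin
  event: execution result                  # outcome of the steps run in the environment
- from: core services
  event: environment release               # the environment can be reused or freed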

Generator plugins

Generator plugins generate sets of jobs.

When a generator job is about to be processed, the core services publish an event requesting its expansion. Generator jobs have a type and possibly parameters.

Matching generator plugins can provide a set of jobs upon receiving such events. The core services will then select at most one such expansion.

Generator plugins are used to query external sources and convert the results into jobs.

For example, a generator plugin can query a test case manager, get a test suite to execute, and produce the jobs that will result in the execution of the test suite's test cases.

Those generated jobs cannot depend on jobs defined elsewhere in the workflow, but they are otherwise regular jobs and are processed like any other jobs.

flowchart LR
A["job:\ngenerator: example/foo@v1"] -.-> B([Generator plugin])
B -.-> C[job1:\n...\njob2:\n...]
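Expressed in a workflow file, the generator job from the diagram above could look like the following sketch; the parameter block is an assumption added for illustration:

# Generator job sketch -- it expands into regular jobs at run time.
jobs:
  nightly-suite:
    generator: example/foo@v1
    with:                                   # assumed parameter syntax
      suite: nightly-regression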

Provider plugins

Provider plugins generate sequences of steps.

When a sequence of steps is being processed, the core services publish an event requesting an expansion for each function step they encounter. Function steps have a type and possibly parameters.

Matching provider plugins can provide a sequence of steps upon receiving such events. The core services will then select at most one such expansion.

Provider plugins are used to wrap a series of steps into a single step, making workflows easier to write and read.

The generated sequence of steps can contain other function steps which will in turn expand to sequences of steps.

flowchart LR
A["step:\n- uses: example/foo@v1"] -.-> B([Provider plugin])
B -.-> C["- run: echo foo\n- uses: example/bar@v1"]

Publisher plugins

Publisher plugins consume execution results. They typically do not produce non-notification events.

They collect execution results and possibly transform them and then send them to external tools, such as a test case manager or a BI platform.

Workflow syntax

The workflow file is written in YAML. Its expression syntax can be used to evaluate contextual information, literals, operators, and functions. Contextual information includes workflow, resources, and environment variables. When using run in a workflow step to run shell commands, the syntax also supports setting environment variables, setting output parameters for subsequent steps, and emitting error or debug messages.
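As a sketch, and assuming a ${{ ... }} expression syntax and a set-output style of workflow command (the exact forms are defined in the remainder of this specification), a run step could set an output parameter that a subsequent step then evaluates:

# Illustrative sketch only -- expression and command syntax are assumptions.
jobs:
  example:
    runs-on: linux
    steps:
      - id: produce
        run: echo "::set-output name=greeting::hello"            # assumed output-setting command
      - run: echo "${{ steps.produce.outputs.greeting }} world"  # assumed expression syntax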

Environment variables

The OpenTestFactory orchestrator sets default environment variables for each workflow run. Custom environment variables can also be set in a workflow file.
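As a sketch, custom environment variables could be declared as follows; the env key and its placement are assumptions used for illustration, the normative syntax being defined in the remainder of this section:

# Illustrative sketch only -- the key used to declare custom
# environment variables is an assumption.
jobs:
  example:
    runs-on: linux
    steps:
      - run: echo "running for $CUSTOMER"
        env:                                # assumed placement of custom variables
          CUSTOMER: acme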