
Deploy with kubectl

Kubernetes clusters are common nowadays. There are many ways to deploy applications on such a cluster, and a wide choice of supporting middleware.

The following examples use plain manifest files and kubectl to apply them. You need access to an existing Kubernetes cluster, with enough privileges to create secrets, services, deployments, and ingresses in an existing namespace.

Traefik is used as the ingress controller, but there are few Traefik-specific instructions, so you can easily adapt the examples to another ingress controller.

Note

As a general rule, you should only deploy and expose the services you need. The OpenTestFactory orchestrator can interact with execution environments via SSH and via agents. If you do not intend to interact with SSH-based execution environments, you can disable this feature. Similarly, if you do not intend to interact with agent-based execution environments, disable this feature and do not expose the associated services.

Please refer to the “Using the ‘allinone’ image” section for a detailed description of that image.

Preparation

The OpenTestFactory orchestrator uses JWT tokens to ensure proper authorization.

It can generate a unique token at initialization time, but this should not be used in a proper production deployment: if the orchestrator restarts, a new token will be generated and the previous one will no longer be valid.

The proper way is to generate your own token(s) and to configure the orchestrator with the corresponding public key(s), so that it can verify the tokens it receives.

The deployment scripts in this guide expect an opentf-trusted-keys secret to be present in the target namespace.

If you already have a public/private key pair you want to use, you can create the secret using kubectl:

Bash:

kubectl create secret generic opentf-trusted-keys \
  --from-file=./trusted_key.pub \
  --namespace my_namespace

Windows (cmd):

kubectl create secret generic opentf-trusted-keys ^
  --from-file=./trusted_key.pub ^
  --namespace my_namespace

PowerShell:

kubectl create secret generic opentf-trusted-keys `
  --from-file=./trusted_key.pub `
  --namespace my_namespace

Your secret can contain more than one public key, and you can adjust their names if you like:

Bash:

kubectl create secret generic opentf-trusted-keys \
  --from-file=trusted_1.pub=./trusted_key.pub \
  --from-file=trusted_2.pub=./my_other_trusted_key.pub \
  --namespace my_namespace

Windows (cmd):

kubectl create secret generic opentf-trusted-keys ^
  --from-file=trusted_1.pub=./trusted_key.pub ^
  --from-file=trusted_2.pub=./my_other_trusted_key.pub ^
  --namespace my_namespace

PowerShell:

kubectl create secret generic opentf-trusted-keys `
  --from-file=trusted_1.pub=./trusted_key.pub `
  --from-file=trusted_2.pub=./my_other_trusted_key.pub `
  --namespace my_namespace
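
You can then verify that the secret exists and contains the expected entries:

kubectl describe secret opentf-trusted-keys --namespace my_namespace

The key names and their sizes appear in the Data section of the output.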

If you do not already have a key pair you want to use, the following commands will generate one for you in the current directory:

openssl genrsa -out trusted_key.pem 4096
openssl rsa -pubout -in trusted_key.pem -out trusted_key.pub
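
If you want to sanity-check the generated key pair before creating the secret, you can use standard openssl commands to validate the private key and display the public key:

openssl rsa -in trusted_key.pem -check -noout
openssl rsa -pubin -in trusted_key.pub -noout -text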

To generate your token(s), you can use opentf-ctl, a Python script, or any JWT token generator of your choice. The token must contain an iss (issuer) entry and a sub (subject) entry, and may contain additional entries.

Using opentf-ctl:

opentf-ctl generate token using trusted_key.pem

Using Python:

import jwt  # pip install PyJWT[crypto]

ISSUER = 'your company'
USER = 'your name'

with open('trusted_key.pem', 'r') as f:
    pem = f.read()

# create a signed token
token = jwt.encode({'iss': ISSUER, 'sub': USER}, pem, algorithm='RS512')
print(token)

Assign this token value to an environment variable:

Bash:

export TOKEN=eyJ0eXAiOiJKV1QiLC...

Windows (cmd):

set TOKEN=eyJ0eXAiOiJKV1QiLC...

PowerShell:

$Env:TOKEN = "eyJ0eXAiOiJKV1QiLC..."

The preparation steps are now complete. You are ready to deploy the orchestrator.

Basic deployment

This example is for a simple deployment, with a single service, the orchestrator.

The core endpoints are each exposed on their own URLs.

POST   http://example.com/workflows                         # receptionist
GET    http://example.com/workflows/{workflow_id}/status    # observer
DELETE http://example.com/workflows/{workflow_id}           # killswitch

As it does not reference any execution environments, it will only be able to handle inception workflows.

The deploy.yaml manifest is simple: it contains only one service, the orchestrator.

deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otf-orchestrator
  labels:
    app: otf-orchestrator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otf-orchestrator
  template:
    metadata:
      labels:
        app: otf-orchestrator
    spec:
      containers:
      - name: orchestrator
        image: opentestfactory/allinone:DOCKER_TAG
        imagePullPolicy: "Always"
        ports:
        - containerPort: 7774
        - containerPort: 7775
        - containerPort: 7776
        - containerPort: 38368
        resources:
          limits:
            memory: "2Gi"
          requests:
            memory: "512Mi"
            cpu: "0.2"
        volumeMounts:
        - name: opentf-trusted-key
          mountPath: /etc/squashtf
      volumes:
      - name: opentf-trusted-key
        secret:
          secretName: opentf-trusted-keys

---
apiVersion: v1
kind: Service
metadata:
  name: otf-orchestrator
spec:
  selector:
    app: otf-orchestrator
  ports:
    - protocol: TCP
      port: 7774
      name: receptionist
    - protocol: TCP
      port: 38368
      name: eventbus
    - protocol: TCP
      port: 7775
      name: observer
    - protocol: TCP
      port: 7776
      name: killswitch

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: otf-orchestrator
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /workflows
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: receptionist
      - path: /workflows/{id:[a-z0-9-]+}/status
        # the '{id:...}' notation is specific to Traefik, please adjust it
        # if using another ingress controller
        pathType: ImplementationSpecific
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /workflows/status
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /channels
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /channelhandlers
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /workflows/{id:[a-z0-9-]+}
        # the '{id:...}' notation is specific to Traefik, please adjust it
        # if using another ingress controller
        pathType: ImplementationSpecific
        backend:
          service:
            name: otf-orchestrator
            port:
              name: killswitch

You can then deploy this orchestrator on your cluster using:

kubectl apply -f deploy.yaml

The orchestrator will run in your default namespace. Add the --namespace mynamespace option if you want to deploy it in the mynamespace namespace (which must already exist).

kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
orchestrator-0   1/1     Running   0          5s
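
As a quick smoke test, you can submit a workflow to the receptionist. This is a sketch: it assumes the token from the preparation steps is in the TOKEN environment variable and that a workflow definition is available in a workflow.yaml file:

curl -X POST http://example.com/workflows \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/x-yaml" \
  --data-binary @workflow.yaml

The response contains the workflow ID, which you can then pass to the /workflows/{workflow_id}/status endpoint.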

Basic deployment with SSH execution environments

This second example builds on the previous one by adding SSH execution environments.

An execution environment is a place where most steps are executed. In a typical deployment, you want to have at least one execution environment.

Assuming you have two such execution environments, robotframework.example.com and junit.example.com, accessible via SSH on port 2222, you can create a pools.yaml manifest file with the following ConfigMap resource:

pools.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pools
data:
  my_pools.yaml: |
    pools:
      my_target:
      - host: robotframework.example.com
        username: jane
        password: secret
        missing_host_key_policy: auto-add
        port: 2222
        tags: [ssh, linux, robotframework]
      - host: junit.example.com
        username: joe
        password: password123
        missing_host_key_policy: auto-add
        port: 2222
        tags: [ssh, linux, junit]
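
Before applying this manifest, you can validate it client-side (a dry run: nothing is created on the cluster):

kubectl apply --dry-run=client -f pools.yaml

Note that the pools definition contains credentials. In a production deployment, you may prefer to store it in a Secret (mounted the same way) rather than in a ConfigMap.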

Your updated deploy.yaml is almost identical to the one used in the first example, the only changes being (1) creating a volume from the aforementioned ConfigMap resource, (2) mounting it on /app/pools, and (3) telling the orchestrator to use this pools definition:

deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otf-orchestrator
  labels:
    app: otf-orchestrator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otf-orchestrator
  template:
    metadata:
      labels:
        app: otf-orchestrator
    spec:
      containers:
      - name: orchestrator
        image: opentestfactory/allinone:DOCKER_TAG
        imagePullPolicy: "Always"
        ports:
        - containerPort: 7774
        - containerPort: 7775
        - containerPort: 7776
        - containerPort: 38368
        resources:
          limits:
            memory: "2Gi"
          requests:
            memory: "512Mi"
            cpu: "0.2"
        env:  #(3)
        - name: SSH_CHANNEL_POOLS
          value: /app/pools/my_pools.yaml
        volumeMounts:
        - name: opentf-trusted-key
          mountPath: /etc/squashtf
        - name: pools-volume  # (2)
          mountPath: /app/pools
      volumes:
      - name: opentf-trusted-key
        secret:
          secretName: opentf-trusted-keys
      - name: pools-volume  # (1)
        configMap:
          name: pools

---
apiVersion: v1
kind: Service
metadata:
  name: otf-orchestrator
spec:
  selector:
    app: otf-orchestrator
  ports:
    - protocol: TCP
      port: 7774
      name: receptionist
    - protocol: TCP
      port: 38368
      name: eventbus
    - protocol: TCP
      port: 7775
      name: observer
    - protocol: TCP
      port: 7776
      name: killswitch

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: otf-orchestrator
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /workflows
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: receptionist
      - path: /workflows/{id:[a-z0-9-]+}/status
        # the '{id:...}' notation is specific to Traefik, please adjust it
        # if using another ingress controller
        pathType: ImplementationSpecific
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /workflows/status
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /channels
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /channelhandlers
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /workflows/{id:[a-z0-9-]+}
        # the '{id:...}' notation is specific to Traefik, please adjust it
        # if using another ingress controller
        pathType: ImplementationSpecific
        backend:
          service:
            name: otf-orchestrator
            port:
              name: killswitch

You can then deploy this orchestrator on your cluster using:

kubectl apply -f deploy.yaml -f pools.yaml

The orchestrator will run in your default namespace. Add the --namespace mynamespace option if you want to deploy it in the mynamespace namespace (which must already exist).

kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
orchestrator-0   1/1     Running   0          4s
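
To check that the pools definition was taken into account, you can query the /channels endpoint exposed above, which lists the known execution environments (assuming a valid token in the TOKEN environment variable):

curl http://example.com/channels -H "Authorization: Bearer $TOKEN"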

Agents-aware deployment, with quality gate

This example is for a simple deployment that allows agents to provide execution environments.

Agents are tools deployed on execution environments; they poll the orchestrator for work to do. They are useful if you cannot install an SSH server on your execution environments, or if the SSH server implementation available on their operating system has limitations that prevent proper execution (such as on Windows).

As before, the core endpoints are each exposed on their own URLs. Compared to the previous example, several new endpoints are exposed, for agents and for the quality gate.

POST         http://example.com/workflows                            # receptionist
GET          http://example.com/channelhandlers                      # observer
GET          http://example.com/channels                             # observer
GET          http://example.com/workflows                            # observer
GET          http://example.com/workflows/status                     # observer
GET          http://example.com/workflows/{workflow_id}/status       # observer
DELETE       http://example.com/workflows/{workflow_id}              # killswitch
GET          http://example.com/workflows/{workflow_id}/qualitygate  # qualitygate

GET, POST      http://example.com/agents                               # agents endpoints
DELETE         http://example.com/agents/{agent_id}
GET, POST, PUT http://example.com/agents/{agent_id}/files/{file_id}
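
For example, once a workflow has completed, you can query its quality gate status. This is a sketch, assuming a valid token in the TOKEN environment variable; replace {workflow_id} with the ID returned by the receptionist:

curl http://example.com/workflows/{workflow_id}/qualitygate \
  -H "Authorization: Bearer $TOKEN"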

The deploy.yaml manifest is still simple: it contains only one service, the orchestrator.

Compared to the previous example, the deployment exposes two new ports, 24368 (agent channel) and 12312 (quality gate), and routes to those ports are defined in the Ingress resource. External agents will use the agent channel route to register and interact with the orchestrator.

deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otf-orchestrator
  labels:
    app: otf-orchestrator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otf-orchestrator
  template:
    metadata:
      labels:
        app: otf-orchestrator
    spec:
      containers:
      - name: orchestrator
        image: opentestfactory/allinone:DOCKER_TAG
        imagePullPolicy: "Always"
        ports:
        - containerPort: 7774
        - containerPort: 7775
        - containerPort: 7776
        - containerPort: 38368
        - containerPort: 24368
        - containerPort: 12312
        resources:
          limits:
            memory: "2Gi"
          requests:
            memory: "512Mi"
            cpu: "0.2"
        volumeMounts:
        - name: opentf-trusted-key
          mountPath: /etc/squashtf
      volumes:
      - name: opentf-trusted-key
        secret:
          secretName: opentf-trusted-keys

---
apiVersion: v1
kind: Service
metadata:
  name: otf-orchestrator
spec:
  selector:
    app: otf-orchestrator
  ports:
    - protocol: TCP
      port: 7774
      name: receptionist
    - protocol: TCP
      port: 38368
      name: eventbus
    - protocol: TCP
      port: 7775
      name: observer
    - protocol: TCP
      port: 7776
      name: killswitch
    - protocol: TCP
      port: 24368
      name: agentchannel
    - protocol: TCP
      port: 12312
      name: qualitygate
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: otf-orchestrator
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /workflows
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: receptionist
      - path: /workflows/{id:[a-z0-9-]+}/status
        # the '{id:...}' notation is specific to Traefik, please adjust it
        # if using another ingress controller
        pathType: ImplementationSpecific
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /channels
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /channelhandlers
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /workflows/status
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: observer
      - path: /workflows/{id:[a-z0-9-]+}/qualitygate
        # the '{id:...}' notation is specific to Traefik, please adjust it
        # if using another ingress controller
        pathType: ImplementationSpecific
        backend:
          service:
            name: otf-orchestrator
            port:
              name: qualitygate
      - path: /workflows/{id:[a-z0-9-]+}
        # the '{id:...}' notation is specific to Traefik, please adjust it
        # if using another ingress controller
        pathType: ImplementationSpecific
        backend:
          service:
            name: otf-orchestrator
            port:
              name: killswitch
      - path: /agents
        pathType: Prefix
        backend:
          service:
            name: otf-orchestrator
            port:
              name: agentchannel
      - path: /agents/{id:[a-z0-9-]+}
        # the '{id:...}' notation is specific to Traefik, please adjust it
        # if using another ingress controller
        pathType: ImplementationSpecific
        backend:
          service:
            name: otf-orchestrator
            port:
              name: agentchannel
      - path: /agents/{id:[a-z0-9-]+}/files/{file:[a-z0-9-]+}
        # the '{id:...}' notation is specific to Traefik, please adjust it
        # if using another ingress controller
        pathType: ImplementationSpecific
        backend:
          service:
            name: otf-orchestrator
            port:
              name: agentchannel

You can then deploy this orchestrator on your cluster using:

kubectl apply -f deploy.yaml

The orchestrator will run in your default namespace. Add the --namespace mynamespace option if you want to deploy it in the mynamespace namespace (which must already exist).

Agents will then be able to register with this orchestrator. If you have access to a Windows machine with Robot Framework installed, you can start an agent on it so that it can be used to run Robot Framework tests.
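
If the agent is not already installed on the machine, it is distributed as a Python package (assuming a recent Python 3 is available on the machine):

pip install opentf-agent

You can then start the agent: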

Windows (cmd):

chcp 65001
opentf-agent ^
  --tags windows,robotframework ^
  --host http://example.com/ ^
  --port 80 ^
  --token %TOKEN%

PowerShell:

chcp 65001
opentf-agent `
  --tags windows,robotframework `
  --host http://example.com/ `
  --port 80 `
  --token $Env:TOKEN
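
To verify that the agent registered successfully, list the agents known to the orchestrator using the /agents endpoint exposed above:

curl http://example.com/agents -H "Authorization: Bearer $TOKEN"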

Next Steps

The orchestrator service you just deployed can be integrated into your CI/CD toolchain, running whenever code is pushed to your repository to help you spot errors and inconsistencies in your code. But this is only the beginning of what you can do. Ready to get started? Here are some helpful resources for taking your next steps with the OpenTestFactory orchestrator: