Deploy with kubectl
Kubernetes clusters are common nowadays. There are many ways to deploy applications on a Kubernetes cluster, and there is a wide choice of common middleware.
In the following examples, you will use plain manifest files and `kubectl` to handle them. You need to have access to an existing Kubernetes cluster, and you need to have enough privileges to create ConfigMaps, Services, Deployments, and Ingresses in an existing namespace.
Traefik (v2) will be used as the ingress controller, but there are few Traefik-specific instructions, so you can easily substitute another ingress controller.
Note
As a general rule, you should only deploy and expose the services you need. The OpenTestFactory orchestrator can interact with execution environments via SSH and agents. If you do not intend to interact with SSH-based execution environments, you can disable this feature. Similarly, if you do not intend to interact with agent-based execution environments, disable this feature and do not expose the associated services.
Please refer to the “Using the ‘allinone’ Image” section for a detailed view of how to use the ‘allinone’ image.
Preparation¶
The OpenTestFactory orchestrator uses JWT tokens to ensure proper authorization.
It can generate a unique token at initialization time, but this should not be used in a proper production deployment: if the orchestrator restarts, a new token will be generated and the previous one will no longer be valid.
The proper way is to generate your own token(s) and configure the orchestrator with the corresponding public key, so that it can verify that the tokens it receives are valid.
The deployment scripts in this guide expect an `opentf-trusted-keys` ConfigMap to be present in the target namespace.

If you already have a public/private key pair you want to use, you can create the ConfigMap using `kubectl`:
Bash:

```bash
kubectl create configmap opentf-trusted-keys \
    --from-file=./trusted_key.pub \
    --namespace my_namespace
```

Windows (cmd):

```batch
kubectl create configmap opentf-trusted-keys ^
    --from-file=./trusted_key.pub ^
    --namespace my_namespace
```

PowerShell:

```powershell
kubectl create configmap opentf-trusted-keys `
    --from-file=./trusted_key.pub `
    --namespace my_namespace
```
Your configmap can contain more than one public key, and you can adjust their names if you like:
Bash:

```bash
kubectl create configmap opentf-trusted-keys \
    --from-file=trusted_1.pub=./trusted_key.pub \
    --from-file=trusted_2.pub=./my_other_trusted_key.pub \
    --namespace my_namespace
```

Windows (cmd):

```batch
kubectl create configmap opentf-trusted-keys ^
    --from-file=trusted_1.pub=./trusted_key.pub ^
    --from-file=trusted_2.pub=./my_other_trusted_key.pub ^
    --namespace my_namespace
```

PowerShell:

```powershell
kubectl create configmap opentf-trusted-keys `
    --from-file=trusted_1.pub=./trusted_key.pub `
    --from-file=trusted_2.pub=./my_other_trusted_key.pub `
    --namespace my_namespace
```
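
You can check the resulting ConfigMap and the names of the keys it contains with `kubectl`:

```bash
kubectl get configmap opentf-trusted-keys -o yaml --namespace my_namespace
```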
If you do not have a key pair you want to use, the following commands will generate one for you in the current directory:

```bash
openssl genrsa -out trusted_key.pem 4096
openssl rsa -pubout -in trusted_key.pem -out trusted_key.pub
```
To generate your token(s), you can use `opentf-ctl`, a Python script, or any JWT token generator of your liking. The token must have an `iss` and a `sub` entry, and may contain additional entries.

Using `opentf-ctl`:

```bash
opentf-ctl generate token using trusted_key.pem
```
Using Python:

```python
import jwt  # pip install PyJWT[crypto]

ISSUER = 'your company'
USER = 'your name'

# Read the private key that will sign the token
with open('trusted_key.pem', 'r') as f:
    pem = f.read()

# Create a signed token
token = jwt.encode({'iss': ISSUER, 'sub': USER}, pem, algorithm='RS512')
print(token)
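
If you want to double-check that a generated token matches your public key, here is a quick sketch, also using PyJWT, with the file names from the examples above:

```python
import jwt  # pip install PyJWT[crypto]

# Read the public key the orchestrator will trust
with open('trusted_key.pub', 'r') as f:
    pub = f.read()

# Raises jwt.InvalidSignatureError if the token was not signed
# with the matching private key
payload = jwt.decode(token, pub, algorithms=['RS512'])
print(payload)  # {'iss': 'your company', 'sub': 'your name'}
```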
Assign this token value to an environment variable:

Bash:

```bash
export TOKEN=eyJ0eXAiOiJKV1QiLC...
```

Windows (cmd):

```batch
set TOKEN=eyJ0eXAiOiJKV1QiLC...
```

PowerShell:

```powershell
$Env:TOKEN = "eyJ0eXAiOiJKV1QiLC..."
```
The preparation steps are now complete. You are ready to deploy the orchestrator.
Minimal Deployment
This example is for a minimal deployment, with a single service, the orchestrator.
The core endpoints are each exposed on their own URLs. Please note that in this example the eventbus service is not exposed.
```text
POST   http://example.com/workflows                       # receptionist
GET    http://example.com/workflows/{workflow_id}/status  # observer
DELETE http://example.com/workflows/{workflow_id}         # killswitch
```
As this deployment does not define any execution environments, it will only be able to handle inception workflows. A rough sketch of such a workflow is shown below.
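
The exact workflow schema, in particular the `runs-on: inception` tag, is an assumption here; the “Learn OpenTestFactory Orchestrator” tutorial is the authoritative reference:

```yaml
apiVersion: opentestfactory.org/v1
kind: Workflow
metadata:
  name: hello
jobs:
  hello:
    # 'inception' jobs run on the orchestrator itself, so no
    # execution environment is needed (tag is an assumption)
    runs-on: inception
    steps:
    - run: echo "Hello, World!"
```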
The `deploy.yaml` manifest is simple: it contains only one service, the orchestrator.
`deploy.yaml` (84-line manifest, listing omitted)
You can then deploy this orchestrator on your cluster using:
```bash
kubectl apply -f deploy.yaml
```
The orchestrator will run in your default namespace. Add the `--namespace mynamespace` option if you want to deploy it in the `mynamespace` namespace (which must exist).
```console
$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
orchestrator-0   1/1     Running   0          5s
```
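
Once the pod is running, you can exercise the exposed endpoints. Here is a minimal sketch using curl and the token generated earlier; the `workflow.yaml` file and the `application/x-yaml` content type are assumptions:

```bash
# Submit a workflow to the receptionist (workflow.yaml is hypothetical)
curl -X POST http://example.com/workflows \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/x-yaml" \
     --data-binary @workflow.yaml

# Query the observer for its status, using the workflow ID
# returned by the previous call
curl -H "Authorization: Bearer $TOKEN" \
     http://example.com/workflows/{workflow_id}/status
```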
Basic Deployment with SSH Execution Environments
This second example builds on the previous one by adding SSH execution environments. It provides for a more realistic deployment.
An execution environment is a place where most steps are executed. In a typical deployment, you want to have at least one execution environment.
Assuming you have two such execution environments, `robotframework.example.com` and `junit.example.com`, accessible via SSH on port 2222, you can create a `my_pools.yaml` file with the following `ConfigMap` resource:
`pools.yaml` (20-line manifest, listing omitted)
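
The full listing is omitted above. As a rough sketch, such a ConfigMap might look like the following; the pool definition schema shown here (`pools`, `host`, `port`, `username`, `tags`) is an assumption, so please refer to the “Configuration” guide for the authoritative format:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pools   # name is an assumption
data:
  pools.yaml: |
    # Two SSH execution environments, one per pool
    # (field names are assumptions; check the Configuration guide)
    pools:
      robotframework:
      - host: robotframework.example.com
        port: 2222
        username: user
        tags: [linux, robotframework]
      junit:
      - host: junit.example.com
        port: 2222
        username: user
        tags: [linux, junit]
```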
Compared to the previous minimal example, new endpoints are exposed: the eventbus endpoints, more observer endpoints, and the localstore and insightcollector endpoints:
```text
GET, POST http://example.com/subscriptions                              # eventbus
POST      http://example.com/publications                               # eventbus
POST      http://example.com/workflows                                  # receptionist
GET       http://example.com/channels                                   # observer
GET       http://example.com/channelhandlers                            # observer
GET       http://example.com/namespaces                                 # observer
GET       http://example.com/version                                    # observer
GET       http://example.com/workflows                                  # observer
GET       http://example.com/workflows/status                           # observer
GET       http://example.com/workflows/{workflow_id}/status             # observer
GET       http://example.com/workflows/{workflow_id}/files/{attachment_id}  # localstore
POST      http://example.com/workflows/{workflow_id}/insights           # insightcollector
DELETE    http://example.com/workflows/{workflow_id}                    # killswitch
```
The `deploy.yaml` manifest is quite similar to the one you used in the first example, at least for its `Deployment` part, the only changes being (1) creating a volume using the aforementioned `ConfigMap` resource, (2) mounting it on `/app/pools`, (3) telling the orchestrator to use this pools definition, and (4) exposing the insightcollector, eventbus, and localstore ports.
The `Service` part now exposes the insightcollector, eventbus, and localstore endpoints.

An `IngressRoute` resource, which is specific to Traefik, is used in place of the `Ingress` resource you used in the previous example. It is handy as it allows routing incoming requests to a specific port based on the path and HTTP method, but it is not mandatory: a more generic routing mechanism is used in the Agents-aware Deployment section below.
Warning

Please note that the Traefik version used below is v2. If you are using a more recent version of Traefik, you will need to adjust the `IngressRoute` configuration. Please refer to the “Traefik v3 Migration Documentation” for more information.
`deploy.yaml` (118-line manifest, listing omitted)
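
As an illustration only, one of the `IngressRoute` rules might look like the following sketch; the service name (`orchestrator`) and the receptionist port (`7774`) are assumptions, to be matched against your actual `Service` definition:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: orchestrator
spec:
  entryPoints:
    - web
  routes:
    # Route workflow submissions to the receptionist
    # (service name and port are assumptions)
    - match: Host(`example.com`) && Path(`/workflows`) && Method(`POST`)
      kind: Rule
      services:
        - name: orchestrator
          port: 7774
```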
You can then deploy this orchestrator on your cluster using:
```bash
kubectl apply -f deploy.yaml -f pools.yaml
```
The orchestrator will run in your default namespace. Add the `--namespace mynamespace` option if you want to deploy it in the `mynamespace` namespace (which must exist).
```console
$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
orchestrator-0   1/1     Running   0          4s
```
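
Once the pod is running, you can check that the two SSH execution environments are visible as channels, using the observer endpoint listed above:

```bash
curl -H "Authorization: Bearer $TOKEN" http://example.com/channels
```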
Agents-aware Deployment, with Quality Gate
This example is for a simple deployment that allows agents to provide execution environments.
Agents are tools deployed on execution environments; they poll the orchestrator for work to do. They are useful if you cannot install an SSH server on your execution environments, or if the SSH server implementation available on their operating system has limitations that prevent proper execution (such as on Windows).
As before, the core endpoints are each exposed on their own URLs. Compared to the previous example, a couple of new endpoints are exposed, one for the quality gate service and a few for the agent handler service.
```text
POST           http://example.com/receptionist/workflows                   # receptionist
GET            http://example.com/observer/channelhandlers                 # observer
GET            http://example.com/observer/channels                        # observer
GET            http://example.com/observer/namespaces                      # observer
GET            http://example.com/observer/version                         # observer
GET            http://example.com/observer/workflows                       # observer
GET            http://example.com/observer/workflows/status                # observer
GET            http://example.com/observer/workflows/{workflow_id}/status  # observer
GET            http://example.com/localstore/workflows/{workflow_id}/files/{attachment_id}  # localstore
POST           http://example.com/workflows/{workflow_id}/insights         # insightcollector
DELETE         http://example.com/killswitch/workflows/{workflow_id}       # killswitch
GET            http://example.com/qualitygate/workflows/{workflow_id}/qualitygate  # qualitygate
GET, POST      http://example.com/agentchannel/agents                      # agents endpoints
DELETE         http://example.com/agentchannel/agents/{agent_id}           # agents endpoints
GET, POST, PUT http://example.com/agentchannel/agents/{agent_id}/files/{file_id}   # agents endpoints
```
The `deploy.yaml` manifest is still simple: it still contains only one service, the orchestrator.
Compared to the previous example, the deployment exposes two new ports, `12312` and `24368`, and routes to those ports are defined in an `Ingress` resource. External agents will use this route to register and interact with the orchestrator.
Instead of using an `IngressRoute` manifest, as in the previous example, it uses a series of `Ingress` resources, and the exposed routes have a prefix. This is a common way to work around the lack of HTTP method filtering in some ingress controllers.
Tip

The principle is simple: exposed routes have a prefix that is used to disambiguate the routes, and the `Ingress` resources remove the prefixes before sending the requests to the appropriate services.

The nginx ingress controller offers similar features using the `nginx.ingress.kubernetes.io/use-regex: "true"` and `nginx.ingress.kubernetes.io/rewrite-target: /$2` annotations. Please refer to the “Ingress-Nginx Controller Documentation” for more information if you are using nginx.
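
As an illustration, a prefix-stripping `Ingress` for the receptionist on nginx might look like the following sketch; the service name (`orchestrator`) and port (`7774`) are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orchestrator-receptionist
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          # '/receptionist/workflows' is rewritten to '/workflows'
          - path: /receptionist(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: orchestrator   # assumption
                port:
                  number: 7774       # assumption
```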
`deploy.yaml` (253-line manifest, listing omitted)
You can then deploy this orchestrator on your cluster using:
```bash
kubectl apply -f deploy.yaml
```
The orchestrator will run in your default namespace. Add the `--namespace mynamespace` option if you want to deploy it in the `mynamespace` namespace (which must exist).
Agents will then be able to register to this orchestrator. If you have access to a Windows machine with Robot Framework installed, you can start an agent on it so that it can be used to run Robot Framework tests:
Windows (cmd):

```batch
chcp 65001
opentf-agent ^
    --tags windows,robotframework ^
    --host http://example.com/agentchannel ^
    --port 80 ^
    --token %TOKEN%
```

PowerShell:

```powershell
chcp 65001
opentf-agent `
    --tags windows,robotframework `
    --host http://example.com/agentchannel `
    --port 80 `
    --token $Env:TOKEN
```
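
You can then confirm that the agent registered successfully by querying the agent handler endpoint listed above:

```bash
curl -H "Authorization: Bearer $TOKEN" http://example.com/agentchannel/agents
```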
Next Steps
The orchestrator service you just deployed can be integrated into your CI/CD tool chain to run any time code is pushed to your repository, helping you spot errors and inconsistencies in your code. But this is only the beginning of what you can do with the OpenTestFactory orchestrator. Ready to get started? Here are some helpful resources for taking your next steps:
- “Using the ‘allinone’ Image” for a detailed view of how to use the ‘allinone’ image
- “Learn OpenTestFactory Orchestrator” for an in-depth tutorial
- “Configuration” to further configure your orchestrator instances
- “Agents” for more information on agents and execution environments
- “opentf-ctl” for a tool you can use to explore and interact with orchestrator instances