Workflow syntax for OpenTestFactory Orchestrator¶
Note
This workflow syntax closely matches GitHub Actions workflow syntax. If you are new to GitHub Actions and want to learn more, see “Workflow syntax for GitHub Actions.”
A workflow is a configurable automated process made up of one or more jobs. You must create a YAML file to define your workflow configuration.
About YAML Syntax for Workflows¶
Workflow files must use YAML syntax and may have a .yml or .yaml file extension. If you are new to YAML and want to learn more, see “Learn YAML in Y minutes.”
You may store workflow files anywhere in your repository.
Usage Limits¶
There may be sizing limits on OpenTestFactory orchestrator workflows. The following are the minimum that must be supported.
- Jobs - There can be at least 1024 jobs per workflow (including generated jobs).
- Steps - There can be at least 1024 steps per job (including generated steps).
- Job matrix - A job matrix may generate at least 256 jobs per workflow run.
- Concurrent jobs - There can be at least 5 concurrent jobs per workflow run.
An implementation may use lower default limits if it provides a way to extend them so that they reach or exceed the above minimums.
Mandatory and Optional Sections¶
A workflow must have at least a metadata section and a jobs section. It may have additional variables, resources, hooks, defaults, and outputs sections. Those additional sections may be empty or omitted if not used.
metadata:
  name: mona
variables:
  ...
resources:
  ...
hooks:
  ...
defaults:
  ...
jobs:
  ...
outputs:
  ...
metadata¶
metadata.name¶
Required The name of the workflow. This name can be used by tools to provide a human-friendly reference.
metadata.namespace¶
The name of the namespace the workflow is part of. If unspecified, the workflow will be part of the default namespace.
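A minimal sketch combining both metadata entries (the names nightly-regression and my-team are placeholders):

```yaml
metadata:
  name: nightly-regression
  # Without this entry, the workflow would be part of the "default" namespace
  namespace: my-team
```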
variables¶
A map of environment variables that are available to all jobs and steps in the workflow. You can also set environment variables that are only available to a job or step. For more information, see jobs.<job_id>.variables and jobs.<job_id>.steps.variables.
Variables in the variables map cannot be defined in terms of other variables in the map.
When more than one environment variable is defined with the same name, the orchestrator uses the most specific environment variable. For example, an environment variable defined in a step will override job and workflow variables with the same name, while the step executes. An environment variable defined for a job will override a workflow variable with the same name, while the job executes.
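For example, under the precedence rules above, the step in the following sketch prints the job-level value, not the workflow-level one (job and variable names are illustrative):

```yaml
variables:
  TARGET: production        # workflow level
jobs:
  deploy:
    runs-on: linux
    variables:
      TARGET: staging       # job level, overrides the workflow value
    steps:
      - run: echo "$TARGET" # -> staging
```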
Variables specified in the workflow file are by default subject to the execution environment shell’s expansions and substitution rules. Use the verbatim: true option or the target shell’s escape conventions if you need the literal content.
Examples of variables
This first example defines a simple SERVER variable, which is subject to the execution environment shell’s expansions and substitution rules.
variables:
  SERVER: production
echo "$SERVER" # -> production
This second example defines three variables that are subject to the execution environment shell’s expansions and substitution rules, and a fourth one, NOT_INTERPRETED, which is not.
INTERPRETED_1: The current directory is `pwd`
INTERPRETED_2:
  value: The current directory is `pwd`
  verbatim: false
INTERPRETED_3:
  value: The current directory is `pwd`
NOT_INTERPRETED:
  value: Use `pwd` to get the current directory
  verbatim: true
echo "$INTERPRETED_1" # -> The current directory is /home/user
echo "$INTERPRETED_2" # -> The current directory is /home/user
echo "$INTERPRETED_3" # -> The current directory is /home/user
echo "$NOT_INTERPRETED" # -> Use `pwd` to get the current directory
resources¶
A map of resources that are available to all jobs and steps in the workflow. You can define three kinds of resources: testmanagers, repositories, and files.
testmanagers and repositories resources are a way to reuse resources in your workflow. files resources allow providing local or external files to your workflow.
resources.repositories¶
An array of source code repository definitions. Each repository definition is a map and must have the name, type, repository, and endpoint entries.
Example: Defining a repository resource
This example defines a myrepo repository:
resources:
  repositories:
    - name: myrepo
      type: bitbucket
      repository: example/my-example-repo
      endpoint: https://bitbucket.org
resources.repositories[*].name¶
Required The name of the repository, which is used to refer to the definition. It must be unique among the repository definitions.
resources.repositories[*].type¶
Required The type of the repository. git, github, gitlab, and bitbucket must be supported.
resources.repositories[*].repository¶
Required The reference of the repository.
resources.repositories[*].endpoint¶
Required The endpoint to use to access the source code manager.
resources.files¶
An array of externally-provided files.
Example: Defining a file resource
This example defines one file, which must be provided when launching the workflow:
resources:
  files:
    - dataset
The following function will put a copy of the provided file in the current execution environment so that the following steps can use it:
- uses: actions/put-file@v1
  with:
    file: dataset
    path: dataset.xml
hooks¶
An array of hook definitions that apply to all jobs and steps in the workflow, including generated ones.
Each hook definition is a map. It specifies the events that trigger it, and the actions it performs when triggered. It may also contain an if conditional that further limits its scope.
Hooks specified in a workflow complement the hooks defined by installed channel handlers and provider plugins. Their before actions are performed before the ones defined by the channel handlers or provider plugins, and their after actions are performed after the ones defined by the channel handlers or provider plugins.
before_steps from workflow-defined hook 1
before_steps from workflow-defined hook 2
before_steps from channel-handler-or-provider-defined hook
steps from event
after_steps from channel-handler-or-provider-defined hook
after_steps from workflow-defined hook 2
after_steps from workflow-defined hook 1
hooks[*].name¶
Required The name of the hook, as displayed by the orchestrator.
Example of hooks[*].name
hooks:
  - name: my-hook
hooks[*].events¶
Required A non-empty array of events that trigger the hook.
Events related to functions are described by their categoryPrefix, category, and categoryVersion labels. Events related to job setup and teardown are described by the channel specifier.
You can use the _ placeholder to match any category or categoryPrefix if you have not specified the other one. You can specify a given categoryVersion, but this must not be the only specifier. You must use the setup or teardown value for the channel specifier. Placeholders are not allowed in this context.
Examples: Declaring trigger events
Note
The following examples are not exhaustive. They are meant to illustrate the syntax. They all omit the before and after sections.
This hook will be triggered by my-plugin/foo, my-plugin/bar, and my-other-plugin/baz:
hooks:
  - name: my-fishy-hook
    events:
      - categoryPrefix: my-plugin
      - categoryPrefix: my-other-plugin
        category: _
All providers will trigger this hook whenever they are used (either explicitly in the workflow, or in generated jobs or steps):
hooks:
  - name: my-global-hook
    events:
      - category: _
This hook will be triggered by all job setups (either explicit jobs or generated jobs):
hooks:
  - name: my-job-hook
    events:
      - channel: setup
hooks[*].before¶
Required if hooks[*].after not specified The steps to include before performing the triggering task.
For hooks triggered by job setups, a specific use-workspace: {path} step is allowed. It forces the workspace path. If multiple hooks are triggered by the event, the last specified value is used.
For hooks triggered by job teardowns, a specific keep-workspace: {boolean} step is allowed. It prevents the workspace cleanup if the boolean evaluates to true. If multiple hooks are triggered by the event, the last applied value is used.
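As an illustrative sketch, a teardown-triggered hook could use the keep-workspace step to preserve the workspace for post-mortem inspection (the hook name is hypothetical):

```yaml
hooks:
  - name: preserve-workspace
    events:
      - channel: teardown
    before:
      - run: echo "keeping workspace for post-mortem inspection"
      - keep-workspace: true
```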
Example: Defining a hook that applies for all functions
This hook will be triggered before any function is called, and will echo foo!:
hooks:
  - name: foo
    events:
      - category: _
    before:
      - run: echo "foo!"
Example: Defining a hook that applies for all jobs
This hook will be triggered before any job setup, and will force the workspace to /tmp:
hooks:
  - name: my-job-hook
    events:
      - channel: setup
    before:
      - run: echo "setting workspace to /tmp"
      - use-workspace: /tmp
hooks[*].after¶
Required if hooks[*].before not specified The steps to include after performing the triggering task.
Example: Creating a hook that wraps a my-plugin/my-task function
hooks:
  - name: my before/after hook
    events:
      - categoryPrefix: my-plugin
        category: my-task
    before:
      - run: echo before my-plugin/my-task
    after:
      - run: echo after my-plugin/my-task
hooks[*].if¶
You can use the if conditional to prevent a hook from being triggered unless a condition is met. You can use any supported context and expression to create a conditional.
When you use expressions in an if conditional, you may omit the expression syntax (${{ }}) because the orchestrator automatically evaluates the if conditional as an expression. For more information, see “Expressions.”
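For example, assuming a TARGET workflow variable is available in the conditional, a hook could be limited to production runs (the hook and variable names are illustrative):

```yaml
hooks:
  - name: production-only-hook
    if: ${{ variables.TARGET == 'production' }}
    events:
      - category: _
    before:
      - run: echo "production run detected"
```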
defaults¶
A map of default settings that will apply to all jobs in the workflow. You can also set default settings that are only available to a job. For more information, see jobs.<job_id>.defaults.
When more than one default setting is defined with the same name, the orchestrator uses the most specific default setting. For example, a default setting defined in a job will override a default setting that has the same name defined in a workflow.
This keyword can reference several contexts. For more information, see “Contexts.”
defaults.run¶
You can provide default shell and working-directory options for all run steps in a workflow. You can also set default settings for run that are only available to a job. For more information, see jobs.<job_id>.defaults.run.
When more than one default setting is defined with the same name, the orchestrator uses the most specific default setting. For example, a default setting defined in a job will override a default setting that has the same name defined in a workflow.
Example: Set the default shell and working directory
defaults:
  run:
    shell: bash
    working-directory: ./scripts
strategy¶
Use strategy to configure how jobs run in the workflow. The strategy keyword is a map that contains the following key.
strategy.max-parallel¶
By default the orchestrator runs all jobs in parallel. You can use the strategy.max-parallel keyword to configure the maximum number of jobs that can run at the same time.
The value of max-parallel can be an expression. Allowed expression contexts include opentf, inputs, and variables. For more information about expressions, see “Expressions.”
An implementation may provide its own default value and maximum value for this keyword. It is not an error to provide a value that exceeds that maximum value, but the orchestrator will use its maximum value instead.
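A minimal sketch limiting a workflow to two concurrent jobs:

```yaml
strategy:
  # At most two jobs run at the same time; the value could also be an
  # expression such as ${{ variables.MAX_JOBS }}
  max-parallel: 2
```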
timeout-minutes¶
The maximum number of minutes to let a workflow run before the orchestrator automatically cancels it. Default: 360.
The value of timeout-minutes can be an expression. Allowed expression contexts include opentf, inputs, and variables. For more information about expressions, see “Expressions.”
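For example, to cancel a workflow run after 30 minutes instead of the default 360:

```yaml
# Workflow-level setting
timeout-minutes: 30
```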
jobs¶
A workflow run is made up of one or more jobs, which run in parallel by default. To run jobs sequentially, you can define dependencies on other jobs using the jobs.<job_id>.needs keyword.
A job contains either a collection of sub-jobs or a sequence of tasks called steps, but cannot contain both.
Jobs that contain a collection of sub-jobs have a jobs.<job_id>.generator key, while jobs that contain a sequence of steps have a jobs.<job_id>.steps key.
Each job runs in an execution environment specified by runs-on.
You can run an unlimited number of jobs as long as you are within the workflow usage limits. For more information, see “Usage limits.”
jobs.<job_id>¶
Use jobs.<job_id> to give your job a unique identifier. The key job_id is a string and its value is a map of the job’s configuration data. You must replace <job_id> with a string that is unique to the jobs object. The <job_id> must start with a letter or _ and contain only alphanumeric characters, -, or _.
Example: Creating jobs
In this example, two jobs have been created, and their job_id values are my_first_job and my_second_job.
jobs:
  my_first_job:
    name: My first job
  my_second_job:
    name: My second job
jobs.<job_id>.name¶
The name of the job displayed by the orchestrator.
jobs.<job_id>.needs¶
Use jobs.<job_id>.needs to identify any jobs that must complete successfully before this job will run. It can be a string or an array of strings. If a job fails or is skipped, all jobs that need it are skipped unless the jobs use a conditional expression that causes the job to continue. If a run contains a series of jobs that need each other, a failure or skip applies to all jobs in the dependency chain from the point of failure or skip onward. If you would like a job to run even if a job it is dependent on did not succeed, use the always() conditional expression in jobs.<job_id>.if.
Example: Requiring successful dependent jobs
jobs:
  job1:
  job2:
    needs: job1
  job3:
    needs: [job1, job2]
In this example, job1 must complete successfully before job2 begins, and job3 waits for both job1 and job2 to complete.
The jobs in this example run sequentially:
job1
job2
job3
Example: Not requiring successful dependent jobs
jobs:
  job1:
  job2:
    needs: job1
  job3:
    if: ${{ always() }}
    needs: [job1, job2]
In this example, job3 uses the always() conditional expression so that it always runs after job1 and job2 have completed, regardless of whether they were successful. For more information, see “Expressions.”
jobs.<job_id>.if¶
You can use the jobs.<job_id>.if conditional to prevent a job from running unless a condition is met. You can use any supported context and expression to create a conditional. For more information on which contexts are supported in this key, see “Contexts.”
When you use expressions in an if conditional, you may omit the ${{ }} expression syntax because the orchestrator automatically evaluates the if conditional as an expression. However, this rule does not apply everywhere. You must use the ${{ }} expression syntax or escape with '', "", or () when the expression starts with !, since ! is reserved notation in YAML format.
Note
Using the ${{ }} expression syntax turns the contents into a string, and strings are truthy. For example, if: true && ${{ false }} will evaluate to true. For more information, see “Expressions.”
Example: Only run job for specific condition
This example uses if to control when the production-deploy job can run. It will only run if the TARGET environment variable is set to production. Otherwise the job will be marked as skipped.
metadata:
  name: example workflow
jobs:
  production-deploy:
    if: ${{ variables.TARGET == 'production' }}
    runs-on: linux
    steps:
      - ...
jobs.<job_id>.runs-on¶
Use jobs.<job_id>.runs-on to define the execution environment to run the job on.
You can provide runs-on as:
- A single string
- A single variable containing a string
- An array of strings, variables containing strings, or a combination of both
If you do not specify runs-on, the job will run on any available execution environment (excluding the inception execution environment, which is only used if explicitly specified).
If you specify an array of strings or variables, your workflow will execute on any execution environment that matches all of the specified runs-on values. For example, here the job will only run on an execution environment that has the tags linux, x64, and gpu:
runs-on: [linux, x64, gpu]
You can mix strings and variables in an array. For example:
variables:
  CHOSEN_OS: linux
jobs:
  test:
    runs-on: [robotframework, "${{ variables.CHOSEN_OS }}"]
    steps:
      - run: echo Hello world!
Note
Quotation marks are not required around simple strings like linux, but they are required for expressions like "${{ variables.CHOSEN_OS }}".
Example: Specifying an operating system
runs-on: linux
or
runs-on: [linux, robotframework]
For more information, see Execution environments.
jobs.<job_id>.outputs¶
You can use jobs.<job_id>.outputs to create a map of outputs for a job. Job outputs are available to all downstream jobs that depend on this job. For more information on defining job dependencies, see jobs.<job_id>.needs.
Job outputs are Unicode strings, and job outputs containing expressions are evaluated at the end of each job. To use job outputs in a dependent job, you can use the needs context. For more information, see “Contexts.”
Example: Defining outputs for a job
jobs:
  job1:
    runs-on: linux
    # Map a step output to a job output
    outputs:
      output1: ${{ steps.step1.outputs.test }}
      output2: ${{ steps.step2.outputs.test }}
    steps:
      - id: step1
        run: echo "::set-output name=test::hello"
      - id: step2
        run: echo "::set-output name=test::world"
  job2:
    runs-on: linux
    needs: job1
    steps:
      - run: echo ${{ needs.job1.outputs.output1 }} ${{ needs.job1.outputs.output2 }}
jobs.<job_id>.variables¶
A map of environment variables that are available for all steps in the job. You can also set environment variables for the entire workflow or an individual step. For more information, see variables and jobs.<job_id>.steps.variables.
When more than one environment variable is defined with the same name, the orchestrator uses the most specific environment variable. For example, an environment variable defined in a step will override job and workflow variables with the same name, while the step executes. A variable defined for a job will override a workflow variable with the same name, while the job executes.
Variables specified in the workflow file are by default subject to the execution environment shell’s expansions and substitution rules. Use the verbatim: true option or the target shell’s escape conventions if you need the literal content.
Example of jobs.<job_id>.variables
jobs:
  job1:
    variables:
      FIRST_NAME: Mona
      LAST_NAME:
        value: O'Hare
        verbatim: true
jobs.<job_id>.defaults¶
Use jobs.<job_id>.defaults to create a map of default settings that will apply to all steps in the job. You can also set default settings for the entire workflow. For more information, see defaults.
When more than one default setting is defined with the same name, the orchestrator uses the most specific default setting. For example, a default setting defined in a job will override a default setting that has the same name defined in a workflow.
jobs.<job_id>.defaults.run¶
Use jobs.<job_id>.defaults.run to provide default shell and working-directory options for all run steps in the job. You can also set default settings for run for the entire workflow. For more information, see defaults.run.
These can be overridden at the jobs.<job_id>.steps[*].run level.
When more than one default setting is defined with the same name, the orchestrator uses the most specific default setting. For example, a default setting defined in a job will override a default setting that has the same name defined in a workflow.
jobs.<job_id>.defaults.run.shell¶
Use shell to define the shell for a step. This keyword can reference several contexts. For more information, see “Contexts.”
jobs.<job_id>.defaults.run.working-directory¶
Use working-directory to define the working-directory for the shell for a step. This keyword can reference several contexts. For more information, see “Contexts.”
Tip
Ensure the working-directory you assign exists on the execution environment before you run your shell in it.
When more than one default setting is defined with the same name, the orchestrator uses the most specific default setting. For example, a default setting defined in a job will override a default setting that has the same name defined in a workflow.
Example: Setting default run step options for a job
jobs:
  job1:
    runs-on: linux
    defaults:
      run:
        shell: bash
        working-directory: scripts
jobs.<job_id>.generator¶
Selects a generator to run. A generator creates a series of jobs that run as sub-jobs of the current job.
We strongly recommend that you include the version of the generator you are using. If you do not specify a version, it could break your workflow or cause unexpected behavior when the generator owner publishes an update.
- Using the specific major generator version allows you to receive critical fixes and security patches while still maintaining compatibility. It also assures that your workflow should still work.
Some generators require inputs that you must set using the with keyword. Review the generator’s README file to determine the inputs required.
Example: Using versioned generators
jobs:
  my_job:
    # Reference the major version of a release
    generator: foobar/hello_world@v1
  my_second_job:
    # Reference a minor version of a release
    generator: foobar/hello_world@v1.2
jobs.<job_id>.with¶
When a job is used to call a generator, you can use with to provide a map of inputs that are passed to the generator.
Any inputs that you pass must match the input specifications defined in the called generator.
Unlike jobs.<job_id>.steps[*].with, the inputs you pass with jobs.<job_id>.with are not available as environment variables in the called generator. Instead, you can reference the inputs by using the inputs context.
Example of jobs.<job_id>.with
Defines the three input parameters (first_name, middle_name, and last_name) defined by the hello_world generator.
jobs:
  my_job:
    generator: foobar/hello_world@v1
    with:
      first_name: Mona
      middle_name: The
      last_name: Octocat
jobs.<job_id>.with.<input_id>¶
A pair consisting of a string identifier for the input and the value of the input. The identifier must match the name of an input defined by the called generator plugin. The data type of the value must match the type of the value defined by inputs.<input_id>.type in the called generator.
Allowed expression contexts: opentf, inputs, variables, needs, and resources.
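As a sketch of these expression contexts, a generator input could be computed from a workflow variable (the generator name reuses the hello_world example above; the FIRST_NAME variable is hypothetical):

```yaml
jobs:
  my_job:
    generator: foobar/hello_world@v1
    with:
      # Evaluated from the variables context before the generator runs
      first_name: ${{ variables.FIRST_NAME }}
      last_name: Octocat
```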
jobs.<job_id>.steps¶
A job can contain a sequence of tasks called steps. Steps can run commands or functions. Not all steps run functions, but all functions run as a step. Each step runs in its own process in the execution environment and has access to the workspace and file system. Because steps run in their own process, changes to environment variables are not preserved between steps. The orchestrator provides built-in steps to set up and complete a job.
You can run an unlimited number of steps as long as you are within the job usage limits. For more information, see “Usage limits.”
Example of jobs.<job_id>.steps
metadata:
  name: Greeting from Mona
jobs:
  my-job:
    name: My Job
    runs-on: linux
    steps:
      - name: Print a greeting
        variables:
          MY_VAR: Hi there! My name is
          FIRST_NAME: Mona
          MIDDLE_NAME: The
          LAST_NAME: Octocat
        run: |
          echo $MY_VAR $FIRST_NAME $MIDDLE_NAME $LAST_NAME.
jobs.<job_id>.steps[*].id¶
A unique identifier for the step. You can use the id to reference the step in contexts. For more information, see “Contexts.”
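For example, a step id lets later expressions reference that step through the steps context (this sketch reuses the set-output notation shown in the job outputs example):

```yaml
steps:
  - id: step1
    run: echo "::set-output name=test::hello"
  # The output set by step1 is referenced via its id
  - run: echo ${{ steps.step1.outputs.test }}
```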
jobs.<job_id>.steps[*].if¶
You can use the if conditional to prevent a step from running unless a condition is met. You can use any supported context and expression to create a conditional. For more information on which contexts are supported in this key, see “Contexts.”
When you use expressions in an if conditional, you may omit the ${{ }} expression syntax because the orchestrator automatically evaluates the if conditional as an expression. However, this rule does not apply everywhere. You must use the ${{ }} expression syntax or escape with '', "", or () when the expression starts with !, since ! is reserved notation in YAML format.
Using the ${{ }} expression syntax turns the contents into a string, and strings are truthy. For example, if: true && ${{ false }} will evaluate to true. For more information, see “Expressions.”
Example: Using contexts
This step only runs when the execution environment OS is Windows.
steps:
  - name: My first step
    if: ${{ runner.os == 'windows' }}
    run: echo This step is running on windows.
Example: Using status check functions
The “My backup step” step only runs when the previous step of a job fails. For more information, see “Expressions.”
steps:
  - name: My first step
    uses: monacorp/function-name@v1
  - name: My backup step
    if: ${{ failure() }}
    uses: actions/heroku@v2
jobs.<job_id>.steps[*].name¶
A name for your step to display on the orchestrator.
jobs.<job_id>.steps[*].uses¶
Selects a function to run as part of a step in your job. A function is a reusable unit of code.
We strongly recommend that you include the version of the function you are using. If you do not specify a version, it could break your workflow or cause unexpected behavior when the provider owner publishes an update.
- Using the specific major function version allows you to receive critical fixes and security patches while still maintaining compatibility. It also assures that your workflow should still work.
Functions may require inputs that you must set using the with keyword. Review the provider’s README file to determine the inputs required.
Providers are plugins that add functions. For more details, see the “Providers” chapter.
Example: Using versioned functions
steps:
# Reference the major version of a release
- uses: actions/setup-node@v1
# Reference a minor version of a release
- uses: actions/setup-node@v1.2
jobs.<job_id>.steps[*].run¶
Runs command-line programs that do not exceed 21,000 characters using the execution environment’s shell. If you do not provide a name, the step name will default to the text specified in the run command.
Commands run using non-login shells by default. You can choose a different shell and customize the shell used to run commands. For more information, see “jobs.<job_id>.steps[*].shell.”
Each run keyword represents a new process and shell in the execution environment. When you provide multi-line commands, each line runs in the same shell. For example:
- A single-line command:
  - name: Install Dependencies
    run: npm install
- A multi-line command:
  - name: Clean install dependencies and build
    run: |
      npm ci
      npm run build
jobs.<job_id>.steps[*].working-directory¶
Using the working-directory keyword, you can specify the working directory of where to run the command.
- name: Clean temp directory
  run: rm -rf *
  working-directory: ./temp
Alternatively, you can specify a default working directory for all run steps in a job, or for all run steps in the entire workflow. For more information, see “defaults.run.working-directory” and “jobs.<job_id>.defaults.run.working-directory.”
You can also use a run step to run a script. For more information, see “Essential features.”
jobs.<job_id>.steps[*].shell¶
You can override the default shell settings in the execution environment using the shell keyword. You can use built-in shell keywords, or you can define a custom set of shell options. The shell command that is run internally executes a temporary file that contains the commands specified in the run keyword.
Supported platform | shell parameter | Description | Command run internally
--- | --- | --- | ---
All | bash | The default shell on non-Windows platforms. When specifying a bash shell on Windows, the bash shell included with Git for Windows is used. | bash --noprofile --norc -eo pipefail {0}
All | python | Executes the python command. | python {0}
All | pwsh | The PowerShell Core. The orchestrator appends the extension .ps1 to your script name. | pwsh -command ". '{0}'"
Windows | cmd | The default shell on Windows. The orchestrator appends the extension .cmd to your script name and substitutes for {0}. | %ComSpec% /D /E:ON /V:OFF /S /C "CALL "{0}""
Windows | powershell | The PowerShell Desktop. The orchestrator appends the extension .ps1 to your script name. | powershell -command ". '{0}'"
For pwsh and python, the command used must be installed in the execution environment. For powershell, the execution policy must allow script execution on the execution environment. See “about_Execution_Policies” for more information.
Alternatively, you can specify a default shell for all run steps in a job, or for all run steps in the entire workflow. For more information, see “defaults.run.shell” and “jobs.<job_id>.defaults.run.shell.”
Example: Running a script using bash
steps:
  - name: Display the path
    shell: bash
    run: echo $PATH
Example: Running a script using Windows cmd
steps:
  - name: Display the path
    shell: cmd
    run: echo %PATH%
Example: Running a script using PowerShell Core
steps:
  - name: Display the path
    shell: pwsh
    run: echo ${env:PATH}
Example: Using PowerShell Desktop to run a command
steps:
  - name: Display the path
    shell: powershell
    run: echo ${env:PATH}
Example: Running a python script
steps:
  - name: Display the path
    run: |
      import os
      print(os.environ['PATH'])
    shell: python
Custom shell¶
You can set the shell value to a template string using command [options...] {0} [more_options...]. The orchestrator interprets the first white-space–delimited word of the string as the command and inserts the file name for the temporary script at {0}.
For example:
steps:
  - name: Display the environment variables and their values
    run: |
      print %ENV
    shell: perl {0}
The command used, perl in this example, must be installed in the execution environment.
Exit codes and error action preference¶
For built-in shell keywords, we provide the following defaults that are executed by the execution environments. You should use these guidelines when running shell scripts.
- bash:
  - Fail-fast behavior using set -eo pipefail: Default for bash and built-in shell. It is also the default when you do not provide an option on non-Windows platforms.
  - You can opt out of fail-fast and take full control by providing a template string to the shell options. For example, bash {0}.
  - sh-like shells exit with the exit code of the last command executed in a script, which is also the default behavior for functions. The execution environment will report the status of the step as fail/succeed based on this exit code.
- powershell/pwsh:
  - Fail-fast behavior when possible. For pwsh and powershell built-in shell, we will prepend $ErrorActionPreference = 'stop' to script contents.
  - We append if ((Test-Path -LiteralPath variable:\LASTEXITCODE)) { exit $LASTEXITCODE } to powershell scripts so that function statuses reflect the script’s last exit code.
  - Users can always opt out by not using the built-in shell, and providing a custom shell option like pwsh -File {0} or powershell -Command "& '{0}'", depending on need.
- cmd:
  - There does not seem to be a way to fully opt into fail-fast behavior other than writing your script to check each error code and respond accordingly. Because we can’t provide that behavior by default, you need to write this behavior into your script.
  - cmd.exe will exit with the error level of the last program it executed, and it will return the error code to the execution environment. This behavior is internally consistent with the previous sh and pwsh default behavior and is the cmd.exe default, so this behavior remains intact.
jobs.<job_id>.steps[*].with¶
A map of the input parameters defined by the function. Each input parameter is a key/value pair. Input parameters are set as environment variables. The variable is prefixed with INPUT_ and converted to upper case.
Example of jobs.<job_id>.steps[*].with
Defines the three input parameters (first_name, middle_name, and last_name) defined by the hello_world function. These input variables will be accessible to the hello_world function as INPUT_FIRST_NAME, INPUT_MIDDLE_NAME, and INPUT_LAST_NAME environment variables.
jobs:
  my_first_job:
    steps:
      - name: My first step
        uses: actions/hello_world@master
        with:
          first_name: Mona
          middle_name: The
          last_name: Octocat
jobs.<job_id>.steps[*].variables¶
Sets environment variables for steps to use in the execution environment. You can also set environment variables for the entire workflow or a job. For more information, see variables and jobs.<job_id>.variables.
When more than one environment variable is defined with the same name, the orchestrator uses the most specific environment variable. For example, an environment variable defined in a step will override job and workflow variables with the same name, while the step executes. A variable defined for a job will override a workflow variable with the same name, while the job executes.
Variables specified in the workflow file are by default subject to the execution environment shell’s expansions and substitution rules. Use the verbatim: true option or the target shell’s escape conventions if you need the literal content.
Example of jobs.<job_id>.steps[*].variables
steps:
  - name: My first function
    variables:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      FIRST_NAME: Mona
      LAST_NAME:
        value: O'ctocat
        verbatim: true
jobs.<job_id>.steps[*].continue-on-error¶
Prevents a job from failing when a step fails. Set to true to allow a job to pass when this step fails.
The value of continue-on-error can be an expression. Allowed expression contexts include opentf, inputs, variables, needs, job, runner, and steps. For more information about expressions, see “Expressions.”
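For example, to let a job pass even when a flaky step fails (the step names and script are illustrative):

```yaml
steps:
  - name: Flaky smoke test
    continue-on-error: true
    run: ./smoke-test.sh
  - name: This step runs even if the previous one failed
    run: echo done
```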
jobs.<job_id>.steps[*].timeout-minutes¶
The maximum number of minutes to run the step before killing the process.
The value of timeout-minutes can be an expression. Allowed expression contexts include opentf, inputs, variables, needs, job, runner, and steps. For more information about expressions, see “Expressions.”
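For example, to kill a step’s process after five minutes (the script name is illustrative):

```yaml
steps:
  - name: Long-running test suite
    timeout-minutes: 5
    run: ./run-tests.sh
```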
jobs.<job_id>.timeout-minutes¶
The maximum number of minutes to let a job run before the orchestrator automatically cancels it. Default: 360.
The value of timeout-minutes can be an expression. Allowed expression contexts include opentf, inputs, variables, and needs. For more information about expressions, see “Expressions.”
jobs.<job_id>.continue-on-error¶
Prevents a workflow run from failing when a job fails. Set to true to allow a workflow run to pass when this job fails.
The value of continue-on-error can be an expression. Allowed expression contexts include opentf, inputs, variables, and needs. For more information about expressions, see “Expressions.”
Example: Preventing a specific failing job from failing a workflow run
You can allow specific jobs to fail without failing the workflow run. For example, if you want to allow an experimental job to fail without failing the workflow run when the environment variable EXPERIMENTAL is set to true:
job-1:
  runs-on: linux
  continue-on-error: ${{ variables.EXPERIMENTAL == 'true' }}
outputs¶
A map of outputs for a called workflow. Called workflow outputs are available to all callers. Each output has an identifier, an optional description, and a value. The value must be set to the value of an output from a job within the called workflow.
In the example below, two outputs are defined for this reusable workflow: workflow_output1 and workflow_output2. These are mapped to outputs called job_output1 and job_output2, both from a job called my_job.
Example of outputs
# Map the workflow outputs to job outputs
outputs:
  workflow_output1:
    description: "The first job output"
    value: ${{ jobs.my_job.outputs.job_output1 }}
  workflow_output2:
    description: "The second job output"
    value: ${{ jobs.my_job.outputs.job_output2 }}
For information on how to reference a job output, see jobs.<job_id>.outputs.