`Forgejo Actions` provides continuous integration driven by the files found in the `.forgejo/workflows` directory of a repository. Note that `Forgejo` does not run the jobs itself: it relies on the [`Forgejo runner`](https://code.forgejo.org/forgejo/runner), which must be installed separately.
For jobs to run in LXC containers, the `Forgejo runner` needs passwordless sudo access to all `lxc-*` commands on a Debian GNU/Linux `bookworm` system where [LXC](https://linuxcontainers.org/lxc/) is installed. The [LXC helpers](https://code.forgejo.org/forgejo/lxc-helpers/) can be used to create a suitable container, for instance one named `myrunner`.
> **NOTE:** Multiarch [Go](https://go.dev/) builds and [binfmt](https://github.com/tonistiigi/binfmt) need `bookworm` to produce and test binaries on a single machine for people who do not have access to dedicated hardware. If this is not needed, installing the `Forgejo runner` on `bullseye` will also work.
The `Forgejo runner` can then be installed and run within the `myrunner` container.
> **Warning:** LXC containers do not provide a level of security that makes them safe for potentially malicious users to run jobs. They do provide excellent isolation for jobs that may accidentally damage the system they run on.
The `Forgejo runner` needs to connect to a `Forgejo` instance and must be registered before doing so. Registration gives it permission to read the repositories and to send information back to `Forgejo`, such as job logs or its status.
Once registered, the runner is configured with a `config.yml` file. The `container` and `host` sections of this file control how jobs are run:

```yaml
container:
  # Whether to use privileged mode or not when launching task containers (privileged mode is required for Docker-in-Docker).
  privileged: false
  # And other options to be used when the container is started (eg, --add-host=my.forgejo.url:host-gateway).
  options:
  # The parent directory of a job's working directory.
  # If it's empty, /workspace will be used.
  workdir_parent:
  # Volumes (including bind mounts) can be mounted to containers. Glob syntax is supported, see https://github.com/gobwas/glob
  # You can specify multiple volumes. If the sequence is empty, no volumes can be mounted.
  # For example, if you only allow containers to mount the `data` volume and all the json files in `/src`, you should change the config to:
  # valid_volumes:
  #   - data
  #   - /src/*.json
  # If you want to allow any volume, please use the following configuration:
  # valid_volumes:
  #   - '**'
  valid_volumes: []
  # Overrides the docker client host with the specified one.
  # If it's empty, act_runner will find an available docker host automatically.
  # If it's "-", act_runner will find an available docker host automatically, but the docker host won't be mounted to the job containers and service containers.
  # If it's not empty or "-", the specified docker host will be used. An error will be returned if it doesn't work.
  docker_host: ""

host:
  # The parent directory of a job's working directory.
  workdir_parent:
```
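For example, with `valid_volumes` restricted to the `data` volume as in the comments above, a workflow job could request that volume through `jobs.<job_id>.container.volumes`. This is only a sketch: it assumes the runner honors the `volumes` key of the job container, and the job and volume names are illustrative.

```yaml
jobs:
  build:
    runs-on: docker
    container:
      image: code.forgejo.org/oci/alpine:3.18
      volumes:
        # "data" must match an entry in valid_volumes, otherwise the mount is refused.
        - data:/opt/data
    steps:
      - run: ls /opt/data
```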
When a `Forgejo runner` creates its own Docker or Podman networks, IPv6 is not enabled by default, and must be enabled explicitly in the `Forgejo runner` configuration.
**Docker only**: The Docker daemon requires additional configuration to enable IPv6. To make use of IPv6 with Docker, you need to provide an `/etc/docker/daemon.json` configuration file with at least the following keys:
```json
{
  "ipv6": true,
  "experimental": true,
  "ip6tables": true,
  "fixed-cidr-v6": "fd00:d0ca:1::/64",
  "default-address-pools": [
    { "base": "172.17.0.0/16", "size": 24 },
    { "base": "fd00:d0ca:2::/104", "size": 112 }
  ]
}
```
Afterwards restart the Docker daemon with `systemctl restart docker.service`.
> **NOTE**: These are example values. While this setup should work out of the box, it may not meet your requirements. Please refer to the Docker documentation regarding [enabling IPv6](https://docs.docker.com/config/daemon/ipv6/#use-ipv6-for-the-default-bridge-network) and [allocating IPv6 addresses to subnets dynamically](https://docs.docker.com/config/daemon/ipv6/#dynamic-ipv6-subnet-allocation).
**Docker & Podman**:
To test IPv6 connectivity in `Forgejo runner`-created networks, create a small workflow such as the following:
```yaml
---
on: push
jobs:
  ipv6:
    runs-on: docker
    steps:
      - run: |
          apt update; apt install --yes curl
          curl -s -o /dev/null http://ipv6.google.com
```
If you run this workflow with `forgejo-runner exec`, you should expect the job to fail:

```
[ipv6.yml/ipv6] Cleaning up container for job ipv6
[ipv6.yml/ipv6] Cleaning up network for job ipv6, and network name is: FORGEJO-ACTIONS-TASK-push_WORKFLOW-ipv6-yml_JOB-ipv6-network
[ipv6.yml/ipv6] 🏁 Job failed
```
To actually enable IPv6 with `forgejo-runner exec`, the flag `--enable-ipv6` must be provided. If you run this again with `forgejo-runner exec --enable-ipv6`, the job should succeed:

```
[ipv6.yml/ipv6] Cleaning up container for job ipv6
[ipv6.yml/ipv6] Cleaning up network for job ipv6, and network name is: FORGEJO-ACTIONS-TASK-push_WORKFLOW-ipv6-yml_JOB-ipv6-network
[ipv6.yml/ipv6] 🏁 Job succeeded
```
Finally, if this test was successful, enable IPv6 in the `config.yml` file of the `Forgejo runner` daemon and restart the daemon:
```yaml
container:
  enable_ipv6: true
```
Now, `Forgejo runner` will create networks with IPv6 enabled, and workflow containers will be assigned addresses from the pools defined in the Docker daemon configuration.
The workflows / tasks defined in the files found in `.forgejo/workflows` must specify the environment they need to run in with `runs-on`. Each `Forgejo runner` declares, when it connects to the `Forgejo` instance, the list of labels it supports, so that `Forgejo` can send it matching tasks. For instance, if a job within a workflow has:
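```yaml
  # "test" is an arbitrary job name
  test:
    runs-on: docker
```

it will only be run by a `Forgejo runner` that declared the `docker` label.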
Declaring a label means that the `Forgejo runner` will accept tasks that specify this label in the `runs-on` field.
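Labels are typically provided when the runner is registered; they can also be declared in the runner configuration. The following is a minimal sketch, assuming the `labels` key of the `runner` section of `config.yml`; the label values are taken from the examples below.

```yaml
runner:
  # Each entry maps a runs-on value to the environment used to run the job.
  labels:
    - docker:docker://node:20-bookworm
    - bookworm:lxc://debian:bookworm
```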
So, to mimic the GitHub runners, a label can be set to `ubuntu-22.04:docker://node:20-bullseye`, for instance.
With this, the Forgejo runner will respond to `runs-on: ubuntu-22.04` and will use the `node:20-bullseye` image from hub.docker.com.
This image is quite capable of running many of the workflows that are designed for the GitHub runners.
For a slightly bigger image, use `ghcr.io/catthehacker/ubuntu:act-22.04` instead of `node:20-bullseye`; it should be compatible with most actions while remaining relatively small. Larger images, with more software installed, also exist and can reach 20GB compressed if needed.
### docker

If `runs-on` is matched to a label mapped to `docker://`, the rest of it is interpreted as the default container image to use if no other is specified. The runner will execute all the steps, as root, within a container created from that image. The default container image can be overridden by a workflow to use `alpine:3.18` as follows:
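```yaml
runs-on: docker
container:
  image: alpine:3.18
```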
Label examples:

- `node20:docker://node:20-bookworm` == `node20:docker://docker.io/node:20-bookworm` defines `node20` to be the `node:20-bookworm` image from hub.docker.com
- `docker:docker://code.forgejo.org/oci/alpine:3.18` defines `docker` to be the `alpine:3.18` image from https://code.forgejo.org/oci/-/packages/container/alpine/3.18
### lxc

If `runs-on` is matched to a label mapped to `lxc://`, the rest of it is interpreted as the default [template and release](https://images.linuxcontainers.org/) to use if no other is specified. The runner will execute all the steps, as root, within an [LXC container](https://linuxcontainers.org/) created from that template and release. The default template is `debian` and the default release is `bullseye`.
In addition, [nodejs](https://nodejs.org/en/download/) version 20 is installed in the container, since it is required to run most actions.
The template and release can be overridden by a workflow, for instance to use `debian` and `bookworm`:
```yaml
runs-on: lxc
container:
  image: debian:bookworm
```
See the user documentation for `jobs.<job_id>.container` for more information.
Label examples:

- `bookworm:lxc://debian:bookworm` defines `bookworm` to be an LXC container running Debian GNU/Linux bookworm.
### shell
If `runs-on` is matched to a label mapped to `host://-self-hosted`, the runner will execute all the steps in a shell forked from the runner, directly on the host.
Label example:

- `self-hosted:host://-self-hosted` defines `self-hosted` to be a shell
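A workflow can then target this runner as follows; this is a minimal sketch, where the job name and command are only illustrative:

```yaml
on: push
jobs:
  host-job:
    runs-on: self-hosted
    steps:
      # Runs in a shell forked from the runner, directly on the host.
      - run: uname -a
```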
The [`forgejo-actions-runner`](https://github.com/NixOS/nixpkgs/blob/ac6977498b1246f21af08f3cf25ea7b602d94b99/pkgs/development/tools/continuous-integration/forgejo-actions-runner/default.nix) recipe is packaged in NixOS.
Note that the `services.forgejo-actions-runner.instances.<name>.labels` key may be set to `[]` (an empty list) to use the default label list shipped with the package. Because this default label list is populated with Docker images, one of `virtualisation.docker.enable` or `virtualisation.podman.enable` must be set.
If you would like to use Docker runners in combination with [cache actions](#cache-configuration), be sure to add the Docker bridge interfaces (`br-*`) to the firewall's trusted interfaces.
It is possible to use [other runners](https://codeberg.org/forgejo-contrib/delightful-forgejo#user-content-forgejo-actions-runners) instead of `Forgejo runner`. As long as they can connect to a `Forgejo` instance using the [same protocol](https://codeberg.org/forgejo/forgejo/src/branch/forgejo/routers/api/actions), they will be given tasks to run.