Fixes #22722
Currently, it is not possible to force push to a branch that has branch
protection rules in place. There are times when this is necessary
(CI workflows, administrative tasks, etc.).
The current workaround is to rename/remove the branch protection,
perform the force push, and then reinstate the protections.
Provide an additional section in the branch protection rules to allow
users to specify which users with push access can also force push to the
branch. The default value of the rule will be set to `Disabled`, and the
UI is intuitive and very similar to the `Push` section.
It is worth noting in this implementation that allowing force push does
not override regular push access, and both will need to be enabled for a
user to force push.
This applies both to manually force pushing to a remote and to updating
a PR by rebase in the Gitea UI (which requires a force push).
This modifies the `BranchProtection` API structs to add:
- `enable_force_push bool`
- `enable_force_push_whitelist bool`
- `force_push_whitelist_usernames string[]`
- `force_push_whitelist_teams string[]`
- `force_push_whitelist_deploy_keys bool`
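For reference, a minimal sketch of how these fields could look on the Go API struct (the Go field names are assumptions; only the JSON keys above come from this change):

```go
type BranchProtection struct {
	// ... existing fields such as EnablePush and PushWhitelistUsernames ...

	EnableForcePush              bool     `json:"enable_force_push"`
	EnableForcePushWhitelist     bool     `json:"enable_force_push_whitelist"`
	ForcePushWhitelistUsernames  []string `json:"force_push_whitelist_usernames"`
	ForcePushWhitelistTeams      []string `json:"force_push_whitelist_teams"`
	ForcePushWhitelistDeployKeys bool     `json:"force_push_whitelist_deploy_keys"`
}
```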
<img width="943" alt="image"
src="https://github.com/go-gitea/gitea/assets/79623665/7491899c-d816-45d5-be84-8512abd156bf">
branch `test` being a protected branch:
![image](https://github.com/go-gitea/gitea/assets/79623665/e018e6e9-b7b2-4bd3-808e-4947d7da35cc)
<img width="1038" alt="image"
src="https://github.com/go-gitea/gitea/assets/79623665/57ead13e-9006-459f-b83c-7079e6f4c654">
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
(cherry picked from commit 12cb1d2998f2a307713ce979f8d585711e92061c)
Fix #31657.
According to the
[doc](https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#onschedule)
of GitHub Actions, the timezone for cron should be UTC, not the local
timezone. Gitea Actions has no reason to deviate from this, so I think
it's a bug.
However, Gitea Actions has extended the syntax: it supports
descriptors like `@weekly` and `@every 5m`, and it supports specifying the
timezone, like `TZ=UTC 0 10 * * *`. So we can make it use UTC only when
no timezone is specified, which keeps compatibility with GitHub Actions
while still respecting a user-specified timezone.
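A minimal sketch of that behaviour, assuming a robfig/cron-style parser (the helper name is hypothetical):

```go
import (
	"strings"

	"github.com/robfig/cron/v3"
)

// parseSchedule defaults to UTC unless the spec already carries an
// explicit timezone, so "0 10 * * *" runs at 10:00 UTC while
// "TZ=Asia/Tokyo 0 10 * * *" keeps the user's choice.
func parseSchedule(spec string) (cron.Schedule, error) {
	spec = strings.TrimSpace(spec)
	if !strings.HasPrefix(spec, "TZ=") && !strings.HasPrefix(spec, "CRON_TZ=") {
		spec = "TZ=UTC " + spec
	}
	return cron.ParseStandard(spec)
}
```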
This is a breaking change because the times at which scheduled tasks run
will change, which may confuse users. So I don't think we should backport
this.
## ⚠️ BREAKING ⚠️
If the server's local time zone is not UTC, a scheduled task would run
at a different time after upgrading Gitea to this version.
(cherry picked from commit 21a73ae642b15982a911837775c9583deb47220c)
Fix #31707.
Also related to #31715.
Some Actions resources can have different types of ownership. It can
be:
- global: all repos and orgs/users can use it.
- org/user level: only the org/user can use it.
- repo level: only the repo can use it.
There are two ways to distinguish org/user level from repo level:
1. `{owner_id: 1, repo_id: 2}` for repo level, and `{owner_id: 1,
repo_id: 0}` for org level.
2. `{owner_id: 0, repo_id: 2}` for repo level, and `{owner_id: 1,
repo_id: 0}` for org level.
The first way seems more reasonable, but it may not be true. The point
is that although a resource, like a runner, belongs to a repo (it can be
used by the repo), the runner doesn't belong to the repo's org (other
repos in the same org cannot use the runner). So, the second method
makes more sense.
The first way is also not user-friendly to query: we must set the repo id
to zero to avoid wrong results.
So, #31715 should be right. And the simplest way to fix #31707 is
just:
```diff
- shared.GetRegistrationToken(ctx, ctx.Repo.Repository.OwnerID, ctx.Repo.Repository.ID)
+ shared.GetRegistrationToken(ctx, 0, ctx.Repo.Repository.ID)
```
However, it is quite intuitive to set both the owner id and the repo id,
since the repo belongs to the owner. So I prefer to stay compatible with
that. If both the owner id and the repo id are non-zero when creating or
finding, it's very clear that the caller wants a repo-level resource but
set the owner id accidentally. So it's OK to accept it, but fix the owner
id to zero.
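In other words, the compatibility behaviour boils down to something like this hypothetical helper:

```go
// normalizeActionsOwner illustrates the rule described above: a repo-level
// resource must not keep an owner id, even if the caller set one because
// "the repo belongs to the owner".
func normalizeActionsOwner(ownerID, repoID int64) (newOwnerID, newRepoID int64) {
	if repoID != 0 {
		// Repo level: the resource is usable by this repo only, not by the
		// owner's other repos, so drop the owner id.
		return 0, repoID
	}
	// Org/user level: repoID is zero, keep the owner id.
	return ownerID, 0
}
```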
(cherry picked from commit a33e74d40d356e8f628ac06a131cb203a3609dec)
Fix #26685
If a commit status comes from Gitea Actions and the user cannot access
the repo's actions unit (because the user does not have permission or the
actions unit is disabled), clicking the "Details" link leads to a 404
page. We should hide the "Details" link in this case.
<img
src="https://github.com/go-gitea/gitea/assets/15528715/68361714-b784-4bb5-baab-efde4221f466"
width="400px" />
(cherry picked from commit 7dec8de9147b20c014d68bb1020afe28a263b95a)
Conflicts:
routers/web/repo/commit.go
trivial context conflict
This is an implementation of a quota engine, and the API routes to
manage its settings. This does *not* contain any enforcement code: this
is just the bedrock, the engine itself.
The goal of the engine is to be flexible and future proof: to be nimble
enough to build on it further, without having to rewrite large parts of
it.
It might feel a little more complicated than necessary, because the goal
was to be able to support scenarios only very few Forgejo instances
need, scenarios the vast majority of mostly smaller instances simply do
not care about. The goal is to support both big and small, and for that,
we need a solid, flexible foundation.
There are three big parts to the engine: counting quota use, setting
limits, and evaluating whether the usage is within the limits. Sounds
simple on paper, less so in practice!
Quota counting
==============
Quota is counted based on repo ownership, whenever possible, because
repo owners are in ultimate control over the resources they use: they
can delete repos, attachments, everything, even if they don't *own*
those themselves. They can clean up, and will always have the permission
and access required to do so. If we counted quota against the owning
user instead, that could lead to situations where a user is unable to
free up space because they uploaded a big attachment to a repo that has
since been taken private. It's both fairer and much safer to count quota
against repo owners.
This means that if user A uploads an attachment to an issue opened
against organization O, that will count towards the quota of
organization O, rather than user A.
One's quota usage stats can be queried using the `/user/quota` API
endpoint. To figure out what's eating into it, the
`/user/repos?order_by=size`, `/user/quota/attachments`,
`/user/quota/artifacts`, and `/user/quota/packages` endpoints should be
consulted. There's also `/user/quota/check?subject=<...>` to check
whether the signed-in user is within a particular quota limit.
Quotas are counted based on sizes stored in the database.
Setting quota limits
====================
There are different "subjects" one can limit usage for. At this time,
only size-based limits are implemented, which are:
- `size:all`: As the name would imply, the total size of everything
Forgejo tracks.
- `size:repos:all`: The total size of all repositories (not including
LFS).
- `size:repos:public`: The total size of all public repositories (not
including LFS).
- `size:repos:private`: The total size of all private repositories (not
including LFS).
- `size:git:all`: The total size of all git data (including all
repositories, and LFS).
- `size:git:lfs`: The size of all git LFS data (either in private or
public repos).
- `size:assets:all`: The size of all assets tracked by Forgejo.
- `size:assets:attachments:all`: The size of all kinds of attachments
tracked by Forgejo.
- `size:assets:attachments:issues`: Size of all attachments attached to
issues, including issue comments.
- `size:assets:attachments:releases`: Size of all attachments attached
to releases. This does *not* include automatically generated archives.
- `size:assets:artifacts`: Size of all Action artifacts.
- `size:assets:packages:all`: Size of all Packages.
- `size:wiki`: Wiki size
Wiki size is currently not tracked, and the engine will always deem it
within quota.
These subjects are built into Rules, which set a limit on *all* subjects
within a rule. Thus, we can create a rule that says: "1Gb limit on all
release assets, all packages, and git LFS, combined". For a rule to
stand, the total sum of all subjects must be below the rule's limit.
Rules are in turn collected into groups. A group is just a name, and a
list of rules. For a group to stand, all of its rules must stand. Thus,
if we have a group with two rules, one that sets a combined 1Gb limit on
release assets, all packages, and git LFS, and another rule that sets a
256Mb limit on packages, if the user has 512Mb of packages, the group
will not stand, because the second rule deems it over quota. Similarly,
if the user has only 128Mb of packages, but 900Mb of release assets, the
group will not stand, because the combined size of packages and release
assets is over the 1Gb limit of the first rule.
Groups themselves are collected into Group Lists. A group list stands
when *any* of the groups within stand. This allows an administrator to
set conservative defaults, but then place select users into additional
groups that increase some aspect of their limits.
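A rough sketch of that evaluation logic, with illustrative types rather than the actual Forgejo code:

```go
// Used maps a subject (e.g. "size:assets:packages:all") to its measured size.
type Used map[string]int64

// Rule limits the combined size of all of its subjects.
type Rule struct {
	Limit    int64
	Subjects []string
}

// A rule stands when the sum of its subjects is within its limit.
func (r Rule) Stands(u Used) bool {
	var sum int64
	for _, s := range r.Subjects {
		sum += u[s]
	}
	return sum <= r.Limit
}

// A group stands only if every rule in it stands.
type Group struct{ Rules []Rule }

func (g Group) Stands(u Used) bool {
	for _, r := range g.Rules {
		if !r.Stands(u) {
			return false
		}
	}
	return true
}

// A group list stands if any of its groups stands.
func groupListStands(groups []Group, u Used) bool {
	for _, g := range groups {
		if g.Stands(u) {
			return true
		}
	}
	return false
}
```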
To top it off, it is possible to set the default quota groups a user
belongs to in `app.ini`. If there's no explicit assignment, the engine
will use the default groups. This makes it possible to avoid having to
assign each and every user a list of quota groups, and only those need
to be explicitly assigned who need a different set of groups than the
defaults.
If a user has any quota groups assigned to them, the default list will
not be considered for them.
The management APIs
===================
This commit contains the engine itself, its unit tests, and the quota
management APIs. It does not contain any enforcement.
The APIs are documented in-code, and in the swagger docs, and the
integration tests can serve as an example on how to use them.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
- add package counter to repo/user/org overview pages
- add go unit tests for repo/user has/count packages
- add many more unit tests for packages model
- fix error for non-existing packages in DeletePackageByID and SetRepositoryLink
Replace a double select with a simple select.
The complication originates from the initial implementation which
deleted packages instead of selecting them. That was justified as a
workaround for a problem in MySQL, but it is just a waste of resources
when collecting a list of IDs.
- In the spirit of #4635
- Notify the owner when their account is enrolled into TOTP. The
message changes depending on whether they have security keys or not.
- Integration test added.
- Currently, if the password, primary mail, TOTP or security keys are
changed, no notification is sent, which makes compromising an
account a bit easier as it's essentially undetectable until the original
person tries to log in. Although other changes should be made as
well (re-authing before allowing a password change), this should go a
long way toward improving account security in Forgejo.
- Adds a mail notification for password and primary mail changes. For
the primary mail change, a mail notification is sent to the old primary
mail.
- Add a mail notification when TOTP or a security key is removed; if no
other 2FA method is configured, the mail will also mention that 2FA is
no longer needed to log into the account.
- `MakeEmailAddressPrimary` is refactored to the user service package,
as it now involves calling the mailer service.
- Unit tests added.
- Integration tests added.
This leverages the existing `sync_external_users` cron job to
synchronize the `IsActive` flag on users who use an OAuth2 provider set
to synchronize. This synchronization is done by checking for expired
access tokens, and using the stored refresh token to request a new
access token. If the response back from the OAuth2 provider is the
`invalid_grant` error code, the user is marked as inactive. However, the
user can reactivate their account by logging in through the OAuth2 flow
in the web browser.
To support this, a linked `ExternalLoginUser` is now always created upon
login or signup via OAuth2.
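Conceptually, the new cron step does something like the following (a sketch using golang.org/x/oauth2 directly; names are made up, and the real code goes through the configured provider):

```go
import (
	"context"
	"errors"
	"strings"

	"golang.org/x/oauth2"
)

// refreshOrDeactivate tries to mint a fresh access token from the stored
// refresh token; an "invalid_grant" answer from the provider means the
// account should be marked inactive.
func refreshOrDeactivate(ctx context.Context, cfg *oauth2.Config, stored *oauth2.Token) (stillActive bool, err error) {
	if stored.Valid() {
		return true, nil // access token not expired yet, nothing to do
	}
	if _, err = cfg.TokenSource(ctx, stored).Token(); err == nil {
		return true, nil // refresh succeeded, the user stays active
	}
	var rerr *oauth2.RetrieveError
	if errors.As(err, &rerr) && strings.Contains(string(rerr.Body), "invalid_grant") {
		return false, nil // the provider revoked the grant: deactivate
	}
	return true, err // other errors: leave the user untouched
}
```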
Ideally, we would also refresh permissions from the configured OAuth
provider (e.g., admin, restricted and group mappings) to match the
implementation of LDAP. However, the OAuth library used for this, `goth`,
doesn't seem to support issuing a session via refresh tokens. The
interface provides a [`RefreshToken`
method](https://github.com/markbates/goth/blob/master/provider.go#L20),
but the returned `oauth.Token` doesn't implement the `goth.Session` we
would need to call `FetchUser`. Due to specific implementations, we
would need to build a compatibility function for every provider, since
they cast to concrete types (e.g.
[Azure](https://github.com/markbates/goth/blob/master/providers/azureadv2/azureadv2.go#L132)).
---------
Co-authored-by: Kyle D <kdumontnu@gmail.com>
(cherry picked from commit 416c36f3034e228a27258b5a8a15eec4e5e426ba)
Conflicts:
- tests/integration/auth_ldap_test.go
Trivial conflict resolved by manually applying the change.
- routers/web/auth/oauth.go
Technically not a conflict, but the original PR removed the
modules/util import which, in our version, is still in use. Added it
back.
Resolves https://github.com/go-gitea/gitea/issues/26996
Added default sorting for milestones by name.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
---
Conflict resolution: trivial; it was due to the improvement made to the
'due date sorting' strings.
(cherry picked from commit e8d4b7a8b198eca3b0bd117efb422d7d7cac93fe)
This commit allows the `forgejo-cli actions register` command to change
an existing runner's secret, as discussed in #4610.
It refactors `RegisterRunner` to extract the code that hashes the token,
moving this code to a method called `UpdateSecret` on `ActionRunner`.
A test for the method has been added.
The `RegisterRunner` function is updated so that:
- it relies on `ActionRunner.UpdateSecret` when creating new runners,
- it checks whether an existing runner's secret still matches the one
passed on the command line,
- it updates the secret if the runner wasn't newly created and the
stored secret no longer matches.
A test has been added for the new behaviour.
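A rough idea of what the extracted method does (the salted SHA-256 here is illustrative; the real code uses Forgejo's token-hashing helper):

```go
import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// runnerSecret stands in for the relevant ActionRunner fields.
type runnerSecret struct {
	TokenHash string
	TokenSalt string
}

// UpdateSecret re-salts and re-hashes the runner token, so it can be used
// both when creating a runner and when changing an existing runner's secret.
func (r *runnerSecret) UpdateSecret(token string) error {
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		return fmt.Errorf("generate salt: %w", err)
	}
	r.TokenSalt = hex.EncodeToString(salt)
	sum := sha256.Sum256([]byte(token + r.TokenSalt))
	r.TokenHash = hex.EncodeToString(sum[:])
	return nil
}
```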
This commit adds a new flag, `--keep-labels`, to the runner registration CLI command. If this flag is present and the runner being registered already exists, it will prevent the runner's labels from being reset.
In order to accomplish this, the signature of the `RegisterRunner` function from the `models/actions` package has been modified so that the labels argument can be nil. If it is, the part of the function that updates the record will not change the runner.
Various tests have been added for this function, for the following cases: new runner with labels, new runner without label, existing runner with labels, existing runner without labels.
The flag has been added to the CLI command, the action function has been updated to read the labels parameters through a separate function (`getLabels`), and test cases for this function have been added.
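The essence of the label handling, sketched from the description above (the exact signature of `getLabels` may differ):

```go
// getLabels returns the labels to pass to RegisterRunner: with --keep-labels
// it returns nil, which RegisterRunner interprets as "leave an existing
// runner's labels untouched".
func getLabels(flagLabels []string, keepLabels bool) []string {
	if keepLabels {
		return nil
	}
	return flagLabels
}
```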
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/4610
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Co-authored-by: Emmanuel BENOÎT <tseeker@nocternity.net>
Co-committed-by: Emmanuel BENOÎT <tseeker@nocternity.net>
- Add an early return to `LoadSchedules` and `LoadRepos` of the
`SpecList` type (sketched below). @Beowulf noticed that useless queries
were being run every 30 seconds; these stemmed from these two functions
being run even when there were no scheduled actions.
- No tests were added, because there is zero testing infrastructure or
fixtures for the actions specifications models. I feel these are trivial
enough to not require any tests.
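The shape of the fix, roughly (the spec type and the loading logic are elided):

```go
import "context"

type SpecList []any // stand-in for the actions spec slice type

func (specs SpecList) LoadSchedules(ctx context.Context) error {
	if len(specs) == 0 {
		// Nothing is scheduled: skip the database round-trip that was
		// previously issued every 30 seconds for no reason.
		return nil
	}
	// ... load the schedules referenced by the specs ...
	return nil
}
```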
Before, we had just the plain mail address as the recipient. Now we provide additional information for mail clients.
---
Porting information:
- Two behavior changes are noted with this patch: the display name is now always quoted; although unnecessary in some scenarios, this is a safety precaution of Go. B encoding is used when certain characters are present, as they aren't 'legal' in a display name and Q encoding would still show them, so B encoding needs to be used; this is now done by Go's `address.String()`.
- Update and add new unit tests.
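For illustration, this is what Go's `address.String()` produces:

```go
package main

import (
	"fmt"
	"net/mail"
)

func main() {
	// Plain ASCII display names are always quoted:
	//   "Jane Doe" <jane@example.com>
	fmt.Println((&mail.Address{Name: "Jane Doe", Address: "jane@example.com"}).String())

	// Non-ASCII names get an encoded word (Q or B encoding, chosen by Go):
	//   =?utf-8?q?Jane_D=C3=B6e?= <jane@example.com>
	fmt.Println((&mail.Address{Name: "Jane Döe", Address: "jane@example.com"}).String())
}
```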
Co-authored-by: 6543 <6543@obermui.de>
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/4516
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-committed-by: Gusted <postmaster@gusted.xyz>
- We were previously using `github.com/keybase/go-crypto`, because the
package for openpgp by Go itself is deprecated and no longer
maintained. This library provided a maintained version of the openpgp
package. However, it hasn't seen any activity for the last five years,
and I would therefore consider this also unmaintained.
- This patch switches the package to `github.com/ProtonMail/go-crypto`
which provides a maintained version of the openpgp package and was
already being used in the tests.
- Adds unit tests. I've carefully checked the call stacks to ensure the
OpenPGP-related code is covered by either a unit test or integration
tests to avoid regressions, as a regression here can easily turn into a
security vulnerability.
- Small behavior update: revocations are now checked correctly instead
of merely checking that they exist, and the expiry time of a subkey is
used if one is provided (this is just cosmetic and doesn't impact
security).
- One more dependency eliminated :D
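For example, reading an armored key ring with the new package looks like this (an illustration, not code from the patch itself):

```go
import (
	"errors"
	"strings"

	"github.com/ProtonMail/go-crypto/openpgp"
)

func readFirstKey(armored string) (*openpgp.Entity, error) {
	ring, err := openpgp.ReadArmoredKeyRing(strings.NewReader(armored))
	if err != nil {
		return nil, err
	}
	if len(ring) == 0 {
		return nil, errors.New("no key found")
	}
	return ring[0], nil
}
```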
- Go's dead-code eliminator is quite simple: if you put a public function
in a package `aa/bb` that is used only by tests, it would still be built
if package `aa/bb` was imported. This means that if such functions use
libraries relevant only to tests, those libraries would still be
built and increase the size of the Go binary.
- This is also the case with Forgejo, `models/migrations/base/tests.go`
contained functions exclusively used by tests which (skipping some steps
here) imports https://github.com/ClickHouse/clickhouse-go, which is
2MiB. The `code.gitea.io/gitea/models/migrations/base` package is
imported by `cmd/doctor` and thus the code of the clickhouse library is
also built and included in the Forgejo binary, although entirely unused
and not reachable.
- This patch moves the test-related functions to their own package, so
Go's deadcode eliminator knows not to build the test-related functions
and thus reduces the size of the Forgejo binary.
- It is not possible to move this to a `_test.go` file because Go does
not allow importing functions from such files, so any test helper
function must be in a non-test package and file.
- Reduction of size (built with `TAGS="sqlite sqlite_unlock_notify" make
build`):
- Before: 95912040 bytes (92M)
- After: 92306888 bytes (89M)
I noticed that Forgejo does not allow HTTP range requests when downloading artifacts. All other file downloads like releases and packages support them.
So I looked at the code and found that the artifact download endpoint uses a simple `io.Copy` to serve the file contents instead of using the established `ServeContentByReadSeeker` function, which does take range requests into account.
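For context, this is the standard-library mechanism that makes the difference: serving via an `io.ReadSeeker` gives range support for free, while a bare `io.Copy` cannot honour `Range` headers (an illustration, not the Forgejo handler itself):

```go
import (
	"net/http"
	"os"
	"time"
)

func serveArtifact(w http.ResponseWriter, r *http.Request, path string) {
	f, err := os.Open(path)
	if err != nil {
		http.NotFound(w, r)
		return
	}
	defer f.Close()
	// http.ServeContent handles Range, If-Modified-Since, etc.
	http.ServeContent(w, r, "artifact.zip", time.Time{}, f)
}
```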
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/4218
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Reviewed-by: Gusted <gusted@noreply.codeberg.org>
Co-authored-by: ThetaDev <thetadev@magenta.de>
Co-committed-by: ThetaDev <thetadev@magenta.de>
Closes #2797
I'm aware that https://github.com/go-gitea/gitea/pull/28163 exists, but since I had this laying around on my drive collecting dust, I might as well open a PR for it if anyone wants the feature a bit sooner than waiting for upstream to release it or for a forgejo-native implementation.
This PR contains:
- Support for the `workflow_dispatch` trigger
- Inputs: boolean, string, number, choice
Things still to be done:
- [x] API Endpoint `/api/v1/<org>/<repo>/actions/workflows/<workflow id>/dispatches`
- ~~Fixing some UI bugs I had no time to figure out, like why dropdown/choice inputs' menus behave weirdly~~ Unrelated visual bug with dropdowns inside dropdowns
- [x] Fix bug where opening the branch selection submits the form
- [x] Limit on inputs to render/process
Things not in this PR:
- Inputs: environment (First need support for environments in forgejo)
Things needed to test this:
- A patch for https://code.forgejo.org/forgejo/runner to actually consider the inputs inside the workflow.
~~One possible patch can be seen here: https://code.forgejo.org/Mai-Lapyst/runner/src/branch/support-workflow-inputs~~
[PR](https://code.forgejo.org/forgejo/runner/pulls/199)
![image](/attachments/2db50c9e-898f-41cb-b698-43edeefd2573)
## Testing
- Checkout PR
- Setup new development runner with [this PR](https://code.forgejo.org/forgejo/runner/pulls/199)
- Create a repo with a workflow (see below)
- Go to the actions tab, select the workflow and see the notice as in the screenshot above
- Use the button + dropdown to run the workflow
- Also try running it via the API using the `/api/v1/<org>/<repo>/actions/workflows/<workflow id>/dispatches` endpoint mentioned above (see the sketch after this list)
- ...
- Profit!
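For the API route, a hedged sketch of how a dispatch call could look, assuming the endpoint accepts a GitHub-style JSON body with `ref` and `inputs` (the body shape and helper are assumptions):

```go
import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func dispatchWorkflow(base, org, repo, workflowID, token, ref string, inputs map[string]string) error {
	payload, err := json.Marshal(map[string]any{"ref": ref, "inputs": inputs})
	if err != nil {
		return err
	}
	url := fmt.Sprintf("%s/api/v1/%s/%s/actions/workflows/%s/dispatches", base, org, repo, workflowID)
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(payload))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "token "+token)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("dispatch failed: %s", resp.Status)
	}
	return nil
}
```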
<details>
<summary>Example workflow</summary>
```yaml
on:
  workflow_dispatch:
    inputs:
      logLevel:
        description: 'Log Level'
        required: true
        default: 'warning'
        type: choice
        options:
          - info
          - warning
          - debug
      tags:
        description: 'Test scenario tags'
        required: false
        type: boolean
      boolean_default_true:
        description: 'Test scenario tags'
        required: true
        type: boolean
        default: true
      boolean_default_false:
        description: 'Test scenario tags'
        required: false
        type: boolean
        default: false
      number1_default:
        description: 'Number w. default'
        default: '100'
        type: number
      number2:
        description: 'Number w/o. default'
        type: number
      string1_default:
        description: 'String w. default'
        default: 'Hello world'
        type: string
      string2:
        description: 'String w/o. default'
        required: true
        type: string
jobs:
  test:
    runs-on: docker
    steps:
      - uses: actions/checkout@v3
      - run: whoami
      - run: cat /etc/issue
      - run: uname -a
      - run: date
      - run: echo ${{ inputs.logLevel }}
      - run: echo ${{ inputs.tags }}
      - env:
          GITHUB_CONTEXT: ${{ toJson(github) }}
        run: echo "$GITHUB_CONTEXT"
      - run: echo "abc"
```
</details>
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/3334
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Co-authored-by: Mai-Lapyst <mai-lapyst@noreply.codeberg.org>
Co-committed-by: Mai-Lapyst <mai-lapyst@noreply.codeberg.org>