- On editing a team, only update the units if the team isn't the
'Owners' team. Otherwise the 'Owners' team ends up having all of its
unit access modes set to 'None', because the request form doesn't send
over any units, as they're simply not shown in the UI.
- Adds a database inconsistency check and fix for the case where the
'Owners' team is affected by this bug.
- Adds unit test.
- Adds integration test.
- Resolves #5528
- Regression of https://github.com/go-gitea/gitea/pull/24012
Port of https://github.com/go-gitea/gitea/pull/32204
(cherry picked from commit d6d3c96e6555fc91b3e2ef21f4d8d7475564bb3e)
Conflicts:
routers/api/v1/api.go
services/context/api.go
trivial context conflicts
Fix #31423
(cherry picked from commit f4b8f6fc40ce2869135372a5c6ec6418d27ebfba)
Conflicts:
models/fixtures/comment.yml
comment fixtures have to be shifted because there is one more in Forgejo
The inventory of the sha256:* images and the manifest indexes that
reference them is incomplete because it does not take into account any
image older than the expiration limit. As a result some sha256:* images
will be considered orphaned although they are referenced from a manifest
index that was created more recently than the expiration limit.
There must not be any filtering based on the creation time when
building the inventory. The expiration limit must only be taken into
account when deleting orphaned images: those that are more recent than
the expiration limit must not be deleted.
This limit is especially important because it protects against a race
between a cleanup task and an ongoing mirroring task. A mirroring
task (such as skopeo sync) will first upload sha256:* images and then
create the corresponding manifest index. If a cleanup races against
it, the sha256:* images that are not yet referenced will be deleted
without skopeo noticing, and the manifest index that is published at a
later time will contain references to non-existent images.
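A minimal sketch of the intended separation follows; the types and the helper are hypothetical stand-ins, not the actual Forgejo cleanup code. The inventory is built over every manifest index with no time filter, and the expiration limit is only applied when deciding whether an orphan may be deleted.

```go
package container

import "time"

// Blob and Index are simplified, hypothetical stand-ins for the package
// registry models; the real Forgejo types differ.
type Blob struct {
	Digest    string
	CreatedAt time.Time
}

type Index struct {
	ReferencedDigests []string
}

// OrphanedBlobs returns the blobs that may be deleted: blobs referenced by NO
// manifest index (the inventory ignores creation time entirely) and older than
// the expiration limit. The limit is applied only here, which protects blobs a
// concurrent `skopeo sync` has uploaded but not yet referenced.
func OrphanedBlobs(blobs []Blob, indexes []Index, olderThan time.Duration) []Blob {
	referenced := map[string]bool{}
	for _, idx := range indexes {
		for _, d := range idx.ReferencedDigests {
			referenced[d] = true
		}
	}

	cutoff := time.Now().Add(-olderThan)
	var orphans []Blob
	for _, b := range blobs {
		if referenced[b.Digest] {
			continue // referenced, no matter how old it is
		}
		if b.CreatedAt.After(cutoff) {
			continue // too recent: an index referencing it may appear shortly
		}
		orphans = append(orphans, b)
	}
	return orphans
}
```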
After migrating a repository with pull requests, the branch is missing, and
after the pull request is merged, the branch cannot be deleted.
(cherry picked from commit 5a8568459d22e57cac506465463660526ca6a08f)
Conflicts:
services/repository/branch.go
conflict because of [GITEA] Fix typo in formatting error e71b5a038e
- [x] add architecture-specific removal support
- [x] Fix upload race condition
- [x] Fix missing input validation when downloading
docs: https://codeberg.org/forgejo/docs/pulls/874
### Release notes
- [ ] I do not want this change to show in the release notes.
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/5351
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Co-authored-by: Exploding Dragon <explodingfkl@gmail.com>
Co-committed-by: Exploding Dragon <explodingfkl@gmail.com>
Remove unused CSRF options, decouple the "new csrf protector" and "prepare"
logic, and do not redirect to the home page if CSRF validation fails (it
shouldn't happen in daily usage; if it happens, redirecting to home
doesn't help either but just makes the problem more complex for "fetch").
(cherry picked from commit 1fede04b83288d8a91304a83b7601699bb5cba04)
Conflicts:
options/locale/locale_en-US.ini
tests/integration/repo_branch_test.go
trivial context conflicts
A 500 status code was thrown when passing a non-existent target to the
create release API. This patch handles this error and instead returns
a 404 status code.
Discovered while working on #31840.
(cherry picked from commit f05d9c98c4cb95e3a8a71bf3e2f8f4529e09f96f)
---
`status == "rename"` should have read `status == "renamed"`. The typo
means that file.PreviousFilename would never be populated, which e.g.
breaks usage of the GitHub Action at
https://github.com/dorny/paths-filter.
(cherry picked from commit 7c6edf1ba06d4c3269eaa78f4039c9123b006c51)
- The Conan and Container packages use a different type of
authentication. It first authenticates via the regular way (API tokens
or user:password, handled via `auth.Basic`) and then generates a JWT
token that is used by the package software (such as Docker) to perform
the action it wanted to do. This JWT token didn't properly propagate the
API scopes that the token was generated for, and thus could lead to a
'scope escalation' within the Conan and Container packages, from read
access to write access.
- Store the API scope in the JWT token, so it can be propagated on
subsequent calls that use that JWT token (see the sketch after this list).
- Integration test added.
- Resolves #5128
- This is in the spirit of #5090.
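A minimal sketch of the idea, using github.com/golang-jwt/jwt/v5 and hypothetical claim and function names (the actual Forgejo packages claims differ):

```go
package packages

import (
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// packageClaims is a hypothetical claims type: the point is that the scope the
// token was created for travels inside the JWT instead of being lost.
type packageClaims struct {
	jwt.RegisteredClaims
	UserID int64  `json:"uid"`
	Scope  string `json:"scope"` // e.g. "read:package" or "write:package"
}

func createPackageToken(secret []byte, userID int64, scope string) (string, error) {
	claims := packageClaims{
		RegisteredClaims: jwt.RegisteredClaims{
			ExpiresAt: jwt.NewNumericDate(time.Now().Add(time.Hour)),
		},
		UserID: userID,
		Scope:  scope,
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(secret)
}

func parsePackageToken(secret []byte, tokenString string) (*packageClaims, error) {
	claims := &packageClaims{}
	if _, err := jwt.ParseWithClaims(tokenString, claims, func(*jwt.Token) (any, error) {
		return secret, nil
	}); err != nil {
		return nil, err
	}
	// Subsequent package requests check claims.Scope before allowing writes,
	// so a read-only API token can no longer escalate to write access.
	return claims, nil
}
```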
- Move to a fork of gitea.com/go-chi/cache,
code.forgejo.org/go-chi/cache. It removes unused code (a lot of
adapters that can't be used by Forgejo) and unused dependencies (see
go.sum). It also updates existing dependencies.
8c64f1a362..main
- This is a fork of https://github.com/dchest/captcha, as
https://gitea.com/go-chi/captcha is a fork of
github.com/go-macaron/captcha, which is in turn a fork (although not properly
credited) of an older version of https://github.com/dchest/captcha. Hence
I've just forked the original.
- The fork includes some QoL improvements (it uses the standard library for
deterministic RNG instead of rolling its own crypto) and removes
audio support (500KiB of unused data that otherwise bloated the binary).
Flips the image over the x-axis.
47270f2b55..main
- This move is needed for the next commit, because
gitea.com/go-chi/captcha included the gitea.com/go-chi/cache dependency.
Add `DiffCleanupSemantic` into the mix when generating diffs (PR review,
commit view and issue/comment history). This avoids trying to produce an
optimal diff and instead reduces the number of edits by combining them
into larger edits, which is nicer and easier to 'look at'. There's no
need for a perfectly minimal diff, as the output isn't being parsed by a
computer; it's read by people.
Ref: https://codeberg.org/forgejo/forgejo/issues/4996
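For illustration, this is roughly how `DiffCleanupSemantic` from github.com/sergi/go-diff can be applied to a raw diff before rendering; the surrounding Forgejo rendering code is omitted:

```go
package main

import (
	"fmt"

	"github.com/sergi/go-diff/diffmatchpatch"
)

func main() {
	dmp := diffmatchpatch.New()

	before := "The quick brown fox jumps over the lazy dog."
	after := "The quick brown cat leaps over the lazy dog."

	// Raw diff: minimal, but fragmented into many tiny edits.
	diffs := dmp.DiffMain(before, after, true)

	// Semantic cleanup: merges small edits into larger, human-readable ones.
	diffs = dmp.DiffCleanupSemantic(diffs)

	for _, d := range diffs {
		fmt.Printf("%v %q\n", d.Type, d.Text)
	}
}
```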
It loads the Commit with a temporarily opened GitRepo. This is incorrect:
the GitRepo should stay open for as long as the Commit can be used. This
mainly removes the usage of this function, as it's not needed.
- Moves to a fork of gitea.com/go-chi/session that removed support for
couchbase (and ledis, but that was never made available in Forgejo)
along with other code improvements.
f8ce677595..main
- The rationale for removing Couchbase is quite simple: it's not licensed
under a FOSS
license (https://www.couchbase.com/blog/couchbase-adopts-bsl-license/)
and therefore cannot be tested by Forgejo and shouldn't be supported.
This is in a similar vein to the removal of MSSQL
support (https://codeberg.org/forgejo/discussions/issues/122)
- An additional benefit is that this reduces the Forgejo binary by ~600KB.
- This allows `CreateDeclarativeRepo` to be used by other testing
packages, such as E2E testing.
- Removes an unused function in `services/webhook/sourcehut/builds_test.go`.
- Continuation of https://github.com/go-gitea/gitea/pull/18835 (by
@Gusted, so it's fine to change copyright holder to Forgejo).
- Add the option to use SSH for push mirrors. This allows the deploy
keys feature to be used and avoids tokens, which cannot be limited to a
specific repository. The private key is stored encrypted (via the
`keying` module) in the database and NEVER given to the user, to avoid
accidental exposure and misuse (a rough sketch of the push follows this
list).
- CAVEAT: This does require the `ssh` binary to be present, which may
not be available in containerized environments. This could be solved by
adding an SSH client into Forgejo itself and using the Forgejo binary as
the SSH command, but that should be done in another PR.
- CAVEAT: Mirroring of LFS content is not supported; this would require
the previously stated problem to be solved due to LFS authentication (an
attempt was made at forgejo/forgejo#2544).
- Integration test added.
- Resolves #4416
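A rough sketch of how such a push could be wired up; the function, key handling and flags here are illustrative, not the actual Forgejo implementation:

```go
package mirror

import (
	"context"
	"fmt"
	"os"
	"os/exec"
)

// pushOverSSH writes the decrypted private key to a temporary file and points
// git at it through GIT_SSH_COMMAND, which is why the `ssh` binary must be
// present on the host. The key file is removed as soon as the push finishes.
func pushOverSSH(ctx context.Context, repoPath, remote string, privateKey []byte) error {
	keyFile, err := os.CreateTemp("", "push-mirror-key-*")
	if err != nil {
		return err
	}
	defer os.Remove(keyFile.Name())

	if err := os.Chmod(keyFile.Name(), 0o600); err != nil {
		return err
	}
	if _, err := keyFile.Write(privateKey); err != nil {
		return err
	}
	if err := keyFile.Close(); err != nil {
		return err
	}

	cmd := exec.CommandContext(ctx, "git", "push", "--mirror", remote)
	cmd.Dir = repoPath
	cmd.Env = append(os.Environ(),
		fmt.Sprintf("GIT_SSH_COMMAND=ssh -i %s -o IdentitiesOnly=yes", keyFile.Name()),
	)
	return cmd.Run()
}
```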
- Currently users created through the reverse proxy aren't created
through the normal route of `createAndHandleCreatedUser`, as this does a
lot of other routines which aren't necessary for the reverse proxy auth.
However, one routine is important to have: the first created user should
be an admin. This patch adds that code.
- Adds unit test.
- Resolves #4437
* support changing label colors
* support changing issue state
* use helpers to keep type conversions DRY
* drop the x/exp license because it is no longer used
The tests are performed by the gof3 compliance suite
- When a comment was updated or deleted and was part of a
pending/ongoing review, it would have triggered a notification, such as
a webhook.
- This patch checks whether the comment is part of a pending review (see
the sketch after this list) and in that case does not fire a
notification; in the case of updating a comment, it also does not save
the content history, because that is not necessary for what is still a
"draft" comment (there is no need to see my embarrassing typos).
- Adds integration tests.
- Resolves https://codeberg.org/forgejo/forgejo/issues/4368
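A hedged sketch of the guard; these are simplified stand-ins for the Forgejo issues model, not the actual types:

```go
package issues

// Simplified, hypothetical stand-ins: the real Forgejo models carry many more fields.
type ReviewType int

const ReviewTypePending ReviewType = 0

type Review struct {
	Type ReviewType
}

type Comment struct {
	Review *Review
}

// isPartOfPendingReview is the kind of check performed before firing
// notifications/webhooks and before saving content history: a comment that
// still belongs to a pending ("draft") review is not visible to anyone but its
// author yet, so nothing needs to be announced or archived.
func isPartOfPendingReview(c *Comment) bool {
	return c.Review != nil && c.Review.Type == ReviewTypePending
}
```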
If the pull request review was assigned to a team, it did not show the
members of the team in the "requested_reviewers" field, so the field was
null. As a solution, I added the team members to the array.
fix #31764
(cherry picked from commit 94cca8846e7d62c8a295d70c8199d706dfa60e5c)
This reverts commit 4ed372af13.
This change from Gitea was not considered by the Forgejo UI team and there is a consensus that it feels like a regression.
The test which was added in that commit is kept and modified to test that reviews can successfully be submitted on closed and merged PRs.
Closes forgejo/design#11
ForkRepository performs two different functions:
* The fork itself, if it does not already exist
* Updates and notifications after the fork is performed
The function is split to reflect that and is otherwise unmodified.
The two functions are given different names to:
* clarify which integration tests provide coverage
* distinguish them from the notification method of the same name
Previous arch package grouping was not well-suited for complex or multi-architecture environments. It now supports the following:
- Support grouping by any path.
- New support for packages in `xz` format.
- Fix cleanup rules.
<!--start release-notes-assistant-->
## Draft release notes
<!--URL:https://codeberg.org/forgejo/forgejo-->
- Features
- [PR](https://codeberg.org/forgejo/forgejo/pulls/4903): <!--number 4903 --><!--line 0 --><!--description c3VwcG9ydCBncm91cGluZyBieSBhbnkgcGF0aCBmb3IgYXJjaCBwYWNrYWdl-->support grouping by any path for arch package<!--description-->
<!--end release-notes-assistant-->
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/4903
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Co-authored-by: Exploding Dragon <explodingfkl@gmail.com>
Co-committed-by: Exploding Dragon <explodingfkl@gmail.com>
- Fix "WARNING: item list for enum is not a valid JSON array, using the
old deprecated format" messages from
https://github.com/go-swagger/go-swagger in the CI.
- parsing scopes in `grantAdditionalScopes`
- read basic user info if `read:user` is granted
- fail reading repository info if only `read:user` is granted
- read repository info if `read:repository` is granted
- if `setting.OAuth2.EnabledAdditionalGrantScopes` is not provided, it reads
all groups (public+private)
- if `setting.OAuth2.EnabledAdditionalGrantScopes` is provided, it reads
only public groups
- if `setting.OAuth2.EnabledAdditionalGrantScopes` is provided and `read:organization`
is granted, it reads all groups
- `CheckOAuthAccessToken` returns both the user ID and the additional scopes
- `grantAdditionalScopes` returns an AccessTokenScope-ready string (grantScopes)
compiled from the additional scopes requested by the client
- `userIDFromToken` sets the returned grantScopes (if any) instead of the
default `all` (sketched below)
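A simplified sketch of how the additional scopes could be compiled into a grant string; the recognised scope list and the `all` fallback mirror the description above, everything else is illustrative:

```go
package oauth2

import "strings"

// grantAdditionalScopes filters the scopes requested by the OAuth2 client down
// to the ones recognised as access-token scopes and returns them as an
// AccessTokenScope-ready, comma-separated string. An empty result means "no
// additional scopes", in which case the caller falls back to the default `all`.
func grantAdditionalScopes(requestedScopes string) string {
	// Illustrative subset; the real list covers every token scope category.
	known := map[string]bool{
		"read:user": true, "write:user": true,
		"read:repository": true, "write:repository": true,
		"read:organization": true, "write:organization": true,
	}

	var granted []string
	for _, scope := range strings.Fields(requestedScopes) {
		if known[scope] {
			granted = append(granted, scope)
		}
	}
	return strings.Join(granted, ",")
}
```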
I was facing issues while writing unit tests for federation code: mocks weren't catching all network calls, because they were out of the scope of the mocking infra. Plus, I think we can have more granular tests.
This PR puts the client behind an interface that can be retrieved from `ctx`. The context doesn't require initialization, as it defaults to the implementation available in-tree. It may be overridden when required (like in testing). A minimal sketch of the pattern follows the mechanism list below.
## Mechanism
1. Get client factory from `ctx` (factory contains network and crypto parameters that are needed)
2. Initialize client with sender's keys and the receiver's public key
3. Use client as before.
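A minimal sketch of the pattern with hypothetical interface names; the actual Forgejo federation client interfaces differ:

```go
package activitypub

import "context"

// Client is the behaviour the federation code depends on; a test can provide
// its own implementation instead of relying on mocks to intercept network calls.
type Client interface {
	Post(ctx context.Context, body []byte, inboxURL string) error
}

// ClientFactory builds a Client from the sender's keys and the receiver's public key.
type ClientFactory interface {
	WithKeys(senderPrivPEM, receiverPubPEM string) (Client, error)
}

type clientFactoryKey struct{}

// FromContext returns the factory stored in ctx, falling back to the in-tree
// default, so callers need no explicit initialization.
func FromContext(ctx context.Context) ClientFactory {
	if f, ok := ctx.Value(clientFactoryKey{}).(ClientFactory); ok {
		return f
	}
	return defaultFactory{}
}

// WithFactory overrides the factory, e.g. from a test.
func WithFactory(ctx context.Context, f ClientFactory) context.Context {
	return context.WithValue(ctx, clientFactoryKey{}, f)
}

// defaultFactory would wrap the real HTTP-signature client shipped in-tree.
type defaultFactory struct{}

func (defaultFactory) WithKeys(senderPrivPEM, receiverPubPEM string) (Client, error) {
	return nil, nil // placeholder: construct the real client here
}
```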
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/4853
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Co-authored-by: Aravinth Manivannan <realaravinth@batsense.net>
Co-committed-by: Aravinth Manivannan <realaravinth@batsense.net>
- Adjust the `RepoRefByType` middleware to allow for commit SHAs that
are as short as 4 characters (the minimum that Git requires).
- Integration test added.
- Follow up to 4d76bbeda7
- Resolves #4781
Part of #24256.
Clear up old action logs to free up storage space.
Users will see a message indicating that the log has been cleared if
they view old tasks.
Screenshot: https://github.com/user-attachments/assets/9f0f3a3a-bc5a-402f-90ca-49282d196c22
Docs: https://gitea.com/gitea/docs/pulls/40
---------
Co-authored-by: silverwind <me@silverwind.io>
(cherry picked from commit 687c1182482ad9443a5911c068b317a91c91d586)
Conflicts:
custom/conf/app.example.ini
routers/web/repo/actions/view.go
trivial context conflict
Fix #31137.
Replace #31623 and #31697.
When migrating LFS objects, if any object failed (for example because some
objects are lost, which is not really critical), Gitea will stop
migrating LFS immediately but treat the migration as successful.
This PR checks the error according to the [LFS api
doc](https://github.com/git-lfs/git-lfs/blob/main/docs/api/batch.md#successful-responses).
> LFS object error codes should match HTTP status codes where possible:
>
> - 404 - The object does not exist on the server.
> - 409 - The specified hash algorithm disagrees with the server's
> acceptable options.
> - 410 - The object was removed by the owner.
> - 422 - Validation error.
If the error is `404`, it's safe to ignore it and continue migration.
Otherwise, stop the migration and mark it as failed to ensure data
integrity of LFS objects.
And maybe we should also ignore other errors (maybe `410`? I'm not sure
what the difference is between "does not exist" and "removed by the
owner"); we can add them later when some users report that they have
failed to migrate LFS because of an error which should be ignored.
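A hedged sketch of the per-object check; the struct shapes follow the LFS batch API linked above, while the function itself is illustrative of the new behaviour rather than the exact migration code:

```go
package lfs

import "fmt"

// BatchObjectError mirrors the per-object error of an LFS batch response.
type BatchObjectError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

type BatchObject struct {
	Oid   string            `json:"oid"`
	Error *BatchObjectError `json:"error,omitempty"`
}

// checkObject decides whether a failed object aborts the migration. A 404
// means the object no longer exists on the source server, so it is skipped and
// the migration continues; any other error marks the migration as failed
// instead of silently treating it as successful.
func checkObject(obj BatchObject) (skip bool, err error) {
	if obj.Error == nil {
		return false, nil
	}
	if obj.Error.Code == 404 {
		return true, nil
	}
	return false, fmt.Errorf("LFS object %s failed: %d %s", obj.Oid, obj.Error.Code, obj.Error.Message)
}
```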
(cherry picked from commit 09b56fc0690317891829906d45c1d645794c63d5)
There's already `initActionsTasks`; moving `registerActionsCleanup` into
it avoids an additional check for whether Actions is enabled.
And we don't really need `OlderThanConfig`.
(cherry picked from commit f989f464386139592b6911cad1be4c901eb97fe5)
The previous commit laid out the foundation of the quota engine; this
one builds on top of it and implements the actual enforcement.
Enforcement happens at the route decoration level, whenever possible. In
case of the API, when over quota, a 413 error is returned, with an
appropriate JSON payload. In case of web routes, a 413 HTML page is
rendered with similar information.
This implementation is for a **soft quota**: quota usage is checked
before an operation is to be performed, and the operation is *only*
denied if the user is already over quota. This makes it possible to go
over quota, but has the significant advantage of being practically
implementable within the current Forgejo architecture.
The goal of enforcement is to deny actions that can make the user go
over quota, and allow the rest. As such, deleting things should - in
almost all cases - be possible. A prime exception is deleting files via
the web UI: that creates a new commit, which in turn increases the repo
size and is thus denied if the user is over quota.
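A simplified sketch of what enforcement at the route decoration level can look like for an API route; the interface and helper names are illustrative, not the actual Forgejo middleware:

```go
package quota

import "net/http"

// Checker is a hypothetical view of the quota engine: it reports whether the
// given user is already over quota for a subject such as "size:repos:all".
type Checker interface {
	IsOverQuota(userID int64, subject string) (bool, error)
}

// EnforceAPI decorates an API route. If the doer is already over quota for the
// subject, the request is rejected with 413 and a JSON payload before the
// operation runs; size-reducing routes (deletions) are simply not decorated.
func EnforceAPI(c Checker, subject string, doerID func(*http.Request) int64, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		over, err := c.IsOverQuota(doerID(r), subject)
		if err != nil {
			http.Error(w, `{"message":"quota check failed"}`, http.StatusInternalServerError)
			return
		}
		if over {
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(http.StatusRequestEntityTooLarge) // 413
			_, _ = w.Write([]byte(`{"message":"quota exceeded"}`))
			return
		}
		next.ServeHTTP(w, r)
	})
}
```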
Limitations
-----------
Because we generally work at a route decorator level, and rarely
look *into* the operation itself, `size:repos:public` and
`size:repos:private` are not enforced at this level; the engine enforces
against `size:repos:all` instead. This will be improved in the future.
AGit does not play very well with this system, because AGit PRs count
toward the repo they're opened against, while in the GitHub-style fork +
pull model, they count against the fork. This too can be improved in the
future.
There's very little done on the UI side to guard against going over
quota. What this patch implements, is enforcement, not prevention. The
UI will still let you *try* operations that *will* result in a denial.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
This is an implementation of a quota engine, and the API routes to
manage its settings. This does *not* contain any enforcement code: this
is just the bedrock, the engine itself.
The goal of the engine is to be flexible and future proof: to be nimble
enough to build on it further, without having to rewrite large parts of
it.
It might feel a little more complicated than necessary, because the goal
was to be able to support scenarios only very few Forgejo instances
need, scenarios the vast majority of mostly smaller instances simply do
not care about. The goal is to support both big and small, and for that,
we need a solid, flexible foundation.
There are three big parts to the engine: counting quota use, setting
limits, and evaluating whether the usage is within the limits. Sounds
simple on paper, less so in practice!
Quota counting
==============
Quota is counted based on repo ownership, whenever possible, because
repo owners are in ultimate control over the resources they use: they
can delete repos, attachments, everything, even if they don't *own*
those themselves. They can clean up, and will always have the permission
and access required to do so. Were we to count quota based on the owning
user, that could lead to situations where a user is unable to free up
space, because they uploaded a big attachment to a repo that has since
been taken private. It's both fairer and much safer to count quota
against repo owners.
This means that if user A uploads an attachment to an issue opened
against organization O, that will count towards the quota of
organization O, rather than user A.
One's quota usage stats can be queried using the `/user/quota` API
endpoint. To figure out what's eating into it, the
`/user/repos?order_by=size`, `/user/quota/attachments`,
`/user/quota/artifacts`, and `/user/quota/packages` endpoints should be
consulted. There's also `/user/quota/check?subject=<...>` to check
whether the signed-in user is within a particular quota limit.
Quotas are counted based on sizes stored in the database.
Setting quota limits
====================
There are different "subjects" one can limit usage for. At this time,
only size-based limits are implemented, which are:
- `size:all`: As the name would imply, the total size of everything
Forgejo tracks.
- `size:repos:all`: The total size of all repositories (not including
LFS).
- `size:repos:public`: The total size of all public repositories (not
including LFS).
- `size:repos:private`: The total size of all private repositories (not
including LFS).
- `size:git:all`: The total size of all git data (including all
repositories, and LFS).
- `size:git:lfs`: The size of all git LFS data (either in private or
public repos).
- `size:assets:all`: The size of all assets tracked by Forgejo.
- `size:assets:attachments:all`: The size of all kinds of attachments
tracked by Forgejo.
- `size:assets:attachments:issues`: Size of all attachments attached to
issues, including issue comments.
- `size:assets:attachments:releases`: Size of all attachments attached
to releases. This does *not* include automatically generated archives.
- `size:assets:artifacts`: Size of all Action artifacts.
- `size:assets:packages:all`: Size of all Packages.
- `size:wiki`: Wiki size
Wiki size is currently not tracked, and the engine will always deem it
within quota.
These subjects are built into Rules, which set a limit on *all* subjects
within a rule. Thus, we can create a rule that says: "1GB limit on all
release assets, all packages, and git LFS, combined". For a rule to
stand, the total sum of all its subjects must be below the rule's limit.
Rules are in turn collected into groups. A group is just a name, and a
list of rules. For a group to stand, all of its rules must stand. Thus,
if we have a group with two rules, one that sets a combined 1GB limit on
release assets, all packages, and git LFS, and another rule that sets a
256MB limit on packages, then if the user has 512MB of packages, the group
will not stand, because the second rule deems it over quota. Similarly,
if the user has only 128MB of packages, but 900MB of release assets, the
group will not stand, because the combined size of packages and release
assets is over the 1GB limit of the first rule.
Groups themselves are collected into Group Lists. A group list stands
when *any* of the groups within stand. This allows an administrator to
set conservative defaults, but then place select users into additional
groups that increase some aspect of their limits.
To top it off, it is possible to set the default quota groups a user
belongs to in `app.ini`. If there's no explicit assignment, the engine
will use the default groups. This makes it possible to avoid having to
assign each and every user a list of quota groups; only those users who
need a different set of groups than the defaults have to be explicitly
assigned.
If a user has any quota groups assigned to them, the default list will
not be considered for them.
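The evaluation semantics described above can be condensed into a short sketch; the types and field names are illustrative, not the actual engine:

```go
package quota

// Used maps a subject (e.g. "size:assets:packages:all") to current usage in bytes.
type Used map[string]int64

// Rule: the SUM of usage across all of its subjects must stay at or below Limit.
type Rule struct {
	Subjects []string
	Limit    int64
}

func (r Rule) Stands(u Used) bool {
	var total int64
	for _, s := range r.Subjects {
		total += u[s]
	}
	return total <= r.Limit
}

// Group stands only if ALL of its rules stand.
type Group struct{ Rules []Rule }

func (g Group) Stands(u Used) bool {
	for _, r := range g.Rules {
		if !r.Stands(u) {
			return false
		}
	}
	return true
}

// A group list stands if ANY of its groups stand; this is what lets an admin
// keep conservative defaults while adding select users to more generous groups.
func GroupListStands(groups []Group, u Used) bool {
	for _, g := range groups {
		if g.Stands(u) {
			return true
		}
	}
	return false
}
```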
The management APIs
===================
This commit contains the engine itself, its unit tests, and the quota
management APIs. It does not contain any enforcement.
The APIs are documented in-code, and in the swagger docs, and the
integration tests can serve as an example on how to use them.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
- add package counter to repo/user/org overview pages
- add go unit tests for repo/user has/count packages
- add many more unit tests for packages model
- fix error for non-existing packages in DeletePackageByID and SetRepositoryLink
Replace a double select with a simple select.
The complication originates from the initial implementation which
deleted packages instead of selecting them. It was justified as a
workaround for a problem in MySQL, but it is just a waste of resources
when collecting a list of IDs.
- In the spirit of #4635
- Notify the owner when their account is being enrolled into TOTP. The
message changes depending on whether they have security keys or not.
- Integration test added.
- Regression of #4635
- The authentication mails weren't being sent with links to the
instance, because the wrong variable was used in the mail footer.
`$.AppUrl` should've been `AppUrl`.
- Unit test added.