Refactoring of #32211
This moves the PublicOnly() filter calculation next to the DB queries and
lets it be decided by the Doer.
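For illustration, a minimal sketch of the idea, with hypothetical types and helper names rather than the actual Forgejo ones: the query options carry the Doer, and the DB layer decides the PublicOnly restriction from it.
```go
package organization

// Illustrative types, not the exact Forgejo API: instead of every caller
// computing a PublicOnly flag, the query options carry the Doer and the DB
// layer derives the visibility restriction from it.

type User struct {
	ID      int64
	IsAdmin bool
}

type FindOrgMembersOpts struct {
	OrgID int64
	Doer  *User // nil means anonymous visitor
}

// PublicOnly reports whether only public memberships may be listed for this doer.
func (opts FindOrgMembersOpts) PublicOnly(isMember func(userID, orgID int64) bool) bool {
	if opts.Doer == nil {
		return true // anonymous visitors only see public members
	}
	return !opts.Doer.IsAdmin && !isMember(opts.Doer.ID, opts.OrgID)
}
```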
---
*Sponsored by Kithara Software GmbH*
(cherry picked from commit 43c252dfeaf9ab03c4db3e7ac5169bc0d69901ac)
Conflicts:
models/organization/org_test.go
models/organization/org_user_test.go
routers/web/org/home.go
Rather simple conflict resolution, but not trivial.
tests/integration/user_count_test.go had to be adapted (simple change)
because it does not exist in Gitea and uses the modified model.
- If a repository is forked to a private or limited user/organization,
the fork should not be visible in the list of forks to a doer who cannot
see that user/organization.
- Added integration testing for web and API route.
- Use the forked [binding](https://code.forgejo.org/go-chi/binding)
library. This library has two benefits: it removes the usage of
`github.com/goccy/go-json` (which has no benefit, as the minio library is
also using it), and it adds the `TrimSpace` feature, which trims the
spaces around the value received from the form during binding, before
validation.
Fix #28121
I did some tests and found that the `missing signature key` error is
caused by an incorrect `Content-Type` header. Gitea correctly sets the
`Content-Type` header when serving files.
348d1d0f32/routers/api/packages/container/container.go (L712-L717)
However, when `SERVE_DIRECT` is enabled, the `Content-Type` header may
be set to an incorrect value by the storage service. To fix this issue,
we can use query parameters to override response header values.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
![Screenshot of the AWS S3 GetObject documentation on response header override parameters](https://github.com/user-attachments/assets/f2ff90f0-f1df-46f9-9680-b8120222c555)
In this PR, I introduced a new parameter to the `URL` method so that
additional request parameters can be passed along.
```
URL(path, name string, reqParams url.Values) (*url.URL, error)
```
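For illustration, a minimal sketch of how such request parameters ultimately reach an S3-compatible backend through the MinIO client; the endpoint, bucket, object name, and credentials below are placeholders.
```go
package main

import (
	"context"
	"fmt"
	"net/url"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("s3.example.com", &minio.Options{
		Creds:  credentials.NewStaticV4("access-key", "secret-key", ""),
		Secure: true,
	})
	if err != nil {
		panic(err)
	}

	// Ask S3 to override the stored Content-Type when this pre-signed URL is used.
	reqParams := url.Values{}
	reqParams.Set("response-content-type", "application/vnd.oci.image.manifest.v1+json")

	u, err := client.PresignedGetObject(context.Background(),
		"gitea", "packages/container/some-blob", 5*time.Minute, reqParams)
	if err != nil {
		panic(err)
	}
	fmt.Println(u)
}
```
The `response-content-type` query parameter is part of the standard S3 GetObject API, so the backend serves the object with the requested header regardless of what was stored.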
---
Most S3-like services support specifying the content type when storing
objects. However, Gitea always uses `application/octet-stream`.
Therefore, I believe we also need to improve the `Save` method to
support storing objects with the correct content type.
b7fb20e73e/modules/storage/minio.go (L214-L221)
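A rough sketch of what such a `Save` improvement could look like with the MinIO client; the wrapper name and parameters are assumptions, not the actual API.
```go
package storage

import (
	"context"
	"io"

	"github.com/minio/minio-go/v7"
)

// saveWithContentType is a hypothetical variant of Save that passes the
// detected content type through instead of hard-coding application/octet-stream.
func saveWithContentType(ctx context.Context, client *minio.Client, bucket, path string,
	r io.Reader, size int64, contentType string) (int64, error) {
	info, err := client.PutObject(ctx, bucket, path, r, size,
		minio.PutObjectOptions{ContentType: contentType})
	if err != nil {
		return 0, err
	}
	return info.Size, nil
}
```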
(cherry picked from commit 0690cb076bf63f71988a709f62a9c04660b51a4f)
Conflicts:
- modules/storage/azureblob.go
Dropped the change, as we do not support Azure blob storage.
- modules/storage/helper.go
Resolved by adjusting their `discardStorage` to our
`DiscardStorage`
- routers/api/actions/artifacts.go
routers/api/actions/artifactsv4.go
routers/web/repo/actions/view.go
routers/web/repo/download.go
Resolved the conflicts by manually adding the new `nil`
parameter to the `storage.Attachments.URL()` calls.
Originally conflicted due to differences in the if expression
above these calls.
(cherry picked from commit f4d3aaeeb9e1b11c5495e4608a3f52f316c35758)
Conflicts:
- modules/charset/charset_test.go
Resolved by manually changing a `=` to `:=`, as per the
original patch. Conflict was due to `require.NoError`.
---
Conflict resolution: trivial; for `repo_attributes.go`, move where the
`IsErrCanceledOrKilled` check needs to happen, because of other changes
in this file.
To add some words to this change: it seems to mostly simplify the
error handling of git operations.
(cherry picked from commit e524f63d58900557d7d57fc3bcd19d9facc8b8ee)
- Add a permission check that the doer has write permission to the head
repository if 'delete branch after merge' is enabled when merging a pull
request (see the sketch after this list).
- Unify the checks in the web and API router to `DeleteBranchAfterMerge`.
- Added integration tests.
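A minimal sketch of the kind of check involved; the package and helper names follow the Gitea/Forgejo codebase as far as I know, so treat them as assumptions.
```go
package pull

import (
	"context"

	access_model "code.gitea.io/gitea/models/perm/access"
	repo_model "code.gitea.io/gitea/models/repo"
	"code.gitea.io/gitea/models/unit"
	user_model "code.gitea.io/gitea/models/user"
)

// canDeleteBranchAfterMerge reports whether the doer may delete the head
// branch after merging: they need write access to the head repository.
func canDeleteBranchAfterMerge(ctx context.Context, doer *user_model.User, headRepo *repo_model.Repository) (bool, error) {
	perm, err := access_model.GetUserRepoPermission(ctx, headRepo, doer)
	if err != nil {
		return false, err
	}
	return perm.CanWrite(unit.TypeCode), nil
}
```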
These settings make it possible to only display the repositories explore page.
Thanks to yp05327 and wxiaoguang !
---------
Co-authored-by: Giteabot <teabot@gitea.io>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
(cherry picked from commit 9206fbb55fd28f21720072fce6a36cc22277934c)
Conflicts:
- templates/explore/navbar.tmpl
Resolved by manually applying the last hunk to our template.
- If you select a portion of the comment, `Quote reply` will only quote
that portion and no longer copy-paste the whole text as it previously
did. This is achieved by using the `@github/quote-selection` package.
- There's preprocessing to ensure Forgejo-flavored markdown syntax is
preserved.
- e2e test added.
- Resolves #1342
This will result in better API clients generated from the OpenAPI docs
... for SearchIssues
---
*Sponsored by Kithara Software GmbH*
(cherry picked from commit d638067d3cb0a7f69b4d899f65b9be4940bd3e41)
Port of https://github.com/go-gitea/gitea/pull/32204
(cherry picked from commit d6d3c96e6555fc91b3e2ef21f4d8d7475564bb3e)
Conflicts:
routers/api/v1/api.go
services/context/api.go
trivial context conflicts
Fix #30898
We have an option `SearchByEmail`; enable it and then users can be
searched by email.
Also added a test for it.
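Roughly, the search path can then be used like this; the exact option and function names are my reading of the codebase and should be treated as assumptions.
```go
package user

import (
	"context"

	user_model "code.gitea.io/gitea/models/user"
)

// searchByEmail looks users up by email address via the regular search path,
// relying on the SearchByEmail option enabled by this change.
func searchByEmail(ctx context.Context, email string) ([]*user_model.User, int64, error) {
	return user_model.SearchUsers(ctx, &user_model.SearchUserOptions{
		Keyword:       email,
		SearchByEmail: true,
		Type:          user_model.UserTypeIndividual,
	})
}
```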
(cherry picked from commit 5d6d025c9b8d2abca9ec2bfdc795d1f0c1c6592d)
Resolves #20475
(cherry picked from commit 7e68bc88238104d2ee8b5a877fc1ad437f1778a4)
Conflicts:
tests/integration/pull_create_test.go
add missing testPullCreateDirectly from
c63060b130d34e3f03f28f4dccbf04d381a95c17 "Fix code owners will not be mentioned when a pull request comes from a forked repository (#30476)"
Fix #31423
(cherry picked from commit f4b8f6fc40ce2869135372a5c6ec6418d27ebfba)
Conflicts:
models/fixtures/comment.yml
comment fixtures have to be shifted because there is one more in Forgejo
A 500 status code was returned when passing a non-existent target to the
create release API. This change handles this error and returns
a 404 status code instead.
Discovered while working on #31840.
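Sketched very roughly, and with the handler wiring assumed rather than taken from the patch, the mapping looks like this.
```go
package release

import (
	"net/http"

	"code.gitea.io/gitea/modules/git"
)

// releaseTargetStatus maps a failure to resolve the release target to a 404
// instead of letting it surface as a 500 (sketch; helper wiring is assumed).
func releaseTargetStatus(err error) int {
	if git.IsErrNotExist(err) {
		return http.StatusNotFound // unknown branch, tag, or commit target
	}
	return http.StatusInternalServerError
}
```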
(cherry picked from commit f05d9c98c4cb95e3a8a71bf3e2f8f4529e09f96f)
- Follow up of #4819
- When no `ssh` executable is present, disable the UI and backend bits
that allow the creation of push mirrors that use SSH authentication, as
this feature requires the `ssh` binary (see the sketch after this list).
- Integration test added.
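A minimal sketch of how such a capability check can be done; the function name is illustrative.
```go
package mirror

import "os/exec"

// sshPushMirrorsSupported reports whether SSH-authenticated push mirrors can
// be offered at all: they shell out to the ssh client binary.
func sshPushMirrorsSupported() bool {
	_, err := exec.LookPath("ssh")
	return err == nil
}
```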
It loads the Commit with a temporarily opened GitRepo. This is incorrect:
the GitRepo should stay open for as long as the Commit can be used. This
mainly removes the usage of this function, as it's not needed.
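For context, the safe pattern looks roughly like this, with the `modules/git` calls as I understand them and the surrounding wiring simplified.
```go
package example

import (
	"context"

	"code.gitea.io/gitea/modules/git"
)

// The GitRepo must stay open for as long as the returned Commit is used,
// so the caller owns the repository handle and closes it when done.
func withCommit(ctx context.Context, repoPath, sha string, use func(*git.Commit) error) error {
	gitRepo, err := git.OpenRepository(ctx, repoPath)
	if err != nil {
		return err
	}
	defer gitRepo.Close() // closed only after the commit is no longer needed

	commit, err := gitRepo.GetCommit(sha)
	if err != nil {
		return err
	}
	return use(commit)
}
```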
- Continuation of https://github.com/go-gitea/gitea/pull/18835 (by
@Gusted, so it's fine to change copyright holder to Forgejo).
- Add the option to use SSH for push mirrors; this allows the deploy
keys feature to be used instead of tokens, which cannot be limited to a
specific repository. The private key is stored
encrypted (via the `keying` module) in the database and NEVER given to
the user, to avoid accidental exposure and misuse.
- CAVEAT: This does require the `ssh` binary to be present, which may
not be available in containerized environments. This could be solved by
adding an SSH client into Forgejo itself and using the Forgejo binary as
the SSH command, but that should be done in another PR (see the sketch
after this list).
- CAVEAT: Mirroring of LFS content is not supported; this would require
the previously stated problem to be solved, due to LFS authentication (an
attempt was made at forgejo/forgejo#2544).
- Integration test added.
- Resolves #4416
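To illustrate the `ssh` binary caveat, pushing a mirror over SSH boils down to something like the following sketch; the wiring is simplified, and the real implementation keeps the key encrypted in the database, only materializing it temporarily.
```go
package mirror

import (
	"context"
	"fmt"
	"os"
	"os/exec"
)

// pushOverSSH shells out to git, pointing GIT_SSH_COMMAND at the ssh binary
// with a dedicated private key. This is why the ssh executable must exist.
func pushOverSSH(ctx context.Context, repoPath, remote, keyFile string) error {
	cmd := exec.CommandContext(ctx, "git", "push", "--mirror", remote)
	cmd.Dir = repoPath
	cmd.Env = append(os.Environ(),
		fmt.Sprintf("GIT_SSH_COMMAND=ssh -i %s -o IdentitiesOnly=yes -o StrictHostKeyChecking=accept-new", keyFile),
	)
	return cmd.Run()
}
```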
This reverts commit 4ed372af13.
This change from Gitea was not considered by the Forgejo UI team and there is a consensus that it feels like a regression.
The test which was added in that commit is kept and modified to test that reviews can successfully be submitted on closed and merged PRs.
Closes forgejo/design#11
ForkRepository performs two different functions:
* The fork itself, if it does not already exist
* Updates and notifications after the fork is performed
The function is split to reflect that and is otherwise unmodified.
The two functions are given different names to:
* clarify which integration tests provide coverage
* distinguish them from the notification method of the same name
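In outline, the split has roughly this shape; the names below are illustrative and may not match the actual ones.
```go
package repository

// Before: ForkRepository both created the fork and fired off updates and
// notifications. After the split, the two concerns are separate functions.

// ForkRepositoryIfNotExists performs the fork itself, unless it already exists.
func ForkRepositoryIfNotExists( /* ctx, doer, owner, opts */ ) error {
	// ... create the fork ...
	return nil
}

// ForkRepositoryAndNotify forks (if needed) and then runs the follow-up
// updates and notifications.
func ForkRepositoryAndNotify( /* ctx, doer, owner, opts */ ) error {
	if err := ForkRepositoryIfNotExists(); err != nil {
		return err
	}
	// ... update counters, send notifications ...
	return nil
}
```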
An instance-wide actor is required for outgoing signed requests that are
done on behalf of the instance, rather than on behalf of other actors.
Such things include updating profile information, or fetching public
keys.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
Fix #31707.
It's split from #31724.
Although #31724 could also fix #31707, it has changed a lot, so it's not
a good idea to backport it.
(cherry picked from commit 81fa471119a6733d257f63f8c2c1f4acc583d21b)
The previous commit laid out the foundation of the quota engine, this
one builds on top of it, and implements the actual enforcement.
Enforcement happens at the route decoration level, whenever possible. In
the case of the API, when over quota, a 413 error is returned with an
appropriate JSON payload. In the case of web routes, a 413 HTML page is
rendered with similar information.
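Conceptually, the route decoration is a middleware of roughly this shape; the types and names are illustrative, not the engine's actual API.
```go
package quota

import (
	"encoding/json"
	"net/http"
)

// Checker answers whether the given user is within the named quota subject.
type Checker interface {
	IsWithinQuota(userID int64, subject string) (bool, error)
}

// EnforceQuota denies the request with 413 if the user is already over quota
// for the given subject; otherwise the operation is allowed to proceed.
func EnforceQuota(c Checker, subject string, userID func(*http.Request) int64, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ok, err := c.IsWithinQuota(userID(r), subject)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		if !ok {
			// Soft quota: the user is already over quota, so deny the operation.
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(http.StatusRequestEntityTooLarge) // 413
			_ = json.NewEncoder(w).Encode(map[string]string{
				"message": "quota exceeded",
				"subject": subject,
			})
			return
		}
		next.ServeHTTP(w, r)
	})
}
```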
This implementation is for a **soft quota**: quota usage is checked
before an operation is to be performed, and the operation is *only*
denied if the user is already over quota. This makes it possible to go
over quota, but has the significant advantage of being practically
implementable within the current Forgejo architecture.
The goal of enforcement is to deny actions that can make the user go
over quota, and allow the rest. As such, deleting things should - in
almost all cases - be possible. A prime exemption is deleting files via
the web ui: that creates a new commit, which in turn increases repo
size, thus, is denied if the user is over quota.
Limitations
-----------
Because we generally work at a route decorator level, and rarely
look *into* the operation itself, `size:repos:public` and
`size:repos:private` are not enforced at this level, the engine enforces
against `size:repos:all`. This will be improved in the future.
AGit does not play very well with this system, because AGit PRs count
toward the repo they're opened against, while in the GitHub-style fork +
pull model, it counts against the fork. This too, can be improved in the
future.
There's very little done on the UI side to guard against going over
quota. What this patch implements, is enforcement, not prevention. The
UI will still let you *try* operations that *will* result in a denial.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
This is an implementation of a quota engine, and the API routes to
manage its settings. This does *not* contain any enforcement code: this
is just the bedrock, the engine itself.
The goal of the engine is to be flexible and future proof: to be nimble
enough to build on it further, without having to rewrite large parts of
it.
It might feel a little more complicated than necessary, because the goal
was to be able to support scenarios only very few Forgejo instances
need, scenarios the vast majority of mostly smaller instances simply do
not care about. The goal is to support both big and small, and for that,
we need a solid, flexible foundation.
There are three big parts to the engine: counting quota use, setting
limits, and evaluating whether the usage is within the limits. Sounds
simple on paper, less so in practice!
Quota counting
==============
Quota is counted based on repo ownership, whenever possible, because
repo owners are in ultimate control over the resources they use: they
can delete repos, attachments, everything, even if they don't *own*
those themselves. They can clean up, and will always have the permission
and access required to do so. If we counted quota against the owning
user instead, that could lead to situations where a user is unable to
free up space, because they uploaded a big attachment to a repo that has
since been taken private. It's both fairer and much safer to count quota
against repo owners.
This means that if user A uploads an attachment to an issue opened
against organization O, that will count towards the quota of
organization O, rather than user A.
One's quota usage stats can be queried using the `/user/quota` API
endpoint. To figure out what's eating into it, the
`/user/repos?order_by=size`, `/user/quota/attachments`,
`/user/quota/artifacts`, and `/user/quota/packages` endpoints should be
consulted. There's also `/user/quota/check?subject=<...>` to check
whether the signed-in user is within a particular quota limit.
Quotas are counted based on sizes stored in the database.
Setting quota limits
====================
There are different "subjects" one can limit usage for. At this time,
only size-based limits are implemented, which are:
- `size:all`: As the name would imply, the total size of everything
Forgejo tracks.
- `size:repos:all`: The total size of all repositories (not including
LFS).
- `size:repos:public`: The total size of all public repositories (not
including LFS).
- `size:repos:private`: The total size of all private repositories (not
including LFS).
- `size:git:all`: The total size of all git data (including all
repositories, and LFS).
- `size:git:lfs`: The size of all git LFS data (either in private or
public repos).
- `size:assets:all`: The size of all assets tracked by Forgejo.
- `size:assets:attachments:all`: The size of all kinds of attachments
tracked by Forgejo.
- `size:assets:attachments:issues`: Size of all attachments attached to
issues, including issue comments.
- `size:assets:attachments:releases`: Size of all attachments attached
to releases. This does *not* include automatically generated archives.
- `size:assets:artifacts`: Size of all Action artifacts.
- `size:assets:packages:all`: Size of all Packages.
- `size:wiki`: Wiki size
Wiki size is currently not tracked, and the engine will always deem it
within quota.
These subjects are built into Rules, which set a limit on *all* subjects
within a rule. Thus, we can create a rule that says: "1Gb limit on all
release assets, all packages, and git LFS, combined". For a rule to
stand, the total sum of all subjects must be below the rule's limit.
Rules are in turn collected into groups. A group is just a name, and a
list of rules. For a group to stand, all of its rules must stand. Thus,
if we have a group with two rules, one that sets a combined 1Gb limit on
release assets, all packages, and git LFS, and another rule that sets a
256Mb limit on packages, if the user has 512Mb of packages, the group
will not stand, because the second rule deems it over quota. Similarly,
if the user has only 128Mb of packages, but 900Mb of release assets, the
group will not stand, because the combined size of packages and release
assets is over the 1Gb limit of the first rule.
Groups themselves are collected into Group Lists. A group list stands
when *any* of the groups within stand. This allows an administrator to
set conservative defaults, but then place select users into additional
groups that increase some aspect of their limits.
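The evaluation logic can be pictured with a small sketch using illustrative types, not the engine's actual ones.
```go
package quota

// Rule: the sum of usage over all of its subjects must stay at or below Limit.
type Rule struct {
	Limit    int64
	Subjects []string
}

func (r Rule) Stands(usage map[string]int64) bool {
	var sum int64
	for _, s := range r.Subjects {
		sum += usage[s]
	}
	return sum <= r.Limit
}

// Group: stands only if all of its rules stand.
type Group struct{ Rules []Rule }

func (g Group) Stands(usage map[string]int64) bool {
	for _, r := range g.Rules {
		if !r.Stands(usage) {
			return false
		}
	}
	return true
}

// GroupList: stands if any of its groups stands.
type GroupList []Group

func (l GroupList) Stands(usage map[string]int64) bool {
	for _, g := range l {
		if g.Stands(usage) {
			return true
		}
	}
	// Assumption for this sketch: an empty list does not restrict the user.
	return len(l) == 0
}
```
In this shape, the earlier example maps to a Group with two Rules: 512Mb of packages makes the 256Mb rule, and therefore the whole group, fail, while a separate, more generous group in the same GroupList could still make the list stand.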
To top it off, it is possible to set the default quota groups a user
belongs to in `app.ini`. If there's no explicit assignment, the engine
will use the default groups. This makes it possible to avoid having to
assign each and every user a list of quota groups; only those users who
need a different set of groups than the defaults have to be explicitly
assigned one.
If a user has any quota groups assigned to them, the default list will
not be considered for them.
The management APIs
===================
This commit contains the engine itself, its unit tests, and the quota
management APIs. It does not contain any enforcement.
The APIs are documented in-code and in the swagger docs, and the
integration tests can serve as an example of how to use them.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>