Part of #27065
This reduces the usage of `db.DefaultContext`. I think I've got enough
files for the first PR. When this is merged, I will continue working on
this.
Considering how many files this PR affects, I hope it won't take too long
to merge, so I don't end up in merge conflict hell.
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Since the issue indexer has been refactored, the issue overview webpage
is built by the `buildIssueOverview` function and the underlying
`indexer.Search` and `GetIssueStats` functions instead of
`GetUserIssueStats`, so that function is no longer used.
I moved the relevant tests to `indexer_test.go` and since the search
option changed from `IssueOptions` to `SearchOptions`, most of the tests
are useless now.
We need more tests for the db indexer, because those tests are closely
tied to the issue overview webpage and this page currently has several
bugs.
Any advice about those test cases is appreciated.
---------
Co-authored-by: CaiCandong <50507092+CaiCandong@users.noreply.github.com>
- In org mode you can specify a description for media via the following
syntax: `[[description][media link]]`. The description is then used as
the title or alt text.
- This patch fixes the rendering of the description by separating the
description and non-description cases and using `org.String()`.
- Added unit tests.
- Inspired by
6eb20dbda9/org/html_writer.go (L406-L427)
- Resolves https://codeberg.org/Codeberg/Community/issues/848
(cherry picked from commit 8b8aab8311)
Co-authored-by: Gusted <postmaster@gusted.xyz>
Fix #26723
Add `ChangeDefaultBranch` to the `notifier` interface and implement it
in `indexerNotifier`. So when changing the default branch,
`indexerNotifier` sends a message to the `indexer queue` to update the
index.
---------
Co-authored-by: techknowlogick <matti@mdranta.net>
Should BucketExists (HeadBucket) fail because of an error related to
the connection rather than the existence of the bucket, no information
is available and the admin is left guessing.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
> This action is useful to determine if a bucket exists and you have
> permission to access it. The action returns a 200 OK if the bucket
> exists and you have permission to access it.
>
> If the bucket does not exist or you do not have permission to access
> it, the HEAD request returns a generic 400 Bad Request, 403
> Forbidden or 404 Not Found code. A message body is not included, so
> you cannot determine the exception beyond these error codes.
GetBucketVersioning is used instead, exclusively to assert that using
the connection does not return a BadRequest. If it does, NewMinioStorage
logs an error and returns. Otherwise it keeps going, knowing that
BucketExists is not going to fail for reasons unrelated to the existence
of the bucket or the permission to access it.
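A minimal sketch of that probe, assuming minio-go v7 (function and variable names here are illustrative, not the actual NewMinioStorage code):

```go
package storage

import (
	"context"
	"log"
	"net/http"

	"github.com/minio/minio-go/v7"
)

// checkMinioConnection is a hypothetical helper: it only cares whether talking to
// the endpoint yields a 400 Bad Request, which HeadBucket would hide behind a
// generic "bucket does not exist" answer.
func checkMinioConnection(ctx context.Context, client *minio.Client, bucket string) error {
	_, err := client.GetBucketVersioning(ctx, bucket)
	if resp := minio.ToErrorResponse(err); resp.StatusCode == http.StatusBadRequest {
		log.Printf("invalid MinIO configuration for bucket %q: %v", bucket, resp)
		return err
	}
	// Any other outcome (including "no such bucket") is left to the later
	// BucketExists logic, which can now trust its own error codes.
	return nil
}
```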
(cherry picked from commit d1df4b3bc62e5e61893a923f1c4b58f084eb03af)
Refs: https://codeberg.org/forgejo/forgejo/issues/1338
Unfortunately, when a system setting hasn't been stored in the database,
it cannot be cached.
Meanwhile, this PR also uses the context cache for push email avatar
display, which avoids reading the user table by email address again and again.
According to my local test, this should reduce the dashboard elapsed time
from 150ms to 80ms.
If the AppURL (ROOT_URL) is an HTTPS URL, then COOKIE_SECURE's
default value should be true.
And if a user visits an "http" site with an "https" AppURL, they won't be
able to log in, and they should be warned. The only problem is
that the "language" can't be set either in such a case; I don't think that
is a serious problem, and it could be fixed easily if needed.
![image](https://github.com/go-gitea/gitea/assets/2114189/7bc9a859-dcc1-467d-bc7c-1dd6a10389e3)
A set of terminology, along with a broader description, can help more
people engage with the Gitea queue system, providing insights and
ensuring its correct use.
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
This feature was removed by #22219 to avoid a possible CSRF attack.
This PR brings reverse proxy auth for the API back, but disabled by default.
To prevent a possible CSRF attack, the responsibility lies with the
reverse proxy, not Gitea itself.
Those who want to enable `ENABLE_REVERSE_PROXY_AUTHENTICATION_API`
should know what they are doing.
---------
Co-authored-by: Giteabot <teabot@gitea.io>
Currently, Artifact does not have an expiration and automatic cleanup
mechanism, and this feature needs to be added. It contains the following
key points:
- [x] add global artifact retention days option in config file. Default
value is 90 days.
- [x] add cron task to clean up expired artifacts. It should run once a
day.
- [x] support custom retention period from `retention-days: 5` in
`upload-artifact@v3`.
- [x] artifacts link in actions view should be non-clickable text when
expired.
1. The `og:description` should be "a one to two sentence description of
your object".
   * It shouldn't output all of the user-provided content, which could be
pretty huge.
   * Maybe it only needs at most 300 bytes (see the sketch below the list).
2. Do not render the commit message as HTML.
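For illustration, clamping a description to roughly 300 bytes without splitting a UTF-8 rune could look like the sketch below (an assumption about the approach, not the merged code):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncateDescription cuts s down to at most limit bytes, backing up so the cut
// never lands in the middle of a multi-byte rune.
func truncateDescription(s string, limit int) string {
	if len(s) <= limit {
		return s
	}
	cut := limit
	for cut > 0 && !utf8.RuneStart(s[cut]) {
		cut--
	}
	return s[:cut] + "…"
}

func main() {
	fmt.Println(truncateDescription("a very long user-provided commit message that keeps going and going…", 32))
}
```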
- While doing some sanity checks over OpenSSH's code for how it handles
certificate authentication, I stumbled on a condition that checks that the
certificate type is really a user certificate during server-side
authentication. This check seems to be a formality, just for the
sake of good domain separation, because user and host certificates
don't differ in their generation, verification, or the flags that can be
included.
- Add this check to the built-in SSH server to stay close to the
unwritten SSH specification.
- This is a breaking change for setups where the built-in SSH server is
being used and, for some reason, host certificates were being used for
authentication.
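A rough sketch of that kind of server-side check with golang.org/x/crypto/ssh (illustrative only, not the exact code added to the built-in server):

```go
package ssh

import (
	"fmt"

	gossh "golang.org/x/crypto/ssh"
)

// rejectNonUserCert sketches the added check: certificates are only accepted for
// authentication when they are user certificates, mirroring OpenSSH's behavior.
func rejectNonUserCert(key gossh.PublicKey) (*gossh.Certificate, error) {
	cert, ok := key.(*gossh.Certificate)
	if !ok {
		return nil, fmt.Errorf("public key is not a certificate")
	}
	// User and host certificates are generated and verified the same way, so this
	// is mostly about clean domain separation.
	if cert.CertType != gossh.UserCert {
		return nil, fmt.Errorf("certificate type %d is not a user certificate", cert.CertType)
	}
	return cert, nil
}
```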
(cherry picked from commit de35b141b7)
Refs: https://codeberg.org/forgejo/forgejo/pulls/1172
## ⚠️ BREAKING ⚠️
Like OpenSSH, the built-in SSH server will now only accept SSH user
certificates, not host certificates.
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-authored-by: Giteabot <teabot@gitea.io>
1. The old `prepareQueryArg` did double-unescaping of form values.
2. By the way, remove the unnecessary `ctx.Flash = ...` in
`MockContext`.
Co-authored-by: Giteabot <teabot@gitea.io>
Just like `models/unittest`, the testing helper functions should be in a
separate package: `contexttest`
And complete the TODO:
> // TODO: move this function to other packages, because it depends on the "models" package
Backtick syntax now works in repo descriptions too. Also, I replaced the
CSS for this with a new single class, making it more flexible and not
dependent on a parent. Also, very slightly reduced the font size from 16.8px
to 16px.
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
From the Go specification:
> "1. For a nil slice, the number of iterations is 0."
https://go.dev/ref/spec#For_range
Therefore, an additional nil check before the loop is unnecessary.
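In other words (a trivial example, not code from the PR):

```go
package main

import "fmt"

func main() {
	var files []string // nil slice
	for _, f := range files {
		fmt.Println(f) // never executed: a nil slice ranges zero times
	}
	fmt.Println("done")
}
```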
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Related to: #8312 #26491
In migration v109, we only added the new column `CanCreateOrgRepo` to the
Team table, but did not initialize its value.
This may cause bugs like #26491.
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
According to the GitHub API Spec:
https://docs.github.com/en/rest/actions/secrets?apiVersion=2022-11-28#create-or-update-an-organization-secret
Merge the Create and Update secret operations into a single API.
- Remove the `CreateSecretOption` struct and replace it with
`CreateOrUpdateSecretOption` in `modules/structs/secret.go`
- Update the `CreateOrUpdateOrgSecret` function in
`routers/api/v1/org/action.go` to use `CreateOrUpdateSecretOption`
instead of `UpdateSecretOption`
- Remove the `CreateOrgSecret` function in
`routers/api/v1/org/action.go` and replace it with
`CreateOrUpdateOrgSecret`
- Update the Swagger documentation in
`routers/api/v1/swagger/options.go` and `templates/swagger/v1_json.tmpl`
to reflect the changes in the struct names and function names
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
The web context (modules/context.Context) is quite complex, and it's
difficult for callers to initialize it correctly.
This PR introduces a `NewWebContext` function to make sure the web
context has the same behavior in different cases.
- Resolves https://codeberg.org/forgejo/forgejo/issues/580
- Returns an `upload_field` in any release API response, which points to
the API URL for uploading new assets.
- Adds unit tests.
- Adds integration tests to verify the URL is returned correctly and that
the upload endpoint actually works.
---------
Co-authored-by: Gusted <postmaster@gusted.xyz>
Hi,
We'd like to add merged files to the GetCommitFileStatus function so the
API returns the list of all the files associated with a merged pull
request commit, like the GitHub API does.
The list of affectedFiles for an API commit is fetched from the toCommit()
function in routers/api/v1/repo/commits.go, and the API was returning no
files in the case of a pull request with no conflict, or just the files
associated with the conflict resolution, but NOT the full list of merged
files.
This could lead to situations where a CI polling a repo for changes
misses some file changes, because the API returns an empty or partial
list for such merged pull requests. (Hope this makes sense :) )
NOTE: I'd like to add a unit test in
integrations/api_repo_git_commits_test.go but failed to understand how
to add my own test bare repo, so I can test a merged pull request commit
and check affectedFiles.
Is there a merged pull request in there that I could use maybe?
Could someone please direct me to the relevant resources with
information on how to do that?
Thanks for your time,
Laurent.
---------
Co-authored-by: Thomas Desveaux <desveaux.thomas@gmail.com>
Replace #22751
1. Only supports the default branch in the repository settings.
2. Autoloads schedule data from the schedule table after starting the
service.
3. Supports special syntax like `@yearly`, `@monthly`, `@weekly`,
`@daily`, `@hourly`.
## How to use
See the [GitHub Actions
document](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule)
for getting more detailed information.
```yaml
on:
  schedule:
    - cron: '30 5 * * 1,3'
    - cron: '30 5 * * 2,4'

jobs:
  test_schedule:
    runs-on: ubuntu-latest
    steps:
      - name: Not on Monday or Wednesday
        if: github.event.schedule != '30 5 * * 1,3'
        run: echo "This step will be skipped on Monday and Wednesday"
      - name: Every time
        run: echo "This step will always run"
```
Signed-off-by: Bo-Yi.Wu <appleboy.tw@gmail.com>
---------
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR has multiple parts, and I didn't split them because
it's not easy to test them separately since they are all about the
dashboard page for issues.
1. Support counting issues via indexer to fix #26361
2. Fix repo selection so it also fixes #26653
3. Keep keywords in filter links.
The first two are regressions of #26012.
After:
https://github.com/go-gitea/gitea/assets/9418365/71dfea7e-d9e2-42b6-851a-cc081435c946
Thanks to @CaiCandong for helping with some tests.
- Add a new `CreateSecretOption` struct for creating secrets
- Implement a `CreateOrgSecret` function to create a secret in an
organization
- Add a new route in `api.go` to handle the creation of organization
secrets
- Update the Swagger template to include the new `CreateOrgSecret` API
endpoint
---------
Signed-off-by: appleboy <appleboy.tw@gmail.com>
Previously, `err` was defined above, checked for `err == nil`, and used
nowhere else.
Hence, the result of `convertMinioErr` would always be `nil`.
This led to an NPE further down the line.
That is not intentional: it should convert the error of the most recent
operation, not one of its predecessors.
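A stripped-down reconstruction of the bug shape (illustrative only; the helper names stand in for the real storage code):

```go
package main

import (
	"errors"
	"fmt"
)

func open() (string, error)      { return "object", nil }
func stat(string) (int, error)   { return 0, errors.New("stat failed") }
func convertErr(err error) error { return err } // stands in for convertMinioErr

func buggy() (int, error) {
	name, err := open()
	if err != nil {
		return 0, convertErr(err)
	}
	size, _ := stat(name) // error of the most recent operation is discarded
	// err is still the nil value checked above, so this returns (0, nil) even
	// though stat failed, and the caller later dereferences a missing result.
	return size, convertErr(err)
}

func fixed() (int, error) {
	name, err := open()
	if err != nil {
		return 0, convertErr(err)
	}
	size, err := stat(name) // capture the latest error instead
	return size, convertErr(err)
}

func main() {
	_, err := buggy()
	fmt.Println("buggy returns:", err) // nil — the failure is silently lost
	_, err = fixed()
	fmt.Println("fixed returns:", err) // "stat failed"
}
```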
Found through
https://discord.com/channels/322538954119184384/322538954119184384/1143185780206993550.
- Added new tests to cover corner cases
- Replace existing regex with new one
Closes #26551
---
As @silverwind suggested, I started from
[validate-npm-package-name](https://github.com/npm/validate-npm-package-name),
but found this solution too complicated.
Then I tried to fix the existing regex myself, but concluded that excluding
all restricted symbols is harder than allowing only permitted symbols.
Then I searched a bit more and found
[package-name-regex](https://github.com/dword-design/package-name-regex),
and the regex from it works for all new test cases.
Let me know if more information or help with this PR is needed.
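For reference, the allow-list pattern from package-name-regex looks roughly like the following (the exact regex adopted in this PR may differ):

```go
package main

import (
	"fmt"
	"regexp"
)

// Roughly the package-name-regex pattern, with the hyphens placed at the end of
// the character classes so they are unambiguously literal.
var npmNamePattern = regexp.MustCompile(`^(@[a-z0-9~-][a-z0-9._~-]*/)?[a-z0-9~-][a-z0-9._~-]*$`)

func main() {
	for _, name := range []string{"@scope/some-package", "some_package", "Invalid!Name"} {
		fmt.Printf("%-20s %v\n", name, npmNamePattern.MatchString(name))
	}
}
```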
Fix #26536
Follow #26012
Whatever the comment type is, always update the issue indexer. So the
issue indexer will be updated when there is a change in Status,
Assignee, Label, and so on.
I added the logic for `NotifyUpdateComment`, but missed it for
`NotifyCreateIssueComment` and `NotifyDeleteComment`.
- Add a new function `CountOrgSecrets` in the file
`models/secret/secret.go`
- Add a new file `modules/structs/secret.go`
- Add a new function `ListActionsSecrets` in the file
`routers/api/v1/api.go`
- Add a new file `routers/api/v1/org/action.go`
- Add a new function `listActionsSecrets` in the file
`routers/api/v1/org/action.go`
go-sdk: https://gitea.com/gitea/go-sdk/pulls/629
---------
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <matti@mdranta.net>
Co-authored-by: Giteabot <teabot@gitea.io>
"ogg" is just a "container" format for audio and video.
Golang's `DetectContentType` only reports "application/ogg" for
potential ogg files.
Actually we could do some more guessing to see whether it is an audio file
or a video file.
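A sketch of what that extra guessing could look like (an assumption about the approach, not necessarily the merged detection code):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// sniffOgg refines "application/ogg" by looking for codec markers that appear in
// the first Ogg pages: Theora means video, Vorbis/Opus/FLAC mean audio.
func sniffOgg(data []byte) string {
	ct := http.DetectContentType(data)
	if ct != "application/ogg" {
		return ct
	}
	header := data
	if len(header) > 256 {
		header = header[:256]
	}
	switch {
	case bytes.Contains(header, []byte("\x80theora")):
		return "video/ogg"
	case bytes.Contains(header, []byte("\x01vorbis")),
		bytes.Contains(header, []byte("OpusHead")),
		bytes.Contains(header, []byte("\x7fFLAC")):
		return "audio/ogg"
	default:
		return "application/ogg"
	}
}

func main() {
	fmt.Println(sniffOgg([]byte("OggS\x00\x02...\x01vorbis..."))) // audio/ogg
}
```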
## Archived labels
This adds the structure to allow for archived labels.
Archived labels are, just like closed milestones or projects, a medium to hide information without deleting it.
It is especially useful if there are outdated labels that should no longer be used without deleting the label entirely.
## Changes
1. UI and API have been equipped with the support to mark a label as archived
2. The time when a label has been archived will be stored in the DB
## Outsourced for the future
There's no special handling for archived labels at the moment.
This will be done in the future.
## Screenshots
![image](https://github.com/go-gitea/gitea/assets/80308335/208f95cd-42e4-4ed7-9a1f-cd2050a645d4)
![image](https://github.com/go-gitea/gitea/assets/80308335/746428e0-40bb-45b3-b992-85602feb371d)
Part of https://github.com/go-gitea/gitea/issues/25237
---------
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
This PR rewrites the `getStorage` function and makes it clearer.
It includes tests from #26435, thanks @earl-warren
---------
Co-authored-by: Earl Warren <contact@earl-warren.org>
Close stdout correctly for "git blame", otherwise a failed "git blame"
would cause the request to hang forever.
And `os.Stderr` should never (or at least seldom) be used as a git command's stderr.
When users put secrets into a file (GITEA__sec__KEY__FILE), a trailing
newline is sometimes hard to avoid (e.g. with echo/vim/...).
So the last newline should be removed when reading, which makes it easier
for users to maintain the secret files.
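A minimal sketch of the trimming behaviour (illustrative; not the exact code that reads `GITEA__*__FILE` values):

```go
package setting

import (
	"os"
	"strings"
)

// readSecretFile returns the file content with a single trailing newline removed,
// so values written by echo/vim/etc. match what the user actually typed.
func readSecretFile(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	value := strings.TrimSuffix(string(data), "\n")
	value = strings.TrimSuffix(value, "\r") // also handle CRLF
	return value, nil
}
```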
Co-authored-by: Giteabot <teabot@gitea.io>
For some reason, the client_id and secret may not have permission to
create a bucket, so now we check whether the bucket exists first and only
try to create it if it doesn't.
Try to fix #25984
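A sketch of that check-then-create flow with minio-go v7 (names are illustrative, not the exact Gitea code):

```go
package storage

import (
	"context"

	"github.com/minio/minio-go/v7"
)

// ensureBucket only attempts creation when the bucket is genuinely missing, so
// credentials that lack CreateBucket permission still work with pre-created buckets.
func ensureBucket(ctx context.Context, client *minio.Client, bucket, region string) error {
	exists, err := client.BucketExists(ctx, bucket)
	if err != nil {
		return err
	}
	if exists {
		return nil
	}
	return client.MakeBucket(ctx, bucket, minio.MakeBucketOptions{Region: region})
}
```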
Co-authored-by: silverwind <me@silverwind.io>
In the `RepoRefForAPI()` context function `CommitID` is not set if `ref`
is used. It is set correctly for other if/else branches where `Commit`
is set. It doesn't appear that any routes that use `RepoRefForAPI()`
also use `CommitID` but that may be the case in the future.
## Changes
- Sets `ctx.Repo.CommitID` when `ref` is explicitly used for api routes
that use `RepoRefForAPI()`
The MinIO client isn't redirecting to the correct AWS endpoint if a
non-default data center is used.
In my use case I created an AWS bucket in the `eu-central-1` region.
Because the client's region is not initialized, the default `us-east-1`
API endpoint is used, returning a `301 Moved Permanently` response that's
not handled properly by the MinIO client. This in turn breaks using S3
storage on AWS, as the `BucketExists()` call fails with the HTTP moved
error.
MinIO client trace shows the issue:
```text
---------START-HTTP---------
HEAD / HTTP/1.1
Host: xxxxxxxxxxx-prod-gitea-data.s3.dualstack.us-east-1.amazonaws.com
User-Agent: MinIO (windows; amd64) minio-go/v7.0.61
Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20230809/accesspoint.eu-central-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20230809T141143Z
HTTP/1.1 301 Moved Permanently
Connection: close
Content-Type: application/xml
Date: Wed, 09 Aug 2023 14:11:43 GMT
Server: AmazonS3
X-Amz-Bucket-Region: eu-central-1
X-Amz-Id-2: UK7wfeYi0HcTcytNvQ3wTAZ5ZP1mOSMnvRZ9Fz4xXzeNsS47NB/KfFx2unFxo3L7XckHpMNPPVo=
X-Amz-Request-Id: S1V2MJV8SZ11GEVN
---------END-HTTP---------
```
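A minimal sketch of pinning the client to the bucket's region with minio-go v7 (illustrative names; presumably the actual fix wires a region setting through to this option):

```go
package storage

import (
	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

// newS3Client pins the client to the bucket's region; without Region the client
// signs requests for the default us-east-1 endpoint and hits the 301 redirect.
func newS3Client(endpoint, accessKey, secretKey, region string) (*minio.Client, error) {
	return minio.New(endpoint, &minio.Options{
		Creds:  credentials.NewStaticV4(accessKey, secretKey, ""),
		Secure: true,
		Region: region, // e.g. "eu-central-1"
	})
}
```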
Co-authored-by: Heiko Besemann <heiko.besemann@qbeyond.de>
This PR is an extended implementation of #25189 and builds upon the
proposal by @hickford in #25653, utilizing some ideas proposed
internally by @wxiaoguang.
Mainly, this PR consists of a mechanism to pre-register OAuth2
applications on startup, which can be enabled or disabled by modifying
the `[oauth2].DEFAULT_APPLICATIONS` parameter in app.ini. The OAuth2
applications registered this way are marked as "locked" and can neither
be deleted nor edited via the UI, to prevent confusing/unexpected
behavior. Instead, they are removed if they are no longer enabled in the config.
![grafik](https://github.com/go-gitea/gitea/assets/47871822/81a78b1c-4b68-40a7-9e99-c272ebb8f62e)
The implemented mechanism can also be used to pre-register other OAuth2
applications in the future, if wanted.
Co-authored-by: hickford <mirth.hickford@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
---------
Co-authored-by: M Hickford <mirth.hickford@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
There are 2 kinds of ".Editorconfig" in the code: one is a `JSON string` for
the web editor, the other is an `*editorconfig.Editorconfig` for file
rendering (used by `TabSizeClass`).
This PR distinguishes them with different names.
And by the way, it changes the default tab size from 8 to 4; I think few
people would like to use 8-wide tabs nowadays.
Before:
* `{{.locale.Tr ...}}`
* `{{$.locale.Tr ...}}`
* `{{$.root.locale.Tr ...}}`
* `{{template "sub" .}}`
* `{{template "sub" (dict "locale" $.locale)}}`
* `{{template "sub" (dict "root" $)}}`
* .....
With the context function, you only need `{{ctx.Locale.Tr ...}}`.
The "ctx" could be considered as a super-global variable for all
templates including sub-templates.
To avoid potential risks (any bug in the template context function
package), this PR only starts using "ctx" in "head.tmpl" and
"footer.tmpl" and it has a "DataRaceCheck". If there is anything wrong,
the code can be fixed or reverted easily.
- Currently the post-processing will transform all issue indexes (such as `#6`) into clickable links.
- This makes sense in a situation like issues or PRs,
where referencing other issues is quite common
and referencing them only by their issue index is a handy and efficient way to do it.
- Currently this is also run for documents
(which are the user profile and rendered file views),
but in those situations it's less common to reference issues by their index, and the text could instead mean something else.
- This patch disables this issue-index post-processing for documents. Matches GitHub's behavior.
- Added unit tests.
- Resolves https://codeberg.org/Codeberg/Community/issues/1120
Co-authored-by: Gusted <postmaster@gusted.xyz>
Follow #25229
Copy from
https://github.com/go-gitea/gitea/pull/26290#issuecomment-1663135186
The bug is that we cannot get changed files for the
`pull_request_target` event. This event runs in the context of the base
branch, so we won't get any changes if we call
`GetFilesChangedSinceCommit` with `PullRequest.Base.Ref`.
Fix #26064
Some git commands should use the parent context, otherwise they would exit
too early (due to the default timeout, 10m), and "cmd.Wait" waits until the
pipes are closed.
This PR will fix #26264, which was caused by #23911.
The package storage configuration derivation is totally wrong in that PR
when the storage type is local.
This PR fixes the inheritance logic when the storage type is local, with
some unit tests.
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #26270.
Co-Author: @wxiaoguang
Thanks @lunny for providing this solution
As
https://github.com/go-gitea/gitea/issues/26270#issuecomment-1661695151
said, at present we cannot get the names of changed files correctly when
the `OldCommitID` is `EmptySHA`. In this PR, the `GetCommitFilesChanged`
method is added and will be used to get the changed files by commit ID.
References:
- https://stackoverflow.com/a/424142
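For illustration, the underlying git invocation from the referenced answer can be driven like this (a sketch using os/exec, not Gitea's internal git module):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// commitFilesChanged lists the files touched by a single commit.
// --root makes the initial commit diff against the empty tree as well.
func commitFilesChanged(repoPath, commitID string) ([]string, error) {
	out, err := exec.Command("git", "-C", repoPath,
		"diff-tree", "--no-commit-id", "--name-only", "--root", "-r", commitID).Output()
	if err != nil {
		return nil, err
	}
	trimmed := strings.TrimSpace(string(out))
	if trimmed == "" {
		return nil, nil
	}
	return strings.Split(trimmed, "\n"), nil
}

func main() {
	files, err := commitFilesChanged(".", "HEAD")
	fmt.Println(files, err)
}
```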
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
1. Fix the wrong documentation (add the missing `MODE=`)
2. Add a friendlier log message telling users to add `MODE=` to their
config
Co-authored-by: Giteabot <teabot@gitea.io>
Not too important, but I think that it'd be a pretty neat touch.
Also fixes some layout bugs introduced by a previous PR.
---------
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-authored-by: Caesar Schinas <caesar@caesarschinas.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fix #24662.
Replace #24822 and #25708 (although it has been merged)
## Background
In the past, Gitea supported issue searching with a keyword and
conditions in a less efficient way. It worked by searching for issues
with the keyword and obtaining limited IDs (as it is heavy to get all)
on the indexer (bleve/elasticsearch/meilisearch), and then querying with
conditions on the database to find a subset of the found IDs. This is
why the results could be incomplete.
To solve this issue, we need to store all fields that could be used as
conditions in the indexer and support both keyword and additional
conditions when searching with the indexer.
## Major changes
- Redefine `IndexerData` to include all fields that could be used as
filter conditions.
- Refactor `Search(ctx context.Context, kw string, repoIDs []int64,
limit, start int, state string)` to `Search(ctx context.Context, options
*SearchOptions)`, so it supports more conditions now.
- Change the data type stored in `issueIndexerQueue`. Use
`IndexerMetadata` instead of `IndexerData` in case the data has been
updated while it is in the queue. This also reduces the storage size of
the queue.
- Enhance searching with Bleve/Elasticsearch/Meilisearch, make them
fully support `SearchOptions`. Also, update the data versions.
- Keep most logic of database indexer, but remove
`issues.SearchIssueIDsByKeyword` in `models` to avoid confusion where is
the entry point to search issues.
- Start a Meilisearch instance to test it in unit tests.
- Add unit tests with almost full coverage to test
Bleve/Elasticsearch/Meilisearch indexer.
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
- Configure `setting.CacheService.TTL`, which will force the code to go
through the caching mechanism.
- Remove the TODO and uncomment the test code.
(cherry picked from commit a201f2f189)
Refs: https://codeberg.org/forgejo/forgejo/pulls/974
---------
Co-authored-by: Gusted <postmaster@gusted.xyz>