I'm currently evaluating how to get a code coverage report, created during a CI job, to be published / provided as HTML, to make it easier to use (instead of displaying it in the job output). Support for this was introduced in GitLab 10.1, but requires GitLab Pages.
Any chance of enabling this here at 0xacab?
Thanks,
cheers,
Georg
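For context, once Pages is enabled, the publishing side is just a CI job that moves the generated HTML into a public/ directory, which is the directory Pages serves. A minimal .gitlab-ci.yml sketch, assuming the test job writes its HTML coverage report to a coverage/ directory (that path and the job layout are my assumptions, not something from this thread):

```yaml
pages:
  stage: deploy
  script:
    # Pages publishes whatever the job leaves in public/
    - mv coverage/ public/
  artifacts:
    paths:
      - public
  only:
    - master
```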
I'm currently evaluating how to get a code coverage report, created during a CI job, to be published / provided as HTML, to make it easier to use (instead of displaying it in the job output). Support for this was introduced in GitLab 10.1, but requires GitLab Pages.
Any chance of enabling this here at 0xacab?
Personally, I think it would be fine to enable, but maybe @guido has a
different idea?
However, we can use something like pages.hexacab.org
riseup cannot use Let's Encrypt to get a wildcard certificate, so
we would need to pay for a wildcard certificate, which doesn't seem
ideal.
Why don't we provide non-encrypted, HTTP-only pages under the pages
domain, and allow people to set up custom domains with TLS certificates?
Requesting the LE cert is easily done in a build pipeline:
@micah I like your proposals, thank you very much! FWIW: The hidden service support is still on my table, and I'll work on it "soon".. ;) I'm quite busy currently, but after that I could help with the pages as well, if needed.
This is somewhat confusing, as it mentions hostnames instead of domain
names, but yeah, I forgot about that point.
However, we can use something like pages.hexacab.org
We would need to use hexacab.org, as the possible XSS issues will show
up in any service that is hosted there instead of on 0xacab.org. So we
will need a domain name just for this, which is still the first
requirement.
riseup cannot use Let's Encrypt to get a wildcard certificate, so
we would need to pay for a wildcard certificate, which doesn't seem
ideal.
Why don't we provide non-encrypted, HTTP-only pages under the pages
domain, and allow people to set up custom domains with TLS certificates?
Requesting the LE cert is easily done in a build pipeline:
Because we would need to disable (or limit) HSTS preload on whatever
domain we use. This kills using the hexacab.org domain for anything
serious (ok), or means having to register another domain as mentioned
before (ok too). We can do it, but it doesn't sound good at all.
Because you have to manually generate new certs every 3 months.
While the way you linked here means you have to paste the needed
files into the GitLab interface, there are tools that do this
automatically and can be put in cron, I guess. Still, the process
sucks, as you are largely removing the automation from LE.
Apart from the update process being less than good, another possible
problem is hitting the LE rate limits, for instance with somebody
deliberately trying to cause harm (as an open service, this is always
something that can happen).
--
You can also pay for a signed cert, probably cheaper than a domain
name is today, and do this process once every x year(s).
Or put cloudflare in front of it.
I don't think it's possible to have Pages support with only custom
domains, but that would be a better option than having non-HTTPS
support.
I'm not even sure it would be a better option, as it would reduce the
possible use of the service. It is indeed better to have HTTPS always
enabled, but forcing people to use a domain name and get a certificate
is less than ideal.
Unless there are more than a couple of groups wanting to use the Pages
feature and accepting the idea of having to use their own domain
and their own certificates, it doesn't seem worth hosting Pages. The
same could be said even if the domain/cert situation were ok, but in
that case, the fewer the requirements, the easier it is to spin up a
website, and I guess the more use it can get.
I like the idea of GitLab Pages, as it makes it possible for us to host
stuff without worrying too much. I don't like that we'd end up hosting a
new service with obstacles, as that means writing documentation,
increased support load for the service, and the mess of the
domains/certs solution.
So, how do we advance with this?
A
Define and register (if needed) a domain name
Get a way to get a signed wildcard certificate
B
Get a list of projects wanting this feature
We evaluate whether it's worth deploying Pages despite the obstacles,
i.e. whether the benefits will outweigh the trouble of having to deal
with them
This is somewhat confusing, as it mentions hostnames instead of domain
names, but yeah, I forgot about that point.
I'm not well versed in same-origin XSS attacks, but I think they are
correct when they talk about hostnames, and not domain names.
Wikipedia[0] details the origin-determination rules as follows:
The algorithm used to calculate the "origin" of a URI is specified in
RFC 6454, Section 4. For absolute URIs, the origin is the triple
{protocol, host, port}. If the URI does not use a hierarchical element
as a naming authority (see RFC 3986, Section 3.2) or if the URI is not
an absolute URI, then a globally unique identifier is used. Two
resources are considered to be of the same origin if and only if all
these values are exactly the same.
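To make the quoted rule concrete, here is a small illustrative Python sketch (mine, not from the thread) that computes the RFC 6454 {scheme, host, port} triple and shows that sibling hostnames under one domain are distinct origins:

```python
from urllib.parse import urlsplit

# Scheme defaults used when a URL has no explicit port
DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Return the (scheme, host, port) origin triple per RFC 6454, section 4."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    """Two URLs share an origin iff all three components match exactly."""
    return origin(a) == origin(b)

# pages.hexacab.org and hexacab.org are different hosts, so different origins:
print(same_origin("https://pages.hexacab.org/x", "https://hexacab.org/y"))  # False
# Same host with the default port made explicit is still the same origin:
print(same_origin("https://0xacab.org/a", "https://0xacab.org:443/b"))      # True
```

Note this says nothing about cookies, which follow different, domain-based scoping rules.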
However, we can use something like pages.hexacab.org
We would need to use hexacab.org, as the possible XSS issues will show
up in any service that is hosted there instead of on 0xacab.org. So we
will need a domain name just for this, which is still the first
requirement.
I think it can be pages.hexacab.org, based on the fact that XSS can
only happen within the same origin?
Why don't we provide non-encrypted, HTTP-only pages under the pages
domain, and allow people to set up custom domains with TLS certificates?
Requesting the LE cert is easily done in a build pipeline:
Because we would need to disable (or limit) HSTS preload on whatever
domain we use. This kills using the hexacab.org domain for anything
serious (ok), or means having to register another domain as mentioned
before (ok too). We can do it, but it doesn't sound good at all.
I think that is fine.
Because you have to manually generate new certs every 3 months.
While the way you linked here means you have to paste the needed
files into the GitLab interface, there are tools that do this
automatically and can be put in cron, I guess. Still, the process
sucks, as you are largely removing the automation from LE.
Yes, this part sucks, but I don't think it's a deal-breaker. Right now,
LEAP providers have to manually renew LE certs. I argued against it, and
I did not agree with the reason why it was done that way, and now people
wish to change it, now that they see it is a little bit annoying. But
really, it's only a little bit annoying. You set a calendar alarm and
then just run one command to renew and deploy. I'm not doing it, but if
I were, I'd just set up a cronjob at home to do it automatically ;)
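For what it's worth, that cronjob could be as small as a single crontab entry along these lines (a sketch: the deploy-hook script name is hypothetical, and it assumes certbot is doing the renewal and that some local script pushes the fresh cert into the Pages domain settings):

```
# Try a renewal twice a day; certbot only re-issues when a cert is
# close to expiry. The deploy hook (a hypothetical local script) would
# upload the renewed certificate to the Pages custom-domain settings.
17 3,15 * * *  certbot renew --quiet --deploy-hook /usr/local/bin/push-cert-to-pages
```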
Apart from the update process being less than good, another possible
problem is hitting the LE rate limits, for instance with somebody
deliberately trying to cause harm (as an open service, this is always
something that can happen).
I don't think this is a problem. You can request, and they do grant,
rate-limit whitelisting for domains from LE if you anticipate this kind
of load. In fact, Peter from LE even posted on one of the GitLab issues
about LE in GitLab Pages, asking people to let them know if they run
into this, and they will whitelist the domain.
There was more in the comment, but it was eaten by GitLab's mail processing.
I'm not well versed in same-origin XSS attacks, but I think they are
correct when they talk about hostnames, and not domain names.
If you look at that part of the issue, yeah, it doesn't matter. The problems are not related to where the resource is loaded from, but to how much access you can get to the cookies set in the browser. Specifically, how tricky it would be to pull off session fixation (this is assuming that the cookies set by other websites are restricted to just the hostname). There is nothing that prevents something.hexacab.org from setting a cookie for hexacab.org.
This is the part I wasn't thinking about: I read hostnames and said, yeah, we should be fine, but didn't think about cookies. We have no problem enforcing CSP rules stricter than same-origin, but it's not the only possible issue, and while we would have to keep tracking how the software we use deals with this stuff, it's not worth leaving this possibility open just to save 15 USD per year.
You can now go read the documentation and paste here new things that are outside the scope of this bug. It's true that session hijacking and session fixation need collaboration from the web apps to work, and that they shouldn't have those issues in the first place, but history doesn't make me confident that such errors will never slip through and leave the door open to be exploited. And it's not just me: https://gitlab.com/gitlab-org/gitlab-ce/issues/33240#note_50696647
Yes, this part sucks, but I don't think it's a deal-breaker. Right now,
LEAP providers have to manually renew LE certs. I argued against it, and
I did not agree with the reason why it was done that way, and now people
wish to change it, now that they see it is a little bit annoying. But
really, it's only a little bit annoying. You set a calendar alarm and
then just run one command to renew and deploy. I'm not doing it, but if
I were, I'd just set up a cronjob at home to do it automatically ;)
I do think it's a deal-breaker, as I'm always the one who gets the complaints.
Yeah, I'm tracking that issue. As mentioned before, I want this service. But just because I want it doesn't mean it's ok to do it in the current situation. I even asked, as soon as the hexacab.org domain was suggested, for it to be bought to be used for this. But there is work to be done before this can happen without us needing to worry much about this service; or we accept the possible tradeoffs and put in the work because we believe it's worth it, and then switch to a better solution when one shows up.
There was more in the comment, but it was eaten by GitLab's mail processing.
damn. i just pinged that issue about email eating the other day too...
I'm not well versed in same-origin XSS attacks, but I think they are
correct when they talk about hostnames, and not domain names.
If you look at that part of the issue, yeah, it doesn't matter. The problems are not related to where the resource is loaded from, but to how much access you can get to the cookies set in the browser. Specifically, how tricky it would be to pull off session fixation (this is assuming that the cookies set by other websites are restricted to just the hostname). There is nothing that prevents something.hexacab.org from setting a cookie for hexacab.org.
This is stuff I don't know, so I'm just talking nonsense here... The one
thing I do know is that if you have two domains you can easily restrict
the cookies to the (sub)domain.
Doesn't that mean that if you make everyone on GitLab Pages be on
pages.hexacab.org, then the cookie has to be for pages.hexacab.org?
If you can also set one for hexacab.org, then it's fine if we just use
that domain for Pages, because cookies can't be set for 0xacab.org, no?
This is the part I wasn't thinking about: I read hostnames and said, yeah, we should be fine, but didn't think about cookies. We have no problem enforcing CSP rules stricter than same-origin, but it's not the only possible issue, and while we would have to keep tracking how the software we use deals with this stuff, it's not worth leaving this possibility open just to save 15 USD per year.
I don't get the saving of 15 USD/year. If we use hexacab.org for Pages,
then we are talking about spending 15 USD/year on a domain specifically
for Pages. Or are you saying we should use hexacab.org for something
else, and get a different domain for Pages? I'm fine with that too; I
don't really care.
You can now go read the documentation and paste here new things that
are outside the scope of this bug.
I don't understand why I would do that.
Yes, this part sucks, but I don't think it's a deal-breaker. Right now,
LEAP providers have to manually renew LE certs. I argued against it, and
I did not agree with the reason why it was done that way, and now people
wish to change it, now that they see it is a little bit annoying. But
really, it's only a little bit annoying. You set a calendar alarm and
then just run one command to renew and deploy. I'm not doing it, but if
I were, I'd just set up a cronjob at home to do it automatically ;)
I do think it's a deal-breaker, as I'm always the one who gets the complaints.
Nobody should complain to us about a Pages site having an expired cert;
that would not be our responsibility. If they complain to us, which I
guess they might, then we redirect them. I guess you want to avoid that
redirection? I can understand that.
Yeah, I'm tracking that issue. As mentioned before, I want this
service. But just because I want it doesn't mean it's ok to do it in
the current situation. I even asked, as soon as the hexacab.org domain
was suggested, for it to be bought to be used for this. But there is
work to be done before this can happen without us needing to worry
much about this service; or we accept the possible tradeoffs and put
in the work because we believe it's worth it, and then switch to a
better solution when one shows up.
I'm unclear on what the work to be done is. If it's waiting for
automatic LE renewal, that isn't really work, it's just waiting, so I
must not understand what we need to do.
This is stuff I don't know, so I'm just talking nonsense here... The one
thing I do know is that if you have two domains you can easily restrict
the cookies to the (sub)domain.
Doesn't that mean that if you make everyone on GitLab Pages be on
pages.hexacab.org, then the cookie has to be for pages.hexacab.org?
If you can also set one for hexacab.org, then it's fine if we just use
that domain for Pages, because cookies can't be set for 0xacab.org, no?
The owner of a website can restrict cookies to their own domain; this is the default unless you make the "domain" flag explicit. The problem is that a.domain.com can set a cookie and pass the domain as domain.com, like a GitLab page running document.cookie = "session_id=123456;domain=domain.com". This will affect domain.com as well as a.domain.com, b.domain.com, etc.
If we only use a domain name (hexacab.org, 0xacabpages.org, thisisnotaboringmailinglist.net, etc.), the damage level is way smaller, as those are just plain static websites with no sessions. The issue is when we host other stuff sharing the domain name which does set cookies. That's why roadrunner.pages.0xacab.org can affect 0xacab.org, and kanban.0xacab.org.
Alternatively, there have been cases where the software itself sets the cookie for the whole domain name, which can lead to session hijacking.
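The cookie-scoping point above can be illustrated with Python's standard http.cookies module (a sketch with made-up names, not anything running on 0xacab): a response from a host like pages.hexacab.org is free to emit a Set-Cookie header scoped to the parent domain, which hexacab.org and every other *.hexacab.org host will then receive.

```python
from http.cookies import SimpleCookie

# A page served from pages.hexacab.org sets a session cookie...
cookie = SimpleCookie()
cookie["session_id"] = "123456"
# ...and widens its scope to the parent domain. Browsers will now send
# it to hexacab.org and to every other *.hexacab.org host as well.
cookie["session_id"]["domain"] = "hexacab.org"

print(cookie.output())
# Set-Cookie: session_id=123456; Domain=hexacab.org
```

This is why hosting Pages on a subdomain of a domain that also runs session-based services leaves session fixation on the table, and why a separate domain for Pages sidesteps it.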
I don't get the saving of 15 USD/year. If we use hexacab.org for Pages,
then we are talking about spending 15 USD/year on a domain specifically
for Pages. Or are you saying we should use hexacab.org for something
else, and get a different domain for Pages? I'm fine with that too; I
don't really care.
$15 per year because it would mean maintaining a domain for Pages. I don't care which name it is; my request is to settle on one. It can be hexacab or thisisnotaboringmailinglist.net or whatever. It's not terribly expensive for the help it can bring, but we all know how hard it is to settle on a name.
Nobody should complain to us about a Pages site having an expired cert; that would not be our responsibility. If they complain to us, which I guess they might, then we redirect them. I guess you want to avoid that redirection? I can understand that.
I'm unclear on what the work to be done is. If it's waiting for
automatic LE renewal, that isn't really work, it's just waiting, so I
must not understand what we need to do.
The ways to make the renewal automatic can fail: at least the npm and Ruby ones can break with updated dependencies, just like what happens when something gets updated on Debian, Arch or Ubuntu. Or, for some reason, people's configuration has problems. Or they expect it to work like GitHub Pages or GitLab Pages.
We can wait, since as you noted most of those changes are expected to land within 6 months from now. I think we can do better than that. We can try to improve our DNS system and get a way to obtain the LE wildcard. We can put the domain's DNS behind Cloudflare or another 3rd party that will allow us to update it using certbot. We can script a way to make these modifications easier for us to run by hand, and run them every 2 months. We can ask another group that has this somehow resolved to take over this part. We can deploy Pages on Kubernetes and use https://github.com/jetstack/cert-manager. And probably other ways. It's 6 months; we can do this quicker than GitLab... the problem is the low priority this can have if it's only for one or two groups.
Of all these, the easiest and quickest would be to put the domain's DNS behind Cloudflare's DNS (because it's free), disable the proxying, and then automate the certificate signing with certbot. The slowest would be buying a cert, which seems even worse. Scripting to make this quicker seems the next easiest and fastest; it puts the load on our side and not on the users, and it's better than buying a cert but worse than having Cloudflare manage the DNS.
If we rely on a 3rd party, we can reduce what we have to do here. If we don't, we need to make changes to today's infrastructure or keep doing this manually, and then prioritization comes into play. Or we can wait until somebody comes up with a nicer way to do this.
FWIW: The "Automatic HTTPS certificate creation/renewal for Pages custom domains" was merged upstream, and will be shipped in the upcoming 12.1 release.
Not right now. It may be possible and good, but this is not where our focus as a group is right now. If you want to give us a hand with this, we would love to take domain name suggestions for the Pages domain. But I can't really say whether it will be up, or when. That said, this is a service I would use, so I'm interested too, but prioritization of work is always a collective decision, and we have our hands more or less full with other stuff for the next few weeks.
Hello everyone,
I just arrived here with the aim of creating a repository of art and utopias in the modern ages, wondering if I could have the markdown pages published as a public website for consultation, but as far as I can read, the feature is not yet implemented, and I am also really interested in it. Please let me know if it's still being evaluated or if it has already been dismissed. If it has, could you tell me about another grassroots git service that has it enabled?
Take care all and thanks a lot for your support
Above it was mentioned that enabling Pages would be more relevant if there was a user base, so just wanted to chime in that I would also use it. I'm happy to donate $30 for two years of the domain, though if I understand correctly hexacab.org has been purchased after all? Are there other ways a hand could be given? Thanks!
Hi, I've just had a go at setting up GitLab Pages for ecobytes.net/allmende.io and deem it a very useful thing, especially since Let's Encrypt is well integrated for custom domains. They try to avoid hitting rate limits by letting people confirm their domain with a TXT record, but they don't confirm that the CNAME or A RRs are pointing towards the GitLab Pages instance. The pages daemon is very simple to configure if you want to run it independently of a GitLab (Omnibus) install, but it has some configuration quirks on the GitLab (Omnibus) side to get working fully, which is why I'm proposing to add my latest experiences here. Please note they are phasing out NFS-based sharing of assets, and now require an S3-compatible object store, plus a (shared) GitLab runner present for the projects to build.
Here is the current situation: we just need to settle on a domain name. So we are asking those of you who want this to propose some suggestions. The only criteria are that it has to be available and that it cannot be incredibly expensive (there are domain names that are amazing, but we'd have to pay $4,000 USD for them).
Because posting an available domain name in a public issue could result in it being grabbed by someone who loves it, it's probably better to send your suggestions by email, so please send them to micah@riseup.net
After a few days of suggestions, we'll pick one and start moving on it.
Thanks everyone for your suggestions, we got some really interesting ones! Most people said that they would use their own domain, so it didn't matter too much which one was picked. We finally settled on itcouldbewor.se and are in the process of getting things set up. One of the first steps in the process is getting the domain onto the Public Suffix List, which will take an unknown amount of time (possibly weeks). That request was submitted two days ago, and while we wait for it to be resolved, we'll put together the other parts of the architecture to make this work.
Hey guess what! Its Friday, and the weekend is here!
What better way to spend the weekend than to try out the gitlab pages we've now enabled?!
We just finished setting it up, and it appears to work (either with the built-in domain itcouldbewor.se, or with your own custom domain, and automatic Let's Encrypt certificates)!
Why not give it a try and see if it works for you, and let us know how it goes? We're really excited to see what kind of pages people will set up here.
I'll keep this issue open for a little bit for any feedback/problems/questions people might have and then close it after we are more comfortable that things are working how they should.
I've been able to set up a Hugo website on itcouldbewor.se, thank you for providing this lovely service!
The process wasn't easy and I have some notes that could be useful for other users, where is the best place to submit documentation?
About the domain name: I know I'm coming late to the party, but I find itcouldbewor.se pretty hard to use and communicate, especially for non-English speakers.
I've got two domain name suggestions, shorter, multilingual-friendly and available for less than 20$/y.
I would love to email them to micah, but before that, is it possible to make another domain name available for "Riseup's Gitlab Pages"?
Best regards!
(Riseup is a lovely tech collective; hundreds of people around me are benefiting from your amazing work. Hundreds of thanks, for you, from France)
I've been able to set up a Hugo website on itcouldbewor.se, thank you for providing this lovely service!
Nice to hear :D
The process wasn't easy and I have some notes that could be useful for other users, where is the best place to submit documentation?
I agree the documentation is not great. We have not created our own, and
have been depending on GitLab's internal documentation. We don't have a
good place right now for GitLab documentation that is not our
own. It might be something to put on https://riseup.net, but I'm not
sure.
Do you want to put here what things you've found and we can try to
figure out the best way to show that to people?
About the domain name: I know I'm coming late to the party, but I find
itcouldbewor.se pretty hard to use and communicate, especially for
non-English speakers.
It is also a bit... hard for English speakers as well.
Did you know that you can use your own domain name for your site? The
pages documentation shows how you can register your own domain name and
configure it for your site, and stop using the ugly one :)
I've got two domain name suggestions, shorter, multilingual-friendly
and available for less than 20$/y. I would love to email them to
micah, but before that, is it possible to make another domain name
available for "Riseup's Gitlab Pages"?
I'm unsure if we can have more than one domain, but I would be surprised
if we could not. I'd be very interested to hear your suggestions, if you
want to email them to me: micah@riseup.net
(Riseup is a lovely tech collective; hundreds of people
around me are benefiting from your amazing work. Hundreds of thanks,
for you, from France)
un grand merci! :D
...
On 2022-06-30 09:16:38, Alice Denilam (@roneo) wrote:
Do you want to put here what things you've found and we can try to figure out the best way to show that to people?
Sure! I'll clean up my notes soon, and post them here.
Did you know that you can use your own domain name for your site?
Yes, but using a custom domain does not provide the same privacy as using a Riseup subdomain.
That's a super valuable feature BTW, thanks!
I'm unsure if we can have more than one domain, but I would be surprised if we could not. I'd be very interested to hear your suggestions, if you want to email them to me: micah@riseup.net
Hi @roneo we're thinking of going the pages route and came across this issue. Was wondering if you had those notes to drop :) even if they are messy I'm sure they'd still be of use to us!
Feel free to comment and criticize, I didn't have the time to test it in depth.
Side note: I faced a weird bug while testing the default template for Hugo:
I created a "New project" from the homepage, then selected "Create from template"
I picked the Hugo template:
And the result is this repo: Hugo is there, but the files are outdated and the theme is Beautifulhugo, instead of Ananke on Gitlab.org.
Moreover, the .gitlab-ci.yml provided is breaking the build. (See this commit for details)