Has the Board been made aware of this yet?
Mar 15 2021
Mar 14 2021
For reference, this generates around 9 error messages a second, which is substantial, especially when trying to find other error messages, so this should be fixed permanently soon.
Mar 13 2021
Mar 11 2021
Jobs don't fail, as this is always run after jobs are completed.
Mar 10 2021
https://github.com/miraheze/CreateWiki/pull/200 resolves this task; now only setting a configuration value in LS is required to enable this.
@Southparkfan Can we have an update on this please?
https://github.com/certbot/certbot/issues/8710 has a noticeable impact on us and requires our intervention, so the downstream ticket remains open.
Mar 8 2021
In T6935#136946, @R4356th wrote: Okay, but could you please make it so that Wiki Creators are at least allowed to input their own comments?
In T4164#136920, @R4356th wrote: But Stewards always close such requests as not done. See this (archived) request, for example.
Canned Responses are either used or they're not within the extension; adding an 'other' option defeats the purpose of why they were added. The extension doesn't force canned responses to be used, so this is a configuration issue rather than an extension issue.
Mar 7 2021
Considering our resources and the facts about the current and newly proposed systems, I'm not seeing enough of a gain to justify allocating a very significant portion of our remaining resources to creating an entire service cluster, requiring at minimum 8GB of memory and several cores, just to run jobs. We moved sessions from Redis to Memcached, which provided a good performance boost; since the JobRunner still uses Redis, doing the same there could be one step. Alternatively, we could drop the dedicated runners, take our main cluster up to 8 servers, and run jobs on all of them. At 1 runner per server, that would give us 8 instances, double what we have right now.
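The runner arithmetic above can be sketched quickly; a back-of-envelope check, assuming "double what we have right now" implies 4 dedicated runner instances today:

```shell
servers=8              # proposed main cluster size
runners_per_server=1   # one job runner instance per server
proposed=$((servers * runners_per_server))
current=$((proposed / 2))   # the comment says proposed is double today's count
echo "proposed=$proposed current=$current"   # prints: proposed=8 current=4
```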
Going to mark as declined for now - a quick look into this suggests we'd need around 16GB to process just our RC traffic alone, before even considering jobs. The actual gain from deploying this service is also very small, so I don't feel I can justify allocating the remaining RAM on a single server to a Kafka deployment.
Requires Kafka -> claiming as an investigatory exercise.
Resolved in the sense that all that is practical is done.
Has shown some interest in doing this.
Tracking tasks are bad, as this task depends on subtasks being done rather than on something actually being done.
We now regularly update CA certs on each puppet run, putting the burden of management responsibility on the CAs rather than on us. Monitoring all CAs on the system would add approximately another 150 SSL checks, which seems disproportionate, especially for something we neither maintain nor control.
We're now using the ca-certificates and capath approach with the web configuration. Chains are now created and regularly updated by the CAs themselves, rather than us manually adding and maintaining them. CAs also maintain their own trust chains.
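The capath approach can be demonstrated with a throwaway CA; a minimal sketch, assuming the `openssl` CLI is available, with all file and subject names purely illustrative:

```shell
# Work in a scratch directory
workdir=$(mktemp -d) && cd "$workdir"
# Create a throwaway self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=Demo CA" -days 1
# A capath directory holds one cert per CA, named by OpenSSL's subject
# hash, so trust anchors are looked up individually instead of being
# concatenated into one manually maintained bundle file.
mkdir certs
hash=$(openssl x509 -in ca.pem -noout -hash)
cp ca.pem "certs/${hash}.0"
# Verify against the directory rather than a hand-built chain file
openssl verify -CApath certs ca.pem   # prints: ca.pem: OK
```

On Debian-style systems the ca-certificates package maintains such a hashed directory (/etc/ssl/certs) automatically via update-ca-certificates, which is what makes refreshing on each puppet run cheap.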
In T4164#136761, @R4356th wrote: @John, if I am understanding your and @Universal_Omega's concerns correctly, differentiating the sitenotice for wikis manually closed as opposed to automatically (i.e. by a script) would allow people to easily understand if they can adopt that wiki. This should decrease the number of RfAs that cannot be undertaken and reduce Stewards' workload.
In T4164#136756, @Universal_Omega wrote: In T4164#107191, @Reception123 wrote: Private wikis vs public wikis is now Done.
What remains to be done is manual closure vs script which is still blocked on community consensus.
Whether or not it's blocked on community consensus about whether manually closed wikis can be adopted, I personally believe it should still be differentiated, saying that the wiki was manually closed rather than that it was closed after a period of time.
Mar 6 2021
Discussed yesterday and paladox found a way to do this, assigning to him.
Mar 5 2021
As the domains aren't in active use, there's no real requirement for us to have control over them; therefore, if Zppix is willing to keep renewing them, that is absolutely fine. If he isn't, and a transfer isn't possible, letting the registration lapse would be the only next step.
Deleted wikis can now be excluded by removing 'deleted' from the state list ('all' includes deleted).
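The state-list behaviour can be sketched as follows; this is a hypothetical illustration only, with made-up function, wiki, and state names rather than the real CreateWiki interface:

```shell
# filter_states: $1 is a comma-separated state list; reads "name:state"
# lines on stdin and prints names whose state is listed ('all' matches
# everything, including deleted).
filter_states() {
  states="$1"
  while IFS=: read -r name state; do
    case ",$states," in
      *",all,"*|*",$state,"*) echo "$name" ;;
    esac
  done
}

# 'deleted' removed from the list, so gamma is excluded
printf '%s\n' "alpha:active" "beta:closed" "gamma:deleted" |
  filter_states "active,closed"   # prints: alpha and beta
```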
Mar 3 2021
Another issue caused by deployers merging without reviewing the changes being deployed. I've asked @Paladox to run the SQL.
Mar 2 2021
Mar 1 2021
Not an issue
mail and postfixUser are two distinct things
Feb 28 2021
Paladox has blocked the IP which is causing most of the traffic
Update on the logging I asked you to look into a few days ago?
Feb 25 2021
Quite a few actions are blocked on you.
Redis is no longer being used for caching
Unreliable numbers here from small-scale testing, but:
Discussion happened and support was given, outcome?
Feb 24 2021
https://github.com/miraheze/CreateWiki/commit/c9dd807fffa119e47558ce820ec2ef876a9a26f2. Deployment will happen whenever someone chooses to deploy it to production.
Feb 21 2021
Technically the location of the file means little, and moving it into MirahezeMagic would seem out of scope, as it's a root-level file. Keeping it within puppet seems easiest, as there's no gain from moving it to mw-config.
Feb 19 2021
Feb 18 2021
Same password as bots-noreply