FOSS developer and Miraheze Sysadmin.
Tue, Aug 4
There's a rewrite rule at https://github.com/miraheze/puppet/blob/b81b54995d672b3549903635c0d4376bf1628d02/modules/mediawiki/templates/mediawiki-includes.conf.erb#L67 which should make this not happen.
Hmm, that looks wrong. I'll get someone to look at this tomorrow.
Mon, Aug 3
Sun, Aug 2
Long imports will time out and show a 503. Please create a separate task with the file to import and the wiki, and we will do it server-side.
Can it be approved then? Multiple members of SRE have seen this discussion and not actively stated an approval / decline.
That's something I'm going to look into more
Would not solve problems
There's 100% duplication in the way jobs are processed. If duplication detection works as I understand, it would reduce the load, as there'd be fewer jobs to process. Concurrency limiting would also cap the resources a single type of job can take up, and therefore reduce the impact of a spike in edits on loginwiki.
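To illustrate the two ideas above, here's a toy sketch (this is not Miraheze's jobrunner or MediaWiki's actual JobQueue code; all names are illustrative): duplicates are detected by hashing the job's type and parameters, and a per-type concurrency cap keeps one job type from monopolising the runner.

```python
import hashlib
import json


def job_signature(job_type, params):
    # Jobs with identical type + params are considered duplicates.
    payload = json.dumps([job_type, params], sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


class JobQueue:
    """Toy queue with deduplication and per-type concurrency limiting."""

    def __init__(self, max_concurrency_per_type=2):
        self.pending = {}       # signature -> (job_type, params)
        self.running = {}       # job_type -> currently-running count
        self.max_per_type = max_concurrency_per_type

    def push(self, job_type, params):
        sig = job_signature(job_type, params)
        if sig in self.pending:
            # Duplicate detected: drop it instead of queueing it again.
            return False
        self.pending[sig] = (job_type, params)
        return True

    def pop(self):
        # Skip job types already at their concurrency cap, so a spike
        # in one type (e.g. jobs triggered by loginwiki edits) can't
        # starve everything else.
        for sig, (job_type, params) in list(self.pending.items()):
            if self.running.get(job_type, 0) < self.max_per_type:
                del self.pending[sig]
                self.running[job_type] = self.running.get(job_type, 0) + 1
                return job_type, params
        return None
```

With a cap of 1, a second identical push is rejected, and a second job of the same type stays pending until the first finishes.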
See T6006 as well, I think they'll go hand in hand well
@Zppix had to restart the jobrunner service again last night due to lag.
Sat, Aug 1
There's also a debugging statement left in extensions/MobileTabsPlugin/resources/tabs2accordion.js; it should be taken out, but it doesn't really hurt anything.
Already available. Please check Special:ManageWiki/extensions before filing tasks.
Cc @John as I think ManageWiki uses it so he can confirm but is it not already available?
Fri, Jul 31
my runJobs script was accidentally on mw7, but load looks fine, so I'm leaving it there to avoid restarting it. I'll move it if it causes an issue.
A restart of jobrunner1 seems to have stopped it failing. Let's hope runJobs.php clears the backlog and it stays this way.
It's my understanding from chatting with @RhinosF1 that some of the jobs in the job queue are getting backed up. I'm pretty rusty on my MediaWiki skills, so I'm not yet familiar with which set of jobs, or whether the problem is with something in MediaWiki core or with one of the many Miraheze custom jobs.
It's anything run by the jobrunner
Thu, Jul 30
I can just do a mass mark for meta when I'm at my PC. I'm happy to do that based on my local authority.
Stalling for 7 days, then will send 2nd notification.
All those not serving Miraheze have been pulled and unset; I will alert crats for those wikis shortly.
Wed, Jul 29
Wikis where 3 applies will be removed in the morning. For 1/2, as above crats will have 14 days to make the change.
As discussed on IRC, we'll send 2 notifications within the next 14 days. No response = CD revoked.
He can revert his own change, we can fix it without touching ManageWiki.
If they want it deleted, then just get them to give the go-ahead and we'll take it from there. Don't worry, there's a script to fix this kind of thing when creating namespaces.
Tue, Jul 28
Also noting that the DNS checks took up so much of Icinga that it was impossible to see at the time what on earth happened.
that's not merged
We fixed this the other week. It now works.
Should we go and file a report with them then?
Mon, Jul 27
Sun, Jul 26
Please request this from sysadmins and we'll do the deletion. We don't allow it, as this mimics a shell script.
VE isn't even bundled (yet), switching to Extensions
Sat, Jul 25
They still give access to PII. It doesn't let you check anyone unilaterally, but PII is still PII.
Both times there was unanimous rejection of the proposals.
Reading the closure, that had a lot to do with the format of the RfC. I'm not talking about allowing wikis to give CU/OS to just anyone, willy-nilly. I'm talking about allowing wikis with a sufficiently large community to elect at least 2 users who have signed an NDA, via elections reviewed by the steward team, with those users reaching support levels higher than those required for Steward.
Based on what I've seen and Amanda's comment, closing as invalid.
These rights are fundamentally equivalent to CheckUser permissions. You would definitely need to sign the NDA if granted access to these permissions. Given that CheckUser and Oversight permissions are not given out to non-stewards, I don't see why an exception would be made here, unless there is some unusual/extenuating reason for this. If it is the case that the rights will be granted to anyone who has signed the NDA (which I doubt is the case), that is not reflected in the relevant policies/documentation.
https://grafana.miraheze.org/d/iWQm-pOZz/nginx-appservers?viewPanel=12&orgId=1&from=now-15m&to=now&var-instance=mw5.miraheze.org:9113 shows the same mw5 nginx issues as last night when it crashed?
Can this be closed?
Why is this stalled?
Fri, Jul 24
I'll talk to them
From what I can see, it treats actor_id as unique, but all IPs have actor_id set to 0, so when actor_id = 0 it needs to fall back to using actor_ip
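The fallback described above could be sketched like this (a conceptual example only, not the extension's actual code; the field names mirror MediaWiki's actor table, but the function and dict rows are hypothetical): registered users are keyed by their nonzero actor_id, while anonymous edits, which all share actor_id 0, are keyed by IP so they stay distinct.

```python
def resolve_actor_key(row):
    """Return a stable key identifying the actor behind a row.

    Registered users have a nonzero actor_id. Anonymous (IP) edits
    all share actor_id == 0, so treating actor_id as unique would
    lump every IP together; fall back to the IP address instead.
    """
    if row["actor_id"] != 0:
        return ("id", row["actor_id"])
    return ("ip", row["actor_ip"])
```

Keying on the tuple rather than actor_id alone keeps two different IPs from colliding on the shared id of 0.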
Drop us an email to tech[at]miraheze[dot]org with the webhook url and we'll do it.
I don't see why this was taken out of the queue
Thu, Jul 23
This stinks of a problem with how it handles actors and anonymous users. I'm betting it's upstream.