Hi, I'm John. I'm the Co-Founder of Miraheze and a Steward.
Mon, May 3
Sun, May 2
Load times for mw*, so this is MediaWiki infrastructure. Please tag tasks correctly in future.
Thu, Apr 29
I have created an account on OpenCVE and populated it with products/services we are using. Password can be found on Private Git.
Wed, Apr 28
If this can’t be done as a single source of truth, my personal opinion would be to get rid of public/private settings until they can be done safely and securely.
Mon, Apr 26
Sun, Apr 25
@Paladox can you take a look just to see if it’s only an OOM and nothing more serious?
Fri, Apr 23
Thu, Apr 22
Wed, Apr 21
@Reception123: can this be closed? Grafana shows this isn’t irregular, and there’s already an open task to increase capacity that is blocked in MWSRE.
Tue, Apr 20
@Southparkfan any updates on the above?
Grafana shows an increase in requests/s on both cp10 and cp11.
Mon, Apr 19
@Southparkfan see the above, please
Fri, Apr 16
Rename worked when I did:
Ignore the above - the problem was caused by aaawiki not having a cache file on test3.
root@test3:~# php /srv/mediawiki/w/maintenance/eval.php --wiki metawiki
> $jQ = JobQueueGroup::singleton('metawiki');
Deployment of jobrunner on all servers has now happened. Per the above, this is blocked on MediaWiki (SRE) deciding when they wish to deploy an additional two servers.
Steps to enact the above would be:
Wed, Apr 14
Tue, Apr 13
@Paladox are you okay to have a look at this?
Added monitoring; removed the production error tag, as no stack trace/ID/link was provided for a production error.
Sun, Apr 11
Apr 9 2021
It looks like a useful service, so we should definitely give it a try and evaluate it from a security perspective.
Changes never got deployed on the server; this has been fixed now.
Apr 8 2021
Because of our monitoring, we’re running fairly intensive Lua scripts on almost 100k keys, which can take up to 2 seconds. We have set our connectTimeout in Redis to 2s (https://github.com/miraheze/mw-config/blob/master/GlobalCache.php#L48).
This is the Redis software, not the jobqueue software, as this is run manually, not as a job.
Apr 7 2021
Apr 6 2021
Basic Lua script to handle this:
Apr 5 2021
Apr 1 2021
Mar 31 2021
Is there a use case for this that the ES data source wouldn’t fulfil? Is this the approach MediaWiki (SRE) wish to take? If so, this would fall under the MW team to implement as part of their task; without a use case for Infra, what’s the point in implementing something unused?
Mar 28 2021
Not blocked on external entity
Mar 27 2021
Sounds like there isn't a problem then?
Do we have an update on this? Also, who is taking responsibility for this?
Mar 26 2021
Before I can review this, more information needs to be provided.
Mar 25 2021
Approved, with spending authorisation by @Southparkfan
Approved for cloud4.
Mar 23 2021
T4302 - if that task gets declined in the future, this task would need re-opening.
Mar 22 2021
Get a list of all h-sha1ById keys and loop over them; running HLEN on each key returns how many unclaimed jobs there are for that job type. Add these up, and the data exists both for the whole jobqueue and per job (and, if you want to go further, per job type per wiki).
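The steps above can be sketched as follows. This is a minimal sketch, not the production implementation: the key layout `<wiki>:jobqueue:<type>:h-sha1ById` is an assumption (check the real key names with SCAN first), and a tiny in-memory stand-in replaces a live Redis client so the sketch runs anywhere. Any redis-py-style client exposing `scan_iter()` and `hlen()` could be dropped in instead.

```python
from collections import Counter
import fnmatch

def unclaimed_job_counts(client, pattern="*:jobqueue:*:h-sha1ById"):
    """Sum HLEN over every h-sha1ById hash, grouped by job type."""
    per_type = Counter()
    for key in client.scan_iter(match=pattern):
        job_type = key.split(":")[2]            # third segment = job type (assumed layout)
        per_type[job_type] += client.hlen(key)  # HLEN = unclaimed jobs in this hash
    return per_type

class FakeRedis:
    """Minimal stand-in for this sketch: hashes stored as plain dicts."""
    def __init__(self, hashes):
        self.hashes = hashes
    def scan_iter(self, match=None):
        return (k for k in self.hashes if fnmatch.fnmatch(k, match))
    def hlen(self, key):
        return len(self.hashes[key])

fake = FakeRedis({
    "metawiki:jobqueue:refreshLinks:h-sha1ById": {"a": 1, "b": 1},
    "testwiki:jobqueue:refreshLinks:h-sha1ById": {"c": 1},
    "metawiki:jobqueue:htmlCacheUpdate:h-sha1ById": {"d": 1},
})
counts = unclaimed_job_counts(fake)
print(dict(counts), "total:", sum(counts.values()))
# → {'refreshLinks': 3, 'htmlCacheUpdate': 1} total: 4
```

The per-type totals give the per-job breakdown, and summing them gives the whole-jobqueue figure; keeping the wiki prefix in the grouping key would give the per-wiki split mentioned above.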
Reviewing the stats already put up by @Paladox and looking into Dovecot's stats facility in more detail, I don't believe we would gain any new information from Dovecot stats as the Postfix ones already cover all bases of mail, including connections, logins and auth failures.