Both insertion and processing rate statistics are readily available by eye - the new graph backs up my original point for why I declined to do this; it adds no value.
May 13 2021
In T7073#145080, @Southparkfan wrote:
In T7073#144421, @Reception123 wrote:
I could work on adding the metrics to prometheus. Which metrics would you like to collect? (a counter of <this> in unit <that>)
GitInfo::getHeadCommitDate tries to fetch the timestamp of the last commit for each extension. This can be very slow:
southparkfan@mw8:/srv/mediawiki/w/extensions/VisualEditor$ time git show -s --format=format:%ct HEAD
1620421442
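One possible mitigation, as a minimal sketch: MediaWiki core's $wgGitInfoCacheDirectory setting lets GitInfo read precomputed JSON metadata instead of shelling out to git on every request, assuming we regenerate the cache files on deploy. The directory path below is an assumption, not our actual layout:

// LocalSettings.php sketch (untested): point GitInfo at a directory of
// precomputed cache files so Special:Version does not run `git show`
// for every extension on every page view.
$wgGitInfoCacheDirectory = "$IP/cache/gitinfo";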
In T5700#145074, @Universal_Omega wrote:
In T5700#145050, @Southparkfan wrote:
Since T7297 is considered to be a duplicate:
MediaWiki offers the HttpRequestFactory class to make HTTP calls in a standardised manner. The class ensures MediaWiki's internal logging features (e.g. the 'http' log channel) and configuration settings (e.g. http_proxy) are used when executing HTTP calls. Instead, RottenLinks uses the curl_ functions directly.
Example code (untested!):

$request = MediaWikiServices::getInstance()->getHttpRequestFactory()->create( $url, [
    'method' => 'HEAD', // return headers only
    'timeout' => $config->get( 'RottenLinksCurlTimeout' ),
    'userAgent' => 'RottenLinks, MediaWiki extension (https://github.com/miraheze/RottenLinks), running on ' . $config->get( 'Server' )
], __METHOD__ )->execute();
return (int)$request->getStatus();

Just to mention, I'm about to do a PR for this, but the final return here is not entirely correct. It would be return (int)$request->getStatusValue()->getValue(); instead, because ->execute() returns a Status instance, which doesn't have getStatus(), so we use getStatusValue() to get a StatusValue instance and then finally getValue() to get the correct HTTP response code from the status message.
Sorry, I messed up my code after rewriting it. You should not chain ->getStatus() after ->execute(). See this example:
$request = MediaWikiServices::getInstance()->getHttpRequestFactory()->create( $url, [
    'method' => 'HEAD', // return headers only
    'timeout' => $config->get( 'RottenLinksCurlTimeout' ),
    'userAgent' => 'RottenLinks, MediaWiki extension (https://github.com/miraheze/RottenLinks), running on ' . $config->get( 'Server' )
], __METHOD__ );
$reqexec = $request->execute();
return (int)$request->getStatus();
@Reception123 See https://grafana.miraheze.org/d/3L3WYylMz/mediawiki-job-queue?orgId=1&from=now-24h&to=now for the 'insertion rate'. I have not been able to add the 'processing rate'.
In T5700#145050, @Southparkfan wrote:
Since T7297 is considered to be a duplicate:
MediaWiki offers the HttpRequestFactory class to make HTTP calls in a standardised manner. The class ensures MediaWiki's internal logging features (e.g. the 'http' log channel) and configuration settings (e.g. http_proxy) are used when executing HTTP calls. Instead, RottenLinks uses the curl_ functions directly.
Example code (untested!):

$request = MediaWikiServices::getInstance()->getHttpRequestFactory()->create( $url, [
    'method' => 'HEAD', // return headers only
    'timeout' => $config->get( 'RottenLinksCurlTimeout' ),
    'userAgent' => 'RottenLinks, MediaWiki extension (https://github.com/miraheze/RottenLinks), running on ' . $config->get( 'Server' )
], __METHOD__ )->execute();
return (int)$request->getStatus();
In T7135#144414, @Reception123 wrote:
This does not have high priority. It can be assigned to me if you wish.
Paladox has changed the scheduler on cloud5. Let's wait for a day to see the impact on I/O performance (regular operations).
You can already enable this in Special:ManageWiki/extensions yourself :)
Since T7297 is considered to be a duplicate:
MediaWiki offers the HttpRequestFactory class to make HTTP calls in a standardised manner. The class ensures MediaWiki's internal logging features (e.g. the 'http' log channel) and configuration settings (e.g. http_proxy) are used when executing HTTP calls. Instead, RottenLinks uses the curl_ functions directly.
Example code (untested!):

$request = MediaWikiServices::getInstance()->getHttpRequestFactory()->create( $url, [
    'method' => 'HEAD', // return headers only
    'timeout' => $config->get( 'RottenLinksCurlTimeout' ),
    'userAgent' => 'RottenLinks, MediaWiki extension (https://github.com/miraheze/RottenLinks), running on ' . $config->get( 'Server' )
], __METHOD__ )->execute();
return (int)$request->getStatus();
Removing the Extensions project since this is tagged with RemovePII now and we don't usually tag Extensions when tagging an additional Miraheze extension project.
The above task was created for this purpose: to overhaul how the detection works.
Usually we don't add the Extensions tag when a task is already tagged with a specific Miraheze extension project.
Noting here (for future reference) that @Paladox proposed on IRC to create a script that would run every minute to check whether the mounts are mounted.
Lowering priority. Closing as invalid per discussion on the community noticeboard. Until we've ruled out that this can be resolved on-wiki, this doesn't need a task.
Please do not triage feature requests as high priority. It may be useful for you to read https://meta.miraheze.org/wiki/Phabricator in order to familiarize yourself with our method of working here.
Do you mean Special:NewPages?
It's been less than 12 hours.
Any feedback yet?
Ah, alright that's fine.
@Turtle84375 @R4356th Upstream has fixed this and I've updated AF.
In T7287#144953, @Reception123 wrote:
Would it be possible to have a puppet check for this, so that if /mnt/mediawiki-static is inaccessible, puppet automatically does a umount?
T7134 - It does. That didn't happen.
Would it be possible to have a puppet check for this, so that if /mnt/mediawiki-static is inaccessible, puppet automatically does a umount?
@Emmateapot It's not possible to have a wiki with two domains; however, one can redirect to the other. Would you like 1) puzzles.wiki to redirect to www.puzzles.wiki, or 2) www.puzzles.wiki to redirect to puzzles.wiki?
This is a bit tricky due to the way DataDump's configuration works, but I'll look into it.
@RhinosF1: any progress on this?
In T7194#143066, @Reception123 wrote:
I don't quite recall why they were grouped, but I do imagine there was some reason behind it. The question is whether it's worth the effort of splitting them up.
We won't be maintaining it. Sorry.
May 12 2021
Well then, I guess we can just ignore this, as it most likely won't meet users' expectations.
Closing per original status again.
In T7182#144904, @WikiJS wrote:
By that, does it mean we can't change the logo and/or footer?
By that, does it mean we can't change the logo and/or footer?
In T7278#144894, @Amical wrote:
This is the dump. Thanks in advance! {F1438957}
We'll review it as soon as we can to confirm everything.
This is the dump. Thanks in advance! {F1438957}
This is too Wikimedia-specific, I think.
There is also a way to do this by adding a check for whether you will still have managewiki access post-submit: it could probably simulate removing the right from the group, then check whether the actor can still access managewiki, though that does get quite complex.
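A rough sketch of what that simulation could look like, with hypothetical names throughout ($proposedPermissions stands in for the per-group rights map the form would save; this is not how ManageWiki is actually structured):

// Hypothetical sketch: rebuild the rights map as it would look post-submit
// and verify the submitting user would still hold the 'managewiki' right.
$services = MediaWiki\MediaWikiServices::getInstance();
$groups = $services->getUserGroupManager()->getUserGroups( $user );

$wouldRetainAccess = false;
foreach ( $groups as $group ) {
    // $proposedPermissions: simulated post-submit map of group => rights
    if ( in_array( 'managewiki', $proposedPermissions[$group] ?? [], true ) ) {
        $wouldRetainAccess = true;
        break;
    }
}

if ( !$wouldRetainAccess ) {
    // Refuse the save rather than letting the user lock themselves out.
}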
I plan to work on the rest of this over the coming weeks.
If you'd like, I could make you an editor on this wiki to try it out. The only problem is that it is in Greek!
I wanted to send a patch.
Unfortunately, Miraheze will not be enabling the configuration again due to the upstream decision to remove it, so there is not much we can do. If R4356th wants to create an extension for it, that's fine, but that's not really a reason to keep this task open if it's blocked on that. Apologies for the inconvenience this caused.
There seems to be an issue with DPL3 where this config option doesn't work. I think I know why, but I can't exactly fix it: I'd need a large wiki to test against, which I don't have.
It's understandable that it's annoying when upstream is slow and doesn't immediately merge patches, but it's also not possible to shift that responsibility onto Miraheze and have us tracking locally applied changes to extensions. One may say that this one is more important than others, but if we did it for one, everyone would ask us to apply upstream patches, and that is just too much for us to manage, I think. Either way, as RhinosF1 says, I'm not aware of a proper way to do this. Turning off AbuseFilter doesn't make sense either, because this is just one individual piece of functionality that doesn't work.
In T7275#144857, @Dmehus wrote:
@Reception123 Does the AuthorProtect extension even work on Miraheze? I've been thinking for some time we should consider uninstalling it, since I believe the issue is that it requires a separate editauthorprotected restriction level to be added to each wiki (which we can't do, at least not easily, in ManageWiki). I haven't been able to get it to work on testwiki, at least not without an LS.php change.
Not sure about the functionality differences, but UserPageEditProtection does seem to be a simpler, better extension than AuthorProtect, so I'd 👍 uninstalling AuthorProtect and installing UserPageEditProtection (after a security review, of course).
@Reception123 Does the AuthorProtect extension even work on Miraheze? I've been thinking for some time we should consider uninstalling it, since I believe the issue is that it requires a separate editauthorprotected restriction level to be added to each wiki (which we can't do, at least not easily, in ManageWiki). I haven't been able to get it to work on testwiki, at least not without an LS.php change.
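For reference, the LS.php change alluded to would presumably look something like the following (untested sketch; the editauthorprotected name is taken from the comment above, and granting it to sysop is an assumption):

// Sketch of a per-wiki LocalSettings.php change (untested): register the
// extra restriction level and grant the matching right to sysops.
$wgRestrictionLevels[] = 'editauthorprotected';
$wgGroupPermissions['sysop']['editauthorprotected'] = true;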
Subscribing self.
It's honestly amazing that this exists!
In T7275#144817, @Reception123 wrote:
@PiscesKazeMGR I see. There is also an easier option, which would be to set up a filter at Special:AbuseFilter with the following configuration, set to disallow:

( article_namespace == 2 & !user_name == article_text & !"/" in article_text & !"sysop" in user_groups )

Let me know if this would work; otherwise the extension can be reviewed and potentially added.
Okay
Unfortunately this cannot be accepted; we do not have the resources for CirrusSearch.
This also goes for the Elastica extension, Elasticsearch, and OpenJDK, please.
@PiscesKazeMGR I see. There is also an easier option, which would be to set up a filter at Special:AbuseFilter with the following configuration, set to disallow:

( article_namespace == 2 & !user_name == article_text & !"/" in article_text & !"sysop" in user_groups )

Let me know if this would work; otherwise the extension can be reviewed and potentially added.
Anyway, for faster progress, I will show you the differences between AP and UPEP.
| | AuthorProtect | UserPageEditProtection |
| Affecting | Locks any page as desired by the author of the page, including the Main and User namespaces. | Only locks pages in the User namespace; only allows editing by the corresponding user and sysops. |
| Trouble happening? | YES. Vandals can manually create a user page with content that defames others and then lock it to prevent the victim from modifying it. | NO. |
| Conclusion | => Not safe for a community wiki! | => Protects User namespace pages from vandalism. |
AbuseFilter is not getting turned off because of this.
I understand the concerns but still think that "hacking" is the way to go if Miraheze wants to make sure all global extensions maintained by the WMF work. We all know that the WMF are lazy folks (to put it bluntly) and do not really care about third parties. It is unlikely that the issue will be fixed anytime soon. If the issue cannot be fixed by Miraheze, then AbuseFilter should be turned into a default extension so that annoyed users can at least turn it off.
I think it's already set to true, but if you wouldn't mind checking, I would be obliged!
I don't think starting to hack extensions is the way to go, as it would be a slippery slope and we can't possibly manage all that. Also, for the reasons given by RhinosF1 regarding updates and so on, there would be merge conflicts/overrides, and it would just not be practical to do so.
In T7290#144804, @RhinosF1 wrote:
In T7290#144802, @R4356th wrote:
I can see a straightforward solution actually. Miraheze can just test this patch on test3 and, if it goes well, apply it to production wikis.
We've never in our history applied a patch on top of core or to an extension.
We've done it for core, both publicly and for security issues, but never for an extension to my knowledge.
Because core is just files and not updated that regularly. Extensions are submodules and updated much more often.