We should monitor the size of database tables and ensure that their data does not grow uncontrolled.
Description
Status | Assigned | Task
---|---|---
Resolved | Reception123 | T7531 503 Backend fetch failed errors lasting a few minutes
Resolved | Unknown Object (User) | T7532 GlobalNewFiles disabled as caused too many connections to db11 (CVE-2021-32722)
Declined | Unknown Object (User) | T7543 Monitor size of database tables
Event Timeline
Do you mean monitor the size of every individual table on all wikis? If not, can you elaborate?
I think that would be very hard to do without huge bloat. We should probably sample the data.
Yes, I know; that's why I was asking. I'm not sure how we can do this well. I don't see how it can really be done in Grafana, so do you have any other ideas?
No, we don't have any per-table / per-database stats. I think we discussed putting them in a separate report.
If this is per wiki, subdivided into tables, then this isn't feasible for Grafana (in my opinion). And I'm not sure how else we would collect and usefully display this information.
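For what a separate report (rather than Grafana dashboards) might look like, here is a minimal hypothetical sketch. The `SIZE_QUERY` uses real MariaDB/MySQL `information_schema.TABLES` columns, but the report formatting, function name, and sample numbers are illustrative assumptions, not anything this task actually implemented.

```python
# Hypothetical sketch: sample per-table sizes from a MariaDB server's
# information_schema and emit a plain-text top-N report, instead of
# exporting one Grafana metric per table per wiki.

# DATA_LENGTH and INDEX_LENGTH are standard information_schema.TABLES
# columns in MariaDB/MySQL; %s is a DB-API query placeholder.
SIZE_QUERY = """
SELECT TABLE_SCHEMA, TABLE_NAME,
       DATA_LENGTH + INDEX_LENGTH AS total_bytes
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN
      ('mysql', 'information_schema', 'performance_schema')
ORDER BY total_bytes DESC
LIMIT %s
"""

def format_report(rows, top_n=5):
    """rows: iterable of (schema, table, total_bytes) tuples,
    e.g. as fetched with SIZE_QUERY. Returns report lines for
    the top_n largest tables."""
    ranked = sorted(rows, key=lambda r: r[2], reverse=True)[:top_n]
    return [f"{schema}.{table}: {size / 1024 / 1024:.1f} MiB"
            for schema, table, size in ranked]

# Example with made-up sizes (database/table names are illustrative):
sample = [
    ("metawiki", "revision", 512 * 1024 * 1024),
    ("metawiki", "text", 2 * 1024 ** 3),
    ("testwiki", "page", 64 * 1024 * 1024),
]
print("\n".join(format_report(sample)))
```

A periodic job running such a query per database server and mailing or publishing the text output would avoid the metric-cardinality bloat discussed above, at the cost of losing Grafana's history and alerting.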
I'm going to go ahead and mark this as declined per the above. However, do reopen if a feasible method can be found.