Oct 5 2022
More thinking out loud: I think there's an even easier solution. Subscribing is already a client-side action, so upon subscription, since the client knows it may have missed changes between the sitelink being created and the subscription, we can just invalidate the cache. That way, the next view will re-render and pick up any missed changes. I'm working on testing this now, but there are some subtleties involved, and my understanding of how caching, refreshing links, subscriptions, changes, etc. all interact is still very new.
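The invalidate-on-subscribe idea above could look roughly like this. This is a minimal Python sketch of the logic only, not Wikibase code; `page_cache`, `render`, and the function names are all hypothetical placeholders.

```python
# Sketch of "invalidate the cache on subscription" with a dict-based
# page cache. All names here are illustrative, not Wikibase APIs.

page_cache = {}  # page title -> rendered output

def render(title):
    # Stand-in for a real render that would pull in remote data.
    return f"rendered:{title}"

def get_page(title):
    # Serve from cache, re-rendering only on a miss.
    if title not in page_cache:
        page_cache[title] = render(title)
    return page_cache[title]

def subscribe(title):
    # The client may have missed changes between sitelink creation and
    # subscription, so drop the cached rendering: the next view
    # re-renders and picks up any missed changes.
    page_cache.pop(title, None)

get_page("Q42")                   # populates the cache
subscribe("Q42")                  # invalidates it
assert "Q42" not in page_cache    # next view will re-render
```

The point of the sketch is that no extra dispatch machinery is needed: invalidation alone guarantees a fresh render on the next view.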
Oct 3 2022
Thinking out loud here: could we potentially add all pre-existing subscriptions for a client as a deferred action when the client is added, instead of as a job, and do that before creating the DispatchChanges job?
Sep 25 2022
Upon even more digging, my issue was similar, but actually subtly different. I've opened a separate issue (T318501) for it.
This is likely another thing that could go wrong with dispatching a la T291063
Sep 23 2022
Reopening because after digging further, I'm more convinced that my above analysis is correct.
tl;dr: There is a race condition between change dispatch and subscription creation that used to be resolvable via manual dispatch running in a loop but no longer has a resolution.
In more detail: I think I'm encountering a race condition between making changes and adding the subscription. If the recent change is processed before the refreshLinks job runs, then there is no new recent-change processing to trigger dispatch when the refreshLinks job creates the sitelink/subscription.
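The racy ordering can be shown with a toy timeline. This is a simplified Python model of the behaviour described above, not the actual dispatch code; the structures and names are hypothetical.

```python
# Toy model of the race: dispatch only fires for entities that already
# have a subscriber, and nothing re-triggers it afterwards.

subscriptions = set()   # entities with a subscribed client
pending_changes = []    # recent changes awaiting dispatch
dispatched = []

def process_recent_changes():
    while pending_changes:
        change = pending_changes.pop(0)
        if change["entity"] in subscriptions:
            dispatched.append(change)
        # else: the change is dropped, and nothing re-triggers
        # dispatch for it later

def refresh_links_job(entity):
    # Creates the sitelink/subscription, possibly *after* the change
    # was already processed.
    subscriptions.add(entity)

# Racy ordering: the change is processed before refreshLinks runs.
pending_changes.append({"entity": "Q1", "rev": 2})
process_recent_changes()   # nothing dispatched: no subscription yet
refresh_links_job("Q1")    # subscription exists now, but too late
assert dispatched == []    # the client never sees rev 2
```

Manual dispatch in a loop used to paper over this ordering; without it, the dropped change is simply lost.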
@Michael I am running Wikibase REL1_38 (from source, commit 33c2c9226c) and am still seeing what I believe is this issue.
Aug 30 2022
I wonder if there's benefit in migrating the MySQL and PostgreSQL implementations to PDO? That would probably resolve the issue where they return strings and unify behavior. I suspect it might be at the expense of performance, though?
I agree 100% with not being coupled to strings and using the native integer types instead. It seems like a positive change for a whole host of reasons. But that's a much larger change in a system I don't have experience in, and my experiments with trying to support both the strings returned by MySQL and PostgreSQL and the integers returned by SQLite were not successful.
Aug 29 2022
Although discovered with Wikibase, this is not actually a Wikibase issue. PHP 8.1 introduces a breaking change into the SqliteResultWrapper. Leaving the Wikibase tags for now in case there's a local workaround that's preferred, but I think I now have an idea of how to fix this at the core level and hope to have a patch soon.
Digging a bit more, the type mismatch comes from loading the wbt_type table into the cache using the NameTableStore::loadTable() function. This function stores the returned rows in an associative array keyed by id, and although PostgreSQL, MySQL, and SQLite prior to PHP 8.1 return the ids as strings, using them as keys in the associative array results in them being cast to int. This is almost certainly why DatabaseTermInLangIdsAcquirer::acquireTermInLangIdsInner was casting back to a string.
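The class of bug here can be illustrated in a few lines. This is a Python model of the mismatch, not the PHP code: PHP arrays silently cast numeric-string keys to int, which the `int()` in `load_table` below stands in for; all names are placeholders.

```python
# A cache keyed by int (as PHP array-key casting produces) looked up
# with whatever type the driver returned. Hypothetical names.

def load_table(rows):
    # Like NameTableStore::loadTable(): ids end up as int keys, since
    # PHP casts numeric-string array keys to int (modeled explicitly).
    return {int(row_id): name for row_id, name in rows}

cache = load_table([("1", "itemid"), ("2", "label")])

# MySQL/PostgreSQL (and SQLite before PHP 8.1) hand back "1" (string).
driver_id = "1"
assert driver_id not in cache      # str "1" != int 1: the mismatch
assert int(driver_id) in cache     # an explicit cast restores the match
```

This is why the existing cast in `acquireTermInLangIdsInner` mattered: it forced both sides back to one type, which PHP 8.1's SQLite driver (now returning native ints) broke.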
Aug 28 2022
Aug 23 2022
GeoData was not fixed with https://gerrit.wikimedia.org/r/c/mediawiki/extensions/GeoData/+/805460. While support was added in the schema, GeoData has a bug preventing insertion of data into the geo_tags table.
Mar 9 2022
I have a local change that removes the CAST in favor of more portable syntax, but it does not solve the table prefix issue. I haven't had time to look into replacing this with something truly portable yet. Would the CAST removal be potentially welcome as an intermediate patch? Or should I wait to submit a patch until I have something that replaces the semi-hardcoded $where with something more portable?
Mar 8 2022
Sure. I have a personal wiki I'm running (currently MW 1.37.1) with some custom extensions. One of these extensions is one I'm developing that will automatically geocode an address (stored in a property on an item) and store the resulting coordinates (in a different property on the same item). This (currently) runs as a deferred update triggered by saving the item with a change to the address property: the onWikibaseChangeNotification hook adds the deferred update to do the geocoding and then returns.
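The shape of that flow, reduced to a Python sketch: the hook handler only schedules work and returns, and the geocoding happens later as a deferred update. `geocode()` and the property IDs are placeholders, not the real extension's code.

```python
# Sketch of "hook schedules deferred geocoding" with placeholder names.

deferred_updates = []

def geocode(address):
    # Stand-in geocoder; the real extension calls an external service.
    return (51.5, -0.1)

def on_wikibase_change_notification(item, changed_props):
    # Hook handler: only schedule the work, then return immediately.
    if "P_address" in changed_props:
        deferred_updates.append(item)

def run_deferred_updates():
    # Later, the deferred update does the actual geocoding and write.
    for item in deferred_updates:
        item["P_coords"] = geocode(item["P_address"])
    deferred_updates.clear()

item = {"P_address": "221B Baker Street"}
on_wikibase_change_notification(item, {"P_address"})
run_deferred_updates()
assert item["P_coords"] == (51.5, -0.1)
```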
Mar 5 2022
Aug 29 2021
Aug 27 2021
Aug 23 2021
Aug 22 2021
Aug 20 2021
Alternate idea, instead of using the AfterImportPage hook, I could just extend the already used onImportHandleRevisionXMLTag hook.
I locally have a solution using the AfterImportPage hook to compare the highest recorded id in wb_id_counters for the imported entity type to the imported entity id. If the imported id is higher, it writes the imported id to the wb_id_counters table. I'm not sure this is the best solution, though, for a few reasons:
- If something goes wrong during page import, but some of the revisions were successfully imported, it doesn't solve the problem, because the AfterImportPage hook won't be called.
- It requires reading from (and possibly writing to) the database an extra time for every page import, slowing down the import process, which is already a concern (see T287164: Improve bulk import via API)
- It adds a second hook to the import process (somehow it feels like more hooks for one task is worse than fewer hooks for one task)
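The counter-bump logic itself is small. A Python sketch of the idea, with wb_id_counters modeled as a dict and the hook/storage names as placeholders rather than the actual schema access:

```python
# Sketch of the AfterImportPage idea: bump the id counter when an
# imported entity id exceeds it. Names are illustrative placeholders.

wb_id_counters = {"item": 100}

def after_import_page(entity_type, imported_id):
    # One extra read (and possibly write) per imported page, which is
    # the performance concern noted above.
    current = wb_id_counters.get(entity_type, 0)
    if imported_id > current:
        wb_id_counters[entity_type] = imported_id

after_import_page("item", 95)    # no-op: counter already higher
after_import_page("item", 250)   # counter bumped
assert wb_id_counters["item"] == 250
```

The partial-import caveat above still applies: if the hook never fires for a failed page, already-imported revisions can leave the counter stale.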