Description

There doesn't seem to be any easy way for a tool that takes a URL and processes the wiki article at that URL (a spider, a browser extension etc.) to identify the article sufficiently to interact with the API: the title and variant are contained in the URL, but they might be in the path or the query, the path format might depend on the wiki configuration, the URL might be in a non-canonical encoding, and so on. MediaWiki scripts rely on page variables instead, but those are set via mw.config, so they are not in an easily parsable format. The most important variables identifying the content (title, variant, revision, maybe page ID) should be embedded in the HTML in a machine-readable format.
Event Timeline
(The use case where this came up is a browser extension for sending the current article to action=readinglists&command=createentry.)
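As a rough sketch of that use case, assuming the extension has already recovered the project and title (the list ID and wiki are illustrative, and a real request would also need a CSRF token), the createentry call might be built like this:

```python
import urllib.parse

# Hypothetical request a browser extension might send to add the current
# article to a reading list. List ID and project URL are placeholders.
params = {
    "action": "readinglists",
    "command": "createentry",
    "list": 1,                               # illustrative list ID
    "project": "https://en.wikipedia.org",   # illustrative project
    "title": "Main_Page",                    # this is the hard part to recover
    "format": "json",
}
url = "https://en.wikipedia.org/w/api.php?" + urllib.parse.urlencode(params)
print(url)
```

The point of the task is that filling in `title` (and the variant/revision, where relevant) from nothing but the page URL is unreliable.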
> MediaWiki scripts rely on page variables instead, but those use mw.config so they are not in a parsable format.
Why not? A regular expression like /"wgPageName"\s*:\s*"([^"]+)"/ or /"wgRelevantArticleId"\s*:\s*(\d+)/ (or something similar, depending on what you actually want) will extract the relevant data from the HTML.
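For instance, a minimal sketch of that approach, run against a made-up fragment of the inline mw.config call (the values here are invented for illustration):

```python
import re

# Hypothetical snippet of a page's inline mw.config.set() call.
html = '''<script>mw.config.set({"wgPageName":"Main_Page",
"wgRelevantArticleId":1234,"wgRevisionId":5678});</script>'''

# Extract the title and article ID with the regexes suggested above.
title_m = re.search(r'"wgPageName"\s*:\s*"([^"]+)"', html)
id_m = re.search(r'"wgRelevantArticleId"\s*:\s*(\d+)', html)

print(title_m.group(1))  # Main_Page
print(id_m.group(1))     # 1234
```

This works on the happy path, though it silently depends on how the skin happens to serialize the config object.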
The usual advice about parsing HTML with regular expressions probably applies here.
We could just add the variables as a bunch of meta keywords, or as something like JSON-LD, to get a well-defined, machine-readable syntax.
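The JSON-LD variant might look like the sketch below; the field names are purely illustrative (a real schema would need to be agreed on), but the point is that a consumer can extract the block and parse it as plain JSON instead of scraping JavaScript:

```python
import json
import re

# Hypothetical JSON-LD block a skin could embed; keys are illustrative.
html = '''<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article",
 "name": "Main Page", "identifier": 1234, "version": 5678}
</script>'''

# A consumer pulls out the script element and parses its body as JSON.
m = re.search(r'<script type="application/ld\+json">(.*?)</script>', html, re.S)
data = json.loads(m.group(1))
print(data["name"], data["identifier"])
```

Unlike the mw.config scraping above, this gives tools a stable, purpose-built contract rather than an accident of script serialization.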