Wikipedia talk:Bots/Archive 5


NetBot

Bot running pywikipediabot, specifically category.py to assist with tasks resulting from Category discussions at Wikipedia:Categories for deletion. May occasionally help with disambig, redirect bypassing, and interwiki. -- Netoholic @ 01:17, 31 Aug 2004 (UTC)

I object to this user running a bot. Snowspinner 06:39, Sep 18, 2004 (UTC)
Considering I've made no edits with it yet, this seems premature. Care to offer any specific complaint about this bot's stated purpose? -- Netoholic @ 04:31, 2004 Sep 19 (UTC)
I don't trust you to only use the bot for the stated purpose. Snowspinner 04:45, Sep 19, 2004 (UTC)
Then you may add the bot's contrib page to your frequently visited pages. -- Netoholic @ 04:48, 2004 Sep 19 (UTC)
Considering this user's past behaviour and contempt for our policies, I'd be very careful about allowing him to run a bot. Ambi 04:33, 19 Sep 2004 (UTC)
Wow, dunno where this is coming from. I have no contempt for any policies, and my behaviour is completely civil. Is there any technical reason I cannot assist with category maintenance, and the other minor, and frequently bot-handled, simple maintenance tasks listed above? If not, then I welcome your comments on my Talk page, rather than making inappropriately aggressive statements on this one. Please check your own behavior. -- Netoholic @ 04:47, 2004 Sep 19 (UTC)
I'm afraid I have to agree with Snowspinner on this one. I don't want to have to block it in a month's time if it starts mass-removing VFD listings or something similar. Ambi 04:50, 19 Sep 2004 (UTC)

I have run some initial tests with the bot, in accordance with the Bot policy, and its edits have been both harmless and useful. While I have read the above objections, there is no substantive basis for them. My intended uses (see first post above) are completely well-meaning for Wikipedia. -- Netoholic @ 21:46, 2004 Sep 23 (UTC)

You are not the judge of whether objections are substantive. Snowspinner 22:47, Sep 23, 2004 (UTC)

Please note that Netoholic ran this bot without approval, and it was blocked from editing. RickK 06:23, Sep 25, 2004 (UTC)

It has since been unilaterally unblocked by User:Guanaco, FYI. Ambi 06:26, 25 Sep 2004 (UTC)
Per the Bot policy, this bot has run slow tests without a bot flag. Snowspinner blocked it, and that block was removed (see User talk:Guanaco#Unblock). How else can a user test a bot and show that it performs useful and non-disruptive edits? -- Netoholic @ 06:30, 2004 Sep 25 (UTC)
"Before running a bot, you must get approval on Wikipedia talk:Bots. State there precisely what the bot will do. Get a rough consensus on the talk page that it is a good idea. Wait a week to see if there are any objections, and if there aren't, go ahead and run it for a short period so it can be monitored." (emphasis mine) -Sean Curtin 17:29, Sep 26, 2004 (UTC)
The request was first posted here on "31 Aug", and no objection was raised until "Sep 18", well over the "one week" mentioned in policy. So far, no objections to the bot's actions have yet been raised. I am willing to run it without the bot flag for whatever time is necessary to avoid objections, but there is no remote cause for this to be blocked for performing any reasonable edits. I find this whole line insulting, as this is obviously more a character attack than a technical concern. -- Netoholic @ 18:08, 2004 Sep 26 (UTC)
The bot policy does not have any guidelines for what a valid objection is, but, as one of the concerns about bots is clearly vandalbots, character seems a logical objection. Snowspinner 18:32, Sep 26, 2004 (UTC)
I would be very interested to hear what aspects of my character are objectionable, and how they relate to this bot's limited scope. Vandalism has a clear definition here on WP, so you'll need a lot more than just "I don't like this person" to explain your objection - unless you are trying to say that I am a vandal. In which case, I point you to WP:RFC. -- Netoholic @ 19:00, 2004 Sep 26 (UTC)
I have signed the RfC that exists against you. I think you have vandalized in the past. I do not trust you not to vandalize in the future. I do not think you should have a bot. This is simple. This is straightforward. And this is something you entirely brought upon yourself by repeatedly causing trouble. You reap what you sow. As I've said, go a month without causing trouble, I'll withdraw my objection. But, quite frankly, PMing me in IRC to call me a "pre-eminent fuck" repeatedly does not make me think you should be trusted with a bot. If this upsets you, I'm sorry. Snowspinner 19:41, Sep 26, 2004 (UTC)
That RFC has offered no evidence of vandalism. Your comments continue to be insulting at all turns, and I ask you to check your behaviour. -- Netoholic @ 20:24, 2004 Sep 26 (UTC)

New uses

Netoholic used NetBot to unilaterally bypass the templates for deletion process. I then blocked it for 24 hours as it was "messing up articles", per policy on running bots. See Wikipedia:Templates for deletion#Template:Csdtalkhist. —Ben Brockert (42) UE News 00:54, Jan 11, 2005 (UTC)

So which is it, categories or templates? Can you show any diffs where the bot "messed up" an article? You blocked because it was one of your templates which was removed, not because of any remote damage the bot was doing. You didn't even inform me on my talk page. Not only did you block the bot, but you blocked my live account and IP address - something the bot/blocking policy expressly forbids. After all, if I myself had manually removed the template, would you have blocked me directly? Don't abuse your privileges to win fights. I suggest you read up on those policies, rather than make nasty comments. -- Netoholic @ 02:40, 2005 Jan 11 (UTC)
My fault, meant templates, fixed. I did not block your live account or IP address; that was a manifestation of the software. I already apologised for not notifying you; as I said, it won't happen again. No, if you had done it manually I would have just reverted you, as I reverted your bot. —Ben Brockert (42) UE News 03:43, Jan 11, 2005 (UTC)

Netoholic just changed the description of what his bot is allowed to do, see [1]. Since he did not first get permission here to expand the use of the bot, I have reverted his edit (and he reverted my revert, three times). If he could make a case for it here, I would appreciate it. —Ben Brockert (42) UE News 03:43, Jan 11, 2005 (UTC)

I gave up long ago trying to get a bot flag. I am content having the edits visible in Recentchanges, and to run it at the slow pace required on Wikipedia:Bots. This bot helps frequently with WP:TFD, WP:CFD, disambig, etc. and other very normal tasks. Please view Special:Contributions/NetBot. It's recently grabbed the special attention of User:Brockert, over a template he created. We have too few bots to help with these very busy maintenance areas, and this bot helps me immensely. -- Netoholic @ 04:01, 2005 Jan 11 (UTC)
Netoholic is under the impression that this page is used only to get a bot flag, and that there is no requirement to first get approval for using the bot. I disagree. Other users' input is appreciated. —Ben Brockert (42) UE News 05:20, Jan 11, 2005 (UTC)
Can you provide a diff for when you asked for a bot flag? I could not find it, only the request for simple:. —Ben Brockert (42) UE News 05:45, Jan 11, 2005 (UTC)

This bot has not garnered community approval to be run. If I find it running, I will block it. RickK 05:17, Jan 11, 2005 (UTC)

  • RickK, welcome back. I invite you to review the edits of the bot, and tell me that it is not performing very helpful maintenance. -- Netoholic @ 06:03, 2005 Jan 11 (UTC)
    • I don't care if the bot is running the way you proposed that it would or not. You are running an unapproved bot. There is no discussion to be made. Get approval, then you can run it. RickK 06:24, Jan 11, 2005 (UTC)
    • It doesn't matter whether the bot is doing wonderful work, we have a policy that bot tasks need to be approved by the community and you need to abide by that. --fvw* 15:56, 2005 Jan 11 (UTC)

Unless anyone actually voices any objections to the edits that this bot (which uses standard pywikipediabot scripts) is performing, then its use will continue. I have always invited review of the entire edit history. So far, I have never received a complaint that the bot has made any mistakes or damaged any article, nor is it a server resource hog since I run it in off-peak hours. On the other hand, I have received positive feedback whenever I've used it help with the tedious tasks of WP:TFD and WP:CFD. On this basis, I ask that the objectors above, and anyone else, stop discussing whether it has approval or not, and actually review the edits and give approval. Lack of objections to its edits is tacit approval, which is what I've been going on. -- Netoholic @ 05:36, 2005 Jan 13 (UTC)

The objection Snowspinner made earlier seems to apply here. Also, I'm objecting on the same grounds: you are unilaterally changing the purpose of your bot without first asking here. Under those circumstances, I don't feel you can be trusted to run a bot.
If you want specific diffs of abuse, try [2] [3] [4] [5] [6] [7] [8]
[9] [10] [11] [12] [13], where you remove a template from a page while that template is still being discussed under TfD, especially since the voting at TfD is unanimously for keeping the template. --Carnildo 06:05, 13 Jan 2005 (UTC)
Snowspinner's objection was to me, not the bot. He raised that objection before I even started using it. Anyway, the "trust" issue is null because I've said I have no need or desire for a bot flag. Without that, the edits by the bot are visible on Recentchanges. I saw removal of the template you mentioned as pretty obvious and non-controversial. It was also completely reversible had any reasonably calm objection been noted at the time. This is all a tempest in a teapot. -- Netoholic @ 07:16, 2005 Jan 13 (UTC)
An unflagged bot can make just as big a mess as a flagged bot. And I see removing a template that has been voted "keep" on TfD as being the same order of vandalism as blanking a page that has survived VfD. --Carnildo 07:38, 13 Jan 2005 (UTC)
That would've been true, if the vote had concluded. As I recall, at the time, Brockert was the only one who had commented on the TFD request (which I myself nominated). This was not some shady attempt at bypassing the process -- I just really thought he wasn't aware of the typical use of templates and thought I'd save him the trouble of having to re-edit the pages to remove the template. I only ran the bot because I didn't want to manually do that either. It's not like any text was lost, nor was it something I couldn't have reverted immediately if there was a complaint. I apparently misread him, and have now become the target of his attention. I only wish there had been some good faith assumptions, and that he'd asked me on my talk page about it. It's hard, but people have to remember that launching into tirades is rarely productive. -- Netoholic @ 08:41, 2005 Jan 13 (UTC)
I feel that those comments are disingenuous, backhanded, and insulting. —Ben Brockert (42) UE News 04:35, Jan 14, 2005 (UTC)
Not meant to be. The thing is, I had really never interacted with you, and your replies at the time seemed like you weren't familiar with template usage. As someone who patrols WP:TFD, we get a lot of newbies and get similar sorts of reactions. -- Netoholic @ 04:58, 2005 Jan 14 (UTC)
What about this: User talk:Cburnett#NetBot on Cyrano de Bergerac? To me, that reads as a complaint that the bot has "made a mistake or damaged an article". There is no policy that a category line in an article can contain only the category. There's also the issue of you using the bot to orphan templates before nominating them for deletion, a policy you decided not to follow because you didn't write it (?!). I do think that you are damaging articles with your bot, or I wouldn't be here; I certainly don't enjoy this discussion. —Ben Brockert (42) UE News 04:35, Jan 14, 2005 (UTC)
Every pywikipediabot running will sort categories exactly like that. Cburnett's use of comments is... unusual, and this is the first I have heard of it coming up. I did NOT use the bot to orphan Template:CompactTOCallplustwo2. I did that by hand, pointing the (what, 8?) articles to a better template that is used on dozens/hundreds -- but it was not done using the bot. I am sorry you feel the need to make such a bad working atmosphere around this. -- Netoholic @ 04:58, 2005 Jan 14 (UTC)

Reading this whole thing has left me slightly dizzy. All I know is that when I took a look at WP:CfD on 7 January, it was up to well over 200KB long. After a fair amount of work (I spent a good part of two days on it), it was down to about 170KB. Unfortunately, all the easy entries had been done. Basically all of the remaining ones need to have category entries in articles deleted and/or changed, and doing that by hand is really painful. I did some (e.g. Category:Government of North Carolina) by hand, and it was a real drag (literally - with the Wikipedia being so slow).

Unfortunately, both bots that were lending a hand with the grunt-work are now offline - the Pearle bot because it was doing things inefficiently, and this one because people don't want Netoholic running a bot.

(Although I'm sort of confused as to why - the bot isn't doing anything he can't do manually. Maybe Netoholic should write a program to generate "script" pages, like the ones NickJ generates for adding redirects - Netoholic can then cut-and-paste the resulting text into sub-pages of his user pages, and then open up a couple of browser windows on his PC, and go from window to window clicking on "Do it"/"Save page" links. It only takes 4 mouse clicks for each one - I got to the point with NickJ's pages where I could keep 4 browser windows busy. How this differs in results from a slow-running bot is beyond me, but as far as I can see it's perfectly allowable.)

Anyway, as far as bots go, maybe there's a way to finesse the whole situation with Netoholic. How about one of the objectors to Netoholic running this bot runs a bot themselves to do WP:CfD stuff instead? I note that despite a lot of people's hard work, CfD is still a massive 150KB. I'm sure Netoholic would be happy to let someone else do the work on CfD. Noel (talk) 23:29, 13 Jan 2005 (UTC)

I'll look at CfD. You might bump the notice on the admin noticeboard that CfD is in need of attention, and/or add it to Announcements or Goings-on, or write an article on it for Snow's next newspaper, or put a request on Bot requests. —Ben Brockert (42) UE News 04:35, Jan 14, 2005 (UTC)
I've already listed CfD on WP:AN, and done a ton of work there. Why don't you work on getting someone to run a bot there, if you don't want Netoholic doing it? Noel (talk) 13:16, 14 Jan 2005 (UTC)

I would like this bot to be allowed. I haven't seen any bad things being done with it, and Netoholic is not a vandal. I don't particularly like the way that people are implying he is one!!! He's never done vandalism in the entire time I've been here, and I've been in conflict with him a few times. - Ta bu shi da yu 09:14, 14 Jan 2005 (UTC)

Agree with concerns raised by esp. Ambi, Snowspinner, and Carnildo. This entire fiasco has been abominable, and the manner in which Netoholic has conducted himself is grounds enough for protest. ADH (t&m) 09:27, Jan 14, 2005 (UTC)

Ditto to my comments to Brockert. You don't want Netoholic running a bot to catch up on the huge backlog, fine - but then I expect you to find someone else to do it. "If you're not part of the solution", etc. Noel (talk) 13:16, 14 Jan 2005 (UTC)
What a lovely false dichotomy. If you'll excuse me, I have to go shoot some people in the park—unless you have a better solution for rampant population growth. ADH (t&m) 19:55, Jan 14, 2005 (UTC)
What a lovely strawman argument. The result of shooting people in the park is destructive (socially, ethically), whilst the result of running this bot is constructive (performs useful work). To be a fair comparison, try this sentence instead: "If you'll excuse me, I have to go clean up some litter in the park - unless you have a better solution for rampant littering." To which some possible reasonable responses are either "Go right ahead", or (God forbid) "Let me help you". -- All the best, Nickj (t) 23:51, 14 Jan 2005 (UTC)
As long as we're talking logic, your justification for labeling my argument a straw man is based on a false premise: that the bot is uncontroversially productive in a positive way. The last 15 pages or so of argument on the matter suggest otherwise. Noel's assertion—that one has no right to be critical without offering an alternative—is the textbook example of a false dichotomy, and my hyperbolic example was meant to demonstrate that. ADH (t&m) 00:04, Jan 15, 2005 (UTC)

Compromise

If the issue is Netoholic, not the bot, why not get someone else to run the bot? Vacuum c 19:39, Jan 14, 2005 (UTC)

Apparently, there's a slight problem with how the bot handles certain category lines. If that's fixed, then I'd agree with this. --Carnildo 20:19, 14 Jan 2005 (UTC)

I find this all very unfair, and spiteful, considering I have never done a thing to injure Wikipedia or cause strife. Although I have had disagreements with a few people here, I would never consider putting my thumb down on them if they are voicing a genuine request to help Wikipedia. I have approached every single objector here at one time or another and openly offered to work towards a mutually beneficial environment here, but I can't make people meet me half-way. None of this is about a bot, it's about people taking things the wrong way, holding grudges over unrelated events, and teaming up to kick me when I'm down. I'm fortunate that at least a few people have seen that I work here in complete good faith, and I hope the rest do as well, someday. -- Netoholic @ 20:45, 2005 Jan 14 (UTC)

You're right. It isn't about a bot, it's about how you are using the bot, with a minor side issue of occasional bot malfunctions. --Carnildo 20:56, 14 Jan 2005 (UTC)
If this were about the bot's usage or malfunctions, then you would need to read down this very talk page and support blocking every single one of the bots listed as causing a problem. My bot affected one page, Cyrano de Bergerac, because of a peculiarity in the way you'd marked the category lines with comments - but it did so in the same way every pywikipediabot does. I don't think it's serious, but I can work on changes and submit them to the more hard-core developers. The thing is, several bots just on this page have broken hundreds of pages even more severely.
No, the above objections aren't about a simple malfunction either. It's purely personal, and without good reason. -- Netoholic @ 21:13, 2005 Jan 14 (UTC)
Netoholic, the basic problem is that you are flouting policy here. You state repeatedly that you no longer desire to get a bot flag, and will just run the bot and expect that the contributions will be sufficient. The bot policy on Wikipedia is fairly straightforward; while you did comply with the first step of listing it here, you never followed through with actually getting the bot marked as a bot for approval at m:Requests for permissions. Unfortunately, the issue has become so charged that if you were to submit it there now it would be rejected for sure. I wish that we could just contact all parties involved in this dispute, and allow you to start the bot submission/approval procedure over from the very beginning. I'm not sure exactly how to go about this - perhaps it requires a dispute resolution of some kind. --DropDeadGorgias (talk) 21:59, Jan 14, 2005 (UTC)
Well, "flaunting policy" is pretty strong language, since that is not my intent at all. My idea of not pushing for a bot flag was to try and bring compromise to this. If the actions could be monitored more easily, and the bot run slowly (2 edits per minute max), then objectors would be satisfied. I have no opinion either way. The problem with the "approval" system here is that it's most often done via tacit approval, a request sits here for a week with no replies and it's done, particularly for using the well-known pywikipediabot framework. Heck, I can't even find the discussions for many of the bots listed as approved. If people will agree to a "do over", we can try that. All I want to do is keep helping where it's needed. -- Netoholic @ 22:25, 2005 Jan 14 (UTC)
Malfunctions happen. You can't predict every possible situation, so when you find something going wrong, the bot should be stopped immediately, the problem fixed, and the bot re-started, possibly with a note on the talk page about what happened. (and no, I'm not the one who put the markup in Cyrano de Bergerac) --Carnildo 00:58, 16 Jan 2005 (UTC)
I absolutely agree with you. I don't think the Cyrano de Bergerac markup that was affected would have been enough to stop the bot from running, being a one-off oddity of that page's formatting. But at any time if there is a problem, I'd stop immediately. I've notified a few bot owners myself, and been largely ignored and had to ask for blocks, so I understand. Luckily, until this discussion started, no one had complained about it breaking an article. Even then, only two minor problems have been noted. And sorry, Carnildo, you're right, I got my C... usernames mixed up. -- Netoholic @ 02:24, 2005 Jan 16 (UTC)
I have never done a thing to injure Wikipedia or cause strife. What a load of codswallop. Deleting VfD headers before completion of the vote is certainly injuring Wikipedia. Mass moves of page titles without consensus is certainly injuring Wikipedia. RickK 05:40, Jan 17, 2005 (UTC)
Obviously, the comment you quoted refers to intent. Everyone has made mistakes here, but I still dispute that I've ever done anything so heinous as to injure Wikipedia. That VfD thing occurred only a very short time after I got involved (and was very new to the procedures). That was almost 4 months ago. "Mass moves of page titles" I think can only refer to the commonization of names for crater articles - a move that was long overdue, one that I had help performing, and has never been challenged as a wrong move. You need to learn to let things go, because you are wrong about me. -- Netoholic @ 06:27, 2005 Jan 17 (UTC)
Netoholic, please stop hiding this discussion, as it is germane to the issue. Allow someone else to archive it when the time comes. —Ben Brockert (42) UE News 01:39, Jan 25, 2005 (UTC)
"WARNING: This page is 118 kilobytes long. Please consider condensing the page and moving the detail to another article so it is not approaching or in excess of 32KB." As such, I had archived the above discussion to /Archive 4#NetBot, but Brockert keeps re-adding it. Personal agenda, I suppose. -- Netoholic @ 06:35, 2005 Jan 25 (UTC)
Yes, it is a personal agenda that discussion should not be archived when it is still being discussed. Call me crazy, but I think a discussion should be finished before it is archived. And when it's archived, it should be archived by someone who isn't trying to hide views that oppose theirs. I'm just chock full of these weird ideas. Another is leaving reasonable edit summaries, unlike your last two on this page.


NetBot request

NetBot is a pywikipediabot, usually helping with tasks resulting from Category and Template deletions. May occasionally help by answering bot requests, doing simple text replacements, changing links for disambiguation, bypassing redirects, and adding interwiki links. -- Netoholic @ 19:09, 2005 Jan 16 (UTC)

Support with understanding that, should Netbot do anything it is not supposed to, it will be blocked without objection. Snowspinner 19:38, Jan 16, 2005 (UTC)
Addendum - I would prefer this bot not acquire a botflag, based primarily on the number of people who seem to have concerns about Netoholic running a bot. His edits should thus remain as transparent as possible. Snowspinner 14:14, Jan 25, 2005 (UTC)
I had no problem with that arrangement, but some admins equate "no bot flag" as "unapproved" and have blocked the bot, which is why this new request was made. I encourage regular review of its future contribs if anyone has any doubts. -- Netoholic @ 16:36, 2005 Jan 25 (UTC)
Support. Should Netoholic wish the bot to do anything else, however, he must get permission to do so on this page. Vacuum c 22:00, Jan 16, 2005 (UTC)
Support, so long as
  1. The bot does exactly and only those tasks it is approved for.
  2. The bug in pywikipediabot mentioned earlier is fixed.
  3. "Simple text replacements" is limited to the mentioned link disambig and redirect bypassing. Any other interpretation leaves too much wiggle room.
  4. Any variation from this will result in the bot being blocked and Netoholic needing to re-apply for bot approval if the bot is later unblocked.
--Carnildo 05:36, 17 Jan 2005 (UTC)
Support, bot does great work. Neutralitytalk 06:27, Jan 17, 2005 (UTC)
I support for Category work only. —Ben Brockert (42) UE News 01:17, Jan 19, 2005 (UTC)
I actually support primarily for the template work - there are lots of category bots, but to my knowledge this would be the first one to automate template removal. Snowspinner 19:39, Jan 24, 2005 (UTC)
In that case, I don't support at all, since he has demonstrated an inability to follow the process of TfD. User:Netbot should definitely not have a bot flag, since Netoholic does not use the account only as a bot. —Ben Brockert (42) UE News 01:39, Jan 25, 2005 (UTC)
It seems like bad faith to say you support, and then only object after I make the formal bot flag request on Meta:. Is this an attempt at sabotage because of your unfounded personal feelings? -- Netoholic @ 04:38, 2005 Jan 25 (UTC)
I only supported for one use because that was the one use that someone had said it was needed for. If it is not needed for that one use, I don't need to support at all. —Ben Brockert (42) UE News 04:58, Jan 25, 2005 (UTC)
We very much need someone to be running a bot to remove templates from pages. There's not a lot of judgment calls to be made, since it's mostly a matter of going into the list of templates in the holding cell, picking one, and nuking it. If it were to do something that is not this, that would qualify to me as "something it's not supposed to do." But we do need a bot for that, because sometimes it just plain sucks to do it by hand. Snowspinner 14:11, Jan 25, 2005 (UTC)
See also the section NetBot for the recent discussion leading to this request. —Ben Brockert (42) UE News 07:07, Jan 25, 2005 (UTC)

Forgot to add one specific task to the list above. I'll also very occasionally use the bot to assist in moving articles found to be put in uncategorized categories, especially the ones where no category page has even been created (red-linked) and/or where it is a clearly obvious move (like pluralization or improper capitalization). These won't necessarily make it to WP:CFD, since they either don't exist or will fit WP:CSD after being emptied for 24 hours, but I doubt this use is controversial. -- Netoholic @ 06:10, 2005 Jan 18 (UTC)

If used for this (so far) unapproved use the bot will be blocked. —Ben Brockert (42) UE News 01:39, Jan 25, 2005 (UTC)
Posting this information is a request for approval, and I'd not do it without waiting the required week to see if there are any objections. Like I said, it seems a reasonable and helpful use, so I doubt there would be any. Please don't threaten action where none is needed; it feels like admin bullying. -- Netoholic @ 04:38, 2005 Jan 25 (UTC)
Please stop deleting my comments. —Ben Brockert (42) UE News 06:21, Jan 25, 2005 (UTC)
Further, please stop moving my comments. Just knock it off for a little while, like a week. —Ben Brockert (42) UE News 07:07, Jan 25, 2005 (UTC)

Support, so long as:

  1. The bot does only what Netoholic has indicated it would do as of this approval date and time stamp
  2. If the bot does anything it is not listed to do as of this approval date and time stamp, not only will the bot be blocked, but Netoholic as well. RickK 08:13, Jan 25, 2005 (UTC)


  • en:User:NetBot - pywikipediabot, no objections given for bot flag on en:Wikipedia, see en:Wikipedia talk:Bots#NetBot request. -- Netoholic @ 14:41, 24 Jan 2005 (UTC)
    • This is untrue. The page is chock full of objections. Stewards, please read it before granting flag. —Ben Brockert < 01:30, 25 Jan 2005 (UTC)
      • User:Brockert above seems to have a personal agenda. He's restored an archived discussion related to the Bot account, which is months old. This new request had no objections until I posted the above request, at which time Brockert restored that old discussion and indicated objection. An obvious agenda, which I hope no one will encourage. -- Netoholic @ 04:06, 25 Jan 2005 (UTC)
        • Mudslinging aside, Netoholic uses his bot account for non-bot edits. For that reason alone it should not have a bot flag. —Ben Brockert < 07:18, 25 Jan 2005 (UTC)

I've removed this from m:Requests for permissions. Please get consensus on this before putting it on Meta since there are still two objections. Angela. 03:01, Jan 26, 2005 (UTC)

Yeah, the objections were only made after I posted the request on Meta: (which itself was done after giving one week here). Screw it. If admins have to sink to low levels and sandbag my attempt to help perform these maintenance tasks, then I don't wanna be involved. You guys argue it out. If you all decide this is worthwhile, you let me know. -- Netoholic @ 04:30, 2005 Jan 26 (UTC)

More from m:Requests for permissions

  • en:User:NetBot moved to en:Wikipedia talk:Bots#NetBot request since there are two objections to this bot. Angela 03:01, 26 Jan 2005 (UTC)
    • I don't see two objections, I see one, and he hasn't really given a solid explanation of his objection (which was only made an objection after the waiting period and after the request on this page was posted). Considering that many others have given support for it, and the usage is very straightforward (as a pywikipediabot), I ask that this be re-evaluated. -- Netoholic @ 07:28, 9 Feb 2005 (UTC)
      • No consensus has been reached since the last time this was listed on here. —Ben Brockert < 06:16, 10 Feb 2005 (UTC)
        • Please get more people voting on this. Currently, there are so few votes that even one objection makes a big difference. Angela 01:37, 14 Feb 2005 (UTC)
          • Since when is bot approval a voting process? Considering that Snowspinner, Neutrality, and RickK have all given approval (none of whom is a big fan of mine personally, but all see the value NetBot has given), I'd say that overrides the non-specific disapproval given by Brockert based on his personal grudge. -- Netoholic @ 15:49, 17 Feb 2005 (UTC)
            • That's still less than 80% support. Angela 10:51, 19 Feb 2005 (UTC)
              • Factoring in other supporters on that page (Vacuum, Carnildo) means >83% if you are just going to bring this down to numbers. -- Netoholic @ 17:43, 20 Feb 2005 (UTC)
            • Snowspinner with his approval said "Addendum - I would prefer this bot not acquire a botflag". Therefore I propose you consider your bot an "approved bot without botflag" for now. - Andre Engels 09:05, 24 Feb 2005 (UTC)
              • That's fine. Can someone now please unblock the bot and list it on w:Wikipedia:Bots as "Currently approved but running without a flag". -- Netoholic @ 04:31, 26 Feb 2005 (UTC)
Current status

Anthere has noted its status on the main bot page. It should be considered approved for the above indicated tasks, but running without a flag. -- Netoholic @ 18:21, 2005 Mar 2 (UTC)

Pearle Wisebot

Adding CFD tags

I will shortly demo a new capability of Pearle, to add {{cfd}} tags to a list of specified categories. Please post if you have any qualms about her doing this on a routine basis to any categories mentioned on CFD, especially long lists of badly-named categories. -- Beland 07:11, 1 Dec 2004 (UTC)
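In outline, the tagging pass is a small text transformation. A minimal Python sketch of what it might look like (pure string manipulation; page fetching and saving are omitted, and the tag-detection regex is an assumption rather than Pearle's actual logic):

    import re

    def add_cfd_tag(wikitext):
        """Prepend a {{cfd}} tag to a category page unless one is already there."""
        if re.search(r"\{\{\s*cfd\s*(\|[^}]*)?\}\}", wikitext, re.IGNORECASE):
            return wikitext  # already tagged; leave the page alone
        return "{{cfd}}\n" + wikitext

    # Example: tagging the wikitext of a badly-named category listed on CFD.
    print(add_cfd_tag("Things named badly.\n[[Category:Parent]]"))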

Demo complete. -- Beland 00:23, 5 Dec 2004 (UTC)

Moving category intro text for CFD

I have used a new capability to fix the intro text in the first two subcategories of Category:Japanese prefectures. The problem was that when there is a long list of categories to be renamed, she was able to move articles and subcategories, but not intro text. The problem with intro text is that there might be some on both the source and destination pages, and a simple combination might be the wrong thing to do. Human intervention is needed to verify these moves and merges. I run Pearle in two steps. First, a TRANSFER_TEXT_CHECK command makes some suggestions about what the source and destination categories should be changed to. It automatically deals with {{cfd}} tags, merges lists of categories, and trims excess whitespace. Where the suggestion is wrong, I fix the text manually. Where the suggestion is correct, I simply run TRANSFER_TEXT_ACTUALLY with the same arguments, and the suggestion is automatically implemented.
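The check-then-apply split might look roughly like this Python sketch (the function names mirror the commands described above; the merge heuristics are simplified stand-ins for whatever Pearle actually does):

    import re

    def transfer_text_check(src_text, dst_text):
        """Suggest merged intro text: drop {{cfd}} tags, combine category
        lists, and trim excess whitespace, for a human operator to review."""
        merged = src_text + "\n" + dst_text
        merged = re.sub(r"\{\{\s*cfd\s*\}\}\n?", "", merged)
        cats = re.findall(r"\[\[Category:[^\]]+\]\]", merged)
        body = re.sub(r"\[\[Category:[^\]]+\]\]\n?", "", merged)
        cats = list(dict.fromkeys(cats))  # de-duplicate, keeping order
        return (body.rstrip() + "\n\n" + "\n".join(cats)).strip() + "\n"

    def transfer_text_actually(src_text, dst_text, save_page):
        """Apply the unmodified suggestion; save_page is a write callback."""
        save_page(transfer_text_check(src_text, dst_text))

Running the check twice with the same arguments yields the same suggestion, which is what makes the two-command workflow safe: whatever the operator approved is exactly what gets written.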

Pearle is currently renaming subcategories Category:Japanese prefectures so that "prefecture" is capitalized where appropriate. I have scheduled a TRANSFER_TEXT_ACTUALLY run immediately thereafter, which will fix intro texts (which should also create the correct category hierarchy). Please let me know if you have any qualms about Pearle doing this on a routine basis to categories approved by WP:CFD for deletion. -- Beland 00:23, 5 Dec 2004 (UTC)

Pearle bot conduct

  1. The Pearle bot was approved for use to "For each member, replace all instances of Category:Name_of_A, with Category:Name_of_B, preserving sort fields. Members that contain any nowiki or pre tags in the wikisource will be skipped."
  2. The bot operated as in this report [14], carrying out the moves not in the manner described (a single, efficient change from one category to the other) but by making two edits in rapid succession: the first to remove a category, the second to add the new category to the same article.
  3. This mode of operation unnecessarily increased the server load required for the operation and introduced the permanent need to store an unnecessary article revision in history.

This bot appears to be

  1. operating in a manner inconsistent with its authorisation
  2. operating contrary to the bot policy that edits should be about ten seconds apart
  3. operating contrary to the policy that a bot should not be a server hog, by making edits at an excessive rate and making two edits where one is sufficient for the job.

Per the bot policy, please demonstrate that this bot is harmless, not a server hog and operating as approved when operating as described above. Jamesday 22:58, 10 Jan 2005 (UTC)

I had previously replied on User talk:Pearle (since archived). When I first wrote the code, I had abstracted reads and writes, and so recycled this for moves, which unfortunately resulted in doing two edits in rapid succession in an attempt to make the move quasi-atomic. After reading the above discussion about server load, I decided to fix this problem. So, as it happens, by the time I noticed the complaint, the offending code had been re-written. Moves are now accomplished in one edit. At the same time, I added more sophisticated load-monitoring logic, so now Pearle waits extra-long if Wikipedia is running slow. Apologies for the load caused by the earlier double-edits. -- Beland 22:40, 17 Jan 2005 (UTC)
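A sketch of the single-edit move plus the load-aware throttling just described (illustrative Python, not the rewritten code itself; the slowness measure is a placeholder):

    import re, time

    def rename_category(wikitext, old, new):
        """Swap [[Category:old]] for [[Category:new]] in one edit,
        preserving any |sort-key field."""
        pattern = r"\[\[Category:%s(\|[^\]]*)?\]\]" % re.escape(old)
        repl = lambda m: "[[Category:%s%s]]" % (new, m.group(1) or "")
        return re.sub(pattern, repl, wikitext, flags=re.IGNORECASE)

    def polite_sleep(last_fetch_seconds, base=10.0):
        """Wait between edits; wait extra-long when the wiki responds slowly."""
        time.sleep(base + max(0.0, last_fetch_seconds - 2.0) * 5.0)

    print(rename_category("[[Category:Aichi prefecture|Nagoya]]",
                          "Aichi prefecture", "Aichi Prefecture"))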

Thanks for the improvements - I expect the humans appreciate the greater efficiency. :) All bot operations are probably also benefiting from some much-improved saving code put into service a few weeks ago - if you notice your bots taking less time per edit, that'll be why. The most human-visible symptom is far fewer lock wait timeouts. Jamesday 21:57, 3 Feb 2005 (UTC)


Original complaint

Unfortunately this bot is contravening policy. Categories should be placed before interwiki links, and not after, as per Wikipedia:Interlanguage links#Syntax and Wikipedia:Categorization#How to create categories. This has been reported on the bot talk page. I request that the bot be stopped until this has been remedied.

It would be nice if the bot could go and correct all the articles that have been affected. Noisy | Talk 19:50, Jan 16, 2005 (UTC)

I will be replying on User talk:Pearle. -- Beland 22:43, 17 Jan 2005 (UTC)

Discussion continued below... -- Beland 02:35, 3 Feb 2005 (UTC)

Q & A Round 1

So I've done a few test runs of code to fix articles to conform to the following rules:

  • Interwiki links are at the bottom of a page.
  • Category links are above interwiki links.

There are several complications.

  • Some people make invalid interwiki links
  • Sometimes people accidentally say [[Category:Foo]] when they mean [[:Category:Foo]]
  • HTML comments are sometimes interspersed with category and interwiki links, and sometimes these comments refer to the interwiki and category links.
  • Category and interwiki links are sometimes at the top of the page
  • Category and interwiki links are sometimes interspersed with body text or template tags
  • Sort order questions for category and interwiki links

There are three very common types of invalid interwiki link, each of which starts with one of the following:

  • zh-cn
  • minnan
  • zh-tw

I can run an automated scan to detect all links with colons in them that don't go to a valid external wiki or a valid internal page. What would I do with this information? Make a long list on some page and hope that others come by to clean it up? Tag each article (or its talk page) with a template that indicates the presence of an invalid link (and hope others fix them)?

I am also wondering whether all "minnan" links should just be automatically converted to "zh-min-nan"? Should all "zh-tw" links be converted to "tw"? Would that leave "zh-cn" to "zh"? (Are these Chinese languages? I haven't bothered looking them up.)
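The scan itself could be quite small. A sketch (the VALID set is an illustrative subset, and the prefix pattern is a naive assumption; a real run would check against the full list of language codes):

    import re

    VALID = {"de", "fr", "ja", "zh", "zh-min-nan"}  # illustrative subset

    def suspect_interwikis(wikitext):
        """List prefixed links whose prefix is not a known language code."""
        return [m.group(0)
                for m in re.finditer(r"\[\[([a-z-]{2,12}):[^\]]+\]\]", wikitext)
                if m.group(1) not in VALID]

    print(suspect_interwikis("[[zh-cn:Example]] [[fr:Exemple]]"))

What to do with each hit - list it, tag it, or auto-convert it - is exactly the open question above.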


I started off with an algorithm that simply pulled all the valid category and interwiki links out of a page and then re-inserted them at the end, in the proper order. This causes any interspersed text to bubble up to before the category links, which is normally good, but bad if interwiki-related comments or invalid links are moved up, or accidental category links are yanked out of context.

It's difficult to tell on an automatic basis whether a comment refers to category links, interwiki links, or the body text. But because Pearle automatically adds categories to articles, as the rules stand, she must decide whether to put such links above or below a pre-interwiki comment.

So I have the following questions:

  1. Does the convention for category and interwiki links need modification to facilitate automatic processing? (e.g. a standard place for interwiki or category-related comments?)
  2. What is the proper algorithm for adding a category tag to an article?
  3. Should I try to clean up the articles in which Pearle added category links to the last line of the article, or is there another bot already suited to this purpose?
  4. Should there be a general cleanup effort to make articles conform to the convention? Can any of the common invalid interwiki links be auto-fixed? What should trigger a flag for manual inspection, and how should that flag be stored?
  5. Should I put interwiki links in any kind of sort order? Only if I happen to be editing the article for other reasons anyway, or should there be a general sweep?

WP:CFD will likely continue to accumulate categories that need articles moved around by a bot until we can get this resolved.

My recommendations:

  • Interwiki tags are dead last. They may be pulled from any part of the page.
  • Category tags immediately precede interwiki links. Any category tags separated by any body text or comments from the main group will trigger a flag for manual review.
  • Category links are separated from whatever precedes them by a single blank line.
  • Editors should not put any comments or body text below or among the category and interwiki links. Any such text will automatically be moved up.
  • There is no need for comments of the type "Interlanguage links below".
  • Invalid interwiki links are automatically moved up, but they trigger a flag for manual review.
  • The flag for manual review is a template which contains instructions and adds the article to a special category where editors who enjoy fixing this kind of problem can find them.

-- Beland 02:32, 3 Feb 2005 (UTC)

Suggestions:
  1. Any tag with non-tag text on the same line should be kept together; i.e., if a category tag has a comment after it before Pearle goes through, it should have the comment afterwards as well.
  2. The most common cause of category tags in the body of the article is as part of a maintenance template substitution (VfD etc). Those should be kept with the rest of the template to keep removal easy once it's no longer needed. If a category tag other than one of the maintenance categories is found, it should be flagged for review.
--Carnildo 05:55, 3 Feb 2005 (UTC)
Pearle only works with the text as stored in the database; she doesn't "see" template substitutions when doing live category work, so the contents of referenced templates won't be touched. -- Beland 03:22, 9 Mar 2005 (UTC)

My remarks:

  • zh-cn and zh-tw should both be replaced by zh. These were used for 'Chinese (simplified characters)' and 'Chinese (traditional characters)' respectively, but since version 1.4, this is being solved in software, and both versions are at the same page.
  • I personally use 'minnan' rather than 'zh-min-nan'. Both are working; I like the first better. I have no idea of any official policy.
  • The Python bot is using your suggested order already, and can put them in that order if wanted.
  • Standard order for interwiki links is alphabetically by language name (transcribed into Roman script) on the English Wikipedia. On most other languages it is alphabetically by language code (French, Polish, and Finnish also alphabetize like en:; Hungarian and Hebrew are alphabetical by code but with English put first). - Andre Engels 20:38, 3 Feb 2005 (UTC) (hoping he'll remember to check this page...)

Andre and Carnildo seem to have covered most things. The only additional comments are that:

  • sometimes HTML comment tags are used around multiple lines, which may include categories
  • some categories and interwikis are placed on the same line, and need to be put on separate lines
  • comments, blank lines and the like introduce white space at the foot of articles: I think this is unsightly, but I don't know what to suggest to deal with this
  • there will probably be many more reasons to undertake work using bots like these in the future, so leave any clean-up work until such times. Noisy | Talk 23:09, Feb 5, 2005 (UTC)

I agree with Carnildo's first point: any HTML comment on a category line should be kept on that line by the bot. Bots have been run before that mess that up, and it is not harmless. —Ben Brockert (42) UE News 06:05, Feb 10, 2005 (UTC)

Q&A Round 2

Are comments such as <!--interwiki --> necessary? I was going to just remove them as a matter of course, but then I thought that some people must have been confused, which is why you see these comments around. So then I was thinking perhaps a standard form might be nice, like:

<!-- The below are interlanguage links. -->

or something?

<!--Categories--> seems completely unnecessary. I mean, what follows is a list of Category: tags. What else could it possibly be?

I also find things like <!--- en:Greyhound ---> on English pages, and I'm not sure why they exist. Any reason not to delete these in the course of cleanup? -- Beland 02:16, 2 Mar 2005 (UTC)

With the resounding silence here and on Wikipedia:Interlanguage links, I think I'll resolve this by installing <!-- The below are interlanguage links. --> and deleting the following, when in the appropriate position on the page (case and whitespace insensitive):
  • <!--interwiki links-->
  • <!--interwiki-->
  • <!--categories-->
  • <!--interlanguage links-->
Spurious en: links will be flagged for manual review. I guess the best way to attract review of this practice is to edit some pages and see if there are any complaints. It is very easily changed depending on the whims of public opinion. -- Beland 03:06, 9 Mar 2005 (UTC)

This section is based on the suggestions above and some guesses to fill in the gaps. I am sharing some fine details so that others can help find bugs (both in the rules and their implementation) and so people know what to expect. I may in the future make small tweaks to fix bugs or accommodate community complaints. -- Beland 03:39, 9 Mar 2005 (UTC)

Rules

  • Pearle should attempt to do a category/interwiki cleanup whenever it edits an article, but there will be no mass cleanup run (except for articles already edited by Pearle) unless requested.
  • HTML comments on the same line following a category or interwiki tag will remain there. Any other text there will trigger a flag for review.
  • If a category or interwiki tag is found in the "body text" area, it will be flagged for review.
  • Canonicalize "zh-cn" (Chinese simplified) and "zh-tw" (Chinese traditional) to "zh" because the simplified/traditional distinction is now being solved in software.
  • Canonicalize "minnan" to "zh-min-nan", since only the latter is in the official, automatically updated list.
  • Multi-line HTML comments must be preserved
  • Separate category and interwiki links mashed together on the same line.
  • Don't change interwiki link sort order.
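Two of the rules above reduce to one-line text transformations. A sketch (the prefix table is taken from the rules themselves; everything else, including the assumption that the splitter only runs on lines already known to hold category/interwiki links, is illustrative):

    import re

    CANONICAL = {"zh-cn": "zh", "zh-tw": "zh", "minnan": "zh-min-nan"}

    def canonicalize_interwiki(wikitext):
        """Rewrite deprecated interwiki prefixes to their canonical forms."""
        def fix(m):
            return "[[%s:%s]]" % (CANONICAL[m.group(1).lower()], m.group(2))
        return re.sub(r"\[\[(zh-cn|zh-tw|minnan):([^\]]+)\]\]", fix, wikitext,
                      flags=re.IGNORECASE)

    def split_mashed_links(line):
        """Put links that share a line onto separate lines. Naive: meant
        only for lines in the category/interwiki block."""
        return re.sub(r"\]\][ \t]+(?=\[\[)", "]]\n", line)

    print(canonicalize_interwiki("[[zh-cn:Example]]"))  # -> [[zh:Example]]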

Algorithm

  • Break the article up into segments, each of which is tagged. Use two arrays, one for content, and one for names.
  • Parse input into segments, each of which is labeled by type.
    • Find nowiki tags everywhere.
    • Find comment tags everywhere else.
    • Find HTML tags everywhere else.
    • Find category links everywhere else.
    • Find interwiki links everywhere else.
    • Find template tags everywhere else.
    • Lump HTML tags following a category segment (except category and interwiki links) until the next newline into the category segment.
    • Lump everything following an interwiki segment (except category and interwiki links) until the next newline into the interwiki segment.
    • The remainder of the page will be tagged as body text.
  • Move any category or interwiki links at the top of the page to the very bottom.
  • Move {{template tags}} before the category links, preserving whatever whitespace preceded or followed them.
  • Delete these comments near the category/interwiki section (case and whitespace insensitive):
    • <!--interwiki links-->
    • <!--interwiki-->
    • <!--categories-->
    • <!--interlanguage links-->
  • Determine whether or not the page should be flagged for manual review. Find the last non-category, non-interwiki segment. If there are any interwiki or category links before this segment, flag the page for manual review by adding a template at the end.
  • If the page has not been flagged: consolidate all interwiki links at the end, preceded by category links, preceded by all other segment types. Be sure to retain the original order of segments in each of the three groups.
  • If there are interwiki links, precede them with a line that says:
<!-- The below are interlanguage links. -->
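Stripped of the segmentation machinery, the consolidation step at the end amounts to roughly the following (a sketch that ignores nowiki/comment handling and the review flag, both of which the full algorithm above covers; the interwiki prefix test is a naive stand-in for a real language-code check):

    import re

    # A link plus any same-line trailing HTML comment travels as one unit.
    CAT = re.compile(r"^\[\[Category:[^\]]+\]\][ \t]*(?:<!--.*?-->)?[ \t]*$", re.M)
    IW = re.compile(r"^\[\[[a-z-]{2,12}:[^\]]+\]\][ \t]*(?:<!--.*?-->)?[ \t]*$", re.M)

    def consolidate(wikitext):
        cats = [m.group(0) for m in CAT.finditer(wikitext)]
        iwikis = [m.group(0) for m in IW.finditer(wikitext)]
        body = IW.sub("", CAT.sub("", wikitext))
        body = re.sub(r"\n{3,}", "\n\n", body).rstrip()
        if cats:
            body += "\n\n" + "\n".join(cats)
        if iwikis:
            body += "\n\n<!-- The below are interlanguage links. -->\n" + "\n".join(iwikis)
        return body + "\n"

Note that finditer preserves document order, so each group keeps its original ordering, as the algorithm requires.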

Notes

Humans take note: Comments near the end of the article must be placed before all category and interwiki links, or else the page will be flagged for review. The only exception is if your comment refers to a particular category or interwiki link. This is OK as long as the comment follows the link and stays all on one line.

See Category:Articles to check for link ordering for more details on the "flag for manual review" process.


Demo in progress

OK, so there are 2353 articles* that offline tests show need fixing, out of the 17000 or so with categories that Pearle has edited. I am going to run the first 22 and then wait for 24 hours to see if there are any complaints or comments. If there are none, I will run the remainder, then resume normal operations. -- Beland 02:38, 10 Mar 2005 (UTC)

* At least as of the 9 Feb 2005 database dump. Some articles may have been cleaned up by human editors since then, so there may be some very minor edits in this batch. -- Beland 03:51, 10 Mar 2005 (UTC)

The first few articles in the test run revealed a problem with newline characters, which has since been fixed. -- Beland 03:51, 10 Mar 2005 (UTC)

All of the articles tested had a problem with HTML comment markup, which has since been fixed. Adding another 10 articles to the demo to confirm that everything is working OK now. -- Beland 04:45, 10 Mar 2005 (UTC)

Some articles were accidentally fed into the large run twice, revealing a small bug that results in a duplicate comment. Now fixing. ::sigh:: -- Beland 13:51, 11 Mar 2005 (UTC)

Comments

Personally, I would remove/not add HTML comments. Besides, I feel it would be more productive if the bot would also add additional interwikis, rather than just HTML comments. Many of Pearle's recent contributions ("Minor category/interwiki code cleanup", 05:00-06:00, 2005 Mar 12) seem to add just comments and re-arrange links. -- User:Docu

Well, it was requested that Pearle go back and rearrange category and interwiki links in cases where she failed to follow the established style. I'm restricting the current run to articles she's edited before, but there are also minor changes from the "recommended" style now.
If you're saying it's not particularly worthwhile to tidy things up in this way, I would agree. (As opposed to "please don't add HTML comments, ever" - if you mean that, then perhaps we would need to reconcile that opinion with the opinion of the editors who tend to leave these random comments lying around.) After the self-cleanup, such tidying will only be done in conjunction with a more important edit. Adding interwiki links is outside the scope of what I'm trying to do; I assume that could be done just by comparing what interwiki links other wikis have? That does sound like a good idea. -- Beland 00:09, 13 Mar 2005 (UTC)

Pearle REMOVE_CFD_TAG

Sometimes a large number of categories which have been tagged by Pearle don't get deleted, and have to be untagged. This command would simply remove {{cfd}}, {{cfr}}, {{cfru|.*?}}, etc. Will do a first run in a few days, then integrate into normal operations. Comment/complain as needed. -- Beland 06:31, 11 Mar 2005 (UTC)
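The removal is essentially one regular expression. A sketch (tag list copied from the post above; the pattern is an assumption, not Pearle's actual code):

    import re

    def remove_cfd_tags(wikitext):
        """Strip leftover CFD-family tags from a category page."""
        return re.sub(r"\{\{(?:cfd|cfr|cfru\|.*?)\}\}\n?", "", wikitext,
                      flags=re.IGNORECASE)

    print(remove_cfd_tags("{{cfr}}\nA category that survived CFD.\n"))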

Revert request

Something has gone wrong with Pearle's interwiki style cleanup logic. (She was just wholesale deleting all interwiki links.) All of her edits from 02:23, 18 Mar 2005 (Iron) to 06:02, 18 Mar 2005 (Lockheed L-1011) need to be reverted. I will need to figure out when this started happening and see if there are any more reverts, and of course to fix the underlying problem. I'm hoping someone has a vandalism revert tool handy that can do this before human editors spend too much time trying to clean up the mess. If not, I will try to create one as soon as possible, which may be Saturday, US West Coast time. -- Beland 06:15, 18 Mar 2005 (UTC)

OK, it looks like the last good edit was at 02:13, 15 Mar 2005 (Auxerre), and the first mistake was at 02:42, 15 Mar 2005 (National_Movement_for_Simeon_II). Fortunately, everything after that before Iron were political party articles, which have sparse interwiki links. I will be able to fix those manually in the next day or two. ::sigh:: -- Beland 06:53, 18 Mar 2005 (UTC)
I rolled back 98% of them. Some of the articles have since been edited, so there are at least 1% to 2% that I missed. -- AllyUnion (talk) 07:26, 18 Mar 2005 (UTC)
I think I have manually repaired the rest. Thanks for your help, and apologies to anyone who was inconvenienced. I found the source of the problem; I commented out a whole line when I should have commented out only part of it. (This was when I was implementing the request not to add the "The below are interlanguage links." comment.) Mental note: Make sure "good/evil" switch is set to "good" before unleashing creation into the world. -- Beland 20:44, 19 Mar 2005 (UTC)

Creating a Wikipedia Bot

Originally posted at Wikipedia:Village pump Hey all, I was thinking about making a bot to browse through Wikipedia and generally help out in any way possible. I've recently learned how to access the internet using Java (which I know well) and I wanted to make a little program for practice. My question is, what should this bot do? I've thought about it, and right now I'm leaning towards having it make links more efficient by changing them so they directly link to a page if they're currently linked to a redirect. I figure it can't hurt, right? Anyways, what other miscellaneous chores should this bot do? Any suggestions? --pie4all88 06:49, 13 Dec 2004 (UTC)

There's a downside in that piped links make the wikicode slightly harder to read for an editor, and, especially because editors are not all techy-types, we need to make it as readable as possible. For example, the wikitext could be either ...[[National Security Agency|NSA]]... or ...[[NSA]]... The latter is more readable. — Matt Crypto 14:44, 14 Dec 2004 (UTC)
Ok. Someone else had mentioned at the village pump that he or she thought that there are some good reasons for redirects, too. Maybe that's not the best thing to have a bot do...I'm still open to suggestions, though :D --pie4all88 20:59, 14 Dec 2004 (UTC)
Have a look at Wikipedia:Bot_requests. RedWolf 01:39, Dec 17, 2004 (UTC)

Hi pie4all88, Here's a possibility - don't know if you'll like it or not, but I'll just put it out there and you can decide what you think .... : Where A is a redirect to B (e.g. "Hamsters" → "Hamster"), and the word A is a plural of the word B (as defined here), and where A does not contain the string "{{R from plural}}" (e.g. this plural redirect does contain that string, but most currently don't), then add the string " {{R from plural}}" to A, and save it. (Basically, this would put all the plural redirects into the right category.) If you don't like it then let me know, and if I think of anything else I'll let you know. All the best, -- Nickj 22:52, 16 Dec 2004 (UTC)
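The check described is small enough to sketch. Here the plural test is a deliberately naive stand-in for the definition linked in the post:

    def tag_plural_redirect(title, target, wikitext):
        """Append {{R from plural}} to a redirect whose title is a
        plural of its target, if the tag is not already present."""
        def naive_plural(a, b):
            # stand-in rule: "Hamsters" -> "Hamster"
            return a.lower() in (b.lower() + "s", b.lower() + "es")
        if "{{R from plural}}" in wikitext or not naive_plural(title, target):
            return wikitext
        return wikitext.rstrip() + " {{R from plural}}\n"

    print(tag_plural_redirect("Hamsters", "Hamster", "#REDIRECT [[Hamster]]\n"))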

Categories with thousands of entries facilitate a denial-of-service attack on the servers. To display a category page currently requires retrieving the full text of every article in the category, so a category with 5,000 entries causes the same work as 5,000 seldom-visited page views (not 5,000 normal page views, because normal views tend to end up in cache). For that reason I was recently forced to remove the category (and image) from the GFDL template after it tripped the denial of service attack emergency response: one view of the category took the master server from 8 to over 700 pending queries in less than 30 seconds. And down again 30 seconds later after that page view. The damage can be significantly limited, for suitably small categories, by using the first letter or two letters of the page to create sub-categories, each of which is tolerably small. The sub-pages of allpages are currently set at 500, which is too high for comfort, but I've been reluctant to ask for that count to be reduced to 100, mostly because I know that there are improvements planned for MediaWiki version 1.5. Please refrain from using a bot to expand the number of such denial of service opportunities. We can in an emergency disable any of these things (by turning off category display, for example) but I'd much rather not have the opportunities created in the first place. Also see bug 1058, which is why I'm not asking a bot operator to split Category:GFDL or doing it myself on the servers, and bug 973. Jamesday 04:38, 17 Dec 2004 (UTC)
Whoops... Ah... OK... How do I put this... Maybe "Houston, we (may) have a problem"? You see, I had no way of realising (until now) that large categories caused a problem, and there's a small project I'm running for adding Missing Redirects, and as part of that, well, new plural redirects are being added to the exact category described above. Just to be clear though, this is humans doing the adding — bots are in no way doing any of the adding (it's just that I'm interested in bots, and I'm doing this other thing as well, so me seeing your message on this topic is basically just blind luck). Anyway, when redirects are plurals, they're getting put into this category. Currently there are 239 redirects in this category. By the time we're done, there will be (I estimate) 5,000 redirects in this category - so in a few weeks that example 5,000-item category will be, as it were, more of an actual than an example ... I guess this is all just proving your point about the joys of coping with exponential growth ... So, I guess my question is this: is it OK to keep adding stuff into this category, or do we need to stop doing that (irrespective of whether the category is appropriate)? All the best, -- Nickj 06:35, 17 Dec 2004 (UTC)
You should also estimate exponential growth with a doubling time of 8-12 weeks. That's where it gets interesting, trying to keep up in the race of capability against growth. :) User:Jamesday/report/letter frequency gives the first-letter frequency for the en Wikipedia, showing that about 9% of articles begin with S. I call it 10% for convenience and a safety margin. So with 5000 redirects split by first letter, you'd end up with the uncomfortably large 500 size for the worst-case page. Twelve months and four to six doublings later, and it's at 2000-8000. Oops. :) I haven't checked the actual article doubling time - I'm using the capacity doubling period. For 8000 redirects, 10% gives 800, but something like "Sc" is likely to be the most popular pair, and that's likely to be closer to 500. So: you might try going to two letters initially, if you want to think a year ahead. You can skip the second letter for letters with less than 1% frequency, or for any which show a worst-case 12-month prediction under 50. MediaWiki 1.5 is planned to chop the article text out of the cur table and make just getting a title much less slow. But it's a bad idea to assume any delivery time for software. :) For now, anything which needs titles needs to have a target result of under 500 titles. Jamesday 13:31, 17 Dec 2004 (UTC)
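The key figure above, as a quick sanity check (the 10% first-letter share is Jamesday's rounded estimate, not a measured value):

redirects = 5000
worst_letter_share = 0.10  # ~9% of titles begin with S, rounded up for safety
worst_bucket = int(redirects * worst_letter_share)
print(worst_bucket)  # 500 -- right at the "under 500 titles" comfort limit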
Why not just use lists instead of categories? Categories should have never been turned on, because the design was completely flawed (and very little was done to get input from the community before throwing them in). So just pretend we don't have categories, and use lists. Lists can do everything categories can do, plus more, plus they're more efficient (they're basically designed the way categories should have been). anthony 警告 14:22, 17 Dec 2004 (UTC)
That's a great idea--it's perfect for what I want to do. I've been thinking about it, and I don't see any downside to it, so...I'll work on it pretty soon. Right now I'm juggling finals, World of Warcraft, and other programming ventures :S Anyways, thanks for the suggestion, Nickj! --pie4all88 01:45, 17 Dec 2004 (UTC)

Our company (StudentsReview) has information on all of the universities in the country -- from the schools themselves, and would like to update the (un-updated) university pages with university homepages links, University Descriptions, Tuitions, Student Body makeup, Sports, and student opinion on the pages where little information exists. I wasn't aware of this page, so I made the bot operate on a small subset of pages (5) to make sure everything was ok -- I also had the bot wait 10-30 seconds before edits to keep server load low, but was still marked as a spammer. I think this bot would be tremendously useful as a continuous pipe to keep university data updated as we update it from schools. -- Byankamz

  1. I suspect the reason you were marked as a spammer is that you were inserting a link to a commercial-looking website with each edit. It had nothing to do with the edit rate.
  2. If [15] is typical of what you'll be doing, the layout of your contributions will stand out like a sore thumb.
  3. The edits you've been making are blocks of badly-constructed HTML code, which will probably cause problems with wikicode formatting of the pages.
  4. The licence terms of "Limited license provided to the non-commercial community courtesy of StudentsReview (http://www.studentsreview.com/AL/ASU.html). Copyright © 2000-2005 StudentsReview (http://www.studentsreview.com), All Rights Reserved." are unacceptable. All contributed content must be under the GFDL license.
If you're willing to work within the rules of Wikipedia, and are willing to discuss what information will be added and how it will be formatted, I suspect people will be willing to let you add the information.
--Carnildo 00:42, 24 Dec 2004 (UTC)
I agree that the information in the example looks quite good, but the problems listed above must be addressed. There should be ways to create an easily updateable version of the information in a table or some form, possibly by using a template. The licensing issue though is a big one, for sure. Looking at the StudentsReview page on the university in the example, the following message is given: "The survey data is not statistically reliable: It is not considered official, and has not been verified by Auburn University." Is this just a generic disclaimer because everyone has to have a disclaimer, or is this really not reliable data that should not be in Wikipedia? -- RM 04:17, Dec 24, 2004 (UTC)
This is a generic disclaimer -- we are not piping in data for which there is insufficient representation. At some point (after we become more sophisticated), we'd like to add "School is a Baptist institution that is primarily African American", along with distributions of SAT/ACT scores, etc. We are standing behind this data now, so the disclaimer is not terribly applicable.
As for the license, we'd just like to say that we don't want people using our student opinion information for commercial (competitive) purposes (which we thought was compatible with the GFDL license), though we are OK with it being propagated around the web.
The information is in a template table, which we can easily update -- I myself suck (big time) as a graphic designer, which is why it probably looks bad -- though I tested it all on the preview pages first to make certain no problems would exist with wiki code.
--User:byankamz
Re the license: I'm no lawyer, but on the GFDL page it states: "Materials for which commercial redistribution is prohibited generally cannot be used in a GFDL-licensed document, e.g., a Wikipedia article, because the license does not exclude commercial re-use." -- All the best, Nickj (t) 11:08, 24 Dec 2004 (UTC)
As has been stated by Jimbo Wales, a Wikipedia article is not a single document, it is an aggregation of separate documents some of which may not be under the GFDL. anthony 警告 23:53, 1 Jan 2005 (UTC)
Yes, but surely: 1) The component article contributions must be licensed in such a way as to be compatible with the GFDL - but I'm concerned this is not the case here; 2) When contributing to the Wikipedia now, the blurb under the edit box explicitly states "All contributions to Wikipedia are released under the GNU Free Documentation License". Surely therefore any article contributions are either an acceptance of the GFDL (if you're the copyright holder), or a confirmation that those materials should be safe to be released under the GFDL if you're not the copyright holder (such as copying public domain 1911 Encyclopedia Britannica material). If copyright holders then also want to release their contributions under additional licenses (such as in the free the RamBot articles push), then that's their prerogative, and I have no problem with that whatsoever. What concerns me most here is the contribution of additional article text, implying an acceptance of the GFDL, subsequently followed by the apparent addition of an extra licensing term which seems incompatible with the GFDL ("don't want people using our student opinion information for commercial (competitive) purposes"). -- All the best, Nickj (t) 04:25, 4 Jan 2005 (UTC)
"The component article contributions must be licensed in such a way as to be compatible with the GFDL" - Not according to the GFDL. "When contributing to the Wikipedia now, the blurb under the edit box explicitly states 'All contributions to Wikipedia are released under the GNU Free Documentation License'" - And when you upload a file, you explicitly state "By uploading a file to which you hold the copyright, you agree to licence it under the terms of the GNU Free Documentation License." Clearly this is interpreted to only apply to content which one actually owns. "Surely therefore any article contributions are either an acceptance of the GFDL (if you're the copyright holder), or a confirmation that those materials should be safe to be released under the GFDL if you're not the copyright holder" - You would think so, but this clearly isn't true. There is lots and lots of material in Wikipedia which is not GFDL. anthony 警告 14:01, 19 Jan 2005 (UTC)
When you say "material", do you mean "article text", or "images / audio / video / multimedia" ? I accept that there is multimedia content not under the GFDL, but surely all article text is covered by the GFDL? Indeed the Wikipedia:About page states "All text in Wikipedia, and most images and other content, is covered by the GNU Free Documentation License (GFDL)". -- All the best, Nickj (t) 06:46, 9 Feb 2005 (UTC)
Well, some of the text is covered by other GFDL-compatible licenses or is public domain. --Carnildo 07:04, 9 Feb 2005 (UTC)
Well, if it's public domain you can by definition do whatever you want with it, including incorporating it into GFDL works - i.e. public domain imposes no restrictions, and is compatible with the GFDL. What worries me is if article content can be added that imposes more restrictions than the GFDL. In which case, I could, to illustrate the problem, create two new licenses: GFDL-mod-1 (the same as GFDL, plus the restriction that content must not be mixed with GFDL-mod-2), and GFDL-mod-2 (the same as GFDL, plus the restriction that content must not be mixed with GFDL-mod-1). I could then make two edits to an existing article, one under GFDL-mod-1, and the other under GFDL-mod-2. This would make it impossible to distribute the article as-is without violating both of those licenses. And since the license is the thing that gives the right to distribute the text (at least until its copyright expires), such an article could not and should not be distributed. All of which is just a complicated way of saying that people should not be allowed to add extra licensing conditions to their article contributions above and beyond the GFDL, otherwise it opens up the most enormous can of worms. Hence my original objection to someone adding content, and then subsequently trying to add the extra condition that their content could not be used for a commercial purpose. -- All the best, Nickj (t) 07:37, 9 Feb 2005 (UTC)

See Wikipedia:WikiProject Universities, Template:Infobox University2, Template:Infobox University1 and elsewhere for ways to incorporate university data --Jiang 05:35, 4 Jan 2005 (UTC)

Poccil/Peter O.'s bot Automation.js

Readers of this page may be interested in this page: Requests for adminship/Poccil2. It contains a large discussion on the use of a bot on a user's normal account, the line between bot and script, and the types of activities a bot should be allowed to do, such as deleting articles. —Ben Brockert (42) UE News 00:17, Jan 1, 2005 (UTC)

The reason I called my script a "script" was because it acts no differently from if one were to manually perform the actions; i.e. it runs the web browser, opens the edit page, does some keystrokes, and saves the page, just like a human editor. Peter O. (Talk, automation script) 01:12, Jan 1, 2005 (UTC)

Permission for a bot

It will be running pywikipediabot and it will follow robots.txt and your speed limit AxyJo 23:43, 1 Jan 2005 (UTC)

What will the bot do? --Carnildo 01:05, 3 Jan 2005 (UTC)

Issue with CanisRufus bot

In my opinion the bot CanisRufus has gone a bit far afield. A quick check of the talk page seems to indicate that I'm not the only one. I don't see the usefulness (or desirability) of, for example, changing all instances of [[Mammalia]] to [[Mammal|Mammalia]]. Furthermore, RedWolf's dismissive attitude towards complaints seems to indicate that a review of the bot may be in order. --Dante Alighieri | Talk 10:20, Jan 2, 2005 (UTC)

It appears the bot (contribs) has also recently changed all instances of "African-American" to "African American", which is not correct in all circumstances. -- Netoholic @ 18:58, 2005 Jan 2 (UTC)

(Note: the following content should have appeared before the previous comment, but did not due to an edit conflict.)

The two issues are the result of dealing with redirects listed on Wikipedia:Offline reports/This is one of the most linked to redirect pages. It seems that attempting to fix the redirects causes grief for some people. As for African-American, there was no talk at all on its talk page, nor is this alternative officially listed in the opening paragraph of the official page content at African American. As for fixing Mammalia so it went directly to Mammal, visually the user still sees "Mammalia". The Mammalia redirect has been in place for nearly 2½ years and is unlikely to change. If there is a problem with bots fixing redirects, then this should be officially stated on the Wikipedia:Bots page. Alternatively, if a policy should be set up such that any attempts to fix redirects by bots must first be proposed on this talk page (or perhaps a subpage might be better), then this is also acceptable. I will no longer run the bot to fix redirects listed on the above page until this is resolved. I will still run the bot to resolve disambiguation from time to time, as was its first purpose. RedWolf 19:09, Jan 2, 2005 (UTC)
This "two and a half years" argument was also raised on the CanisRufus talk page. This argument is unconvincing largely because it had nothing to do with the question at hand. The fact that there has YET to be an article on Mammalia does not mean that there never WILL be one. In the past hour, 60 some odd articles that haven't existed "for the past two and a half years" were created. Under RW's rationale, any article that is now (and has been) a redirect will never become an article in its own right. I find this argument unconvincing. Also, with respect to African-American, the fact that there was no talk on the redirect page is unsurprising (most redirects don't get talk) and irrelevant. All that this goes to show is that RW didn't know that it was an inappropriate change at the time... which is fine. No one is accusing him of intentionally making imrproper changes. The point is that when it was brought to his attention, rather than enter into a discussion about it or apologize for the bot's unintended behavior, he became snide and defensive. This isn't an unforgiveable offense, it's just evidence that perhaps the bot should simply be used for disambiguation purposes as was originally stated... an (as far as I can tell) uncontroversial purpose. The position that redirects are, inherently, a "problem" to be "fixed" is not a universal one and there are people who do not subscribe to it. --Dante Alighieri | Talk 19:26, Jan 2, 2005 (UTC)
Bypassing a redirect in this way is a very bad practice, and unnecessary. Redirects work for a reason, and it is entirely possible that the Mammalia redirect, for instance, might change to point somewhere else in the future. There may also be specific reasons for the links not bypassing the redirect, of which you may be unaware (Whatlinkshere comes to mind). Last point: any bot doing disambiguation or bypassing redirects should not be run blind - each case may be different, and requires someone to check the context. -- Netoholic @ 19:29, 2005 Jan 2 (UTC)

Needless to say, a Wikipedia article is not definitive even for Wikipedia. Errors are introduced and corrected all the time. African American is an example. The article is incorrectly titled, and the usage of the expression in the article is frequently wrong as well. The expression should always be hyphenated, whether it is a noun or an adjective. It should follow the pattern — it was designed to follow the pattern — of Italian-American, Polish-American, Japanese-American. I will not further elaborate on that fact; but all those who have edited pages and inserted ‘African-American’ or let it stand have agreed that that is the correct spelling. But because none of them had fixed the article African American, RedWolf presumes that ‘African American’ is the undisputed correct version, and all instances of the hyphenated expression need to be changed. Now they will all need to be changed back. Of course, the examples I cite above are not all done correctly either — two of them redirect to page titles without hyphens, while the articles themselves jump back and forth. That just means that at some point they, too, will need to be fixed. Perhaps then it is a matter of dispute, or perhaps a matter of common ignorance. Either way, the bot was a crude way to address it. And the bot only replaced linked instances of ‘African-American’. The others were left alone, so that pages that had been internally consistent are no longer so. I agree that redirects are often the best solution. But if redirects are not desired, the appropriate solution was to change [[African-American]] to [[African American|African-American]], so that the change in link would not affect the spelling.
Ford 02:07, 2005 Jan 4 (UTC)

Please cite your sources, as the modern convention seems to lean towards omitting the hyphen. This should really be taken to Wikipedia talk:Naming conventions since there exists no policy on this. I believe changing [[African-American]] to [[African American|African-American]] would be highly inappropriate. If it's wrong then we move the page, but I disagree that it is wrong, so let's discuss this on the relevant page. --Jiang 05:32, 4 Jan 2005 (UTC)
I rolled back most of the s/African-American/African American/ changes made by the bot on Sunday afternoon. Some of the articles had already been changed so I was unable to roll back, but those were maybe a dozen pages at most. While I was probably wrong in having the bot make this change, as Ford pointed out, there is a major inconsistency in the use (or non-use) of the hyphen. An encyclopedia should be consistent, which was all I was trying to accomplish. The Mammalia changes were actually s/[[Mammalia]]/[[Mammal]]ia/. RedWolf 05:47, Jan 4, 2005 (UTC)

Read-only random-page getter for statistical stuff

Hello. I posted this comment to Wikipedia:Bot requests but it didn't attract any attention so I'm trying again. I'm thinking about running a read-only robot to get random pages for the purpose of doing statistical analysis on them. Probably the edit history will be just as interesting as the page text. I don't think I could ever get more than 1 page per second, which iirc is below the limit placed on spiders (can't find the rules governing spiders at the moment). Does that seem OK in general? If anyone has any comments I'd be interested to hear them. Regards & happy editing, Wile E. Heresiarch 22:16, 8 Jan 2005 (UTC)
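For what it's worth, a minimal sketch of such a sampler (assuming the standard Special:Random entry point; the tool name and contact address in the User-Agent are placeholders):

import time
import urllib.request

RANDOM_URL = "https://en.wikipedia.org/wiki/Special:Random"

def sample_random_pages(n, delay_seconds=1.0):
    # Read-only sampling: fetch n random articles, at most one per second.
    pages = []
    for _ in range(n):
        started = time.time()
        req = urllib.request.Request(
            RANDOM_URL,
            headers={"User-Agent": "StatSampler/0.1 (contact: you@example.org)"})
        with urllib.request.urlopen(req) as resp:
            # Special:Random redirects; the final URL is the sampled article.
            pages.append((resp.geturl(), resp.read()))
        # sleep out the remainder of the per-request budget
        time.sleep(max(0.0, delay_seconds - (time.time() - started)))
    return pages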

Could you get the same results by downloading a snapshot of the database and running queries on it? --Carnildo 02:30, 9 Jan 2005 (UTC)
Well, maybe so. Maybe someone is selling snapshots on dvd? I'm not able to download snapshots, unfortunately. On the other hand, I'm not sure that the entire db would be interesting -- I was planning to sample just a fraction of the articles. On the other other hand, maybe if all goes well, it would turn out to be interesting to analyze the whole db. As you can see nothing has been decided yet. Best, Wile E. Heresiarch 03:43, 9 Jan 2005 (UTC)


Header text

At 01:15 on January 11, Poccil edited the header text in a way that made it sound like there was some situation in which running a bot on Wikipedia would have no drawbacks. This is untrue. The edit has been reverted a few times, intermingled with the other reverts. I have changed it back to its original state. Please discuss before reverting it again. —Ben Brockert (42) UE News 07:31, Jan 15, 2005 (UTC)


Request for a bot

Hi, I'm writing this here because I'm active on this wiki and I wasn't sure where else to put it. The request I'm making is for (what I'm assuming would be) a simple bot for the bosnian language wikipedia. Basically we made several mistakes in writing articles for all the years, and fixing them now would mean tediously editing hundreds of articles. Namely, our articles for decades would be named, for example, "1960te" when they should have been "1960e", so we need to change the links to all of those on the pages for the years and centuries, redirect the articles for the decades, etc. Also, we in the end decided that we would use different words for "century" and "decade". If it's possible, could someone help us make a bot that would fix this problem? Asim Led 19:25, 1 Apr 2005 (UTC)

OpenProxyBlockerBot

With the recent surge of anonymous proxy vandalism, I think the time has come to attempt to plug the hole. I was thinking of periodically grabbing the lists of open proxies from the various websites that publish them, verifying they're open proxies and blocking them. I've already done this for the current Tor outproxies, but doing this manually for the (much larger) list of normal anonymous proxies would cost too much time. --fvw* 09:49, 2005 Jan 23 (UTC)
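A sketch of the verification step in Python (a plain HTTP proxy is assumed; real lists mix proxy types and ports, and a successful relay is a strong hint rather than proof):

import urllib.request

def is_open_proxy(host, port, test_url="http://en.wikipedia.org/", timeout=10):
    # Route a request through the candidate; if the fetch succeeds,
    # the host relayed it and is behaving as an open proxy.
    handler = urllib.request.ProxyHandler({"http": "http://%s:%d" % (host, port)})
    opener = urllib.request.build_opener(handler)
    try:
        with opener.open(test_url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False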

Bots with admin privileges make me nervous. Bots with admin privileges imposing permanent blocks make me very nervous. What happens when someone is clueless enough to not plug up a trojan for long enough to be listed, and then found by your bot? I agree that something needs to be done, though. And didn't there used to be a bot that did this? —Korath (Talk) 10:45, Jan 23, 2005 (UTC)
Then the bot would block that user, just as a flesh-and-blood admin would. This is a good thing™. If they want help getting rid of their proxy they're free to contact the admin or the mailing list, but until then blocked is the correct state for that host.
(For the last, I was thinking of User:Proxy blocker, which worked differently. —Korath (Talk) 10:57, Jan 23, 2005 (UTC))
Yes, it scanned all anonymous users, which drew complaints. This shouldn't even scan innocent users, so it would be much less problematic. Incidentally, Proxy blocker would have blocked your "poor innocent" trojaned user too. --fvw* 11:09, 2005 Jan 23 (UTC)
Actually, I was making an assumption above that probably isn't justified (it's very very late here, so forgive my incoherence) - will the blocks be permanent, or (like Proxy blocker's) a week or whatever? If the latter, how are you planning to deal with addresses that are still on the list, or on multiple lists? Unblock it and immediately reblock? And how often will it be run? —Korath (Talk) 11:22, Jan 23, 2005 (UTC)
Dunno. Currently proxy blocks are permanent, which makes sense, but once scripted there'd be no harm in making them shorter. Unblock and reblock isn't hard once you're not doing it manually. I'd guesstimate once every two weeks or every month should be sufficient to get most of them, but it's one of those things that'll have to be tweaked based on performance. --fvw* 11:28, 2005 Jan 23 (UTC)
My last concerns (honest!): if not permanently blocking, I assume the bot'll be written so that it doesn't ever actually shorten a pre-existing block. Also, everyone makes mistakes; if other bots go berserk, though, any admin can immediately neutralize them with a block. This won't (as I understand it) stop a bot that only places or removes blocks itself. How do you plan to safeguard it? —Korath (Talk) 08:43, Jan 24, 2005 (UTC)
It should only run 2 or 3 hours tops per run, so I'd be watching it run (and responding to any talk or emails) the whole time. --fvw* 08:57, 2005 Jan 24 (UTC)
Then I support, FWIW. —Korath (Talk) 09:11, Jan 24, 2005 (UTC)
This sounds good to me. Let's try it. Please make sure the bot uses a strong password. -- Karada 11:38, 24 Jan 2005 (UTC)
You mean the account it runs under? Just as any admin account I should hope. For the testing phase of this bot I think it'd be best to just run as User:Fvw, unless there's a bureaucrat around willing to bless a bot account into adminhood. --fvw* 13:41, 2005 Jan 24 (UTC)
Ceterum censeo that someone should do something about bug #621. But that's neither here nor there. JRM 16:16, 2005 Jan 24 (UTC)
I trust fvw to run it properly and respond reasonably to any issues that develop. Support. —Ben Brockert (42) UE News 01:45, Jan 25, 2005 (UTC)
I disagree that a bot running as an admin is the right way to go. Dumping hundreds of IP addresses into the Block log will clog it up and make it less useful when reviewing normal blocks. It sounds like the larger question of "should we block all open proxies" should be tossed about first, and then a back-end solution would be preferable. I am not opposed to blocking open proxies myself, just not convinced this is the right solution. -- Netoholic @ 14:23, 2005 Jan 24 (UTC)
This isn't really the place for that discussion. But if you do open it up elsewhere, please put up a link to it here. —Ben Brockert (42) UE News 01:45, Jan 25, 2005 (UTC)
Great idea. Support. Neutralitytalk 17:04, Jan 25, 2005 (UTC)
I second Brockert and everyone else in support. Ambi 03:02, 26 Jan 2005 (UTC)
Support. This would be very useful. Proteus (Talk) 16:30, 26 Jan 2005 (UTC)
Support. Vandalism by Open proxies needs to be slowed - I also like the alternative of using cookies and a 10 char ID for tracking edits by a user. Trödel (talk · contribs) 17:01, 26 Jan 2005 (UTC)
Definitely support as a trial run. If it causes problems then it needs to be reworked, but worth a try. - Taxman 08:44, Jan 28, 2005 (UTC)
Support OneGuy 05:27, 2 Feb 2005 (UTC)

Thanks for your support everyone; I'm currently running the first batch of hosts at User:Fvw/proxytest and User:Fvw/proxytest2. A rough estimate suggests this run will find around 1000 open proxies. Blocking and unblocking those regularly would be too spammy in the block log, so I'm considering indefinite-blocking them, checking them regularly, and unblocking when necessary. I'll put the list of blocked proxies up in my user space so that should I disappear, someone else can do the checking or unblock them. Sound OK? --fvw* 08:33, 2005 Feb 2 (UTC)

Opposed. Something like this recently ran at the Japanese Wikipedia. One of the victims was a friend of mine from the Dutch Wikipedia who lives in Thailand. He automatically gets a proxy from his ISP, which according to him is the largest in Thailand, and which was recognised as an open proxy. To make a long story short, we could easily victimize innocent and useful users this way. - Andre Engels 20:29, 3 Feb 2005 (UTC)

That's a shame, and individual cases can be worked around, I've already agreed with Waerth that we're going to figure something out before we block the open proxy. It's not a reason not to block in the general case though. Vandals like Wik and Willy have already caused more than enough trouble. --fvw* 22:50, 2005 Feb 3 (UTC)

The blocking code currently in use on the site does a full table scan of the block table for every attempt to edit a page. That is, it does work proportional to the number of blocks in place and delays the edit page load and save until that has been done. Emergency changes to improve this behavior were requested by me and I've taken some steps myself to reduce the impact of this (I'm optimising the table almost every night). At present making the block list larger than it has to be to block those actually vandalising the site is piling on more trouble to an area which is already very troubled. Using a bot to dramatically expand the number of entries and slow down all edits while emergency steps are already being taken to try to reduce the pain in this area is contrary to the do no harm principle for bot operations. If this bot is seen adding entries, please block it immediately so that it doesn't slow down edits for all human and bot editors. If you're aware of any blocks not in place to deal with actual vandalism, please remove them until the programming changes are known to be in place on the live site. This is a typical query, taken from the slow query log of one of the database servers today:

/* Block::load */ SELECT * FROM `ipblocks` WHERE (ipb_address='81.57.248.96' OR ipb_user=3763);

Every edit page load does that. Improved code is on the way but for now the use of two terms blocks the use of any index. Updated code, when in use, will use a union instead, so each part can use an index. Work to use memcached as a preliminary check is also ongoing, because crawlers loading edit pages and causing database checks have caused the site to be unusable (and the size of the block list affects how much pain they cause as well...) Likely to be in place within a few months. Jamesday 21:51, 3 Feb 2005 (UTC)
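For reference, a sketch of the union form described above (hypothetical; the column list and the exact shape of the real fix may differ):

/* Block::load, union form -- each branch can use its own index */ SELECT * FROM `ipblocks` WHERE ipb_address='81.57.248.96' UNION SELECT * FROM `ipblocks` WHERE ipb_user=3763;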

Note that this slow query only gets hit for logged in users though, so it isn't relevant for web crawlers. Anyway, I'm backing out the proxies I've blocked until the patch to fix this I've sent to wiki-tech is applied. --fvw* 22:50, 2005 Feb 3 (UTC)

VFD Hourly Discussions

I know that Anthony_DiPierro was running a bot to update his own personal subpage to keep track of the hourly discussions... but since the format on VFD has changed, he has not fixed his hourly discussions. I would like to use a bot for the purpose of updating a subpage on my user page, keeping track of all the new discussions being added by the hour. -- AllyUnion (talk) 04:26, 2 Feb 2005 (UTC)

You mean just links to the newly created vote-pages, or also links to changes in vote pages? What exactly do you hope to achieve? I think if we're going to run a bot on VfD, we might as well make it one that organises VfD in such a way that you don't need to run a personal bot to check for new VfDs. --fvw* 04:32, 2005 Feb 2 (UTC)
Each hour, it would post links to any new VFD discussions, keeping 7 days of VFD on the list. Here's a sample of what I mean: User:AllyUnion/VFD_List -- AllyUnion (talk) 07:53, 2 Feb 2005 (UTC)
Ah, right. I don't find it very elegant (and I think doing it hourly instead of on-demand may be wasting server resources unless a lot of people use it), but as long as VfD isn't reorganised into something that can provide something similar, support. --fvw* 08:20, 2005 Feb 2 (UTC)
Well, there is no elegance in utility, unless you make it that way. I'm going for functionality over pretty. -- AllyUnion (talk) 11:23, 2 Feb 2005 (UTC)
Support. The server resources used by running a script once an hour are negligible. anthony 警告 20:33, 4 Feb 2005 (UTC)
The bot is using pywikipediabot framework, and is currently running to do this task. -- AllyUnion (talk) 02:24, 7 Feb 2005 (UTC)
The bot works! And if there is no objection soon, I would like to request a bot flag for the bot. -- AllyUnion (talk) 10:47, 8 Feb 2005 (UTC)
I don't think you should set a bot flag on an account that is not used exclusively as a bot. Allyunion used to be your real account. I would rather this were renamed to a non-used account, and one that isn't so similar to your current user name, in order to avoid confusion. Angela. 10:46, Feb 19, 2005 (UTC)
Okay. I'll create a new account for the bot. See User:VFD Bot. -- AllyUnion (talk) 04:50, 24 Feb 2005 (UTC)


WML Gateway

Hi folks! I've been working on a WML gateway so that I can read Wikipedia on my mobile phone while waiting for public transport. The gateway is read only, and I've got no plans to change that. I think the bot policy is geared towards robots that edit, but I'm asking for permission here to be on the safe side. The gateway is apache/mod_perl/libwww-perl based. The gateway uses Special:Export as the backend, and then parses the wikimarkup to make a fairly basic WML page. Long term, I would like to see the functions of this gateway moved closer to the database, but I'm happy to keep it running as long as it is needed (and I'm happy to switch it off as soon as it is not needed). There is a testing version available on my DSL line (so it runs very slowly when my flatmates are filesharing), however I will move it to a better hosted site once I've got permission. (I don't want to put the link anywhere too popular until testing is finished and I've got permission to run a bot, so you will need to look at the Talk page for Wikipedia:WAP_access if you have a WML browser and want to try it). Thanks. Osric 10:12, 3 Feb 2005 (UTC)
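A rough sketch of the backend fetch in Python, assuming the one-page form of Special:Export that returns only the current revision (see the thread below); the XML namespace varies by MediaWiki version, and the WML rendering itself is omitted:

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

EXPORT = "https://en.wikipedia.org/wiki/Special:Export/"
NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # adjust to your server's schema

def fetch_current_wikitext(title):
    # Fetch the export XML for one page and pull out the revision text.
    req = urllib.request.Request(
        EXPORT + urllib.parse.quote(title),
        headers={"User-Agent": "WikiToWML/0.1 (contact: you@example.org)"})
    with urllib.request.urlopen(req) as resp:
        tree = ET.parse(resp)
    node = tree.find(".//%srevision/%stext" % (NS, NS))
    return node.text if node is not None else None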

  • Sounds good, definitely support. --fvw* 17:06, 2005 Feb 3 (UTC)
  • One vital precondition: the Special:Export page has a check box to include only the current version, not the full history. It is vital that your operations always use that check box. Some pages have many tens of thousands of revisions and that special page is already in the query killer because it has caused serious load problems. At present, every revision displayed can be assumed to cause at least one disk seek. Improvements are planned for MediaWiki 1.5, after not making the cut for 1.4. In addition, if your pages are going on a site which can be crawled, it's vital that you include a robots.txt file which gives any crawlers of your or other sites the same robots.txt instructions as given by the wikimedia servers. Bots crawling third party sites not doing this have led us to block the crawler of Ask Jeeves (ask.com) on a couple of occasions and proxying sites are likely to be very quickly firewalled if we notice that happening - which may mean firewalling you. Please exercise great care in this area - it's a high risk activity at present. I also strongly recommend including contact details in your browser ID string. Jamesday 22:12, 3 Feb 2005 (UTC)
    • I think you shouldn't allow crawling at all, i.e. Disallow: * in robots.txt. There's no need for robots to crawl wikipedia both through the direct interface and through the WML gateway. --fvw* 22:53, 2005 Feb 3 (UTC)
      • Try to persuade the people here that we should modify robots.txt to block all crawlers, including bots. :) The most common crawlers are search engines, which we do not want to block. Jamesday 20:14, 4 Feb 2005 (UTC)
  • If needed, I'll write a cron job to grab the robots.txt once a week, or however infrequently it is updated. However, I'd be happier with a simple Disallow: * robots.txt (see the example after this list). How paranoid should I be about robots that don't follow robots.txt?
    • Disallow: * should do the job. If not, well, we'll find out and are happy to remove a firewall rule once the problem is gone, if we do have to put one in because something unruly ignored it. If it's practical for you you might consider rate limiting based on IP as a second line of defence. We don't currently do that but it's on our to do list. Jamesday 20:14, 4 Feb 2005 (UTC)
  • The text on Special:Export says that if I ask for a page as '.../Special:Export/page', I only get the most recent version (and that's how I'm doing it).
  • My email address is in the User-Agent header, along with a browser tag of 'WikiToWML/0.1'. Osric 00:11, 4 Feb 2005 (UTC)
    • Sounds good. Thanks. Jamesday 20:14, 4 Feb 2005 (UTC)
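For reference, the blanket disallow discussed above, in standard robots.txt syntax (note that the standard form of the rule takes a path prefix, /, rather than the * wildcard used in the discussion):

User-agent: *
Disallow: /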

Security update may block some bots/tools

An emergency security release may currently be blocking access to existing bots and tools like editors. "Additional protections have been added against off-site form submissions hijacking user credentials. Authors of bot tools may need to update their code to include additional fields". To get started, you'll have to fetch the wpEditToken value from an edit form after you login, and provide that with your save form submissions. The new release is live on all Wikimedia hosted sites and is a recommended security update for all MediaWiki sites. See the release notes for more details of what you need to do to modify a bot or tool to deal with this. Please also take the opportunity to add rate limiting based on how long the last response took, if you haven't already done that based on earlier discussions of how to avoid hurting the servers. Jamesday 20:14, 4 Feb 2005 (UTC)
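A rough sketch of the new token dance in Python (the wpEditToken field name is from the release notes quoted above; the URL shape and the regular expression are assumptions, and attribute order in the form HTML may differ on your wiki):

import re
import urllib.parse
import urllib.request

def get_edit_token(opener, title):
    # opener: a urllib OpenerDirector carrying your login cookies.
    # Load the edit form and pull out the hidden wpEditToken field;
    # include that value as wpEditToken in the subsequent save POST.
    url = ("https://en.wikipedia.org/w/index.php?"
           + urllib.parse.urlencode({"title": title, "action": "edit"}))
    with opener.open(url) as resp:
        html = resp.read().decode("utf-8", "replace")
    # NOTE: the pattern is illustrative; adjust it to your form's markup.
    match = re.search(r'name=["\']wpEditToken["\'][^>]*value=["\']([^"\']*)', html)
    return match.group(1) if match else None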

  • Okay, so where exactly are these alleged release notes? RedWolf 04:46, Feb 5, 2005 (UTC)
  • I opened bug 1116690 for the Python Wikipedia Robot Framework. RedWolf 21:35, Feb 5, 2005 (UTC)
The bug has been fixed. Kudos to A. Engels for his quick turnaround on the problem. There's a new snapshot if you don't want to pull it from CVS yourself. RedWolf 06:06, Feb 8, 2005 (UTC)

Additional uses to bot (User:Allyunion)

Well, due to the security update, I will not be able to implement this bot yet, but I would like approval to have my bot additionally perform the following tasks at exactly 00:00 UTC each day (a sketch of the page-title strings involved follows the task list):

  1. Add each day's VFD subpage as a transclusion. Example: {{Wikipedia:Votes for deletion/Log/2005 February 5}}
    Reason for this is that the VFD system has changed such that each day has a subpage for VFD based on the UTC clock. The transclusions must be added every day in order to properly present the VFD. -- AllyUnion (talk) 17:07, 5 Feb 2005 (UTC)
  2. Edit each new subpage day to include: == MONTHNAME DATE == (Like == February 5 ==)
    Again, VFD system changed, and each day will require to display its day section, allowing a person direct access to the VFD day. -- AllyUnion (talk) 17:07, 5 Feb 2005 (UTC)
  3. Take the day subpage (transclusion) most recently moved to Wikipedia:Votes for deletion/Old, and edit all of its VFD subpages to include the content from Template:Vfd top and Template:Vfd bottom. (This bot will not count any of the votes; it will merely include content from the templates, making it easier on the maintainers.)
    Each VFD subpage includes the content from both of these templates. I believe they are not applied due to the technical limitations of the number of templates that can be used on each page. -- AllyUnion (talk) 17:07, 5 Feb 2005 (UTC)
    I have decided not to do this feature, until the bot really can do something useful. -- AllyUnion (talk) 00:59, 6 Feb 2005 (UTC)
    Actually, rethinking this: add the content from Template:Vfd top and Template:Vfd bottom, but as HTML comments, on all VFD subpages after they are moved to WP:VFD/Old. -- AllyUnion (talk) 07:13, 6 Feb 2005 (UTC)

-- AllyUnion (talk) 17:07, 5 Feb 2005 (UTC)
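As promised above, a sketch of the page-title strings involved in tasks 1 and 2 (the strftime pattern is an assumption about how to reproduce the date format in the example; str(day.day) avoids platform-specific zero-padding flags):

import datetime

def vfd_day_strings(day):
    # day: a datetime.date in UTC
    log_page = day.strftime("Wikipedia:Votes for deletion/Log/%Y %B ") + str(day.day)
    transclusion = "{{%s}}" % log_page                       # task 1
    heading = "== %s %d ==" % (day.strftime("%B"), day.day)  # task 2
    return log_page, transclusion, heading

today = datetime.datetime.now(datetime.timezone.utc).date()
print(vfd_day_strings(today))
print(vfd_day_strings(today + datetime.timedelta(days=1)))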

Having a bot to update VfD would be wonderful. One suggestion: as step 3), have it create the subpage for tomorrow. There's no reason not to create it ahead of time, so it's ready when it's needed.
About adding {{subst:vfd top}} and {{subst:vfd bottom}}: I don't know if this is such a good idea. When I'm scanning /Old, I ignore the sections with colored backgrounds, because they've already been closed. Adding the templates ahead of time would make it harder to see which pages have been processed.
Also, if it's feasible, change "roughly around" to "exactly". This thing'll be run as a cron job anyway, so there's no reason not to run it at the right time. dbenbenn | talk 21:00, 5 Feb 2005 (UTC)
You're forgetting about the time it takes for the bot to get a Wikipedia page. That is why I say roughly: because the load time to get the page is not always the same, the time when the bot posts something to Wikipedia will not always fall exactly at 00:00 UTC. -- AllyUnion (talk) 00:54, 6 Feb 2005 (UTC)
How do you think I manage to do it by hand every day at midnight (except when I flake out)? You load the page ten minutes early, get the changes ready, then hit "save page" at the right time. Edit conflicts aren't an issue, because the page is only edited once or twice a day. Of course, doing the same thing with a bot would complicate the code slightly. It's no big deal either way. dbenbenn | talk 05:10, 6 Feb 2005 (UTC)
The code won't be complicated any further than it already is. If the bot runs exactly 10 minutes before the UTC date rolls over to the next day, all it has to do is search for the current date, then replace {{Wikipedia:Votes for deletion/Log/2005 February 5}} with {{Wikipedia:Votes for deletion/Log/2005 February 5}}\n{{Wikipedia:Votes for deletion/Log/2005 February 6}}. Or in computing terms: replace {{Wikipedia:Votes for deletion/Log/TODAY}} with {{Wikipedia:Votes for deletion/Log/TODAY}}\n{{Wikipedia:Votes for deletion/Log/TOMORROW}}. -- AllyUnion (talk) 07:08, 6 Feb 2005 (UTC)
Additionally, if it is running 10 minutes before the next UTC day, then yes, it would create the VFD page for tomorrow. I would actually want the bot to do that one hour beforehand, just to be on the safe side. Of course, all changes presume that Wikipedia is working and is not down at the time of the updates. -- AllyUnion (talk) 07:31, 6 Feb 2005 (UTC)
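A minimal sketch of that replacement (pure string work; loading and saving WP:VFD is left to the bot framework, and the title strings follow the example above):

def add_tomorrow(vfd_text, today_title, tomorrow_title):
    # e.g. today_title = "Wikipedia:Votes for deletion/Log/2005 February 5"
    old = "{{%s}}" % today_title
    new = "{{%s}}\n{{%s}}" % (today_title, tomorrow_title)
    assert old in vfd_text, "today's transclusion not found; refusing to edit"
    return vfd_text.replace(old, new, 1)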

Well, the additional complication would be having it wait until midnight before submitting the change. Also, there are actually 5 lines to edit on VfD. See this diff for a representative example. dbenbenn | talk 08:31, 6 Feb 2005 (UTC)

As I said, it would only be adding, not removing the discussions. Removing the discussion would still be a human job, since the bot will not know when a discussion is closed. It would be changing at line 13 and line 50, but it will not be making any changes at line 19 and line 45. -- AllyUnion (talk) 09:57, 6 Feb 2005 (UTC)
I think there's a slight misconception here about how the edits to VfD work. Every day at midnight, the new day is added, and the oldest day (now 6 days old) is removed, and added to /Old. The discussions on that old day aren't closed yet—anyone can still vote. It's just that the day doesn't appear on the main VfD page anymore. The votes get closed one by one as someone gets around to closing and tallying them. There's no reason for the bot not to do the complete extension (all 5 lines of the diff above), and it would be tremendously helpful. dbenbenn | talk 21:17, 6 Feb 2005 (UTC)
Oh, if that is the case, then it can be done. -- AllyUnion (talk) 22:52, 6 Feb 2005 (UTC)
And the submission should be right at midnight or slightly thereafter. -- AllyUnion (talk) 22:54, 6 Feb 2005 (UTC)
And of course, it would move the page appropriately to the Old page as well. -- AllyUnion (talk) 22:55, 6 Feb 2005 (UTC)
Technical detail: Search for "'''Current votes''':<br>" and find and replace that. Search for "<br>'''''Old votes''':'' <br> <!-- Old votes need both their day's link moved here from Current ones, just above, and the day's link moved to the /Old file -->" and find and replace that. -- AllyUnion (talk) 22:57, 6 Feb 2005 (UTC)

Changed to exactly. I still question whether it can post two pages at once at exactly 00:00 UTC. -- AllyUnion (talk) 23:56, 6 Feb 2005 (UTC)

Please let me know when you're going to start running your bot. Also, I'm curious to see the source code, if you're willing to share it. Thanks, dbenbenn | talk 02:26, 7 Feb 2005 (UTC)
I don't see any problem with two edits at once a day, the rate limiting is just there to avoid overloading the server and allow for verification of bot actions. --fvw* 02:30, 2005 Feb 7 (UTC)
Another minor technicality: have it submit the changes for VfD first, and the change to /Old second (like, a couple seconds later). That way, you never have votes on /Old and VfD at the same time, which would contradict the Deletion process. (Of course, it's only a matter of a few seconds anyway, but I'm a mathematician. :) dbenbenn | talk 02:55, 7 Feb 2005 (UTC)
If that's how you want it, then it will submit the new VFD page first, then the Old page after that in the same script. -- AllyUnion (talk) 06:18, 7 Feb 2005 (UTC)
Oh, and yes, I will let you look at the code. -- AllyUnion (talk) 07:08, 7 Feb 2005 (UTC)

Okay. I'm all done with tasks 1 and 2. Task 3 will be another script. If anyone cares to view the scripts, which are based on the pywikipediabot framework, please leave a comment on my talk page. -- AllyUnion (talk) 10:54, 7 Feb 2005 (UTC)

Further additional tasks

(clarification of task 1)

  1. Post a new edited WP:VFD page
    1. Add the new UTC day section link on WP:VFD (in the Jump to specific days box)
    2. Remove the six-days-ago section link on WP:VFD (in the Jump to specific days box)
    3. Add a new six-days-ago /Old section link on WP:VFD (in the Jump to specific days box)
    4. Add the new UTC day transclusion on WP:VFD
    5. Remove the six-days-ago transclusion from WP:VFD
  2. Add the six-days-ago transclusion to WP:VFD/Old

Summary of tasks

A summary of its tasks, and whether or not it is working at this time, can be found on its user page: Allyunion -- AllyUnion (talk) 05:36, 8 Feb 2005 (UTC)

Interlanguage specialities at eo:

Additions added there: links to eo: will "lead to the target" through a redirect. Gangleri | Th | T 08:36, 2005 Feb 9 (UTC)

Upload script request

I'm in need of a script. I recently found a large dump of classical music, all CC-by-SA. I used wget recursively to fetch all of it (650 files; 5.29 gigs). I need to upload them now. I'd like to do it with a unix command line program. I figure the syntax should be something like:

>wikiupload Raul654bot:mypassword ./song.ogg --rate=100 --location=commons "This is a public domain song I uploaded {{PD}}"
  • The 1st argument is the username and password (necessary for uploading files)
  • The 2nd argument is the file to upload, so in the case of uploading a large number of files, I can just use the *
  • The 3rd argument specifies the upload rate. I believe this is necessary because bots are supposed to run at low speeds initially.
  • The 4th argument specifies where it should go: en, de, commons, wikibooks, etc.
  • The 5th argument is the upload text - i.e., the text to be put on the image page.

→Raul654 07:36, Feb 9, 2005 (UTC)

cURL might already do what you're after. It's what I use for putting automated content into the Wikipedia (I'm only adding article text, and I'm doing it using PHP's cURL library, so I don't think my code is likely to be of much use to you, otherwise I'd just give you the code). Nevertheless, I think it should be able to do what you're after from the command line.

The following args look most applicable:

  • --cookie (you'll need a cookie for your uploading data under your username / password)
  • --limit-rate (so as to not upload too fast)
  • --form (With various args as per the Upload forms, which will include the description, and the path to the file)
  • The destination URL (which will be different depending on where you want it to go, but will presumably be the commons most of the time).

If you're lucky then maybe someone has already done this using cURL on the command line, and will let us know the command they used.

Docs and source code available at: http://curl.haxx.se/

Note that you'll probably have to call it multiple times (e.g. using "xargs"), if you want wildcard / multi-file-upload functionality.

All the best, Nickj (t) 08:08, 9 Feb 2005 (UTC)


I talked it over with Kate, and here's what we got:

  • curl -F wpName=Bob -F wpPassword=asdf --cookie-jar ./mycookies "http://commons.wikimedia.org/w/wiki.phtml?Special:Userlogin&action=submit" > output1.html
  • curl -F wpUpload=@./Bob_test_file.txt -F wpUploadDescription=This_is_a_test -F wpUploadAffirm=1 --limit-rate 100k --cookie-jar ./mycookies "http://commons.wikimedia.org/w/wiki.phtml?Special:Upload&action=submit" > output2.html

The first one logs in Bob (whose password is asdf) and creates a cookie jar containing the login cookie, and the second one actually does the upload (of file Bob_test_file.txt with description This_is_a_test). I tested, and the first one works *I think*, but the 2nd one does not. I would appreciate someone helping debug it. →Raul654 09:38, Feb 9, 2005 (UTC)

You can try the pywikipediabot framework. They have a script called upload.py that you could use, if you made the script runnable. Then you can create a perl script or a bash script based on upload.py to loop through the contents of the directory. I am uncertain if they have the pywikipedia framework ready for the commons. -- AllyUnion (talk) 10:13, 9 Feb 2005 (UTC)
Not only is pywikipediabot extremely difficult to get working, but it's nonfunctional at the moment. →Raul654 15:28, Feb 9, 2005 (UTC)
The newest version is working. -- AllyUnion (talk) 11:03, 10 Feb 2005 (UTC)

Debugging - Raul, you did good, and you were 95% of the way there:

With the first command:

  • I added wpLoginattempt="Log in" (which is the same as clicking the "log in" button) - (may or may not be needed, but it won't hurt).
  • Added wpRemember=1 (may or may not be needed, but it won't hurt).

With the second command:

  • The URL could be wrong - I used http://commons.wikimedia.org/wiki/Special:Upload instead.
  • Need the file's path in "wpUploadFile", rather than "wpUpload".
  • Add 'wpUpload="Upload file"', which is the same as clicking the button (may or may not be needed, but it won't hurt).
  • With uploading, I think you want "--cookie", rather than "--cookie-jar", since "--cookie" is read-only, whereas --cookie-jar is for storing stuff (i.e. use store to log in, then read to upload).
  • Note that you'll also want to put in a license tag in the description, otherwise the tagging folks will hunt down and nail you to a tree ;-)

Putting all that together into two commands, we get:

  • First command :

curl -F wpName=Nickj -F wpPassword=NotMyRealPassword -F wpLoginattempt="Log in" -F wpRemember=1 --cookie-jar ./mycookies "http://commons.wikimedia.org/w/index.php?title=Special:Userlogin&action=submit" > output1.html

  • Second command (note that I have omitted the rate limiting bit, as my installed curl is so ancient that it doesn't have that option, but you probably want to add it back):

curl -F wpUpload="Upload file" -F wpUploadFile=@"./Part of Great Barrier Reef from Helicopter.jpg" -F wpUploadDescription="Photo I took in Jan 2005 over part of the Great Barrier Reef in a helicopter {{GFDL}}" -F wpUploadAffirm=1 --cookie ./mycookies "http://commons.wikimedia.org/wiki/Special:Upload" > output2.html

And to the right is the result, as a thumbnail:

[Image thumbnail: View of part of the Great Barrier Reef from helicopter]

All the best, Nickj (t) 23:23, 9 Feb 2005 (UTC)

Ok, I did a lot of work on this. The problem is, the method above fails for files above 4.7 megs (5000000 bytes), because MediaWiki gives you an "Are you sure you want to upload this big file?" prompt. I tried a workaround but it doesn't work yet. You can see my script here. Run it by doing: ./wikiupload username pass file →Raul654 08:39, Feb 10, 2005 (UTC)

For the record, I fixed Raul's bot so that it no longer has this limit. [16]. —Ævar Arnfjörð Bjarmason 11:37, 2005 Feb 10 (UTC)

VFD Old Bot work

On all pages moved to VFD/Old: on their talk pages, include a link to the VFD discussion, with a signed name. On all VFD subpages, include <!-- {{subst:Vfd top}} --> at the top and <!-- {{subst:Vfd bottom}} --> at the bottom, with no finalization of the count. -- AllyUnion (talk) 10:25, 9 Feb 2005 (UTC)

No, this is bad. The usual way to finalise the votes is to go to /Old, pick an article that hasn't been boxed, and resolve it. If they're all pre-boxed, it will make the process more difficult and less likely to be completed. —Ben Brockert (42) UE News 06:13, Feb 10, 2005 (UTC)
Just to clarify: <!-- HTML Comment --> Anything between those two brackets will not show up on the page. -- AllyUnion (talk) 21:59, 10 Feb 2005 (UTC)
Ah, I completely ignored that they were in comment tags. In that case, please do that, it would help a lot. —Ben Brockert (42) UE News 04:02, Feb 11, 2005 (UTC)
I've decided against creating this feature as there are two shortcuts now. -- AllyUnion (talk) 23:39, 3 Mar 2005 (UTC)

User:Allyunion - Changing template to subst

A few users forget to use {{vfd top}} as {{subst:vfd top}}. This bot is to correct that, as well as changing {{vfd bottom}} to {{subst:vfd bottom}}. -- AllyUnion (talk) 02:45, 10 Feb 2005 (UTC)

Can't really hurt. It would be good to check that the action was completed at the same time. —Ben Brockert (42) UE News 06:14, Feb 10, 2005 (UTC)