> Brave new world!
> I liked the old days when it was just simple to use Pywiki. No reinstall
> after each update, no 100-mile-long commands.
If you install core as a site package, the command line of core and compat is the same.
Without installing it as a package, the core command is just 4 characters longer than compat:
in compat you run, for example,
touch.py user:xqt/Test
in core it is
pwb.py touch user:xqt/Test
which is p+w+b+<blank> more.
Btw, you may shorten command lines with command files. I always use command files to invoke the bot with all needed options.
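For example, on Windows a touch.cmd along these lines would do (just a sketch; -pt:2 sets the put throttle and %* passes the remaining arguments through):

    pwb.py touch -pt:2 %*

so the call shrinks to: touch user:xqt/Test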
Binaris, just change your monitor settings. You shouldn't use a font size of 7^7 points, which makes 4 characters 100 miles long ;)
I used pywikibot for years, but in recent years I didn't use it.
Now I would like to use it again on Windows 7, so I downloaded the
code from here: https://github.com/wikimedia/pywikibot-core using
TortoiseSVN, and installed Python 2.7.5. I use the latest versions of these.
I added the Python27, pywikipedia and pywikibot folders to PATH.
I have my own user-config.py file.
I followed the instructions on
When I try to run login.py, I get this message:
Traceback (most recent call last):
File "C:\Program Files\pywikipedia\pywikibot\login.py", line 15, in
ImportError: No module named pywikibot
I got similar errors for other scripts: import module --> ImportError: No
module named ...
Could you please help me, how could I solve this problem?
Thanks in advance.
Core and Compat are the two faces of pywikipedia. Who is interested in
answering some questions to help me understand this better?
I would like to ask ten questions about this and publish the answers on my
PS please answer off-list
As I've said in a previous thread, I believe that feature parity
should exist between core and compat, but not necessarily vice-versa
(i.e. all non-deprecated features from compat should exist in core).
I've logged bug #55880 as a tracking bug for such issues. If you know
of other useful features from compat which are not (yet) in core,
please log bugs for them and set them as blocking for bug 55880.
Also, if you can fix any of the bugs on that list, feel free to send a patch. :)
I'm trying to convert a fairly large set of scripts from compat to
core and I found a significant loss of functionality in getting image
and template info. While writing this, I've noticed that the latest
version of core also has some of these problems. I will elaborate on
this loss of functionality below, but I would like to know if this
simplification is intended or if it is part of some work in progress.
For the image parsing, the function linkedPages(withImageLinks=True)
used to provide the images that were not included through templates, while
imageLinks would provide all the images. In core, the linkedPages
function no longer provides this capability, and I haven't found any
replacement, so I ported the old function into my code.
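For reference, the port boils down to reparsing the wikitext itself. A
minimal sketch, assuming the third-party mwparserfromhell library and
that the File:/Image: prefixes cover the namespaces of interest:

    import pywikibot
    import mwparserfromhell

    def directly_linked_images(page):
        # images present in the page text itself, i.e. not
        # included through template transclusion
        code = mwparserfromhell.parse(page.text)
        for link in code.filter_wikilinks():
            title = str(link.title).strip()
            if title.lower().startswith(('file:', 'image:')):
                yield title

    site = pywikibot.Site('en', 'wikipedia')
    for image in directly_linked_images(pywikibot.Page(site, 'Example')):
        print(image)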
For template parsing, templatesWithParams from class Page used to
provide a pair containing the template name and a list of parameters,
with the full "key=value" string. Nowadays, we're getting a dictionary
instead of that list. Normally there is nothing wrong with that,
except that in Python 2 the dictionary is unordered, which means that:
* the order of the parameters is forever lost
* the original text cannot be reconstructed (because of the above and
the missing whitespace information) - this means there is no easy way
to identify and/or replace a particular instance of the template in a
page with many identical templates. It used to be that you could do it
with simple find/replace operations; now it takes some more work.
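As a stopgap, reparsing the wikitext gets both the ordering and the
original text back. A sketch, again assuming the third-party
mwparserfromhell library:

    import mwparserfromhell

    text = '{{Infobox| name = Foo |type=bar}} {{Infobox|name=Baz|type=bar}}'
    code = mwparserfromhell.parse(text)
    for tpl in code.filter_templates():
        # parameters keep their original order and whitespace,
        # so str(tpl) round-trips to the exact source text
        print(str(tpl.name).strip(), [str(p) for p in tpl.params])
        # edit one particular instance in place
        if str(tpl.get('name').value).strip() == 'Baz':
            tpl.add('type', 'qux')
    new_text = str(code)  # text with only that instance changed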
I personally would like to have the old behavior back; it would save
me and probably others a lot of work.
That's what happens when a trunk user develops a script. :-) -save was invented by me,
but I never used rewrite/core, and nobody felt like porting this feature to core.
> --- Comment #3 from xqt <info(a)gno.de> ---
> Sorry, the -search option is quite right, but the -save option is available in the
> trunk version only (yet).
Crossposting to all lists of interest, sorry for the fancy long title
but should be easier to search for future reference.
I recently had the opportunity to mentor during the GHCOSD for the
Wikimedia Foundation. We were two mentors from the Foundation, and I
took on mentoring what we called the challenging tasks: contributing
your first patch and writing your first bot.
This e-mail is about my approach to the writing your first bot task,
posting it here for future reference in case someone finds it useful,
and for comments/opinions. My approach consisted of challenging the
participants to write a game called Wikiflashcards. The game would use
pygame to display an index card with the name of a country and,
after clicking, it would reveal the name of the capital city of that
country. The frontend was all given so that participants wouldn't
have to worry about pygame at all (still, we learned all the possible
ways to install pygame on a relatively old Mac, pretty complicated);
instead, their task was to implement the backend using pywikibot to
generate the list of countries and get the capital of each one.
This would naturally introduce the concept of listing a set of pages
of interest, searching through the wikicode, mining templates,
filtering links, etc.
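As an illustration, the first backend could look roughly like this; a
sketch only, where the category, template and parameter names are my
assumptions rather than part of the actual task (and it leans on the
third-party mwparserfromhell library for the wikicode parsing):

    import pywikibot
    import mwparserfromhell
    from pywikibot import pagegenerators

    site = pywikibot.Site('en', 'wikipedia')
    cat = pywikibot.Category(site, 'Category:Member states of the United Nations')
    for page in pagegenerators.CategorizedPageGenerator(cat):
        code = mwparserfromhell.parse(page.text)
        for tpl in code.filter_templates():
            # mine the infobox for the capital parameter
            if tpl.name.matches('Infobox country') and tpl.has('capital'):
                print(page.title(), '->',
                      tpl.get('capital').value.strip_code().strip())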
This approach differs from that of teaching people how to use
pywikibot to contribute directly to Wikipedia. My hypothesis is
that teaching how to use these tools to "scratch your own itch"
(personal research, a hobby, etc.) would make people match pywikibot to
their own interests, make them active users of the framework, and
eventually lead them to use their expertise to contribute to
any of the WMF projects.
After finishing a first version of the backend, I introduced the
concept and purpose of Wikidata, challenged the participants to
rewrite the backend using Wikidata items and properties and compare
the two approaches - in particular, the complexity of the first
approach vs the advantages of having a new backend ready for i18n and
whatnot. The goal was to naturally introduce the need of a structured
way to store and retrieve data, since I believe a direct introduction
to Wikidata for someone who has never been involved in a task of
mining data out of a Wikipedia looks very artificial.
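A sketch of the Wikidata version, with the item and property IDs
spelled out for illustration (Q142 is the item for France, P36 is the
"capital" property):

    import pywikibot

    site = pywikibot.Site('wikidata', 'wikidata')
    repo = site.data_repository()
    item = pywikibot.ItemPage(repo, 'Q142')
    item.get()  # fetch labels and claims
    capital = item.claims['P36'][0].getTarget()
    capital.get()
    print(item.labels['en'], '->', capital.labels['en'])  # France -> Paris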
In the end, the challenge seemed to be very engaging for the
participants, and I had positive feedback about it, but that doesn't
really tell whether the goals listed above were achieved or not. If you
have further comments or questions, just let me know.
Disclaimer: I'm not implying this is a good idea (in particular, I'm
not implying this was the best idea for this particular event), just
David E. Narvaez
This project has made a big change in its infrastructure, and it's normal
to hit sharp corners at the beginning. Looking at how the transition has
gone for the rest of the MediaWiki / Wikimedia projects, you can be quite
confident that the change is worth it.
The type of discussions you are having here reminds me of the discussions
that some MediaWiki core and extension developers had a year ago when
switching from SVN to Git. Nowadays most people are mostly happy with the
current setup, and in fact a lot happier than before. Yes, there is a
bit more process, but it is also a lot more difficult for bugs and
regressions to sneak into your master branch and deployments.
Also, Git and code review workflows are widely adopted. We are using
standard tools. Anything you learn here will be useful in many other projects.
About Windows users: they exist :) and the documentation has
instructions specific for them.
About GitHub users: they also exist, and fwiw there is a way to sync
GitHub and Gerrit repos.
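For example, getting the code and keeping it current with git takes
just (assuming git is installed; the Gerrit clone URL works the same
way):

    git clone https://github.com/wikimedia/pywikibot-core.git

and later, inside that folder:

    git pull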
If you still find problems please report them in Bugzilla:
Thank you for using the Wikimedia infrastructure.
Technical Contributor Coordinator @ Wikimedia Foundation
I still don't understand why SVN was abandoned, and is there some git
In SVN times, when there was some critical problem, there was usually a patch after
a few hours. One person wrote it, submitted it, and other users could download it.
Now there was a critical bug with interwiki.py, which happened around the 15th of
September. In those days the old SourceForge tracker was being moved to Bugzilla, so
the report was lost somewhere. After ten days I reported this bug again.
Three days later there was a patch, but we had to wait one more week until
another developer reviewed this patch.
Now there are hundreds of new unconnected articles in the wiktionaries,
In the meantime there was some diff from which it was possible to patch
the scripts manually, but it was not in plain text and had tabs instead of
spaces; and nowhere was there a complete patched file to download.
The second problem is git: some people on IRC said that there were many
people at the Hackathon who weren't able to install git correctly (all of
them had PCs with Windows, which was about 80% of the people there).
Is there somewhere a *simple manual* on how to install and run git updates on Windows?
Or is there somewhere a *simple manual* on how to use SVN again?
And is there somewhere a possibility to download a certain file from the bot? Now there
are only nightly dumps, which overwrite my changes in files when I want to