Hi
I am working on my Wikipedia project. I need to know the number of backlinks for a page. For this I am planning to use the MediaWiki API, but I need help regarding how it can be used. Can the API be used offline? Thanks
Ah, I was pretty sure I saw your name somewhere before. A GSoCer =)
Have you tried simply reading http://en.wikipedia.org/w/api.php ? It gives a simple overview of most of the API's possibilities. It also links to the API documentation: http://www.mediawiki.org/wiki/API =)
Regarding your question, searching for "backlinks" on the first page links you to a good example: http://en.wikipedia.org/w/api.php?action=query&list=backlinks&bltitl...
GSoC is a lot about looking for documentation by yourself, about being independent. Come on, dig a bit more; the answer wasn't so hard to find =)
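For what it's worth, a minimal sketch of such a backlinks query in Python with only the standard library. The parameter names (`list=backlinks`, `bltitle`, `bllimit`) come from the api.php documentation; the continuation handling shown uses the modern `continue` format, and the User-Agent string is just a placeholder:

```python
# Count backlinks to a page via the MediaWiki API (read-only, no login).
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def backlinks_url(title, extra=None):
    """Build the query URL for pages linking to `title`."""
    params = {
        "action": "query",
        "list": "backlinks",
        "bltitle": title,
        "bllimit": "max",   # largest batch the server allows per request
        "format": "json",
    }
    if extra:
        params.update(extra)  # continuation tokens from the previous batch
    return API + "?" + urllib.parse.urlencode(params)

def count_backlinks(title):
    """Follow continuation until every batch of backlinks is counted."""
    total, extra = 0, None
    while True:
        req = urllib.request.Request(
            backlinks_url(title, extra),
            headers={"User-Agent": "backlink-example/0.1"})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        total += len(data["query"]["backlinks"])
        if "continue" not in data:
            return total
        extra = data["continue"]
```

As the question hints, this only works online: each batch is a live HTTP request to the server.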
2009/1/18 Ankuj Gupta <ankuj2004@gmail.com>:
> Hi
> I am working on my wikipedia project. I require to know the number of back links for a page. For this i am planning to use the mediawiki api. But i need help regarding how can be they be used. Can api be used offline. Thans
> -- Ankuj Gupta, Computer Engineering, NSIT, India
Mediawiki-api mailing list Mediawiki-api@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
Ankuj Gupta wrote:
> Can api be used offline.
The API is a way to communicate with the server, so you can't really expect to use the API (communicate with the server) while offline (no connection to the server). You could have a local copy, but that wouldn't really qualify as offline; you would just be moving the server to your computer.
And regarding
> Will wikipedia be part of GSOC 09
The Wikimedia Foundation took part in past GSoCs, so it is likely to do so again this year.
2009/1/18 Platonides <platonides@gmail.com>:
> The API is a way to communicate with the server. Thus you can't really expect to use the api (communicate with the server) being offline (no connection to the server). You could have a local copy but that wouldn't qualify too much as offline. You just move the server to your computer.
I have an unfinished Firefox extension which emulates a couple of read-only API functions for accessing an offline Wiktionary dump. If anyone wants to improve it I could release it.
Andrew Dunbar (hippietrail)
Andrew Dunbar wrote:
> I have an unfinished Firefox extension which emulates a couple of read-only API functions for accessing an offline Wiktionary dump. If anyone wants to improve it I could release it.
Which ones? Do they make sense? I mean, the API is usually for external processing; how useful is it from inside Firefox?
2009/1/19 Platonides platonides@gmail.com:
> Which ones? Do they make sense? I mean, the api is usually for external processing, how useful is it from inside Firefox?
Just the couple I needed for checking for and fetching raw articles: /w/index.php?title=...&action=raw and /w/api.php?format=json&action=query&titles=...|...&redirects,
plus an extension which returned a list of languages for which there were articles in an English Wiktionary page.
For use outside Firefox I also had a Perl CGI version, but on my Eee PC that was way more expensive to run.
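For reference, the two URL shapes mentioned above can be built like this. This is only a sketch: the endpoint, helper names, and User-Agent are illustrative; just the URL patterns (action=raw for bare wikitext, action=query with pipe-joined titles and &redirects for existence checks) come from the thread:

```python
# Sketch of the two read-only calls against a live MediaWiki install.
import urllib.parse
import urllib.request

BASE = "https://en.wiktionary.org"

def raw_url(title):
    """action=raw returns the page's wikitext with no HTML wrapping."""
    return BASE + "/w/index.php?" + urllib.parse.urlencode(
        {"title": title, "action": "raw"})

def exists_url(*titles):
    """action=query with several pipe-joined titles checks existence in one
    request; &redirects resolves redirects before the lookup."""
    return BASE + "/w/api.php?" + urllib.parse.urlencode(
        {"format": "json", "action": "query",
         "titles": "|".join(titles), "redirects": ""})

def fetch(url):
    """Perform the request; an offline emulation would replace only this."""
    req = urllib.request.Request(
        url, headers={"User-Agent": "offline-api-demo/0.1"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

An offline backend that emulates these calls only has to mimic the responses; client code keeps building the same URLs.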
Andrew Dunbar (hippietrail)
Andrew Dunbar wrote:
> Just the couple I needed for checking for and fetching raw articles: /w/index.php?title=...&action=raw /w/api.php?format=json&action=query&titles=...|...&redirects
And what's the point of Firefox retrieving article contents when you're offline?
> plus an extension which returned a list of languages for which there were articles in an English Wiktionary page
> For outside Firefox I also had a Perl CGI version but on my Eee PC that was way more expensive to run
Probably because Perl is interpreted and the extension is compiled.
The use case I would deem more common is accessing page contents from outside for processing, e.g. to show them on a page, as http://wiki-web.es/mediawiki-offline-reader/ does.
2009/1/20 Platonides platonides@gmail.com:
> And what's the point of Firefox retrieving article contents when you're offline?
Err, well, being able to read articles when you're offline.
But also analysing Wiktionary: checking whether it had entries for certain words, etc.
> Probably because the perl is interpreted and the extension compiled.
Well, Firefox extensions are in JavaScript, also an interpreted language, but the cost of starting up the Perl interpreter was expensive, especially when doing a few simultaneous operations.
> The use case I would deem more common would be to access page contents from outside for processing, eg. to show it on a page, as http://wiki-web.es/mediawiki-offline-reader/
Perhaps. I wasn't making something for the more common use case, I was making something I had a use for (-:
Andrew Dunbar (hippietrail)
Andrew Dunbar wrote:
> Err well being able to read articles when you're offline.
> But also analysing Wiktionary. Checking whether it had entries for certain words etc.
Then why the API? I'd prefer a graphical interface, not just wikitext into XML. :)
Do you place all articles on the same page using AJAX?
> Well Firefox extensions are in JavaScript, also an interpreted language, but the cost of starting up the perl interpreter was expensive, especially doing a few simultaneous operations.
I thought it was a binary one. Firefox extensions can be in JavaScript or XPCOM. Still, it caches the opcodes, and the trunk version is even faster.
> Perhaps. I wasn't making something for the more common use case, I was making something I had a use for (-:
Well, showing MediaWiki pages is exactly what that app does. It's just that I wouldn't use the dumps in offline Firefox to feed the API. It seems you have different uses (or perhaps I have completely misunderstood it :)
2009/1/20 Platonides platonides@gmail.com:
> Then why the API? I'd prefer a graphical interface, not just wikitext into XML. :)
Emulating the API made it trivial to have several classes which worked in exactly the same way. I could look up the real Wiktionary online, a local MySQL db, or an offline dump without changing the client code.
I could also work on developing Wiktionary tools without paying for internet all day when I was travelling.
Graphical interfaces for just reading wikis already existed. But I'm a programmer who wanted to do more than just read articles.
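The backend-swapping idea described above can be sketched as follows. Every source exposes the same tiny read-only interface, so client code is identical whether the wikitext comes from the live site, a local database, or a parsed dump. Class and method names here are illustrative, not Andrew's actual extension code:

```python
# Each data source implements the same minimal read-only interface.
class WikiBackend:
    """Shared interface: subclasses only need to provide get_raw()."""
    def get_raw(self, title):
        raise NotImplementedError

    def exists(self, title):
        return self.get_raw(title) is not None

class DumpBackend(WikiBackend):
    """Serves pages from a dict standing in for a parsed offline dump."""
    def __init__(self, pages):
        self.pages = pages

    def get_raw(self, title):
        return self.pages.get(title)

# Client code never mentions a concrete backend type:
def redlink_report(backend, words):
    """Classify words as blue (entry exists) or red (missing)."""
    return {w: ("blue" if backend.exists(w) else "red") for w in words}

dump = DumpBackend({"water": "==English==\n..."})
print(redlink_report(dump, ["water", "zzyzx"]))
# → {'water': 'blue', 'zzyzx': 'red'}
```

An online backend would implement `get_raw` with an HTTP request to api.php, and the same `redlink_report` would keep working unchanged.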
> Do you place all articles on the same page using AJAX?
I used it for a couple of personal web apps. I could collect strange new words when reading and enter them as a list which gave me red or blue links for each term, so I could see whether they needed a Wiktionary entry or not. I also had a second back end which looked up a reverse-engineered Spanish dictionary CD-ROM with the same API. I was planning to extend it to add all those redlinks to Wiktionary's requested entries page next time I was online. The other web app subscribed to a bunch of word-of-the-day RSS feeds and checked all the words for presence in Wiktionary.
> I thought it was a binary one. Firefox extensions can be in javascript or XPCOM. Still, it caches the opcodes and the trunk version is even faster.
Well, XPCOM extensions are platform-specific and I'm a big fan of cross-platform tools. I could've built a Windows version but no *nix or OS X version on my little Eee PC. And the JS version was faster at looking up the offline Wiktionary than looking up the online Wiktionary over a slow connection.
> Well, showing MediaWiki pages is exactly what that app does. It's just that I wouldn't use the dumps in offline Firefox to feed the API. It seems you have different uses (or perhaps I have completely misunderstood it :)
(-: Andrew Dunbar (hippietrail)
Andrew Dunbar wrote:
> I used it for a couple of personal web apps. I could collect strange new words when reading and enter them as a list which gave me red or blue links for each term so I could see whether they needed a Wiktionary entry or not. I also had a second back end which looked up a reverse-engineered Spanish dictionary CD-ROM with the same API. The other web app subscribed to a bunch of word-of-the-day RSS feeds and checked all the words for presence in Wiktionary.
I'm starting to see your reasons. I couldn't figure out why you wanted it in Firefox while online. I'm interested in knowing more about what you did with that dictionary, but it may be outside the list's scope, so feel free to email me directly.
> Well XPCOM extensions are platform specific and I'm a big fan of cross-platform tools. I could've built a Windows version but no *nix or OS X version on my little Eee PC. And the JS version was faster at looking up offline Wiktionary than looking up the online Wiktionary with a slow connection.
I'm pretty sure they could also be written in a platform-agnostic way. But sure, they're easier to do in JavaScript.