Hi guys,
I have an idea of my own for my GSoC project that I'd like to share with you. It's not a perfect one, so please forgive any mistakes.
The project is related to the existing GSoC project *Incremental Data Dumps*, but is in no way a replacement for it.
*Offline Wikipedia*
Over the years, many offline solutions for Wikipedia have sprung up on the internet. All of them have been unofficial and have limitations. A major problem is the *increasing size of the data dumps*, along with the difficulty of *updating the local content*.
Consider a place where internet access is costly or unavailable. (For the purpose of discussion, let's consider a school in a third-world country.) Internet speeds are extremely slow, and accessing Wikipedia directly from the web is out of the question. Such a school would greatly benefit from an instance of Wikipedia on a local server. Up to this point, the school can use any of the freely available offline Wikipedia solutions to create a local instance. The problem arises when the database in the local instance becomes obsolete: the school is then required to download an entire new dump (approx. 10 GB in size) and load it into the database. Another problem is that most third-party programs *do not allow network access*, so a new instance of the database (approx. 40 GB) is required for each installation. In a school with around 50 desktops, each desktop would require a 40 GB database, and *updating* them becomes even more difficult.
So here's my *idea*: modify the existing MediaWiki software and add a few PHP/Python scripts that will run in the background and automatically update the database. (Details on how the update is done are described later.) Initially, the modified MediaWiki will take an XML or SQL dump (SQL dump preferred) as input and create the local instance of Wikipedia. Later on, updates will be applied to the database automatically by the script.
The installation process is extremely easy: it just requires a server package like XAMPP and the MediaWiki bundle.
Process of updating:
There will be two methods of updating the server. Both will be implemented in the MediaWiki bundle. Method 2 requires the functionality of incremental data dumps, so it can be completed only after that functionality is available. Perhaps I can collaborate with the student selected for incremental data dumps.
Method 1 (online update): A list of all pages is made and published by Wikipedia. This can be in an XML format. The only information in the XML file will be the page IDs and the last-touched date. This file will be downloaded by the MediaWiki bundle, and the page IDs will be compared with the pages of the existing local database.
Case 1: A page ID in the XML file that is not in the local database denotes a newly added page.
Case 2: A page present in the local database that is not among the page IDs denotes a deleted page.
Case 3: A page whose 'last touched' in the XML file differs from the one in the local database denotes an edited page.
In each case, the change is made in the local database, and if the new page data is required, it is obtained using the MediaWiki API. These offline instances of Wikipedia will only be used where internet speeds are very low, so they *won't cause much load on the servers*.
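The comparison in Method 1 could be sketched in Python roughly as below. The page-list XML format (element and attribute names, and the sample data) is my own invention for illustration; if Wikipedia published such a feed, it could look quite different:

```python
# Sketch of the Method 1 (online update) comparison step.
# The <pages>/<page> XML format here is hypothetical.
import xml.etree.ElementTree as ET

SAMPLE_PAGE_LIST = """<pages>
  <page id="1" touched="2013-04-20T10:00:00Z"/>
  <page id="2" touched="2013-04-25T09:30:00Z"/>
  <page id="4" touched="2013-04-26T12:00:00Z"/>
</pages>"""

def parse_page_list(xml_text):
    """Return {page_id: last_touched} from the hypothetical page-list XML."""
    root = ET.fromstring(xml_text)
    return {int(p.get("id")): p.get("touched") for p in root.iter("page")}

def classify_pages(remote, local):
    """Compare the published page list against the local database state.

    remote, local: {page_id: last_touched}
    Returns (new_ids, deleted_ids, changed_ids).
    """
    new_ids = sorted(set(remote) - set(local))           # case 1: new page
    deleted_ids = sorted(set(local) - set(remote))       # case 2: deleted page
    changed_ids = sorted(pid for pid in set(remote) & set(local)
                         if remote[pid] != local[pid])   # case 3: edited page
    return new_ids, deleted_ids, changed_ids

if __name__ == "__main__":
    local_db = {1: "2013-04-20T10:00:00Z",   # unchanged
                2: "2013-04-01T08:00:00Z",   # stale -> edited upstream
                3: "2013-03-15T07:00:00Z"}   # gone upstream -> deleted
    remote = parse_page_list(SAMPLE_PAGE_LIST)
    print(classify_pages(remote, local_db))  # -> ([4], [3], [2])
    # New/edited page text would then be fetched via the MediaWiki API, e.g.
    # /w/api.php?action=query&prop=revisions&pageids=4&rvprop=content
```

The actual fetch of page content is left out here; it would be a normal MediaWiki API query against the live site.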
Method 2 (offline update): This requires the functionality of the existing project "Incremental Data Dumps". In this case, the incremental data dumps are downloaded by the user (admin) and fed to the MediaWiki installation the same way the original dump is fed (as a normal file), and the corresponding changes are made by the bundle. Since I'm not aware of the XML format used in incremental updates, I cannot describe it now.
Advantages: An offline solution can be provided for regions where internet access is a scarce resource. This would greatly benefit developing nations and would help make the world's information more freely and openly available to everyone.
All comments are welcome!
PS (about me): I'm a 2nd-year undergraduate student at the Indian Institute of Technology, Patna. I code for fun. Languages: C/C++, Python, PHP, etc. Hobbies: CUDA programming, robotics, etc.
On 26/04/13 22:23, Kiran Mathew Koshy wrote:
> Hi guys,
> I have an idea of my own for my GSoC project that I'd like to share with you. It's not a perfect one, so please forgive any mistakes.
> The project is related to the existing GSoC project *Incremental Data Dumps*, but is in no way a replacement for it.
> *Offline Wikipedia*
> Over the years, many offline solutions for Wikipedia have sprung up on the internet. All of them have been unofficial and have limitations. A major problem is the *increasing size of the data dumps*, along with the difficulty of *updating the local content*.
> Consider a place where internet access is costly or unavailable. (For the purpose of discussion, let's consider a school in a third-world country.) Internet speeds are extremely slow, and accessing Wikipedia directly from the web is out of the question. Such a school would greatly benefit from an instance of Wikipedia on a local server. Up to this point, the school can use any of the freely available offline Wikipedia solutions to create a local instance. The problem arises when the database in the local instance becomes obsolete: the school is then required to download an entire new dump (approx. 10 GB in size) and load it into the database. Another problem is that most third-party programs *do not allow network access*, so a new instance of the database (approx. 40 GB) is required for each installation. In a school with around 50 desktops, each desktop would require a 40 GB database, and *updating* them becomes even more difficult.
Well, some programs allow network access, and even if not, the school should download once and distribute from there to the desktops, not download once per installation. But I agree having a copy on each computer could be problematic.
> So here's my *idea*: modify the existing MediaWiki software and add a few PHP/Python scripts that will run in the background and automatically update the database. (Details on how the update is done are described later.) Initially, the modified MediaWiki will take an XML or SQL dump (SQL dump preferred) as input and create the local instance of Wikipedia. Later on, updates will be applied to the database automatically by the script.
Actually, you only need to add some scripts, not modify MediaWiki :)
> The installation process is extremely easy: it just requires a server package like XAMPP and the MediaWiki bundle.
> Process of updating:
> There will be two methods of updating the server. Both will be implemented in the MediaWiki bundle. Method 2 requires the functionality of incremental data dumps, so it can be completed only after that functionality is available. Perhaps I can collaborate with the student selected for incremental data dumps.
> Method 1 (online update): A list of all pages is made and published by Wikipedia. This can be in an XML format. The only information in the XML file will be the page IDs and the last-touched date. This file will be downloaded by the MediaWiki bundle, and the page IDs will be compared with the pages of the existing local database.
This is available in page.sql.gz
> Case 1: A page ID in the XML file that is not in the local database denotes a newly added page.
> Case 2: A page present in the local database that is not among the page IDs denotes a deleted page.
> Case 3: A page whose 'last touched' in the XML file differs from the one in the local database denotes an edited page.
(here you would compare the revision id)
> In each case, the change is made in the local database, and if the new page data is required, it is obtained using the MediaWiki API. These offline instances of Wikipedia will only be used where internet speeds are very low, so they *won't cause much load on the servers*.
> Method 2 (offline update): This requires the functionality of the existing project "Incremental Data Dumps". In this case, the incremental data dumps are downloaded by the user (admin) and fed to the MediaWiki installation the same way the original dump is fed (as a normal file), and the corresponding changes are made by the bundle. Since I'm not aware of the XML format used in incremental updates, I cannot describe it now.
> Advantages: An offline solution can be provided for regions where internet access is a scarce resource. This would greatly benefit developing nations and would help make the world's information more freely and openly available to everyone.
> All comments are welcome!
Some work on improving the import scripts would be welcome, although I wonder if what you propose would be big enough for GSoC.
wikitech-l@lists.wikimedia.org