On Mon, Oct 8, 2012 at 3:45 PM, Venkatesh Channal <venkateshchannal@gmail.com> wrote:
Hi,

I would like to fetch all page text information of all wiki pages that belong to a movie category. Eg: http://en.wikipedia.org/wiki/Category:Hindi_songs

From the page text I would like to extract information such as the song title, song length, singer, and name of the movie/album. I am not interested in extracting images, just the information about the song.

My questions:

1) Is there a way to download only those pages that I am interested in that belong to a particular category instead of downloading the entire dump?

2) Is PHP knowledge required to install the db dump on a local machine?

3) Are there tools that extract the information and store the required data in a MySQL database?
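(For question 1, one possible approach that avoids downloading the entire dump: the MediaWiki API exposes a `categorymembers` list for any category. Below is a minimal sketch using only the Python standard library; the helper names are my own, and the sample response is abridged to the documented shape.)

```python
import json
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def categorymembers_url(category, cmcontinue=None):
    """Build an API query URL listing the pages in a category."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,   # e.g. "Category:Hindi_songs"
        "cmlimit": "500",      # maximum page titles per request
        "format": "json",
    }
    if cmcontinue:
        # Continuation token returned by the previous response,
        # used to page through categories larger than cmlimit.
        params["cmcontinue"] = cmcontinue
    return API + "?" + urlencode(params)

def extract_titles(response):
    """Pull page titles out of a decoded categorymembers response."""
    return [m["title"] for m in response["query"]["categorymembers"]]

# Abridged example of the JSON shape the API returns:
sample = {"query": {"categorymembers": [
    {"pageid": 1, "ns": 0, "title": "Some Hindi Song"},
]}}
print(extract_titles(sample))
```

Fetching each URL (for example with `urllib.request.urlopen`) and following the continuation tokens walks the whole category; the wikitext of each resulting page can then be retrieved via Special:Export or `action=query&prop=revisions`, without touching the full dump.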

If this is not the right forum to have my questions answered, could you please redirect me to the appropriate forum.

Thanks and regards,
Venkatesh Channal

_______________________________________________
Xmldatadumps-l mailing list
Xmldatadumps-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/xmldatadumps-l



