Here's another question, on a different topic:
We would like to examine the network properties of the wiki.
There are already some results here and there, but we would like to take a closer look ourselves, with the eventual aim of improving the knowledge base.
To do that, we need access to the wiki's pages (only articles for now), with each article's name, abstract, meta keys, the internal hyperlinks connecting them, and the external hyperlinks they contain.
We found the list of database dumps in .gz format, but they are very large files, and here is my question:
How can we manipulate them with phpMyAdmin?
Is there any other open source tool that can handle data files of this size?
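(For context: we realize the .gz files can at least be streamed without unpacking them to disk. A minimal sketch of what we have in mind, assuming Python 3; the file name below is only an example of one of the .sql.gz dumps:)

    import gzip

    # Peek at the first lines of a large dump without decompressing it to disk.
    # The file name is a placeholder; substitute the actual dump you downloaded.
    with gzip.open("enwiki-latest-pagelinks.sql.gz", "rt",
                   encoding="utf-8", errors="replace") as f:
        for i, line in enumerate(f):
            print(line.rstrip()[:120])  # first 120 chars of each line
            if i >= 20:
                break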
An easy way to get first results would be to have the database of articles, with the parameters above, as an XML file.
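To make the request concrete, here is a rough sketch of the kind of extraction we imagine running over the pages-articles XML dump (again assuming Python 3; the dump file name and the export namespace version are assumptions to be checked against the actual dump header):

    import bz2
    import re
    import xml.etree.ElementTree as ET

    DUMP = "enwiki-latest-pages-articles.xml.bz2"       # example file name
    NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # check version in the dump header

    wikilink = re.compile(r"\[\[([^\]|#]+)")  # target of an internal [[link]]

    with bz2.open(DUMP, "rb") as f:
        for event, elem in ET.iterparse(f):
            if elem.tag == NS + "page":
                title = elem.findtext(NS + "title")
                text = elem.findtext(NS + "revision/" + NS + "text") or ""
                for m in wikilink.finditer(text):
                    print(title, "->", m.group(1).strip())  # one link edge per line
                elem.clear()  # free the parsed page so the dump never sits in memory

The same loop could presumably also collect abstracts and external links and write everything out as a smaller XML or tab-separated file.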
Even a portion of the data would be interesting to work on as a demo project.
Any ideas/references?
Many thanks,
Luigi Assom
Let me also introduce myself:
my background is in visual communication and international development.
I am working with a friend who has a PhD in theoretical physics; we are both interested in learning platforms and emerging self-organized information patterns.
On Thu, Jan 27, 2011 at 2:28 PM, Luigi Assom
<luigi.assom@gmail.com> wrote:
Hello folks, thanks for sharing your projects!
I have a question.
Is there a way to see from which sources users land on Wikipedia's pages?
I mean: are users entering their keywords in the wiki search field, or are they landing there from Google?
In other words: does the visibility of the articles depend on the keywords users search for, or on the structure of the internet (which makes wiki pages very visible)?
--
Luigi Assom
Skype contact: oggigigi