[Apologies for cross-posting; this same e-mail is being sent to wikipedia-l, WikiEN-l and foundation-l]
Hi everyone,
We are a research group conducting a systematic literature review of Wikipedia-related peer-reviewed academic studies published in the English language. (Although there are many excellent studies in other languages, we unfortunately do not have the resources to review these systematically to an acceptable scholarly standard. Also, our study is about Wikipedia only, not about other Wikimedia Foundation projects; however, we do include studies about other-language Wikipedias, as long as the studies are published in English.) We have completed a search using many major databases of scholarly research. We have posted separate messages to wiki-research-l related to this literature review.
We have identified over 2,100 peer-reviewed studies that have "wikipedia", "wikipedian", or "wikipedians" in their title, abstract, or keywords. As this number of studies is far too large for a review synthesis, we have decided to focus only on peer-reviewed journal publications and doctoral theses; we identified 638 such studies. In addition, we identified around 1,500 peer-reviewed conference articles.
We hope that our review will provide useful insights for both wikipedians and researchers. (Although we know that most Wikipedia researchers are also wikipedians, we define a wikipedian or "Wikipedia practitioner" here as someone involved in the Wikipedia project who is not also a scholarly researcher.) In particular, here is a list of some of the research questions we are investigating in our review that are particularly pertinent to wikipedians (you can check wiki-research-l for the full set of research questions):
1. What high-quality research has been conducted with Wikipedia as a major topic or data source? As mentioned in the introductory e-mail, we have already identified over 2,100 studies, though we will only analyze 638 of them in depth. We will group the articles by field of study.
2. What research questions have been asked by various sources, both scholarly and practitioner? We want to know the subjects that the existing research has covered, and also to catalogue key questions that practitioners would like to see answered, whether or not academic research has broached them. We also categorize the research questions by purpose.
6. What conclusions have been drawn from existing research? That is, which questions from RQ2 have been answered, and what are these answers?
7. What questions from RQ2 are left unanswered? (These present directions for future research.)
Regarding our RQ2, on the research questions that have been asked, we want to identify not only the research questions that we extract from the articles, but also questions of interest that have not yet been studied. For this, we have identified a few banks of Wikipedia-related research questions.
We are interested above all in the questions that wikipedians are asking, beyond those that researchers are asking. There is an old list of research questions and goals at http://meta.wikimedia.org/wiki/Wikimedia_Foundation_Research_Goals; these questions are about Wikimedia Foundation projects in general, though Wikipedia is of course included. Could you please review this list and update that page directly with any additional questions? Alternatively, you could reply to us directly, and we will update the list.
Another bank of questions we have identified is directed more towards academics and researchers: http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Wikidemia#Research_Questi.... We have asked the wiki-research-l subscribers to update that list. We will draw from both lists for our bank of research questions.
Thanks for your help.
Chitu Okoli, Concordia University, Montreal, Canada (http://chitu.okoli.org/professional/open-content/wikipedia-and-open-content....)
Arto Lanamäki, University of Agder, Kristiansand, Norway
Mohamad Mehdi, Concordia University, Montreal, Canada
Mostafa Mesgari, Concordia University, Montreal, Canada
TL;DR wall of text amirite?
Sad to say, I only read about a quarter of it before I gave up too.
Sent from my Droid2
Elias Friedman A.S., CCEMT-P
אליהו מתתיהו בן צבי
elipongo@gmail.com
On Mar 15, 2011 12:25 AM, "I Love Plankton" iloveplankton@gmail.com wrote:
TL;DR wall of text amirite?
I read it several times before figuring out (maybe?) that this is a request to update the list of questions at [[WP:WD]]. That page looks rather...well, "outdated" is an understatement...
However, it would seem to me that a researcher would generate questions to research by looking at the discussion sections of recent papers on the topic of Wikipedia. Usually in the discussion section, the author mentions something like "more research needs to be done on topic x."
God bless, Bob
On 3/14/2011 11:28 PM, Elias Friedman wrote:
Sad to say, I only read about a quarter of it before I gave up too.
Now that one I understood. It was short, clear, and concise. Thank you.
On Tue, Mar 15, 2011 at 8:12 AM, Bob the Wikipedian <bobthewikipedian@gmail.com> wrote:
I read it several times before figuring out (maybe?) that this is a request that update the list of questions at [[WP:WD]]. That page looks rather...well, "outdated" is an understatement...
Hi everybody,
I miss in Wikipedia a page like a book page in an old-style lexicon. In the past, I would look something up in a printed lexicon, notice an interesting image on the same page, and end up reading a completely different article about that image.
Now we can do this in Wikipedia too. I wrote a Perl script which scans the dumps of a language edition and sorts the titles. Another script gets, via the API, the first paragraph and the first image of all articles on one page. The result is presented as a multi-column page in many browsers (Firefox, Safari, Chromium). At the moment, 36 languages are supported.
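For illustration, the per-article half of this idea can be sketched against the MediaWiki web API. This is a Python sketch, not Stefan's actual Perl code, and it assumes the TextExtracts and PageImages extensions (`prop=extracts`, `prop=pageimages`) are available on the target wiki:

```python
# Sketch only: fetch the first paragraph and lead image of one article via
# the MediaWiki API. Assumes the TextExtracts and PageImages extensions
# are installed on the target wiki (an assumption, not a given everywhere).
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def first_paragraph(extract):
    """Keep only the first paragraph of a plain-text extract."""
    return extract.split("\n", 1)[0].strip()

def article_snippet(title):
    """Return (first paragraph, thumbnail URL or None) for one article."""
    params = {
        "action": "query",
        "titles": title,
        "prop": "extracts|pageimages",
        "exintro": 1,        # intro section only
        "explaintext": 1,    # plain text instead of HTML
        "piprop": "thumbnail",
        "pithumbsize": 200,
        "format": "json",
    }
    url = API + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        page = next(iter(json.load(resp)["query"]["pages"].values()))
    return (first_paragraph(page.get("extract", "")),
            page.get("thumbnail", {}).get("source"))
```

The `article_snippet` helper makes one network round trip per title, so a real book-page generator would batch requests (the API accepts multiple titles joined with `|`).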
Project page with screens http://de.wikipedia.org/wiki/Benutzer:Stefan_K%C3%BChn/The_Book
Homepage of "The Book" http://toolserver.org/~sk/cgi-bin/book/book.cgi
Maybe a better programmer can implement something like this in the MediaWiki software.
Stefan
I was a bit confused at first, but the screenshots you posted at your user subpage helped. It would appear this is a script that serves as a visual index with previews, allowing the user to view the lead image and first paragraph of all articles in alphabetical order.
Stefan, my only question right now is this: it is very nifty, but does it have any practical use? I can't think of any.
Gott segnet, Bob
Wikipedia-l mailing list Wikipedia-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikipedia-l
I love this idea, and Stefan's proof of concept. I agree that we need this sort of way of viewing pages. If there were an automatic way to do this, it would be a most welcome variation on the current Category view, for instance.
It would also be helpful for people looking for a brief bit of information about every topic for print.
Sam.
Looks like there was a slight duplication of efforts. http://toolserver.org/~dschwen/synopsis/?l=en&t=Synopsis I developed the synopsis script on the toolserver for the WikiMiniAtlas, where it allows a quick preview of the articles on the map. I found the task not entirely trivial. At first I tried fetching the raw wikitext and stripping the markup. However, templates (some Wikipedias use templates to insert population numbers!), comments, references, and links make this tedious. If you want to retain basic formatting such as bold/italic, it becomes a near-impossible task. So I switched to fetching action=render output and using PHP's DOMDocument to extract the first paragraph (minus tables, minus short paragraph elements that contain coordinates, and with internal links to the reference section removed, etc.). It works quite well.
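Daniel's extraction strategy (parse the rendered HTML rather than the wikitext, skip paragraphs inside tables, and drop very short paragraphs such as coordinate stubs) can be sketched in Python instead of PHP's DOMDocument. The 40-character threshold below is an arbitrary stand-in for his short-paragraph filter, not his actual value:

```python
# Sketch of the described approach: scan rendered HTML and keep the first
# substantial top-level <p>, skipping anything inside tables and any
# paragraph shorter than min_len characters (e.g. coordinate stubs).
from html.parser import HTMLParser

class FirstParagraphFinder(HTMLParser):
    def __init__(self, min_len):
        super().__init__()
        self.min_len = min_len
        self.table_depth = 0   # >0 means we are inside an infobox/table
        self.in_p = False
        self.buf = []
        self.result = None

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.table_depth += 1
        elif tag == "p" and self.table_depth == 0 and self.result is None:
            self.in_p = True
            self.buf = []

    def handle_endtag(self, tag):
        if tag == "table" and self.table_depth:
            self.table_depth -= 1
        elif tag == "p" and self.in_p:
            self.in_p = False
            text = "".join(self.buf).strip()
            if len(text) >= self.min_len:  # skip coordinate stubs etc.
                self.result = text

    def handle_data(self, data):
        if self.in_p:
            self.buf.append(data)

def lead_paragraph(html, min_len=40):
    """Return the first substantial paragraph of rendered HTML, or None."""
    finder = FirstParagraphFinder(min_len)
    finder.feed(html)
    return finder.result
```

Feeding it the HTML returned by `index.php?action=render` (or the API's `action=parse`) should yield the article's first real paragraph.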
I've unsubscribed myself twice from this stupid list!!!!!!!
Why am I still receiving messages!!!?!?!?
Some people are stupid.
Up yours!!!
calm down children.
so why I'm still in the list?
Did you use the form at:
https://lists.wikimedia.org/mailman/listinfo/wikipedia-l
Which is linked to at the bottom of every post?
Sent from my Droid2
Elias Friedman A.S., CCEMT-P
אליהו מתתיהו בן צבי
elipongo@gmail.com
On Mar 28, 2011 12:43 AM, "Yordan Andreevski" dakata@gmail.com wrote:
Speaking for myself, you haven't exactly found an endearing way to make me care why you're still on the list. If you want help, shouting at the other subscribers isn't the most effective way to get it.
-Pete
Pete Forsyth peteforsyth@gmail.com 503-383-9454 mobile
Stop bitching, this has nothing to do with the topic. Honestly, if you're going to insult and yell at everyone using blind idiot punctuation, you're better off taking a long walk off a short pier. That'll *definitely* get you to stop receiving messages.