[[September 2006 Thailand coup attempt]]
Sorry to ask the obvious rhetorical questions here:
- Wasn't WikiNews designed to be for news?
- Why is [[Wikipedia:Recentism]] only an essay?
- Is it a good idea to write an encyclopedia article based on news tickers and CNN?
Regards, [[User:Pjacobi]]
On 9/20/06, Peter Jacobi peter_jacobi@gmx.net wrote:
[[September 2006 Thailand coup attempt]]
Sorry to ask the obvious rhetorical questions here:
- Wasn't WikiNews designed to be for news?
- Why is [[Wikipedia:Recentism]] only an essay?
- Is it a good idea to write an encyclopedia article based on news tickers and CNN?
It's been this way for a while
http://en.wikipedia.org/wiki/11_March_2004_Madrid_train_bombings
http://en.wikipedia.org/wiki/2004_Indian_Ocean_earthquake
http://en.wikipedia.org/wiki/7_July_2005_London_bombings
-Andrew (User:Fuzheado)
"Andrew Lih" andrew.lih@gmail.com wrote:
It's been this way for a while
http://en.wikipedia.org/wiki/11_March_2004_Madrid_train_bombings
Yep, but this was before "forking" WikiNews or at the very beginning of that project. Shouldn't we draw a line now and leave the news to WikiNews, only remerging into Wikipedia once importance is clearer and deeper analysis is available from secondary sources?
[[User:Pjacobi]]
On 9/20/06, Peter Jacobi peter_jacobi@gmx.net wrote:
"Andrew Lih" andrew.lih@gmail.com wrote:
It's been this way for a while
http://en.wikipedia.org/wiki/11_March_2004_Madrid_train_bombings
Yep, but this was before "forking" WikiNews or at the very beginning of that project. Shouldn't we draw a line now and leave the news to WikiNews, only remerging into Wikipedia once importance is clearer and deeper analysis is available from secondary sources?
Wikinews was never considered a fork. It's a different beast altogether:
- Original reporting vs. WP:NOR
- Deadline oriented vs. no deadline in Wikipedia
- Snapshot of situation vs. constantly evolving story
There's no reason at all why one should eliminate the other.
-Andrew (User:Fuzheado)
Peter Jacobi wrote:
"Andrew Lih" andrew.lih@gmail.com wrote:
It's been this way for a while
http://en.wikipedia.org/wiki/11_March_2004_Madrid_train_bombings
Yep, but this was before "forking" WikiNews or at the very beginning of that project. Shouldn't we draw a line now and leave the news to WikiNews, only remerging into Wikipedia once importance is clearer and deeper analysis is available from secondary sources?
[[User:Pjacobi]]
Just because something happened recently doesn't mean it should be included in Wikipedia. Granted, Wikipedia will generally have less information on a current event than Wikinews (given WP's greater emphasis on verifiability) but that doesn't mean that no article is better than a short article. Besides which, these sorts of articles are generally plastered with WN links to provide the more up-to-date, less verifiable information.
Cynical
David Alexander Russell wrote:
Just because something happened recently doesn't mean it should be included in Wikipedia.
Cynical
I actually meant to say that just because something happened recently doesn't mean it _shouldn't_ be included in Wikipedia. Damn typos.
On 9/20/06, David Alexander Russell webmaster@davidarussell.co.uk wrote:
Peter Jacobi wrote:
"Andrew Lih" andrew.lih@gmail.com wrote:
It's been this way for a while
http://en.wikipedia.org/wiki/11_March_2004_Madrid_train_bombings
Yep, but this was before "forking" WikiNews or at the very beginning of that project. Shouldn't we draw a line now and leave the news to WikiNews, only remerging into Wikipedia once importance is clearer and deeper analysis is available from secondary sources?
[[User:Pjacobi]]
Just because something happened recently doesn't mean it *shouldn't* be included in Wikipedia. Granted, Wikipedia will generally have less information on a current event than Wikinews (given WP's greater emphasis on verifiability) but that doesn't mean that no article is better than a short article.
Typically a Wikipedia article has much *more* information than a Wikinews article, since the editing "workforce" on Wikipedia is larger, there is no "deadline" in Wikipedia, and the article is constantly morphing. Verifiability doesn't mean the WP article will be shorter either. If you provide an external link to a news source, then - boom, it's verified. And it can be included in the article.
Besides which, these sorts of articles are generally plastered with WN links to provide the more up-to-date, less verifiable information.
Yes, there will often be a WN article linked to from the relevant WP article. But the WP article will also have tons of links to external news sources, which are tough for Wikinews to keep up with.
FYI, German Wikipedia is more in line with the idea that "not every news event deserves an article." They are much more selective and are not shy in telling you so. :)
-Andrew (User:Fuzheado)
"Andrew Lih" andrew.lih@gmail.com wrote:
FYI, German Wikipedia is more in line with the idea that "not every news event deserves an article." They are much more selective and are not shy in telling you so. :)
Yeah. I've recently managed to get a "Wikipedia is not a News Portal" into the German equivalent of [[WP:WWIN]]. But this POV seems to collide with the consensus at enwiki.
[[User:Pjacobi]]
Peter Jacobi wrote:
"Andrew Lih" andrew.lih@gmail.com wrote:
FYI, German Wikipedia is more in line with the idea that "not every news event deserves an article." They are much more selective and are not shy in telling you so. :)
Yeah. I've recently managed to get a "Wikipedia is not a News Portal" into the German equivalent of [[WP:WWIN]]. But this POV seems to collide with the consensus at enwiki.
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
-Mark
On 9/20/06, Delirium delirium@hackish.org wrote:
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
-Mark
Problem is that a lot of the data that would be useful in answering your question is stored on microfilm and there isn't really a quick way to scan that.
On 20/09/06, geni geniice@gmail.com wrote:
On 9/20/06, Delirium delirium@hackish.org wrote:
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
Problem is that a lot of the data that would be useful in answering your question is stored on microfilm and there isn't really a quick way to scan that.
It'll come, it'll come. Dumping everything onto disk scans in the first instance. Just under two years doubling time. You won't be *able* to buy a disk smaller than a petabyte in twenty years.
(Googling "hard disk" "moore's law" leads me to [[Moore's Law]], which points me to [[Kryder's Law]], which is a useful study in hideous self-reference and Wikipedia editorial decisions forming neologisms. I'm so glad [[analogue disc record]] was moved before achieving any currency anywhere else.)
- d.
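As a rough sanity check on the doubling arithmetic above, here is a minimal sketch. The 2006 baseline capacity and the exact doubling period are assumptions for illustration, not figures from the thread:

```python
# Sketch of the disk-growth arithmetic mentioned above: with a
# two-year doubling time, capacity grows 2**(years/2)-fold.
def growth_factor(years, doubling_years=2.0):
    return 2 ** (years / doubling_years)

factor = growth_factor(20)           # 2**10 = 1024x over twenty years
baseline_gb = 750                    # assumed 2006 high-end consumer disk
print(factor, baseline_gb * factor)  # 1024x -> 768,000 GB (~0.75 PB)
```

With a doubling time slightly under two years (say 1.8), the factor comes out above 2000x, which is roughly where the "petabyte in twenty years" figure comes from.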
On 9/21/06, David Gerard dgerard@gmail.com wrote:
It'll come, it'll come. Dumping everything onto disk scans in the first instance. Just under two years doubling time. You won't be *able* to buy a disk smaller than a petabyte in twenty years.
Who is going to do the physical scanning?
Another issue is OS maps over 50 years old. All public domain, but a pain to scan. It would be nice if the UK branch of Wikimedia could do something, but I doubt we could get hold of the equipment.
On 21/09/06, geni geniice@gmail.com wrote:
On 9/21/06, David Gerard dgerard@gmail.com wrote:
It'll come, it'll come. Dumping everything onto disk scans in the first instance. Just under two years doubling time. You won't be *able* to buy a disk smaller than a petabyte in twenty years.
Who is going to do the physical scanning?
Some dedicated fools, I expect.
Another issue is OS maps over 50 years old. All public domain, but a pain to scan. It would be nice if the UK branch of Wikimedia could do something, but I doubt we could get hold of the equipment.
There *must* be efforts to this end. If not, it'd be an ideal project for Wikimedia UK to look at when it's operational.
[cc to wmuk-l]
- d.
David Gerard wrote:
On 21/09/06, geni geniice@gmail.com wrote:
On 9/21/06, David Gerard dgerard@gmail.com wrote:
It'll come, it'll come. Dumping everything onto disk scans in the first instance. Just under two years doubling time. You won't be *able* to buy a disk smaller than a petabyte in twenty years.
Who is going to do the physical scanning?
Some dedicated fools, I expect.
Another issue is OS maps over 50 years old. All public domain, but a pain to scan. It would be nice if the UK branch of Wikimedia could do something, but I doubt we could get hold of the equipment.
There *must* be efforts to this end. If not, it'd be an ideal project for Wikimedia UK to look at when it's operational.
[cc to wmuk-l]
- d.
Here's an idea. I presume the main problem with scanning them is their size (ie they won't fit on an ordinary scanner). Why not just scan them in sections and then use 'photo stitching' software (the kind that's designed to turn a collection of photos into a panorama) to turn the scanned sections into a single image?
Cynical
On 9/23/06, David Alexander Russell webmaster@davidarussell.co.uk wrote:
Here's an idea. I presume the main problem with scanning them is their size (ie they won't fit on an ordinary scanner). Why not just scan them in sections and then use 'photo stitching' software (the kind that's designed to turn a collection of photos into a panorama) to turn the scanned sections into a single image?
Possible in theory, but it requires a lot of scans per map, which increases the required time a lot.
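To make "a lot of scans" concrete, here is a toy tile count for a single sheet. All dimensions are assumptions for illustration: a 35-inch square map on a letter-size flatbed, with an inch of overlap on every seam so the pieces can be aligned later:

```python
import math

# Toy estimate of flatbed scans needed per map sheet. Each tile must
# overlap its neighbour so stitching software can align the pieces.
def tiles_needed(sheet_in, bed_in, overlap_in=1.0):
    return math.ceil((sheet_in - overlap_in) / (bed_in - overlap_in))

across = tiles_needed(35, 8.5)   # tiles along the 8.5-inch bed axis
down = tiles_needed(35, 11)      # tiles along the 11-inch bed axis
print(across, down, across * down)  # 5 x 4 = 20 scans per sheet
```

Twenty scans per sheet, before any stitching or quality control, is the kind of multiplier that makes a large map series a serious undertaking.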
geni wrote:
On 9/23/06, David Alexander Russell webmaster@davidarussell.co.uk wrote:
Here's an idea. I presume the main problem with scanning them is their size (ie they won't fit on an ordinary scanner). Why not just scan them in sections and then use 'photo stitching' software (the kind that's designed to turn a collection of photos into a panorama) to turn the scanned sections into a single image?
Possible in theory, but it requires a lot of scans per map, which increases the required time a lot.
Unless someone has access to a mega-size scanner that takes a whole sheet at once. Such beasts are already used by companies who reproduce building construction plans.
Ec
Ray Saintonge wrote:
geni wrote:
On 9/23/06, David Alexander Russell webmaster@davidarussell.co.uk wrote:
Here's an idea. I presume the main problem with scanning them is their size (ie they won't fit on an ordinary scanner). Why not just scan them in sections and then use 'photo stitching' software (the kind that's designed to turn a collection of photos into a panorama) to turn the scanned sections into a single image?
Possible in theory, but it requires a lot of scans per map, which increases the required time a lot.
Unless someone has access to a mega-size scanner that takes a whole sheet at once. Such beasts are already used by companies who reproduce building construction plans.
Ec
The approach used by Google and the Internet Archive is to chuck them into a right-angle frame with glass sheets and cameras.
On 9/25/06, Alphax (Wikipedia email) alphasigmax@gmail.com wrote:
The approach used by Google and the Internet Archive is to chuck them into a right-angle frame with glass sheets and cameras.
36-inch scanners (OS maps are 35) are around.
On 9/25/06, Alphax (Wikipedia email) alphasigmax@gmail.com wrote:
Ray Saintonge wrote:
geni wrote:
On 9/23/06, David Alexander Russell webmaster@davidarussell.co.uk wrote:
Here's an idea. I presume the main problem with scanning them is their size (ie they won't fit on an ordinary scanner). Why not just scan them in sections and then use 'photo stitching' software (the kind that's designed to turn a collection of photos into a panorama) to turn the scanned sections into a single image?
Possible in theory, but it requires a lot of scans per map, which increases the required time a lot.
Unless someone has access to a mega-size scanner that takes a whole sheet at once. Such beasts are already used by companies who reproduce building construction plans.
Ec
The approach used by Google and the Internet Archive is to chuck them into a right-angle frame with glass sheets and cameras.
That's a workable setup for books where you need to avoid cracking the spine, but for large formats what you really want is drum scanning. Would be nice to organize an effort to have institutions donate some time on their scanners for Wikipedians to use.
-Andrew (User:Fuzheado)
On 9/23/06, David Alexander Russell webmaster@davidarussell.co.uk wrote:
Here's an idea. I presume the main problem with scanning them is their size (ie they won't fit on an ordinary scanner). Why not just scan them in sections and then use 'photo stitching' software (the kind that's designed to turn a collection of photos into a panorama) to turn the scanned sections into a single image?
It's time consuming and difficult. It is very easy to screw up in scanning something like this, so all of the parts will fit except for one (i.e. if something is not scanned at exactly the same elevation or gets skewed slightly in some other way).
It can be done. It's just not very easy. And it's certainly not fun. I wouldn't do it for free.
In the end I suspect that most digitization of such things will be done either by companies which will try to sell access to them (i.e. ProQuest) or as not-for-profit grants (i.e. a university deciding to put its PD library online, which looks good on a yearly report even if it doesn't create any revenue). This sort of dull work is not the sort of thing that too many volunteers would be interested in doing over time, IMO. (Maybe I'm wrong! Hopefully!)
FF
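The alignment problem described above is exactly what stitching software has to solve. Here is a toy one-dimensional version of the overlap search, with made-up pixel values; real stitchers work in two dimensions and also estimate rotation and skew, which is why a slightly skewed scan defeats them:

```python
# Toy version of what stitching software does: slide one scanned strip
# along another and keep the offset whose overlap matches best
# (smallest mean squared difference).
def best_overlap_offset(strip_a, strip_b, min_overlap=3):
    best_score, best_offset = float("inf"), 0
    for offset in range(len(strip_a) - min_overlap + 1):
        pairs = list(zip(strip_a[offset:], strip_b))
        score = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if score < best_score:
            best_score, best_offset = score, offset
    return best_offset

strip_a = [10, 20, 30, 40, 50, 60, 70, 80]    # first scan
strip_b = [50, 60, 70, 80, 90, 100]           # second scan, overlaps by 4
print(best_overlap_offset(strip_a, strip_b))  # 4: the strips line up there
```

If the second strip were scanned at a slightly different scale or angle, no integer offset would give a clean match, which is the frustration the thread describes.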
On 23/09/06, Fastfission fastfission@gmail.com wrote:
It's time consuming and difficult. It is very easy to screw up in scanning something like this, so all of the parts will fit except for one (i.e. if something is not scanned at exactly the same elevation or gets skewed slightly in some other way). It can be done. It's just not very easy. And it's certainly not fun. I wouldn't do it for free. In the end I suspect that most digitization of such things will be done either by companies which will try to sell access to them (i.e. ProQuest) or as not-for-profit grants (i.e. a university deciding to put its PD library online, which looks good on a yearly report even if it doesn't create any revenue). This sort of dull work is not the sort of thing that too many volunteers would be interested in doing over time, IMO. (Maybe I'm wrong! Hopefully!)
We'd try to get the best scans we could. But I suspect that if you put up the raw scans and any of it could be turned to an actual use, volunteers to do the tedious labour would turn up. I'm amazed at the tedious jobs people do for Wikimedia projects in the cause of getting information into the public domain.
See also discussion on wikimediauk-l (to which this is cc'd).
- d.
On 9/23/06, David Gerard dgerard@gmail.com wrote:
We'd try to get the best scans we could. But I suspect that if you put up the raw scans and any of it could be turned to an actual use, volunteers to do the tedious labour would turn up. I'm amazed at the tedious jobs people do for Wikimedia projects in the cause of getting information into the public domain.
The scanning itself is pretty tedious labor. If the map is of any size it means you have to fold it to get the middle sections, which increases the likelihood that things won't line up later (slightly different angles, etc., make for very unsightly alignment problems).
Anyway, I've no doubt of the power of Wikipedians to volunteer for boring tasks -- I've done some pretty dull ones in my day for the project (like extracting and cropping and formatting hi-res images of all of the major plates in Vesalius' [[De Humanis Corporis Fabrica]], which is now at Commons), but having tried to stitch together maps and other large images from scans in the past, I have to admit I'm pretty skeptical. It is not just boring work, it is boring work where more often than not one just feels frustrated by it. In my experience.
But again... I'm thrilled if people want to do it! :-)
FF
On 9/22/06, David Gerard dgerard@gmail.com wrote:
On 21/09/06, geni geniice@gmail.com wrote:
On 9/21/06, David Gerard dgerard@gmail.com wrote:
It'll come, it'll come. Dumping everything onto disk scans in the first instance. Just under two years doubling time. You won't be *able* to buy a disk smaller than a petabyte in twenty years.
Who is going to do the physical scanning?
Some dedicated fools, I expect.
A while back I heard how Amazon.com scans entire books.
1. Some books are scanned in North America using, IIRC (foggy memory), a homemade page-turning apparatus that somehow works together with a scanner. It is fully automatic.
2. Some books are scanned in India. It is cheap to hire manual workers there, so they hire people to run the scanners.
It seems that most of the Wikimedia Foundation's budget goes towards buying hardware, but I wonder if it'd be practical for us to hire people to do either of these things.
Cheers, [[User:Unforgettableid]]
[[User:Unforgettableid]] wrote:
On 9/22/06, David Gerard dgerard@gmail.com wrote:
On 21/09/06, geni geniice@gmail.com wrote:
On 9/21/06, David Gerard dgerard@gmail.com wrote:
It'll come, it'll come. Dumping everything onto disk scans in the first instance. Just under two years doubling time. You won't be *able* to buy a disk smaller than a petabyte in twenty years.
Who is going to do the physical scanning?
Some dedicated fools, I expect.
A while back I heard how Amazon.com scans entire books.
- Some books are scanned in North America using, IIRC (foggy memory), a homemade page-turning apparatus that somehow works together with a scanner. It is fully automatic.
- Some books are scanned in India. It is cheap to hire manual workers there, so they hire people to run the scanners.
It seems that most of the Wikimedia Foundation's budget goes towards buying hardware, but I wonder if it'd be practical for us to hire people to do either of these things.
We would still need the hardware. Then there's the delicate question of who decides exactly what this employee would scan.
Ec
On 9/25/06, [[User:Unforgettableid]] unforgettableid@gmail.com wrote:
A while back I heard how Amazon.com scans entire books.
- Some books are scanned in North America using, IIRC (foggy memory), a homemade page-turning apparatus that somehow works together with a scanner. It is fully automatic.
For non-rare books, they actually feed the entire book to a machine that cuts off the spine of the book and loads the book page by page. That way they can very rapidly scan large amounts of text.
- Some books are scanned in India. It is cheap to hire manual workers there, so they hire people to run the scanners.
It seems that most of the Wikimedia Foundation's budget goes towards buying hardware, but I wonder if it'd be practical for us to hire people to do either of these things.
WMF is strapped for cash, that money is needed elsewhere. The servers are very expensive.
Besides, WP has always been about volunteers, for virtually every aspect of its operation and for all of its content creation. It shouldn't be any different for this, even though this is a lot harder.
--Oskar
On 9/20/06, geni geniice@gmail.com wrote:
On 9/20/06, Delirium delirium@hackish.org wrote:
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
Problem is that a lot of the data that would be useful in answering your question is stored on microfilm and there isn't really a quick way to scan that.
Actually ProQuest has massive microfilm newspaper databases which are fulltext searchable that would fit the bill (the entire contents of the NY Times, Wash Post, LA Times, Chicago Trib, etc. which go back to the 1840s in some cases) as well as the American Periodicals Series which goes back to 1740. It's out there, though it helps to have an institutional account to get access to it.
FF
On 21/09/06, Fastfission fastfission@gmail.com wrote:
On 9/20/06, geni geniice@gmail.com wrote:
On 9/20/06, Delirium delirium@hackish.org wrote:
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
Problem is that a lot of the data that would be useful in answering your question is stored on microfilm and there isn't really a quick way to scan that.
Actually ProQuest has massive microfilm newspaper databases which are fulltext searchable that would fit the bill (the entire contents of the NY Times, Wash Post, LA Times, Chicago Trib, etc. which go back to the 1840s in some cases) as well as the American Periodicals Series which goes back to 1740. It's out there, though it helps to have an institutional account to get access to it.
FF
On 21/09/06, David Gerard dgerard@gmail.com wrote:
On 21/09/06, Fastfission fastfission@gmail.com wrote:
Sorry, this was me hitting 'send' instead of 'archive' ...
On 21/09/06, Fastfission fastfission@gmail.com wrote:
On 9/20/06, geni geniice@gmail.com wrote:
On 9/20/06, Delirium delirium@hackish.org wrote:
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
Problem is that a lot of the data that would be useful in answering your question is stored on microfilm and there isn't really a quick way to scan that.
Actually ProQuest has massive microfilm newspaper databases which are fulltext searchable that would fit the bill (the entire contents of the NY Times, Wash Post, LA Times, Chicago Trib, etc. which go back to the 1840s in some cases) as well as the American Periodicals Series which goes back to 1740. It's out there, though it helps to have an institutional account to get access to it.
Non-institutional UK users have the entire /Times/ archive (1790ish on) via local libraries, which is a similar situation. Goodness only knows what you can get if you *pay* for it.
(And then, of course, there's actual printed contemporary sources. I'm doing a lot of work at the moment by paraphrasing material in old almanacs, rewriting it to be slightly more comprehensible, slapping on a couple of contextual sentences at the beginning and end... and, bingo, a biographical stub on a contemporary politician, or a short article summarising some 1840s law, or the like. No microfilm required...)
geni wrote:
On 9/20/06, Delirium delirium@hackish.org wrote:
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
Problem is that a lot of the data that would be useful in answering your question is stored on microfilm and there isn't really a quick way to scan that.
This is a Wikisource function, but that doesn't make it easier. I have most of the first 20 years of McClure's Magazine. It was a monthly that became famous for muckraking journalism, and exposing the behaviour of big companies and government administration in the pre-WWI era. 1,200 pages per year for 20 years gives 24,000 pages, and is a daunting task. Weeklies and dailies don't make things any easier.
Ec
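For scale, a back-of-the-envelope on the 24,000-page figure above. The throughput rate is an assumption for illustration, not a measured number:

```python
# Rough effort estimate for digitizing the McClure's run discussed
# above. Throughput is assumed; real rates vary with equipment.
pages = 1200 * 20             # 1,200 pages/year for 20 years
pages_per_hour = 100          # assumed manual flatbed rate
hours = pages / pages_per_hour
print(pages, hours)           # 24000 pages -> 240.0 hours of scanning
```

Even at that optimistic rate it is weeks of full-time work for one title, before proofreading, which is why the thread keeps circling back to who would actually do it.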
Ray Saintonge wrote:
geni wrote:
On 9/20/06, Delirium delirium@hackish.org wrote:
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
Problem is that a lot of the data that would be useful in answering your question is stored on microfilm and there isn't really a quick way to scan that.
This is a Wikisource function, but that doesn't make it easier. I have most of the first 20 years of McClure's Magazine. It was a monthly that became famous for muckraking journalism, and exposing the behaviour of big companies and government administration in the pre-WWI era. 1,200 pages per year for 20 years gives 24,000 pages, and is a daunting task. Weeklies and dailies don't make things any easier.
While it would certainly be nice to have it all scanned, I don't think it's necessary. We already cite lots of sources that aren't available on the internet---recently published books, journal articles, etc.---so I don't see why it would be a bigger problem that old news articles are only available in archives, on microfilm, or via digital subscription. Ain't nothin' wrong with citing sources that require a visit to a library to access.
-Mark
On 24/09/06, Delirium delirium@hackish.org wrote:
Ray Saintonge wrote:
This is a Wikisource function, but that doesn't make it easier. I have most of the first 20 years of McClure's Magazine. It was a monthly that became famous for muckraking journalism, and exposing the behaviour of big companies and government administration in the pre-WWI era. 1,200 pages per year for 20 years gives 24,000 pages, and is a daunting task. Weeklies and dailies don't make things any easier.
While it would certainly be nice to have it all scanned, I don't think it's necessary. We already cite lots of sources that aren't available on the internet---recently published books, journal articles, etc.---so I don't see why it would be a bigger problem that old news articles are only available in archives, on microfilm, or via digital subscription. Ain't nothin' wrong with citing sources that require a visit to a library to access.
Yeah. It'd just be *nice* to have them scanned if they're PD. (Or privately scanned if they're not.)
- d.
I've crossposted my response to the Foundation and Wikisource lists since it could interest people there.
Delirium wrote:
Ray Saintonge wrote:
geni wrote:
On 9/20/06, Delirium delirium@hackish.org wrote:
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
Problem is that a lot of the data that would be useful in answering your question is stored on microfilm and there isn't really a quick way to scan that.
This is a Wikisource function, but that doesn't make it easier. I have most of the first 20 years of McClure's Magazine. It was a monthly that became famous for muckraking journalism, and exposing the behaviour of big companies and government administration in the pre-WWI era. 1,200 pages per year for 20 years gives 24,000 pages, and is a daunting task. Weeklies and dailies don't make things any easier.
While it would certainly be nice to have it all scanned, I don't think it's necessary. We already cite lots of sources that aren't available on the internet---recently published books, journal articles, etc.---so I don't see why it would be a bigger problem that old news articles are only available in archives, on microfilm, or via digital subscription. Ain't nothin' wrong with citing sources that require a visit to a library to access.
This is certainly a fair comment. Of course the recent publications have copyright constraints that are a block to any kind of scanning. Certainly, for the sake of discussion I am limiting my comments to material where the public domain status is unquestioned. That's enough material to keep us busy.
Some of my old bound volumes of "McClure's", "Scientific American", "Popular Science", and other odd volumes have library markings and indications that they were discarded by some public or college library. I have no objection to people visiting libraries, but there's no guarantee that a nearby library will have the material sought. Project Gutenberg already includes 6 issues of "McClure's", a far from complete but substantial number of issues of "Scientific American" from when it was a weekly, and no "Popular Science". ("Popular Science" in the 19th century had far more in-depth articles than its present incarnation.) In general, I don't think we should be duplicating the efforts of PG; there's more than enough work for everybody to do.
Other important magazines like [[The Smart Set]], where H. L. Mencken wrote, are much more difficult to find. We do need to stay within the realm of the possible. Making information freely available is not a simple task; it will likely take the co-operation and co-ordination of many players who will each establish where they can work best. I would love to be able to create direct links from a WMF project to a specific spot in a book that has been digitized by another player without having to contend with a lot of proprietary restrictions being applied to public domain books.
The task is enormous.
Ec
On 9/20/06, Delirium delirium@hackish.org wrote:
Peter Jacobi wrote:
"Andrew Lih" andrew.lih@gmail.com wrote:
FYI, German Wikipedia is more in line with the idea that "not every news event deserves an article." They are much more selective and are not shy in telling you so. :)
Yeah. I've recently managed to get a "Wikipedia is not a News Portal" provision into the German equivalent of [[WP:WWIN]]. But this POV seems to collide with the consensus at enwiki.
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
-Mark
My personal opinion is that Wikinews is for stuff that is too recent to have reliable and verifiable sourcing. Wikipedia is for when the smoke has cleared and there is verifiable information. I think about how many times something has been reported by every single news outlet and then later retracted by every single news outlet because of misinformation. I'm thinking specifically of the mining accident a few months ago, where the newspapers ran the story about finding nearly all the miners alive when, that morning, as people were reading their papers, the TV news stations were reporting the exact opposite.
Carl
On 9/21/06, Carl Peterson carlopeterson@gmail.com wrote:
On 9/20/06, Delirium delirium@hackish.org wrote:
Peter Jacobi wrote:
"Andrew Lih" andrew.lih@gmail.com wrote:
FYI, German Wikipedia is more in line with the idea that "not every news event deserves an article." They are much more selective and are not shy in telling you so. :)
Yeah. I've recently managed to get a "Wikipedia is not a News Portal" provision into the German equivalent of [[WP:WWIN]]. But this POV seems to collide with the consensus at enwiki.
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
-Mark
My personal opinion is that Wikinews is for stuff that is too recent to have reliable and verifiable sourcing. Wikipedia is for when the smoke has cleared and there is verifiable information. I think about how many times something has been reported by every single news outlet and then later retracted by every single news outlet because of misinformation. I'm thinking specifically of the mining accident a few months ago, where the newspapers ran the story about finding nearly all the miners alive when, that morning, as people were reading their papers, the TV news stations were reporting the exact opposite.
At Wikimania 2006, I described this phenomenon - Wikipedia uniquely fills the gap between "the news" and the history books. It's an instantaneous cumulative view of the state of the world, given the best information at that point in time. Rather than shedding this function, we should be embracing and celebrating it.
Back in 1995, when I was teaching journalism, I pondered when a "rolling memory" system might be realized given the development of the Internet. That's why I was captivated by Wikipedia back in 2003. Wikipedia has accomplished this, whether by design or fluke. And it's been revolutionary.
I'm curious if there is a reasonable argument against Wikipedia serving this function, other than "encyclopedias are not news", which I would argue is old-style thinking (and something I've heard from more than one so-called "academic" committee).
-Andrew (User:Fuzheado)
On 9/21/06, Andrew Lih andrew.lih@gmail.com wrote:
At Wikimania 2006, I described this phenomenon - Wikipedia uniquely fills the gap between "the news" and the history books. It's an instantaneous cumulative view of the state of the world, given the best information at that point in time. Rather than shedding this function, we should be embracing and celebrating it.
An excellent way of viewing it, I agree wholeheartedly. This is an area, in my view, where the wiki model really shows its strengths compared with traditional formats. Rolling news coverage from the large outlets can offer detail and nuance that a Wikipedia article can't match, but there's really no equivalent to a Wikipedia article in terms of providing a comprehensive overview of a news story, including its historical context, as it happens.
Take [[2006 Thailand coup d'état]], already a pretty strong article with plenty of references. It covers the immediate events, the current state of affairs politically, and summarises the responses in Thailand and abroad. It's even beginning to synthesise a little analysis emerging from the media.
The articles on last year's London bombings are also good examples. [[2006 transatlantic aircraft plot]], within a few hours after the story broke, was just about the best source available.
"Stephen Bain" stephen.bain@gmail.com wrote:
The articles on last year's London bombings are also good examples. [[2006 transatlantic aircraft plot]], within a few hours after the story broke, was just about the best source available.
If a Wikipedia article is "the best source available", it has become original research.
[[User:Pjacobi]]
On 21/09/06, Peter Jacobi peter_jacobi@gmx.net wrote:
"Stephen Bain" stephen.bain@gmail.com wrote:
The articles on last year's London bombings are also good examples. [[2006 transatlantic aircraft plot]], within a few hours after the story broke, was just about the best source available.
If a Wikipedia article is "the best source available", it has become original research.
Or just the best-researched. Your assertion appears trivially false.
- d.
On 21/09/06, Peter Jacobi peter_jacobi@gmx.net wrote:
"Stephen Bain" stephen.bain@gmail.com wrote:
The articles on last year's London bombings are also good examples. [[2006 transatlantic aircraft plot]], within a few hours after the story broke, was just about the best source available.
If a Wikipedia article is "the best source available", it has become original research.
Not automatically. The problem is, when you have a big and fast-breaking story, you have everyone covering it. Each outlet brings n points of information to the table, but there are 2n points out there in total - whilst there's a lot of overlap, they don't all quote the same people, and they don't all have a reporter in this place or that. Because we're strip-mining the news sources rather than relying on our own primary work, we can quote all 2n.
As long as we avoid drawing inferences or creating synthesis by juxtaposition, we do a good job of keeping away from original research.
On 9/21/06, Peter Jacobi peter_jacobi@gmx.net wrote:
"Stephen Bain" stephen.bain@gmail.com wrote:
The articles on last year's London bombings are also good examples. [[2006 transatlantic aircraft plot]], within a few hours after the story broke, was just about the best source available.
If a Wikipedia article is "the best source available", it has become original research.
Note that I'm referring specifically to current events articles. While online media outlets and the print media do occasionally offer "in-depth" stories on current events, setting out background information and historical context, they're rarely (if ever) as comprehensive or as timely as Wikipedia's articles are.
Also note that I'm talking about sources that Wikipedia would be competing against. A personal account written by a veteran journo on the scene may well be the best thing to read about an event, but it's not something that's comparable to Wikipedia.
Stephen Bain wrote:
On 9/21/06, Peter Jacobi peter_jacobi@gmx.net wrote:
"Stephen Bain" stephen.bain@gmail.com wrote:
The articles on last year's London bombings are also good examples. [[2006 transatlantic aircraft plot]], within a few hours after the story broke, was just about the best source available.
If a Wikipedia article is "the best source available", it has become original research.
Note that I'm referring specifically to current events articles. While online media outlets and the print media do occasionally offer "in-depth" stories on current events, setting out background information and historical context, they're rarely (if ever) as comprehensive or as timely as Wikipedia's articles are.
Also note that I'm talking about sources that Wikipedia would be competing against. A personal account written by a veteran journo on the scene may well be the best thing to read about an event, but it's not something that's comparable to Wikipedia.
In avoiding original research we create a composite. With thousands of eyes watching the development of a topic from different angles, we are in a better position to represent multiple sources. A for-profit media outlet can't possibly afford to take into account whether its sources are independent of each other. In the interest of getting the information out quickly, or at least more quickly than its competitors, it needs to go with what it considers reliable sources, like the wire services. Once the story is public it can't unprint the newspapers. The opportunities for broadcast media are a little better. We (especially in Wikinews) are in a position where we can more easily adapt our report based on evolving information. Not only that, but our article histories are able to chronicle how the story developed. Background information can be tracked down almost on demand.
Ec
On 9/21/06, Ray Saintonge saintonge@telus.net wrote:
In avoiding original research we create a composite. With thousands of eyes watching the development of a topic from different angles, we are in a better position to represent multiple sources. A for-profit media outlet can't possibly afford to take into account whether its sources are independent of each other. In the interest of getting the information out quickly, or at least more quickly than its competitors, it needs to go with what it considers reliable sources, like the wire services. Once the story is public it can't unprint the newspapers. The opportunities for broadcast media are a little better. We (especially in Wikinews) are in a position where we can more easily adapt our report based on evolving information. Not only that, but our article histories are able to chronicle how the story developed. Background information can be tracked down almost on demand.
And frankly we can often tap our community of experts and quasi-experts in ways that journalists and editors of traditional media often seem not to. About 80% of the mainstream media could not understand or convey the difference between a "heavy water production plant" and a "heavy water reactor" when Iran announced the completion of the former (but was reported over and over again as the latter by many different news organizations, even sometimes in contradictory ways; they are, to say the least, not the same thing at all). That sort of thing gets noticed in about two seconds on Wikipedia and Wikinews, though. There are just more eyes looking at things which can have input into them. It doesn't catch everything by a long shot, but in many cases I trust the community of Wikipedians more than I trust one journalist and his/her editor(s) to ferret out nuanced details (or even not-so-nuanced details) where there is a high opportunity to make a sensationalistic point by blurring them.
FF
Peter Jacobi wrote:
"Stephen Bain" stephen.bain@gmail.com wrote:
The articles on last year's London bombings are also good examples. [[2006 transatlantic aircraft plot]], within a few hours after the story broke, was just about the best source available.
If a Wikipedia article is "the best source available", it has become original research.
I don't think that's true at all. The entire point of Wikipedia is to produce a good reference that didn't previously exist. We take the huge amount of information out there and produce high-quality, reliable, referenced summaries. If we do that, we become the best reference on a number of things---not because we're conducting original research, but because we've summarized widely-spread information into a readable summary all in one place.
-Mark
On 20/09/06, Delirium delirium@hackish.org wrote:
Peter Jacobi wrote:
Yeah. I've recently managed to get a "Wikipedia is not a News Portal" provision into the German equivalent of [[WP:WWIN]]. But this POV seems to collide with the consensus at enwiki.
I guess as a reader I don't see the benefit in *not* covering everything. I agree there is a slant towards more coverage of recent news events, but that's simply because they're easier to cover. The solution, IMO, is not to cover recent events less, but to cover older events more. I want to know the equivalent of this stuff for other time periods! Were there short-lived but at the time massively-covered events in the 1890s, equivalent to today's frenzies over child kidnappings? What about the thousands of political scandals, major and minor, that have at various times shortened governments' tenures, forced cabinet reshuffles, etc., etc.? It's all good info we're missing!
I concur. Good, well-written, well-referenced articles can be written about these things, are being written, and *should* be written. I would go so far as to say that de: is wrong to arbitrarily keep these things out, considering how well they can be done.
- d.
On 9/19/06, Peter Jacobi peter_jacobi@gmx.net wrote:
Shouldn't we draw a line now and leave the news to WikiNews, only remerging into Wikipedia once importance is clearer and deeper analysis is available from secondary sources?
Where exactly would one draw a line? And how could one draw one in a case like this that would not put huge amounts of beloved material on the wrong side of it? What advantage would that have over just letting things work out the way they always have so far? Is there any great reason to undertake a policy which would potentially:
1. lead to the removal of articles which had been heavily worked on by many contributors and contained copious footnotes and references,
2. create arbitrary barriers to the creation of new articles, and
3. acknowledge only an academia-centric POV over what is relevant or not?
FF
On 9/19/06, Peter Jacobi peter_jacobi@gmx.net wrote:
[[September 2006 Thailand coup attempt]]
Sorry to ask the obvious rhetorical questions here:
- Wasn't WikiNews designed to be for news?
Partly, but it was also meant to allow for a degree of original research.
- Why is [[Wikipedia:Recentism]] only an essay?
- Is it a good idea to write an encyclopedia article based on news tickers and CNN?
It would be more trouble than it was worth to stop people from doing so.
On 9/19/06, Peter Jacobi peter_jacobi@gmx.net wrote:
[[September 2006 Thailand coup attempt]]
Sorry to ask the obvious rhetorical questions here:
- Wasn't WikiNews designed to be for news?
- Why is [[Wikipedia:Recentism]] only an essay?
- Is it a good idea to write an encyclopedia article based on news tickers and CNN?
So, if we weren't to take the approach of having everything up to date, what should we have? How many days should we wait until we change the "Prime Minister" field in the infobox on Thailand? When should our article about New Orleans have started mentioning the fact it had drowned? Should we attempt to be an encyclopedia for today minus a month, or what?
On 9/19/06, Peter Jacobi peter_jacobi@gmx.net wrote:
- Wasn't WikiNews designed to be for news?
Wikinews usually does short little articles and is usually written in an entirely different style than a Wikipedia article. Having one centralized encyclopedia article allows constant updating rather than the spawning of a dozen new stories every time something changes.
- Is it a good idea to write an encyclopedia article based on news tickers and CNN?
Is it a bad idea? It'll evolve like everything else, perhaps even better than most other things because it will be highly attended to. It will no doubt need a little more synthesis from the vantage point of a few months' hindsight, but that's no reason not to have it operating concurrently.
FF