Many articles lack sources. I just happened to look at a biography of a Swedish journalist, born in 1968. He received some fine awards, and there is no doubt he is notable enough. But the article has no sources. Ten years ago, in 1999, for a journalist born in 1958, I would just look him up in the Swedish "Who's who", which was published every two years. But that title seems to have been discontinued, or if another issue is ever published, it will come at much longer intervals.
Such reference works go the same way as printed encyclopedias and dictionaries. For a young, ambitious journalist today, being in Facebook and Linkedin (and Wikipedia) counts just as much as being in Who's who did ten years ago.
Should I use the journalist's Linkedin profile as a source? I don't think that is acceptable. All sorts of lies could hide there. And users could remove themselves from Linkedin or edit their profile at any time. Old issues of Who's who don't change, they are a stable reference.
But the fact is, Who's who is/was also based on user-submitted autobiographies. The editors made a list of people who "should" be in there, and sent invitations with a form where the person could fill in details about family, education, career, publications, awards, and hobbies. I'm not sure how the editors fact checked the entries. Perhaps the risk of public shame was enough to keep people from lying.
Printed editions have another advantage for the historian. If a Swedish person "forgot" to mention in the 1945 edition that they received a German medal of honor in 1938, perhaps that information can be found in the 1939 edition. In this era of Linkedin and Facebook profiles, how can we ever dig up information from the past that a person wants to hide?
2009/8/8 Lars Aronsson lars@aronsson.se:
Should I use the journalist's Linkedin profile as a source? I don't think that is acceptable. All sorts of lies could hide there. And users could remove themselves from Linkedin or edit their profile at any time. Old issues of Who's who don't change, they are a stable reference.
Primary sources are useful, but only for certain things. Linkedin could never be the main source for an article.
But the fact is, Who's who is/was also based on user-submitted autobiographies. The editors made a list of people who "should" be in there, and sent invitations with a form where the person could fill in details about family, education, career, publications, awards, and hobbies. I'm not sure how the editors fact checked the entries. Perhaps the risk of public shame was enough to keep people from lying.
What makes you think people didn't lie?
Thomas Dalton wrote:
But the fact is, Who's who is/was also based on user-submitted autobiographies. The editors made a list of people who "should" be in there, and sent invitations with a form where the person could fill in details about family, education, career, publications, awards, and hobbies. I'm not sure how the editors fact checked the entries. Perhaps the risk of public shame was enough to keep people from lying.
What makes you think people didn't lie?
What makes you think they did?
I prefer to Assume Good Faith.
Ec
2009/8/9 Ray Saintonge saintonge@telus.net:
Thomas Dalton wrote:
But the fact is, Who's who is/was also based on user-submitted autobiographies. The editors made a list of people who "should" be in there, and sent invitations with a form where the person could fill in details about family, education, career, publications, awards, and hobbies. I'm not sure how the editors fact checked the entries. Perhaps the risk of public shame was enough to keep people from lying.
What makes you think people didn't lie?
What makes you think they did?
I prefer to Assume Good Faith.
I prefer not to make assumptions when I can help it.
Ray Saintonge wrote:
Thomas Dalton wrote:
But the fact is, Who's who is/was also based on user-submitted autobiographies. The editors made a list of people who "should" be in there, and sent invitations with a form where the person could fill in details about family, education, career, publications, awards, and hobbies. I'm not sure how the editors fact checked the entries. Perhaps the risk of public shame was enough to keep people from lying.
What makes you think people didn't lie?
What makes you think they did?
Because people tend to do exactly this sort of thing in such circumstances -- tone up the positives, tone down the negatives. I was once translating the entries for the equivalent of the biographical dictionaries mentioned here. Another example is how practically everybody's biography in the post-Soviet states of the 1990s suddenly lost any relation to the Communist party.
Yury Tarasievich wrote:
Ray Saintonge wrote:
Thomas Dalton wrote:
But the fact is, Who's who is/was also based on user-submitted autobiographies. The editors made a list of people who "should" be in there, and sent invitations with a form where the person could fill in details about family, education, career, publications, awards, and hobbies. I'm not sure how the editors fact checked the entries. Perhaps the risk of public shame was enough to keep people from lying.
What makes you think people didn't lie?
What makes you think they did?
Because people tend to do exactly this sort of thing in such circumstances -- tone up the positives, tone down the negatives. I was once translating the entries for the equivalent of the biographical dictionaries mentioned here. Another example is how practically everybody's biography in the post-Soviet states of the 1990s suddenly lost any relation to the Communist party.
Failing to tell the *whole* truth by selective omissions is not the same as a lie, which would be to make claims that are knowingly false. If the 1985 and 1995 editions of a biographical dictionary treat a person's Communist Party association differently, that doesn't change the fact that both of those editions were in fact published. You can compare the two editions and note any differences.
Sure, *some* people will tend to do this sort of thing, but that is not the same as accusing all biographies there of being full of lies.
Ec
2009/8/9 Ray Saintonge saintonge@telus.net:
Failing to tell the *whole* truth by selective omissions is not the same as a lie, which would be to make claims that are knowingly false. If the 1985 and 1995 editions of a biographical dictionary treat a person's Communist Party association differently, that doesn't change the fact that both of those editions were in fact published. You can compare the two editions and note any differences.
A "lie of omission" is often considered to be a lie, as the name suggests.
Sure, *some* people will tend to do this sort of thing, but that is not the same as accusing all biographies there of being full of lies.
Nobody made any such accusation.
Thomas Dalton wrote:
2009/8/9 Ray Saintonge saintonge@telus.net:
Failing to tell the *whole* truth by selective omissions is not the same as a lie, which would be to make claims that are knowingly false. If the 1985 and 1995 editions of a biographical dictionary treat a person's Communist Party association differently, that doesn't change the fact that both of those editions were in fact published. You can compare the two editions and note any differences.
A "lie of omission" is often considered to be a lie, as the name suggests.
That term is one of your own invention.
Sure, *some* people will tend to do this sort of thing, but that is not the same as accusing all biographies there of being full of lies.
Nobody made any such accusation.
Not directly, only by innuendo.
Ec
On Mon, Aug 10, 2009 at 3:08 AM, Ray Saintonge saintonge@telus.net wrote:
A "lie of omission" is often considered to be a lie, as the name suggests.
That term is one of your own invention.
Lie of omission is his invention?
1m ghits http://www.google.com/search?num=100&q=%22lie%20of%20omission%22 and 612 book hits http://books.google.com/books?num=100&q=%22lie of omission" and [[Lie#Types of lies]] (not to mention the multiple usages in our other articles: https://secure.wikimedia.org/wikipedia/en/w/index.php?title=Special%3ASearch...)
all say to me that lie of omission is exactly what I thought it was: a common English phrase. Where is the invention here?
-- gwern
I believe the question here was not who coined the phrase "lie of omission", but rather who brought it into the discussion.
/Lennart
Lennart Guldbrandsson, chair of Wikimedia Sverige http://se.wikimedia.org/wiki/Huvudsida
Tfn: 031 - 12 50 48 Mobil: 070 - 207 80 05 Epost: l_guldbrandsson@hotmail.com Användarsida: http://sv.wikipedia.org/wiki/Anv%C3%A4ndare:Hannibal Blogg: http://mrchapel.wordpress.com/
I'm not quite sure that's the case. Ray said "That term is one of your own invention" which seemed to convey a belief that it was a new usage made up by Thomas Dalton.
Mark
skype: node.ue
Thomas Dalton wrote:
Primary sources are useful, but only for certain things. Linkedin could never be the main source for an article.
I think everybody agrees. But it is still interesting to ask: How would we cite information found in Linkedin? Via some web archive service? And have there been any widely reported cases of Linkedin fraud, where somebody listed a PhD title that they really didn't have? Or is that something very common?
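One way this could work (a sketch only -- it assumes the Internet Archive's Wayback "availability" lookup, and the profile address below is a made-up placeholder) would be to cite a dated archived snapshot instead of the live profile:

    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url, timestamp="20090801"):
        # Ask the Wayback availability endpoint for the archived copy
        # closest to the given date (YYYYMMDD).
        query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
        with urllib.request.urlopen(
                "https://archive.org/wayback/available?" + query) as resp:
            data = json.load(resp)
        closest = data.get("archived_snapshots", {}).get("closest")
        if closest and closest.get("available"):
            return closest["url"], closest["timestamp"]
        return None  # nothing archived yet; only the live page exists

    # Hypothetical profile address, for illustration only:
    print(closest_snapshot("http://www.linkedin.com/in/some-journalist"))

A citation could then point at the snapshot URL, which does not change when the live profile is edited or removed.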
What makes you think people didn't lie?
I can only speak for the Swedish Who's who ("Vem är det"), which has been published 45 times between 1912 and 1999 (roughly every second year), with the 46th edition in 2007 (an 8 year gap and change of publisher, it was discontinued and then revived). I haven't heard of any cases of fraud. Criticism was leveled against the 2007 edition because it lacked several business leaders and the prime minister, since people were allowed to opt out.
The Norwegian ("Hvem er Hvem") has been published only 15 times between 1912 and 2008. The most recent edition only listed 1000 people, but editions 1-14 listed between 3200 and 4900 people.
The Danish ("Kraks Blå Bog") has been published every year since 1910, with recent editions listing some 8000 people, meaning that one in every 700 citizens (5.5 million Danes) is listed. Among these three countries, the Danes have the best reference works (closest to Germany), but the smallest version of Wikipedia.
It was very interesting to hear about the Russian version, and its problems with post-Soviet denial.
What I'm coming to is that Wikipedia might have to adopt the method of sending out forms to select people, asking for their biographic details (or for verification or denial of what's already in the Wikipedia article). That doesn't mean we should trust such autobiographic information blindly, but allow this input in a controlled form to make Wikipedia more complete without encouraging the uncontrolled editing of your own article. Such a suggestion of course raises many questions, e.g.:
* Who should send out the forms? How do we introduce ourselves? How do we explain that all of Wikipedia's rules still apply (no, you can't opt out; no, I can't edit in your favour), to people who never thought of editing Wikipedia, and might have no idea how it works?
* How should the received forms be stored and referenced?
* If we discover false claims or grave omissions in the received forms, how do we handle the next contact with that person?
* Should the input perhaps be handled as interviews for Wikinews? Somebody can do a detailed interview for Wikinews, and then Wikipedia can cite (parts of) that interview. Does that scale? We would have to explain to each person what Wikinews is, but perhaps "an interview" is easier to understand than "a form from Wikipedia" (the latter sounds like "you have won a lottery from Microsoft", the typical spam scam).
In the case of Who's who, it's of course the editors employed by the publisher who send out the forms asking for details. This only highlights how completely different Wikipedia is.
It would be interesting to hear if anybody has tried something like this already, perhaps within a limited WikiProject.
I suggest that if we send out queries, we do so personally, not in the name of Wikipedia. Someone might, for example, say: "I think your accomplishments in your line of work are suitable for an encyclopedia article--could you please confirm some of the basics beyond what is on your web site, and tell me if you know of any newspaper or magazine articles that have been written about your work." This is more likely to avoid the tendency to spam (or over-modesty) that results from actually expecting people to write about themselves.
We could assist this perhaps by having an explicit input form and standardized layout for bio articles in various fields--essentially this would be an extension of the infoboxes which already tend to duplicate the text in large part. It seems a little absurd to do everything twice, and I suggest that we perhaps adopt infoboxes as the basic format for many types of articles, to be automatically turned into prose if anyone really wants it to look like a conventional encyclopedia--and, in many cases, supplemented by free-form more conventional writing. This is in essence providing information for a semantic web, not conventional writing--but it has advantages, such as clarity, comparability, and search capability. If someone wants to see articles for everyone born in Seattle in 1960, they could do so. They could even print it out as a book.
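As a very rough illustration (a sketch only -- the field names, sentence template, and example records below are invented here, not an existing Wikipedia template or tool), a structured record could be rendered to a prose stub and still stay queryable:

    def infobox_to_prose(info):
        # Turn one structured biography record into a one-sentence stub.
        text = "%s (born %s in %s) is a %s." % (
            info["name"], info["born"], info["birthplace"], info["occupation"])
        if info.get("awards"):
            text += " Awards include %s." % ", ".join(info["awards"])
        return text

    people = [
        {"name": "Example Person", "born": 1960, "birthplace": "Seattle",
         "occupation": "journalist", "awards": ["Example Prize (1995)"]},
        {"name": "Another Person", "born": 1972, "birthplace": "Lund",
         "occupation": "photographer"},
    ]

    print(infobox_to_prose(people[0]))
    # The structured part stays searchable, e.g. "everyone born in Seattle in 1960":
    print([p["name"] for p in people
           if p["birthplace"] == "Seattle" and p["born"] == 1960])

The free-form essays mentioned above would then supplement such records rather than replace them.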
The minority of Wikipedians who actually have the skills to write coherent prose, or who are willing to learn, would still have enough scope in the famous people and the general articles. An actual printed example of this is Louis Kronenberger's "Atlantic Brief Lives: a Biographical Companion to the Arts" (1965), which consists of 1081 one- or two-hundred word fairly standardized biographies of famous people written by a small research staff--211 of which are supplemented by one- or two-thousand word diverse free-structured essays on the very most famous, written by distinguished critics or scholars. Browning gets a bio; Tennyson gets a bio plus an essay. Just as in WP, the choice depended considerably on whom the distinguished critics and scholars wanted to write about.
David Goodman, Ph.D, M.L.S. http://en.wikipedia.org/wiki/User_talk:DGG
Lars Aronsson wrote:
Thomas Dalton wrote:
What makes you think people didn't lie?
I can only speak for the Swedish Who's who ("Vem är det"), which has been published 45 times between 1912 and 1999 (roughly every second year), with the 46th edition in 2007 (an 8 year gap and change of publisher, it was discontinued and then revived). I haven't heard of any cases of fraud.
Exactly, and it should be up to those paranoiacs who imagine a high level of fraud to show some evidence about its extent before they tar everything with the same brush of suspicion.
What I'm coming to is that Wikipedia might have to adopt the method of sending out forms to select people, asking for their biographic details (or for verification or denial of what's already in the Wikipedia article). That doesn't mean we should trust such autobiographic information blindly, but allow this input in a controlled form to make Wikipedia more complete without encouraging the uncontrolled editing of your own article.
I think that most who do respond will try to do so honestly. Verification would still be desirable, but we would start from an assumption of good faith.
- If we discover false claims or grave omissions in the received forms, how do we handle the next contact with that person?
False claims would make us suspicious of anything the person says. But what would be a grave omission if a specific question about something is not asked?
Ec
I have thought about the question below as well, but from a different angle: should we start trusting blogs more? This may seem like a no-brainer, but it is actually a complex issue. A few thoughts:
1. Some blogs have matured and present well-researched information.
2. Some blogs are written by people who are notable.
3. Some blogs are more up-to-date than traditional media outlets (cf. Twitter bringing news faster than CNN during some of the latest crises).
4. More people write blogs than work in traditional media (cf. Wikipedia having more editors than Encyclopaedia Britannica).
5. Traditional media decrease their presence in both hot spots and cold spots and rely more and more on wire agencies like Reuters, which means that the content is more streamlined - or POV in some cases - whereas blogs potentially cover the entire spectrum.
In other words, the world is moving to the internet and if Wikipedia wants to stay ahead it will need to adapt.
This is, as I said, a complex issue: we cannot trust any old blog, and we shouldn't (see the "some blogs" comments). But let me just take one Swedish example: one of the most prominent thinkers in the Swedish debate about the internet is Oscar Swartz (http://en.wikipedia.org/wiki/Oscar_Swartz). He is of course interviewed in several newspapers, and has written at least two books. But his major contributions are made on his blog (http://swartz.typepad.com/texplorer/) and other internet sites, not in his interviews or his books. And it will stay that way for him. He will, when he ends his career, have made a far smaller mark in traditional media than on the internet. But nevertheless, he is an important figure in Swedish internet culture, and any encyclopedic article about him worth its salt should acknowledge and reflect that. (By the way, I don't think I have edited his article, so this is not a way to push POV, it's just an example. I am sure there are plenty of other examples I could have mentioned.) With the current situation, can Wikipedia reflect the emerging world order?
In still other words, we may need to think this through a little bit more. Is the Internet Archive the way to go? Who decides which blogs are reliable sources? And should we try to bring in more archivists who have already wrestled with the question of saving the internet for future generations?
Best wishes,
Lennart Guldbrandsson, chair of Wikimedia Sverige http://se.wikimedia.org/wiki/Huvudsida
Tfn: 031 - 12 50 48 Mobil: 070 - 207 80 05 Epost: l_guldbrandsson@hotmail.com Användarsida: http://sv.wikipedia.org/wiki/Anv%C3%A4ndare:Hannibal Blogg: http://mrchapel.wordpress.com/
Wikipedians,
We're experimenting with and attempting to develop a new internet-based source of reliable, crowd-sourced information at http://canonizer.com.
The idea is to add open survey capabilities to a wiki system. "Topics" on controversial issues can have multiple 'camps' where various people who see things similarly can collaboratively develop, defend, and concisely state their POV. It is basically a tool to enable large groups of people to communicate concisely and quantitatively - rather than alone and individually. There are millions of blogs, and long lists of comments on the good ones out there; the question is, how do you know, concisely and quantitatively, what they are all saying about any important controversial issue? And which ones do people agree are reliable?
One of our first efforts, in a proof-of-concept way, has been to start an open survey amongst experts on what are currently the best theories of consciousness. The question being: "Is there any kind of scientific consensus at all about what are the best theories of consciousness?" Oftentimes a scientific consensus is claimed, or some experts have a general idea that there is a consensus, but how do you document such rigorously, quantitatively, and in a trusted way for all to accept? An example topic where experts are starting to concisely state and develop the best theories is this one on Theories of Mind and Consciousness:
http://canonizer.com/topic.asp/88
Once you have open survey capabilities like this, all that remains is knowing who are the trusted experts. This can be easily accomplished via a peer ranking process where all the experts rank each other in a top 10 kind of way. An example of this can be found in the "Mind Experts" topic here:
http://canonizer.com/topic.asp/81
And the associated canonization algorithm which uses the peer ranking data documented here:
http://canonizer.com/topic.asp/53/11
The default canonization algorithm is one person, one vote. This results in a rigorous and quantitative measure of consensus amongst the general population (or at least amongst all participants in the open survey). There are various other algorithms that can be used to 'canonize' things as the reader may desire. Instead of filtering things on the way in, canonizer.com allows browsers of the data to filter (or canonize, if you will) things any way they want, simply by selecting the canonization algorithm on the side bar. When the 'Mind Experts' canonization algorithm is selected on the side bar, you get a rigorous and quantitative measure of scientific consensus.
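As a toy illustration of the idea (this is not canonizer.com's actual code -- the camps, supporter names, and peer-rank scores below are invented), the "canonization algorithm" can be thought of as an interchangeable weighting function over each camp's supporters:

    # Camp support totals under two interchangeable "canonization" algorithms:
    # one person, one vote, versus weighting each supporter by a peer-ranking score.
    supporters = {  # camp -> supporters (all names invented)
        "Representational and Real": ["alice", "bob", "carol"],
        "Some competing camp": ["dave", "erin"],
    }
    peer_rank = {"alice": 9, "bob": 7, "carol": 2, "dave": 5, "erin": 1}

    def canonize(camps, weight):
        return {camp: sum(weight(person) for person in people)
                for camp, people in camps.items()}

    print(canonize(supporters, lambda person: 1))                        # one person, one vote
    print(canonize(supporters, lambda person: peer_rank.get(person, 0))) # expert-weighted view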
The survey results are still far from comprehensive, but already a growing number of experts - Steven Lehar, John Smithies, Jonathan Edwards... - are participating, basically declaring their beliefs in a dynamic and real-time way about what they think are the best theories of consciousness. So far, the more experts that 'canonize' their beliefs in this open survey, the more the 'Consciousness is Representational and Real' camp continues to extend its lead in the amount of scientific consensus it has:
http://canonizer.com/topic.asp/88/6
There have already been several early attempts to use this definitive information as references in various Wikipedia articles on philosophy of mind, one example being the article on qualia. An initial proposal to include some of this data was made on the talk page of that article here:
http://en.wikipedia.org/wiki/Talk:Qualia#Proposed_addition_to_the_.22Scienti...
But this, and other similar entries on other pages, were initially shut down by Jw2035, and a wiki war seems to be in progress on this issue, with possibly different points of view. A topic has been created at canonizer.com to consolidate the various discussions on different article talk pages, and to find out how much consensus there might be on both sides of this issue, here (if there is any other real competing POV about the validity of such):
http://canonizer.com/topic.asp/104
A second attempt is now being proposed for the qualia article here:
http://en.wikipedia.org/wiki/Talk:Qualia#Smythies_section_needs_rewriting
in which it is simply being used to definitively document John Smythies' (one of the somewhat arbitrarily listed 'proponents of qualia') beliefs on this issue.
The ultimate goal would be, as things become more developed, to have a quantitative measure of how trusted any particular 'camp' is. Obviously, anyone can create and support a camp, or a camp may not be supported at all. All such should be taken 'with a grain of salt'. But if there is a clear 'scientific consensus' supporting a camp, the degree to which it can be trusted goes up significantly and quantitatively. Perhaps in the future, various scientific publications might stipulate a quantitative value, when using a particular specified canonization algorithm, which a 'camp' must achieve before it can be used as a source in anything published in their peer-reviewed scientific publication?
All the people involved in this open-source, volunteer-developed project would love to know what all you Wikipedians think about such efforts to 'measure' scientific consensus - and about using such as trusted sources of information in Wikipedia and elsewhere. Sure, no one can claim any of this is 'truth' (except for the fact of who currently believes what is true) - but what better measure of truth might there be than that for which there is a clear scientific consensus? And canonizer.com includes a historical mechanism (see the 'as of' control box on the side bar) so we can watch and rigorously document the various theories or 'camps' as they come and go as ever more scientific data comes in.
What do you all think?
Thanks
Brent Allsop
lennart guldbrandsson wrote:
In other words, the world is moving to the internet and if Wikipedia wants to stay ahead it will need to adapt.
Your suggested text reads like spam aimed at getting people to read about the issue on canoniser.com instead of Wikipedia.
=========================================================================
"John Smytheis is currently concisely stating, collaboratively developing, and definitively declaring his current beliefs on this issue in the Smythies-Carr Hypothesis camp on the Theories of Mind and Consciousness topic at canonizer.com. (User id: john lock) His beliefs also include that which is contained in and he has helped develop the Consciousness is Representational and Real camp, and all other parent camps above it. As ever more experts continue to contribute to this open survey on the best theories of consciousness the Representational and Real camp continues to extend its lead in the amount of scientific consensus it has compared to all other theories of consciousness. Though John is in the current consensus camp at this level, his particular valid theories about what qualia are and where they are located diverge from the majority. The Smythies-Carr Hypothesis camp is a competitor to the more well accepted Mind-Brain Identity Theory camp. The people in that camp believe the best theory is that qualia are something in our brain in a growing set of diverse, possible, and concisely stated ways. The people in John's camp believe qualia are a property of something causally connected to, yet contained in the higher dimensional space described in string theory."
=========================================================================
Fran
No reason why people should not use other sources than Wikipedia. Our job is to improve Wikipedia, not to discourage other projects.
But as far as including content from this source in Wikipedia is concerned, posts on what remains fundamentally a blog or non-scientific survey are a much less reliable representation of the considered views of experts than their published works are.
David Goodman, Ph.D, M.L.S. http://en.wikipedia.org/wiki/User_talk:DGG
Hi David,
Thanks for the supportive comments about systems other than wikipedia. We're doing all this via open source, and volunteer work, and all free, much like wikipedia. And those of us working on this system believe something like this is desperately needed today. And as I indicated earlier, the goals and niche for canonizer.com are completely separate from where Wikipedia is.
I'm surprised to hear you say you think that canonizer.com is 'fundamentally a blog'. I believe blogs and the canonizer.com open survey system are diametrically opposed extremes.
There are millions of blogs. And for that matter, there are also tens of thousands of publications on the issue of consciousness as the now 20K and growing number of entries in Chalmers Bibliography on the mind proves. (see: http://consc.net/mindpapers) The problem is all these millions of blog postings and publications are all individual testimonials, all using their own unique terminology. There is no way any common man can achieve a significant understanding or survey of even a small portion of all this in one lifetime. And all these blog posts continue to get piled higher and deeper exponentially compounding the problem. We believe this is why this field of study is so mired in all this thick mud and so failing to make any significant progress. It is currently completely failing to communicate just how much consensus there really is on the important issues. Nobody can see the revolution some believe is taking place in this field as we speak, simply because nobody has yet attempted to rigorously survey and measure for such.
In the Wikipedia article on qualia it appears there are just as many qualophobe experts as there are qualophiles, and most of the world believes this. But many experts believe this is not only completely wrong and misleading, but that a wave of people are converting to the qualophile camps - and that a scientific revolution is taking place as we speak on what is accepted by the experts as the best theory of consciousness.
Some people attempt, by themselves, to survey and summarize the various 'camps', and the problems they think exist with all competing theories other than their own, but most such efforts, since they are done by an individual, can never be completely unbiased or an accurate survey of all points of view. Any attempt at such descriptions of the various camps must include all the diverse terminology that the various individuals use in a futile attempt to fully describe any competing camp - yet another problem making easy communication in this field near impossible. And any such attempt by individuals to summarize the various camps is certainly never quantitative - it never definitively shows how many experts, and who, and when, are in each camp. The other big problem with the way all these blogs, forums, seminar presentations, publications and discussions work is that they tend to focus on what everyone disagrees on. Once any two people agree on anything, the conversation completely stops. They only talk about the minor differences in their beliefs or terminology, and so on - resulting in an eternal yes-it-is, no-it-isn't back and forth, completely failing to communicate to anyone what is most important and where the consensus is. The best you get are statements along the lines of "some experts think one way, and some think another" - with no indication of how many and who are in each camp, and which camp is significantly growing and revolutionizing the current 'thought' on any still-theoretical issue.
Canonizer.com is designed to resolve all of this chaos, wiki edit wars, and disagreement and to make a quantum leap in the ability of diverse experts in diverse fields to easily communicate both amongst themselves, to different fields of study, and to the rest of the world - concisely, definitively and rigorously indicating what the current 'scientific consensus' is. The hierarchical 'camp' structure, and the way the system rules work encourage everyone to work together, cooperatively, and find what they all agree is most important. Everyone can concisely state this in the higher level camps where all that agree can support it - rigorously, quantitatively, openly, and in completely equal or unbiased ways for all theories.
The best terminology to use is always what communicates the ideas the best to the most people. Efficient survey systems, and the way canonizer.com is set up to encourage negotiation to win or convert others in their camp, is what is desperately needed to rigorously determine what is the best single terminology to use for any theoretical idea or doctrine.
Before canonizer.com, it always takes a huge amount of effort, and hours and hours of back and forth discussion to find out what another philosopher means by various terms and so on - before any good communication can even start. And in any one life time, you can only do this with a limited number of people. Now, with canonizer.com, you just say I am in the concisely stated XYZ camp on this issue and all camp members are working as a team on this theory. And suddenly communication between people in various diverse fields, and to the general population, becomes trivially easy. And this kind of communication is what is required for any complex and still theoretical field of study like this to get out of the mud and finally make any kind of significant progress.
I challenge anyone to find any place where even a few experts of this stature, of the kind that have been joining, supporting, and diligently developing the 'Consciousness is Representational and Real' camp (see: http://canonizer.com/topic.asp/88/6) for several years now, can agree on anything. Show me any place where this many experts are able to definitively agree on anything as concise and usefully descriptive as the theories described in this camp statement - including the sub-camp structure concisely and quantitatively representing the still-diverse beliefs about what the best theories are of what, where, how, and why qualia are. Even when an individual gets an article published in a leading peer-reviewed journal, requiring a year or more and passing a handful of 'peer reviewers', you don't have anywhere near the amount of support and rigorous editing and negotiation that has gone on for many years now to develop this camp. And this camp (and all competing ones) are just getting started. As ever more experts participate, this camp continues to improve at an increasing rate, and to extend its lead compared to all other competing camps - all on a perfectly level and unbiased playing field.
Perhaps, going forward, some other camp will emerge as a new leader? Perhaps some new scientific evidence will show that some now-minority camp is a much better way to think about things than the current consensus? If so, shouldn't we be rigorously watching, measuring and monitoring this consensus in real time and in a historical way going forward? Must everyone that agrees on a camp or theory at any time publish similar papers stating such before anyone will count it?
Also, I assume what you mean by a 'non-scientific survey' is something that has the goal of using statistics to find out what large numbers of people believe, based on small 'random' samples. Any time you use something other than a 'random' selection of survey takers, it is 'not scientific'. Although some scientific or rational information could be derived about all experts from any subset of them participating in an open survey such as this, whether this subset is random or not, none of this is the goal of canonizer.com. The goal of canonizer.com is simply to have a concise representation, and quantitative measure, of what large groups of participators think. If you have a hundred different blog posts, each with a thousand comments, all using slightly different terminology - that is completely useless to anyone. But if all their points of view, especially the similar ones, are unified and concisely stated and quantitatively measured, this is powerful information, and it takes communication and debate about still-controversial theoretical issues to an entirely new level.
Also, there are myriad scientific problems with the traditional 'scientific surveys' you are talking about here. For example, once you ask the first person to answer a survey question, the statement or answer choice is locked in stone and can never change. You have the same problem with signed petitions - once the first person signs, the statement cannot ever change. But the way canonizer.com is set up, competing camps continually develop, change, and progress as new arguments and scientific data continue to arrive. Also, a 'scientific survey' is just an instantaneous snapshot taken at a moment in time. Usually the same questions are completely irrelevant a year later, after new scientific data comes in. Canonizer.com is a real-time, rigorous, and quantitative measure of all the best theories and how their popularity grows and wanes as ever more scientific data comes in, causing members of disproved camps to jump to new and improved camps.
As for the claim that canonizer.com is a much less reliable representation of experts' views than their published works, I would disagree. I have had occasions where experts claimed that David Chalmers no longer believes in his 'principle of organizational invariance', which he published a paper about long ago - as evidenced, supposedly, by the fact that he hasn't published anything significant on it since. Among the growing number of participants in this topic on the best theories of consciousness, the camp representing this idea is clearly the leading theory (see: http://canonizer.com/topic.asp/88/8). But is David Chalmers, after all these years, still in this camp? Many claim otherwise.
Also, the many diverse theories about consciousness can't all be right. All would agree that science will eventually demonstrate which of the many theories, if any, is THE ONE. The definitive sign that this has happened will be that everyone is forced, by the scientific data, to convert to THE ONE camp.
The goal of canonizer.com is to rigorously measure this process in real time. If you wait the years required for someone to publish a retraction of their previous publications, how many will even do this? Must everyone in the previous consensus camp publish a similar paper? At canonizer.com, you can watch all this happen easily, in real time, and in a historically recorded way. If people stay in the 'wrong' camp too long, it can significantly damage their reputation, which people will be able to measure rigorously in the future using reputation-based canonizer algorithms.
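To give a rough idea of what such a reputation-based canonizer algorithm could look like, here is a minimal sketch in Python. Everything in it - the names, the one-year grace period, the halving penalty - is an invented assumption for illustration, not an algorithm we have actually deployed:

    from datetime import datetime, timedelta

    # Illustrative sketch only: a hypothetical reputation-based weighting.
    # camp_history maps a person to a list of (camp_name, joined, left) tuples;
    # falsified_on maps a camp name to the date its falsification became accepted.

    def reputation_weight(person, camp_history, falsified_on,
                          grace=timedelta(days=365), penalty=0.5):
        """Start every supporter at 1.0 and reduce their weight for each
        falsified camp they stayed in well past the evidence."""
        weight = 1.0
        for camp, joined, left in camp_history.get(person, []):
            falsified = falsified_on.get(camp)
            if falsified is None:
                continue                     # camp never falsified: no penalty
            stayed_until = left or datetime.utcnow()
            if stayed_until > falsified + grace:
                weight *= penalty            # lingered in a 'wrong' camp too long
        return weight

    def weighted_support(supporters, camp_history, falsified_on):
        """A camp's score under this hypothetical algorithm: the sum of its
        supporters' reputation weights rather than a raw head count."""
        return sum(reputation_weight(p, camp_history, falsified_on) for p in supporters)

The point is only that once camp membership is recorded with dates, many such algorithms become possible, and anyone can choose the one they trust.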
I would argue there is no better system to definitively define, and more importantly to communicate concisely, what someone currently believes, compared to all others, than the canonizer.com open survey system, where people effectively 'sign' a dynamic petition stating what they currently believe (including a history of which 'camps' they were in previously).
I guess my question at this point would be: have I convinced you that canonizer.com can be a trusted reference, definitively defining what, and how many, participating experts currently believe on still-theoretical and controversial scientific issues such as this?
If anyone is still not in the so far unanimous 'yes' camp represented here:
http://canonizer.com/topic.asp/104/2
It would sure be great to get your POV, and your reasons for it, definitively 'canonized', so that everyone can know why - rigorously, concisely, quantitatively, and without bias. And remember, when you join a camp, your reputation is definitely on the line. It will be very easy for people to come up with and use canonizer algorithms in the future that simply ignore people who stayed in the 'wrong' camps for far too long. And those who were in the right camp the soonest, before the herd, will, as they deserve, be the ones with all the influence in the system in the future.
Identity and reputation is everything at canonizer.com.
Thanks!!
Brent Allsop
David Goodman wrote:
No reason why people should not use other sources than Wikipedia. Our job is to improve Wikipedia, not to discourage other projects.
But as far as including content from this source in Wikipedia is concerned, posts on what remains fundamentally a blog or non-scientific survey are a much less reliable representation of the considered views of experts than are their published works.
David Goodman, Ph.D, M.L.S. http://en.wikipedia.org/wiki/User_talk:DGG
Dear Brent,
There are too many things here to respond to at once. And that's the problem: people have a limited channel capacity. It can be high, but it remains limited. I can carry on 5 discussions of this sort, but not 50. Some few people can do 50, but even they can't do 500. All group processes work only as long as the number of people involved remains limited enough to permit individual pairwise discussion. It can extend to large numbers of people--but only if they remain observers. Plato shows Socrates talking to 1 or 2 people at a time, while a dozen stand around and watch--and Plato, when he wrote it down (or composed it from scratch, as the case may be), knew that hundreds of others would read. It has scaled up to millions very easily: most observing, some starting separate dialogs of their own.
I don't think any fundamental revolution is taking place. I think what we are seeing is just the opening and expansion of the previous world of literate communities. You will probably answer that changing the scale to this extent is revolutionary, but I think it just implies a necessary separation into working units of a manageable size.
You have a major advantage over me in this discussion: I am not an academic expert in this, just someone looking for good ideas, and finding out how good they are by questioning them. But I have some minor advantages, too: librarianship is an empirical profession. We will do whatever works, and we are accustomed to dealing with a range of subjects too wide for us to fully understand.
I'm not particularly interested in the theory of consciousness, so it's not the best example for me. I'm not particularly interested in any psychological theories. I'm a biologist, and a reductionist one at that. To the extent that experiment and predicted observation support a theory, it can be treated as correct. Consensus has nothing to do with it, nor do surveys of opinion. We can all be wrong.
What we need consensus for is going about those practical things of life in which we must cooperate and live together. To remain a group, we have to agree enough to remain in it. And we higher primates have evolved so that our major activity is living in groups and watching each other and trying to be more clever than the rest.
There's one very good thing in canonizer that shows you recognize the same constraints I do: its divided structure. But while you seem to think of it as atomizing the subjects to discuss, I see it as partitioning the participants.
David,
David Goodman, Ph.D, M.L.S. http://en.wikipedia.org/wiki/User_talk:DGG
Thanks!
Hi David,
Thanks for your responses, and for following this discussion.
We have recently passed 20,000 publications describing a diverse set of theories about consciousness, as documented in David Chalmers' bibliography (see: http://consc.net/mindpapers).
Wouldn't it be valuable to have all the similar theories being argued for by everyone combined into one unified, concise survey, quantitatively measuring which are the most widely accepted by the experts, and how this is changing as ever more scientific data comes in? Something which non-experts, such as yourself and myself (I don't have a PhD), could approach and understand without having to read 20,000 publications?
Some of us see the possibility that a revolution may be taking place in the study of the mind as we speak. But this is just a hunch, a theory, and we have no rigorous, hard survey data to back it up so that we could make such a claim. Hence, we're working on creating tools to do just this: rigorously measure what the experts are now starting to accept as true. The results are still early, and the survey is far from comprehensive, but there are already some exciting results. The representational and real theory, as described in this camp:
http://canonizer.com/topic.asp/88/6
predicts that we are about to achieve the ability to 'eff' the ineffable, which will demonstrably and scientifically prove the theories described therein to be true. And they are predicting that this, the discovery of the whats, hows, and wheres of the subjective mind, will be the greatest scientific discovery of all time.
You are right, we could all be wrong. But that is not the purpose of canonizer.com - to say who is wrong and who is right. The purpose of canonizer.com is to make it possible for those who know what is right and what is wrong to better understand what everyone who is wrong believes, so they can better understand the mistakes being made and better help others see why they are wrong and what is really right. The experts in that camp are predicting that this effing, demonstrable scientific proof will falsify all other theories of consciousness and soon convince everyone to accept these theories as correct. Is there any better way to know who was wrong, and when and why they each discovered as much, than to rigorously survey for and measure exactly that? Everyone being wrong certainly makes it hard for us to make any progress. We can't find something if everyone is looking for it in the wrong place, or even worse, not looking for it at all. As this camp is starting to show, more and more leading experts believe this is exactly what is happening.
Brent Allsop