This is a cut-and-paste from the London Financial Times on Thursday morning:
Either buy the newspaper or work in a company that does :-(
Looking to Wikipedia for answers
By Thomas Malone
Published: November 5 2008 14:43 | Last updated: November 6 2008 20:09
To understand how large-scale work was organised during the past 100 years,
the best models were traditional hierarchical organisations such as General
Motors, IBM, and Wal-Mart.
But to understand how large-scale work will be organised in the future, we
need to look at newer examples such as Wikipedia, eBay, and Google.
[image: Prof. Thomas W. Malone]
In Wikipedia, for instance, thousands of people from across the globe have
collectively created a large and surprisingly high-quality intellectual
product – the world's largest encyclopaedia – and have done so with almost
no centralised control. Anyone who wants to can change almost anything, and
decisions about what changes are kept are made by a loose consensus of those
who care.
Wikipedia is a remarkable organisational invention that illustrates how new
forms of communication, such as the internet, are making it possible to
organise work in new and innovative ways.
Of course, new ways of organising work are not desirable everywhere. In many
cases, traditional hierarchies are still needed to capture economies of
scale or to control risks. But in an increasing number of cases, we can have
the economic benefits of large organisations without giving up the human
benefits of small ones – freedom, flexibility, motivation and creativity.
These human benefits can provide decisive competitive advantages in
knowledge-based and innovation-driven work. During the coming decades, we
can expect to see such ideas in operation in more and more parts of the
economy.
These new practices have various names: radical decentralisation,
crowd-sourcing, peer production, and wikinomics. But the phrase I find most
useful is "collective intelligence".
In our work at the Massachusetts Institute of Technology Center for
Collective Intelligence, this phrase has inspired us to ask the provocative
question:
"*How can people and computers be connected so that – collectively – they
act more intelligently than any person, group, or computer has ever done
before?*"
What if we could have any number of people and computers connected to, for
instance, care for patients in a hospital? Or design cars? Or sell retail
products?
We might find that the best way to do a task that today is done by five
full-time people would be to use one part-time employee and a host of
freelance contractors each working for a few minutes a day.
One important type of collective intelligence is "crowd intelligence", where
anyone who wants to can contribute.
Sometimes, as in the case of Wikipedia or video sharing website YouTube,
people contribute their work for free because they get other benefits such
as enjoyment, recognition, or opportunities to socialise with others. In
other cases, such as online retailer eBay, people get paid to do so.
Anyone can become an eBay seller and most of the key decisions about product
mix, pricing, and advertising are made not by managers at eBay, but by the
collective intelligence of the eBay sellers themselves.
A few years ago, eBay managers were surprised to find members successfully
selling automobiles on the site, so they added additional support for this
product line.
Sometimes, crowd intelligence can even operate inside the boundaries of a
single company. Google, Microsoft, and Best Buy have all used internal
"prediction markets" to tap the collective intelligence of people throughout
their organisations. In these prediction markets, people buy and sell
"shares" of predictions about future events such as revenue levels. If their
predictions are correct, they are rewarded (either with real money or with
points).
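The arithmetic behind those share prices is simple enough to sketch. In a binary prediction market, a contract pays a fixed amount (say, 100 points) if the event occurs and nothing otherwise, so the trading price approximates the crowd's probability estimate. A minimal illustration — the payout size and the numbers are hypothetical, not any particular company's market:

```javascript
// Binary prediction-market arithmetic: a contract pays PAYOUT points
// if the event occurs and 0 otherwise, so its price encodes the
// crowd's probability estimate. PAYOUT here is a hypothetical value.
const PAYOUT = 100;

// A share trading at 1.2 implies a 1.2 per cent chance of the event.
function impliedProbability(sharePrice) {
  return sharePrice / PAYOUT;
}

// A trader who believes the true probability differs from what the
// price implies expects to profit by trading, which moves the price.
function expectedProfit(sharePrice, believedProbability) {
  return believedProbability * PAYOUT - sharePrice;
}
```

This is why a price collapse like the one in the Microsoft example below is so informative: it compresses many individual bets into a single probability estimate.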
Microsoft has used prediction markets to estimate completion dates for
internal products. When it launched one of the first of these markets, the
share price for a product scheduled to be finished three months later
declined within minutes to a price indicating only a 1.2 per cent
probability it would be completed on time. The managers in charge of the
project had thought it was on schedule, but when they saw these results they
investigated further and found problems. The product was eventually released
three months late.
Here was a case where knowledge about the project's problems was available
inside an organisation but it took a prediction market to bring it to the
attention of people who could do something about it.
Another important type of collective intelligence is "cyber-human
intelligence", where computers do not just connect people to each other,
they provide their own "intelligence" as well.
Google harvests the intelligence of millions of people who create web pages
and link them to each other, but its sophisticated algorithms also rank the
pages based on how many links exist to a given page.
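The core of that link-based ranking idea can be illustrated with a textbook PageRank power iteration. This is a toy sketch of the published algorithm, not Google's actual production system, and the three-page link graph is invented purely for illustration:

```javascript
// Toy PageRank power iteration over a tiny link graph.
// A textbook sketch of link-based ranking, not Google's real system.
function pageRank(links, damping = 0.85, iterations = 50) {
  const pages = Object.keys(links);
  const n = pages.length;
  // Start with uniform rank, then repeatedly redistribute rank
  // along outgoing links, damped toward a uniform baseline.
  let rank = Object.fromEntries(pages.map(p => [p, 1 / n]));
  for (let i = 0; i < iterations; i++) {
    const next = Object.fromEntries(pages.map(p => [p, (1 - damping) / n]));
    for (const p of pages) {
      const outs = links[p];
      for (const target of outs) {
        next[target] += damping * rank[p] / outs.length;
      }
    }
    rank = next;
  }
  return rank;
}

// Page "b" is linked to by both other pages, so it ranks highest.
const ranks = pageRank({ a: ["b"], b: ["c"], c: ["b"] });
```

The point of the example is the one Malone makes: the ranking is computed by a machine, but the signal it aggregates — who chose to link to whom — comes from millions of people.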
Electronically connected forms of crowd intelligence and large-scale forms
of cyber-human intelligence have never before existed. Yet these examples
are just the beginning, and it is very likely that innovative organisations
will discover more ways to radically change existing industries or create
new ones.
These changes will not happen overnight, but the rate of change is
accelerating and business people a hundred years from now may find the
pervasive corporate hierarchies of today as quaint as we find the feudal
farming system of an earlier era.
*Thomas W. Malone is the Patrick J. McGovern Professor of Management at the
MIT Sloan School of Management and the founding director of the MIT Center
for Collective Intelligence*
Copyright <http://www.ft.com/servicestools/help/copyright> The Financial
Times Limited 2008
Anybody know where on-wiki the current survey is being discussed?
I've got a thing or two to say. (Message I just sent to
info(a)wikipediastudy.org appended.)
* * *
From: Steve Summit <scs(a)eskimo.com>
Date: Sat, 01 Nov 2008 10:16:41 -0400
To: info(a)wikipediastudy.org
Subject: your survey has problems
I just completed the survey at
http://survey47.wikipediastudy.org/survey.php.
I'm sorry to be harsh and blunt. It's terrible.
You can't use my results accurately -- they're wrong.
I doubt you can use anyone's results accurately.
This survey could only be completed accurately by someone:
* with nothing to do / too much time on their hands
* who never makes mistakes
* who can anticipate future questions before they're asked
* who can be bothered to search for his country and language
(several times) in strictly-alphabetical lists of every single
country and language in the world
* who knows the 2-character ISO code for the languages he knows,
even when they're not obvious (e.g. DE for German)
* who knows the 3-character ISO code for the currency he uses
The survey told me I couldn't use my browser's Back and Forward
buttons, but had to use its own. That's rude.
The survey then failed to provide Back buttons on all pages.
That's incompetent.
The survey then asked me questions like "How many hours do
you spend contributing to Wikipedia, per week?", followed by
"How many hours do you spend administering Wikipedia?", followed by
"How many hours do you spend supporting Wikipedia in technical ways?"
And that ended up being profoundly insulting. Here's why.
The administrative and technical work I do on Wikipedia feels
like "contributions" to me, so (not knowing the next questions
were coming up) I included those hours in my first answer.
And the technical work I do feels like "administration", so
(not knowing the next question was coming up) I included that
in my second answer. Therefore, if (as I suspect) you're
assuming those three categories are disjoint, and since my major
contributions lately have all been technical, I've inadvertently
overstated my overall contributions in this survey by a factor
of three.
And those particular survey pages were among those without
Back buttons, so I couldn't fix my mistake. Do you know how
incredibly frustrating that is, to have wanted to spend time
contributing to a survey, to know I've contributed false
information, and to not be able to fix it?
Also, the survey took *way* too long. And there was no
information given up-front about how long it might take.
The progress bar in the upper right-hand corner was a clue
and a nice touch, but it came too late.
The survey also took too long in relation to the amount of useful
data likely to be gleaned from it. Short, tightly focused
surveys give the surveyee the impression that some well-thought-out,
concise questions are being addressed by the surveyor. Long,
scattershot surveys give the impression that the surveyors aren't
quite sure what they're looking for, are trying to ask everything
they can think of, and are imagining that they'll mine the data
later for interesting results. But, with poorly defined
surveys, that task often ends up being difficult or impossible.
So I'm left begrudging the time I spent filling out the survey,
because it feels like the ratio of time investment (by me) to
useful information which can be gleaned (by you) is not good.
The survey asked me to specify things like "approximate number of
articles edited" and "percentage of time spent translating" using
drop-down selection boxes -- and with an increment of 1 between
the available choices! That's just silly. (I dreaded how long I
was going to have to scroll down to find my article edit count --
1196 -- and was both relieved and annoyed to discover that, after
500 entries, the drop-down list ended with "more than 500".)
The survey's categories were too bluntly taken from existing
lists. For example, the list I had to choose my employment from
was apparently taken from one of those dreadful Department of
Commerce categorizations that I have just as much trouble
finding my job in when I fill out my tax forms.
At the very end, the survey asked if I wanted to submit my
results, or fix any mistakes. But the provided way to fix
mistakes was to use the Back button -- perhaps several dozen
times -- which I wouldn't have felt like doing even if the chain
of Back buttons were complete.
The survey was clearly designed by someone who was thinking about
the data they wanted to collect, and in a scattershot way. The
survey was clearly not designed with the person completing it in
mind. The survey was clearly not designed or vetted by anyone
who knew anything about designing good surveys.
I probably had more complaints to list, but I shouldn't waste as
much time on this letter as I already wasted taking the survey,
so I'll stop here.
Bottom line: Please use the results of this survey with extreme
care, if at all. The results are going to be heavily, heavily
biased by the inadvertent selection criteria involved in the
survey's hostility towards its participants. If you conduct a
survey like this again, please find someone to assist in the
process who knows something about real-world survey work.
Where did this new donation banner in the edit window come from?
It's friggin' huge! I'm not sure I like it -- it's kind of distracting.
Maybe I can get used to it, though...
--
Elias Friedman A.S., EMT-P ⚕
elipongo(a)gmail.com
http://en.wikipedia.org/wiki/User:Elipongo
On Thu, Nov 6, 2008 at 4:18 PM, Gordon Joly <gordon.joly(a)pobox.com> wrote:
> At 23:22 +0000 5/11/08, Thomas Dalton wrote:
>>2008/11/5 Giacomo M-Z <solebaciato(a)googlemail.com>:
>>> obviously has money to spare - what more point would you like?
>>
>>1) That was 2 years ago, what relevance does it have now?
>>2) We use Freenode a lot, it makes sense to support them. It wasn't
>>just giving money away randomly because they had more than they knew
>>what to do with.
>
> My guess is that some people who might donate in the future would not
> donate (at all) unless they felt that their money would go to the
> Foundation only. It is a given that the Foundation donated 5,000 USD
> to a third party, and hence might choose to repeat that donation or a
> similar donation in the future.
It is, of course, a no-brainer that the Foundation should be able to
support other organisations if doing so supports its mission. I'm not going
to get into the debate about whether this particular donation is
reasonable, but to say that donations to third parties are wrong per
se is, I think, wrong-headed. The real question is about whether the
donation supports, directly or indirectly, the Foundation's mission.
--
Sam
PGP public key: http://en.wikipedia.org/wiki/User:Sam_Korn/public_key
Hello all,
I would appreciate it if you could copy this message to the relevant
pages on en.wp.
First: Does Wikimedia have a funding crisis, does it need to ask
people for money?
The Wikimedia Foundation is a non-profit organization, and the vast
majority of its funding comes from fundraising and grants. It operates
the more than 300 servers that keep Wikipedia alive, pays for the
associated hosting and bandwidth, and employs the staff needed to
support it all. Its annual expenses for the current fiscal year amount
to approximately $6 million. We have already raised $2 million of
that, which leaves a gap of $4 million that we need to close through
donations small and large.
If we fail to meet our budgeted revenue goals during and past this
fundraiser, we will eventually have to lay off staff and reduce our
capacity planning for servers and bandwidth, both of which will
directly affect your experience of Wikipedia. If you feel, for
example, that developers are often slow to respond to requests, well,
imagine how much worse it will be if we have to lay off some of them.
If you feel that editing is often slow, imagine how much worse it will
be if we cannot pay for additional servers or needed software
improvements.
As you know, the world is in economic crisis. At this point in time,
we do not know what the impact of this economic crisis will be on the
Wikimedia Foundation. We need to meet our targets to continue to
operate Wikipedia.
I realize that having a banner on the site you read and/or edit every
day is not convenient. We plan to create a smaller (plaintext) version
of the banner for signed in users. But we do not plan to reduce the
banner size of the standard banner, at least not until we have a
better idea of what the online fundraiser revenue will be for this
year. We are doing some systematic A/B testing of different banners
with different messages, and as we learn more, we will iterate the
banners further.
We do need to raise the funds to operate Wikipedia, so that you can
continue to use it, both as a reader and a contributor. Rather than
invite antagonism, I want to invite your collaboration in refining and
developing the banners. We are open to community suggestions here --
please feel free to post mock-ups on Meta Wiki at this page:
http://meta.wikimedia.org/wiki/Talk:Fundraising_2008/design_drafts
However, if we feel (or measure) that a banner will significantly
reduce the number of donations received, we will not use it. So if
your primary goal is to make the banner smaller or less "obtrusive",
rather than making it more effective or at least retaining its current
level of effectiveness, I don't think we'll come up with something
that can replace the current banners.
Please let's remember that the Wikimedia Foundation exists to support
this project's continued existence, and the Wikimedia movement
internationally. This means we need to create awareness for the fact
that we need to raise money, just like any other charity. This
fundraising drive will run until January 15. Until then, I ask for
your patience and cooperation in making it work.
Please feel free to contact me: erik(at)wikimedia(dot)org.
Thanks, Erik Möller 22:10, 5 November 2008 (UTC)
--
Erik Möller
Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
Thanks for the advice, Tim. I thought about filing the bug when I noticed
it, but figured there was no harm in asking first... it seems odd that no script devs
who use those JS libraries (ajax.js, etc.) have ever checked for skin
cross-compatibility before!
- H
Tim Starling [tstarling(a)wikimedia.org] wrote:
> Use bugzilla: http://bugzilla.wikimedia.org/
>
> -- Tim Starling
This is undoubtedly entirely the wrong list to be posting to, but I imagine
that there will be at least one or two people who know what's going on, or
who have experienced a similar phenomenon.
Checking for cross-skin compatibility on a script which utilises
MediaWiki's AJAX functions in ajax.js, I was surprised to see that the
scripts were throwing errors due to the functions not being defined,
despite the fact that ajax.js was definitely being run. It turns out that
in Monobook, the user script (User:<x>/monobook.js) is the very last
element of the document head, whereas in other skins it is loaded *before*
the newer "sexier" MediaWiki JS scripts (ajax.js, ajaxwatch.js,
mwsuggest.js, centralnotice.js), making it impossible to use the
functionality of those scripts without a contrived workaround.
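Until the load order is made consistent, one contrived-but-workable approach is to defer any code that depends on ajax.js until its globals exist. `whenDefined` below is a hypothetical helper, not part of MediaWiki; `sajax_init_object` is one of the globals that ajax.js defines:

```javascript
// Hypothetical helper: wait until a named global is defined before
// running code that depends on it. Polling is crude, but it works
// regardless of where in the <head> the user script was emitted.
function whenDefined(name, scope, callback, intervalMs = 50) {
  if (typeof scope[name] !== 'undefined') {
    callback();
  } else {
    setTimeout(function () {
      whenDefined(name, scope, callback, intervalMs);
    }, intervalMs);
  }
}

// In a user script, one might then write:
// whenDefined('sajax_init_object', window, function () {
//   /* safe to use the ajax.js helpers here */
// });
```

An alternative, since wikibits.js loads before the user script in every skin, would be to wrap the dependent code in addOnloadHook(...), which fires only after every script in the head has executed.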
I apologise if I've made that incredibly unclear, but here's how it looks:
an excerpt from the source of a page viewed in Classic rather than
Monobook.
<script type="text/javascript" src="/skins-1.5/common/wikibits.js?182"></script>
<script type="text/javascript" src="/w/index.php?title=-&action=raw&smaxage=0&gen=js&useskin=standard"><!-- site js --></script>
<script type="text/javascript" src="/w/index.php?title=User:Haza-w.debug/standard.js&action=raw&ctype=text/javascript"></script>
<script type="text/javascript" src="/skins-1.5/common/ajax.js?182"></script>
<script type="text/javascript" src="/skins-1.5/common/ajaxwatch.js?182"></script>
<script type="text/javascript" src="/skins-1.5/common/mwsuggest.js?182"></script>
Compare with Monobook, where the site and user JS scripts are only called
right at the end of the document head:
<script type="text/javascript" src="/w/index.php?title=-&action=raw&smaxage=0&gen=js&useskin=monobook"><!-- site js --></script>
<script type="text/javascript" src="/w/index.php?title=User:Haza-w.debug/monobook.js&action=raw&ctype=text/javascript"></script>
</head>
Does anyone know why this occurs, and how I could go about getting the
discontinuity addressed?
- H