Business 2.0 has an article about venture capitalists who are looking
for business opportunities in specific areas. One in particular (below)
sounds like a wiki. Anyone who wants to join with me to make a prototype,
write a business plan, and pitch it to Norwest Venture Partners, contact
me off-list (paul(a)yellowikis.org).
http://www.business2.com/b2/web/articles/print/0,17925,1096807,00.html
$3M: SOCIAL NETWORKS MEET THE TOWN CRIER
WHO: Jim Lussier, Norwest Venture Partners, Palo Alto
WHO HE IS: A former exec at Accenture and Beyond.com, Lussier handles
software and services for Norwest, which manages $1.8 billion in
venture funds and won big with investments in PeopleSoft (PSFT) and
Tivoli Systems. Lussier led recent deals with travel-services company
G2 Switchworks and P2P radio broadcaster Mercora.
WHAT HE WANTS: A kind of souped-up Craigslist for every neighborhood,
everywhere. Just type in a zip code, and this website will present not
just garage sale listings and classified ads, but headlines and photos
from dozens of local news sites run by busybodies willing to provide
free content and keep it constantly updated.
WHY IT'S SMART: The idea taps into the same social-networking allure
that helped make Flickr a $35 million Yahoo (YHOO) acquisition -- that
is, user-driven content, organized so that all visitors get a slick
stage to showcase their stuff. In this case, the stuff isn't just
photos but local knowledge: updates on a rash of burglaries, say, or a
ranking of preschools. (The most popular contributors might receive a
small percentage of ad revenue.) The site could also offer a
marketplace for everything from baby-sitting to tree trimming. Sure, a
number of regional websites already offer something similar, but this
site would aggregate the content under one umbrella and provide a
platform to scale it across the nation. If all goes as planned,
Lussier says, paid ads could bring in as much as $100 million a year.
WHAT HE WANTS FROM YOU: A website template finished within six months
on a $500,000 budget. (Lussier estimates that a team of two or three
can do the job.) If it shows promise, Norwest will provide another
$2.5 million to tackle the real challenge: developing a core group of
users who can light the viral fire. "Building that community happens
when people get value out of the service," Lussier says. "They need to
feel compensated for their time by the information they get back out."
SEND YOUR PLAN TO: jlussier(a)nvp.com
--
Yellowikis is to Yellow Pages, as Wikipedia is to The Encyclopedia Britannica
>>> Zipping is only done as a quick way to determine that the download was
>>> successful, without the need for md5sum.
>I think this is silly. The 32 bit CRC used in ZIP is even weaker than MD5.
>Zipping the files puts a burden on the generator (takes time)
Yep, 5 seconds
> and the user (takes time, space)
Time: 5 seconds
Space: never fill your PC up to the last GB; the same is true for all
Wikimedia downloads.
> and in this case actually increases the download.
Yep, 0.1%.
>Plus the determination whether the download was successful needs to be
>done without ZIP or MD5sum, anyway, because these could be easily forged.
I'm talking transfer errors only.
>Digitally sign the files, and put up instruction on how to verify the
>signature.
I have other things to do.
Erik Zachte
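For anyone who does want an end-to-end check without relying on the ZIP
CRC, here is a minimal sketch, assuming a published .md5 companion file
sits next to the dump (the filenames are examples only, not actual paths
on the download server):

    # Compare a downloaded dump against a published MD5 checksum.
    # The filenames and the existence of a .md5 companion file are assumptions.
    import hashlib

    def md5_of(path, chunk_size=1 << 20):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = open("pages_current.xml.md5").read().split()[0]
    actual = md5_of("pages_current.xml")
    print("OK" if actual == expected else "checksum mismatch")

Reading in 1 MB chunks keeps memory use flat even for multi-gigabyte dumps.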
I am writing in regards to the files at
http://download.wikimedia.org/tomeraider/current_tr3/
I am not sure who would be best to send this message to. dbenbenn
suggested this address from the village pump. Please pass this along
to whoever handles these files.
TomeRaider files are already compressed, and zipping them actually
increases the file size. For the English edition the size increases
from 763MB to 764MB. Also, unpacking that file requires an additional
763MB until the zipped version can be deleted. I would request that
whoever is in charge consider releasing future copies without zipping
them.
Thank you
If anyone's curious, I've whipped up a quick XML dump filter tool, which
can take a MediaWiki XML backup dump and filter it by title, based on a
regular expression match or a file with a list of pages to take.
I used it to split up the Wikisource dump to move pages with their edit
histories from the old Wikisource wiki to the new language-specific
subdomain wikis (import of these dumps going on right now).
It's in C# (gasp horror) and runs on Mono. I haven't tested it on
Microsoft .NET, but it probably runs there.
Source is available in the mwdumper module in CVS. No binaries or
documentation at this time, but I expect to continue working on it for
use in creating republisher-friendly dumps in the future.
-- brion vibber (brion @ pobox.com)
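For readers who just want to see the shape of such a filter, here is a
rough sketch in Python, purely for illustration (the actual tool is the
C# mwdumper described above; the export schema URI, the filenames, and
the simplified output wrapper are assumptions):

    # Illustrative only: stream a MediaWiki XML dump and keep pages whose
    # titles match a regular expression.
    import re
    import sys
    import xml.etree.ElementTree as ET

    NS = "{http://www.mediawiki.org/xml/export-0.3/}"  # assumed schema version

    def filter_dump(in_path, out_path, title_pattern):
        keep = re.compile(title_pattern)
        with open(out_path, "w", encoding="utf-8") as out:
            out.write("<mediawiki>\n")
            for _, elem in ET.iterparse(in_path):
                if elem.tag == NS + "page":
                    title = elem.findtext(NS + "title") or ""
                    if keep.search(title):
                        out.write(ET.tostring(elem, encoding="unicode"))
                    elem.clear()  # release the page subtree once handled
            out.write("</mediawiki>\n")

    if __name__ == "__main__":
        # e.g. filter_dump.py dump.xml subset.xml '^Wikisource:'
        filter_dump(sys.argv[1], sys.argv[2], sys.argv[3])

The output here is deliberately simplified (no siteinfo block), so it is
not a drop-in replacement for a real dump; it only shows the streaming
title-filter pattern.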
Hi,
Is {{CURRENTTIME}} the only variable to work with? My local settings are
up to date, but if we put a date on a page we get UTC and not UTC+2.
Only in Recent Changes is the time correct.
If we write "Welcome, it's {{CURRENTTIME}} on
{{CURRENTDAYNAME}}, {{CURRENTDAY}}..." we lose 2 hours :(
Regards
Hello,
We are using MediaWiki 1.4.1 and 1.4.7.
We created RSS feeds and, to our surprise, when the article titles are displayed on a web page, the following title appears:
Wiki - Nouvelles pages [fr]
Un article de Wiki, l'encyclopéde libre.
fr
MediaWiki 1.4.1
Thu, 25 Aug 2005 12:14:07 GMT
NOTE: "L'ENCYCLOPEDE LIBRE".
Isn't there an "I" missing?
How can this be changed? Which file needs to be modified?
Is it also possible to change the whole sentence?
Thanks in advance and best regards.
PRoth
Hello,
I recently added some namespaces to my wiki. After adding an article in one of
the new namespaces, the corresponding row in recentchanges is linked to another
custom namespace (a talk namespace) and, no surprise, when following that link I
get an empty page.
When I look in the cur table I find the correct id in the cur_namespace field.
In the recentchanges table I find an incorrect namespace id.
Does anybody have any idea?
Tnx,
Arjen Wildemans
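Not an answer, but a quick way to see how widespread the mismatch is: a
diagnostic sketch assuming the 1.4-era schema (cur and recentchanges
tables) and a MySQL connection through the pymysql package; the host,
credentials, and database name are placeholders:

    # Diagnostic sketch only: list recentchanges rows whose namespace id
    # disagrees with the cur table.
    import pymysql

    conn = pymysql.connect(host="localhost", user="wikiuser",
                           password="secret", database="wikidb")
    query = """
        SELECT rc_cur_id, rc_title, rc_namespace, cur_namespace
        FROM recentchanges
        JOIN cur ON cur_id = rc_cur_id
        WHERE rc_namespace <> cur_namespace
    """
    with conn.cursor() as cur:
        cur.execute(query)
        for row in cur.fetchall():
            print(row)
    conn.close()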