Hi. I'm considering MediaWiki for a large-scale application (more than 2,000 users).
The main points of the application are:
1) There must be a hierarchical user structure, such as:
GOD > ANGEL1 > REVEREND1
GOD > ANGEL1 > REVEREND2
GOD > ANGEL2 > REVEREND3
REVERENDS can see and edit ONLY their personal content. ANGELS can see and edit ONLY their personal content plus the content of their subordinates (for example: ANGEL1 can manage the content of REVEREND1 and REVEREND2).
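To make the rule of point 1 concrete, here is a rough sketch in JavaScript, with a hypothetical `boss` map giving each user's direct superior (the names are from my example above):

```javascript
// Hypothetical parent map: each user points to their direct superior.
var boss = {
    ANGEL1: 'GOD',
    ANGEL2: 'GOD',
    REVEREND1: 'ANGEL1',
    REVEREND2: 'ANGEL1',
    REVEREND3: 'ANGEL2'
};

// A user may manage their own content and the content of anyone
// below them in the chain of superiors.
function canManage(user, target) {
    if (user === target) {
        return true;
    }
    var p = boss[target];
    while (p) {
        if (p === user) {
            return true;
        }
        p = boss[p];
    }
    return false;
}
```

So `canManage('ANGEL1', 'REVEREND1')` is true, while `canManage('ANGEL1', 'REVEREND3')` is false, because REVEREND3 reports to ANGEL2.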
2) The structure of point 1 changes very often, and user data changes very often too. I cannot manage them manually from the MediaWiki web pages: that costs too much time in constant maintenance. A server process must be developed to import and update user data and permissions daily, in the background, from an Excel file, a CSV, or a web service.
Some examples of changes are:
- a new REVEREND, such as REVEREND4, appears under ANGEL2
- REVEREND2's boss changes from ANGEL1 to ANGEL2
- ANGEL2 disappears
- REVEREND2 disappears
- REVEREND3 becomes ANGEL4
- the whole hierarchical structure changes completely
Only users created and authorized by the server-process import can log in to the application. A user cannot register himself, because otherwise it isn't possible to place him in the hierarchical structure by default, or to check that he has the correct system login nickname.
3) For points 1 and 2 I checked the PermissionsACL extension, but it seems to require a lot of manual administrator maintenance. For example, do I have to create a namespace for every one of my 2,000 users? Do I have to update LocalSettings.php every time? Do I have to create a group for every user? Do I have to write into LocalSettings.php everything a single user can see on MediaWiki, because PermissionsACL protects everything? Is there a way to automate all of this from the server process of point 2?
Considering SQL injection and so on, can PermissionsACL guarantee me a protected application where a user can see only his own data in the structure of point 1?
4) In the structure of point 1, is it possible to redirect a REVEREND user directly to his personal content after login?
5) And can I import some user content from Excel, CSV, or a web service, so that the user is not forced to retype it?
6) I have a development machine and an application machine. When I create release 2, 3, etc. of my application on the development machine, how can I migrate it to the application server, considering that the application server holds real data while the development machine obviously holds fake data? Do I have to do it manually?
7) Is there a way to back up only the data, or do I have to back up my whole EasyPHP folder to be sure I can restore the application? What do I have to back up to be sure I can restore my MediaWiki application?
8) With more than 2,000 users, passwords cannot be reset manually when a user forgets one: too much maintenance. Is it possible to use Windows authentication? Or is there a way for the structure of point 1 to let users manage their subordinates' passwords (for example, ANGEL1 can reset the password of REVEREND1)?
9) If I have to export data (such as REVERENDS' content) daily, in the background and not manually, to some other system: is that possible, considering that the data in the MediaWiki database is in MediaWiki syntax?
10) With more than 1,500 REVERENDS, they can write their personal content in many different ways. Is there a way to enforce a common format so that MediaWiki can manage links?
11) The idea is to develop the product in my language, which is not English, because not all the users actually know English. Apart from user-entered data, will it be possible to switch my MediaWiki application to English one day, or to manage it as multilingual? Or do I have to re-implement my application from the beginning?
MediaWiki is a good product, but in my opinion it is not the solution for my needs, because they fall outside MediaWiki's scope, and from what I can see, using MediaWiki would cost me a lot in regular daily maintenance.
Please be honest: is MediaWiki the right solution for me? Is there a solution for my points using MediaWiki? Or is it better to look for some other solution that is more customizable for what I have to do and less expensive in maintenance and development?
Hello,
Is there a special page that lists pages which are completely blank? I was
looking for it and couldn't find it.
In user space it is usually not important, but in Category, Image or Main
space it is definitely important - Categories should have a parent, Images
should have a license and a category, and articles should have content.
There's "Short pages", but no "Blank pages". Was it ever considered? Is
there, maybe, some bot that creates such a list? I accidentally found quite
a lot of pages of this kind while analyzing a dump for a different reason
and wondered about it.
--
Amir Elisha Aharoni
http://aharoni.wordpress.com
"We're living in pieces,
I want to live in peace." - T. Moore
I'd like to share some exciting news with you all... After four awesome
years working for the Wikimedia Foundation full-time, next month I'm
going to be starting a new position at StatusNet, leading development on
the open-source microblogging system which powers identi.ca and other sites.
I've been contributing to StatusNet (formerly Laconica) as a user, bug
reporter, and patch submitter since 2008, and I'm really excited at the
opportunity to get more involved in the project at this key time as we
gear up for a 1.0 release, hosted services, and support offerings.
StatusNet was born in the same free-culture and free-software community
that brought me to Wikipedia; many of you probably already know founder
Evan Prodromou from his longtime work in the wiki community, launching
the awesome Wikitravel and helping out with MediaWiki development on
various fronts. The "big idea" driving StatusNet is rebalancing power in
the modern social web -- pushing data portability and open protocols to
protect your autonomy from siloed proprietary services... People need
the ability to control their own presence on the web instead of hoping
Facebook or Twitter will always treat them the way they want.
This does unfortunately mean that I'll have less time for MediaWiki as
I'll be leaving my position as Wikimedia CTO sooner than originally
anticipated, but that doesn't mean I'm leaving the Wikimedia community
or MediaWiki development!
Just as I was in the MediaWiki development community before Wikimedia
hired me, you'll all see me in the same IRC channels and on the same
mailing lists... I know this is also a busy time with our fundraiser
coming up and lots of cool ongoing developments, so to help ease the
transition I've worked out a commitment to come into the WMF office one
day a week through the end of December to make sure all our tech staff
has a chance to pick my brain as we smooth out the code review processes
and make sure things are as well documented as I like to think they are. ;)
We've got a great tech team here at Wikimedia, and we've done so much
with so little over the last few years. A lot of really good work is
going on now, modernizing both our infrastructure and our user
interface... I have every confidence that Wikipedia and friends will
continue to thrive!
I'll start full-time at StatusNet on October 12. My key priorities until
then are getting some of our key software rollouts going, supporting the
Usability Initiative's next scheduled update and getting a useful but
minimally-disruptive Flagged Revisions configuration going on English
Wikipedia. I'm also hoping to make further improvements to our code
review process, based on my experience with our recent big updates as
well as the git-based workflow we're using at StatusNet -- I've got a
lot of great ideas for improving the CodeReview extension...
Erik Moeller will be the primary point of contact for WMF tech
management issues starting October 12, until the new CTO is hired. I'll
support the hiring process as much as I can, and we're hoping to have a
candidate in the door by the end of the year.
-- brion vibber (brion @ wikimedia.org)
CTO, Wikimedia Foundation
San Francisco
IMHO the question, in this case, is not "how to build the perfect
wikitext grammar/parser", but how to ease editing of wikitext through
editor enhancements. For that, it seems sufficient to cover the vast
majority of cases instead of writing a perfect solution, as long as it
falls back to "ugly" wikitext when in doubt. Having 1% of templates
appear in all their ugliness is better than 100%.
To that end, I have just written some proof-of-concept JavaScript:
http://en.wikipedia.org/wiki/User:Magnus_Manske/tmpl.js
It is not a solution, but shows a possible approach (example text to
use: [[Picopict]]):
* On edit start, it finds all templates in wikitext
* It does so only in namespace 0, so no template variables need to be
considered (otherwise, move the damned text out of article namespace!
;-)
* It will not touch templates that have no parameter - {{reflist}} is
easy enough
* It will replace all other templates with strings like
##TEMPLATEnumber:name##, e.g., ##TEMPLATE1:Infobox VG##
* These will be replaced with the original text again on save/preview/diff
* Double-clicking on, say, TEMPLATE1 will select that entire word. If
it is a template placeholder, an action is performed. For this demo,
it only shows the original text of the template (note that nested
templates are left as wikitext).
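For illustration, the placeholder step described above might look roughly like this. This is a simplified, hypothetical version, not the actual tmpl.js code: it only handles non-nested templates that have parameters, exactly as in the bullet points (parameterless templates like {{reflist}} are left alone):

```javascript
// Replace every template that has parameters with a
// ##TEMPLATEnumber:name## placeholder, remembering the original text.
// Nested templates are deliberately not matched by this simple regexp.
function hideTemplates(wikitext) {
    var map = {};      // placeholder -> original template wikitext
    var counter = 0;
    var out = wikitext.replace(/\{\{([^{}|]+)\|[^{}]*\}\}/g, function (match, name) {
        counter++;
        var key = '##TEMPLATE' + counter + ':' + name.replace(/^\s+|\s+$/g, '') + '##';
        map[key] = match;
        return key;
    });
    return { text: out, map: map };
}

// On save/preview/diff, swap the original template text back in.
function restoreTemplates(text, map) {
    for (var key in map) {
        text = text.split(key).join(map[key]);
    }
    return text;
}
```

For example, `hideTemplates('x {{Infobox VG|genre=puzzle}} y {{reflist}}')` turns the infobox into `##TEMPLATE1:Infobox VG##` while leaving {{reflist}} untouched, and `restoreTemplates` gives the original wikitext back.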
Obviously, this can be expanded to call up an editor of some kind. I
don't want to write one now that JQuery is around the corner :-)
And yes, I am introducing new syntax. That's not the point here; could
just as well be some other, pretty placeholder in a wysiwyg editor.
As to the issue of getting possible template variable names: why not
* load the wikitext of the template in the background
* remove all nowiki, noinclude, etc.
* get everything that looks like "{{{NAME|" or "{{{NAME}}}"
* remove known magic words
* Profit!
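A rough sketch of that scan, under the same assumptions (the stripped tags and the magic-word filtering are only illustrative here, not a complete list):

```javascript
// Scan a template's wikitext for parameter names: strip nowiki/noinclude
// blocks, then collect anything that looks like {{{NAME| or {{{NAME}}}.
function findTemplateParams(templateWikitext) {
    var text = templateWikitext
        .replace(/<nowiki>[\s\S]*?<\/nowiki>/g, '')
        .replace(/<noinclude>[\s\S]*?<\/noinclude>/g, '');
    var seen = {};
    var re = /\{\{\{([^{}|]+)(?:\||\}\}\})/g;
    var m;
    while ((m = re.exec(text)) !== null) {
        // Trim surrounding whitespace; deduplicate via the seen map.
        seen[m[1].replace(/^\s+|\s+$/g, '')] = true;
    }
    var names = [];
    for (var n in seen) {
        names.push(n);
    }
    return names;
}
```

Running it on something like `'{{{genre|}}} <nowiki>{{{x}}}</nowiki> {{{1}}}'` yields the named parameter `genre` and the positional parameter `1`, while ignoring the nowiki'd one.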
Cheers,
Magnus
I've been spending much of the last few work days tidying up an update to
our deployed codebase, which has been several weeks behind development for
most components.
I'll want to start deploying this in the morning (Pacific time), so we'll
have most of the day to poke around and fix up any problems on the sites...
it's gonna be fun!
This'll primarily bring a lot of under-the-hood improvements, which'll let
us start rolling out other fun things over the coming days/weeks including:
* Daily updates of interface translations (LocalisationUpdate)
* Updated PDF collection/book UI
* major cleanup of maintenance scripts
* foundation work for improved uploading support
* foundation work for new UI stuff (JS2 / add media wizard / cool stuff
from Usability team)
For those of you poking at the code, I've got the pre-deployment code
currently sitting in the wmf-deployment-work branch; this'll get folded
back over wmf-deployment when we're ready to go.
I believe all custom hacks from the current wmf-deployment branch have
been either copied over or generalized and merged via trunk... Note I've
held back updates on ProofreadPage and OggHandler for the moment, and we
won't deploy the new JS2 code yet until we've shaken things down some more.
If there are any *critical* trunk fixes from the last few days (since
r55160 trunk branch point) that need to be forward-ported before we start,
or any other surprises, let's make sure we know about it soon. :)
-- brion
My attachment did not make it into the JS2 design thread... and that
thread is in summary mode, so here is a new post about the HTML output
question. Which of the following constructions is easier to read and
understand? Is there some tab-delimited format we should use to make
the jQuery builder format easier to read? Are performance considerations
relevant? (Email is probably a bad context for comparison, since tabs
will get messy and there is no syntax highlighting.)
Tim suggested that in a security review context, "dojBuild"-type HTML
output is more straightforward to review.
I think both are useful, and I like jQuery-style building of HTML since
it gives you direct syntax errors rather than HTML parse errors, which
are not as predictable across browsers. But sometimes, performance-wise
or from a quick "get it working" perspective, it's easier to write out an
HTML string. Also, I think tabbed HTML is a bit easier on the eyes for
someone who has dealt a lot with HTML.
Something that's not fun about the jQuery style is that there are many
ways to build the same HTML string, using .wrap or any of the other
dozen jQuery HTML manipulation functions... so the same HTML could be
structured very differently in the code. Furthermore, a jQuery chain can
get pretty long or be made up of lots of other vars, potentially making
it tricky to rearrange things or identify which HTML is coming from where.
But perhaps that could be addressed by having jQuery HTML construction
conventions (or a wrapper that mirrored our PHP-side HTML construction
conventions?)
In general I have used the HTML output style, but I did not really think
about it a priori, and I am open to transitioning to more jQuery-style
output.
Here is the HTML; you can copy and paste it in. On my system, in a
Firefox nightly, the string builder hovers around 20ms while the jQuery
builder hovers around 150ms (hard to say what would be a good target
number of DOM actions, or what is a fair test...). jQuery could, for
example, output to a variable instead of directly to the DOM, shaving
10ms or so, and many other tweaks are possible.
<html>
<head>
<title>jQuery vs str builder</title>
<script type="text/javascript"
src="http://jqueryjs.googlecode.com/files/jquery-1.3.2.min.js"></script>
<script type="text/javascript">
var repeatCount = 200;

function runTest( mode ){
    $('#cat').html('');
    var t0 = new Date().getTime();
    if( mode == 'str' ){
        doStrBuild();
    } else {
        dojBuild();
    }
    $('#rtime').html( (new Date().getTime() - t0) + 'ms');
}

// Build one big HTML string, then append it in a single call.
function doStrBuild(){
    var o = '';
    for( var i = 0; i < repeatCount; i++ ){
        o += '<span id="' + escape(i) + '" class="fish">' +
                '<p class="dog" rel="foo">' +
                    escape(i) +
                '</p>' +
            '</span>';
    }
    $('#cat').append(o);
}

// Build each element with jQuery and append it to the DOM directly.
function dojBuild(){
    for( var i = 0; i < repeatCount; i++ ){
        $('<span/>')
            .attr({
                'id' : i,
                'class' : 'fish'
            })
            .append( $('<p/>')
                .attr({
                    'class' : 'dog',
                    'rel' : 'foo'
                })
                .text( i )
            ).appendTo('#cat');
    }
}
</script>
</head>
<body>
<h3>jQuery vs dom insert</h3>
Run Time: <span id="rtime"></span><br>
<a onClick="runTest('str');" href="#">Run Str</a><br>
<a onClick="runTest('jquery');" href="#">Run jQuery</a><br>
<br>
<div id="cat"></div>
</body>
</html>
--michael
On http://wikireality.ru (a very useful site about the Russian Wikipedia
and wiki projects), batches of bad external links have started appearing
regularly on user pages, probably added by a bot. Can I stop this?
(MediaWiki 1.15 with AbuseFilter & Title Blacklist)