Hi,
This is my first time posting so please bear with me (if this isn't the
right email list to post to, please direct me to the correct one). I
just want to say that I'm a big fan of Wikipedia and I think the
developers of MediaWiki did an amazing job.
I have recently been trying to set up MediaWiki on my website but so far
have been unsuccessful. I got to the configuration screen and filled out
the required information about users and my database. Afterwards, it
told me that it had created "LocalSettings.php" and that I should move
it to the parent directory. However, when I checked "LocalSettings.php",
there was no text in it (0 KB). I moved it to the parent directory
anyway and then tried to run the wiki.
Instead of the main screen, I received an error that it could not log
into my database as root@localhost (which isn't what I specified). So
my hunch is that the installation didn't create "LocalSettings.php"
properly.
Thank you very much for reading this.
Cheers,
Kevin Li
2A Systems Design Engineering
University of Waterloo
Warning: Long and depressing text follows. Don't read it at home, save
it for work instead. Better spend a nice evening with your girlfriend.
(Then again, this list is probably like slashdot, so forget about the
imaginary girlfriend and continue reading ;-)
I thought I had it all figured out.
I created a demo version for data entry in a wiki-like fashion. It uses
a "one-table-fits-all" SQL schema, which some of you had worries about.
No problem. If someone else writes a better data entry mechanism, I'm
all for it. As far as I'm concerned, the WikiData site should be like a
black box to the outside, serving data to wikipedias and everyone else
who wants it. What's going on inside is only for those who enter the data.
Today, I finished creating a rough draft for the query (the wikipedia)
side of the bargain. Instead of creating Yet Another Wikimarkup [{(like
this)}] I figured out that we should separate the query and the display
part, and hide the query part within the template system. Goes like this:
{{speciesdata:Foobus Barus}}
in the article; [[Template:Speciesdata]] looks like this:
<data>
<query database="wikispecies" result="r1">Some sort of XQuery or SQL
query for wikispecies for {{{1}}}</query>
Some species data table using <r1>latin_name</r1>, <r1>name_en</r1>,
<r1>family</r1> etc.
</data>
For creating lists (like "all species within the family 'Foobus'"), a
<foreach> element could be used.
The <data> thingy would be a plugin ("plugins GOOD!"), but one that
returns wikitext to be parsed further. It would handle the <query> and
<foreach> tags etc.
So, we'd have *one* ugly m..........r of a <data><query>-style template,
which, once created, would rarely be edited again. All the powerful,
functional ugliness that could scare newbies away would be hidden
inside the template.
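To make the idea above concrete, here is a toy sketch (in Python, not the actual plugin, which would be PHP) of how such a <data> plugin might expand a template body: substitute the {{{1}}} parameter into the query text, hand the query to the data site, and replace the <r1>field</r1> references with values from the result row. The run_query callback and all naming details are my own invention, not anything that exists.

```python
import re

def expand_species_template(template_body, arg1, run_query):
    """Toy expansion of a <data> template body (hypothetical sketch).

    run_query(database, query_text) is assumed to return one result
    row as a dict mapping field names to values.
    """
    # Substitute the template parameter {{{1}}} into the body.
    body = template_body.replace("{{{1}}}", arg1)

    # Find the <query> element and run it against the data site.
    m = re.search(r'<query database="(\w+)" result="(\w+)">(.*?)</query>',
                  body, re.S)
    database, result_name, query_text = m.groups()
    row = run_query(database, query_text)

    # Drop the <query> element itself from the output wikitext.
    body = body[:m.start()] + body[m.end():]

    # Replace <r1>field</r1>-style references with the row's values.
    body = re.sub(r'<%s>(\w+)</%s>' % (result_name, result_name),
                  lambda f: row[f.group(1)], body)
    return body.strip()
```

The returned string would then be handed back to the normal wikitext parser, matching the "returns wikitext to be parsed further" behaviour described above.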
Yes, I got it all figured out.
Then it hit me.
As good "wiki-fiddlers" (thanks so much, Register!) we would like to see
every change in WikiData on the wikipedia pages real soon. Like, now.
So the information that something changes, and what changed, has to pass
from the data site to the display site. There are two ways to do that:
push or pull.
PUSH means the data site will notify the display site that something has
changed, and the display needs to be updated. For that, the data site
has to know which pages of the display site are affected by which
change. Then, it has to notify the display site of this. Bad things:
* Needs basically a cache of *all* queries *ever* asked of the data
site, as well as their results
* Has to recalculate *all* of these after *every* change to find which
queries produce different results
* Won't work if the display site is offline
* Won't work well with non-wikipedias
That can't be it.
PULL means the display site asks the data site if anything has changed,
which basically means rerunning a query. Which means, doing this for
*every* pageview, even for anons. Which means, all caching variants,
including squids, are going bye-bye. Additionally, for every page view,
the display site has to wait for the data site to complete the query.
Think wikipedia is slow today? Think again...
That can't be it, either.
Oh, sure, we can cache the queries with results on the display site, or
only update the data once a day/week, but then we won't be wiki (=quick)
anymore, no? Will this be the price to pay?
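That "update once a day/week" compromise could be sketched as a pull-with-expiry cache on the display site: each query result is stored with a timestamp, and the data site is only asked again once the entry is older than some TTL. A minimal sketch, with all names assumed:

```python
import time

class QueryCache:
    """Pull-with-expiry: rerun a query against the data site only
    when the cached result is older than ttl_seconds (sketch)."""

    def __init__(self, run_query, ttl_seconds):
        self.run_query = run_query      # callback that asks the data site
        self.ttl = ttl_seconds
        self.cache = {}                 # query text -> (timestamp, result)

    def get(self, query):
        now = time.time()
        entry = self.cache.get(query)
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]             # fresh enough: no round trip
        result = self.run_query(query)  # stale or missing: ask the data site
        self.cache[query] = (now, result)
        return result
```

This keeps squids and per-pageview cost intact at the price of staleness, which is exactly the "won't be wiki (=quick) anymore" trade-off lamented above.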
I think I'll have that autumn-depression now, please...
Magnus
Some time ago, I introduced alternative names for some of the SQL
wrapper functions, in an attempt to atone for the horrible naming scheme
I set up. It was derived from the previous non-OO naming scheme, and it
really shouldn't have been. But I didn't make it clear that I intended
the old names to be obsolete, and I didn't systematically update the
code. I'm now attempting to correct that. I've removed the old names
completely and updated all the code, in HEAD.
The changes are as follows:
getField -> selectField
getArray -> selectRow
insertArray -> insert
updateArray -> update
The "array" qualifier was to distinguish from functions which took raw
SQL clauses instead of other data. Those functions are obsolete. The
current functions encapsulate that functionality by detecting the
variable type.
The following SQL wrappers are unchanged:
select
insertSelect
replace
delete
deleteJoin
The set() function is obsolete.
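The "detecting the variable type" encapsulation mentioned above might look roughly like this. This is a Python sketch of the idea, not the actual MediaWiki PHP code: the wrapper accepts either a raw SQL clause (a string) or a structured condition array (here a dict), and branches on the type. The quoting here is naive and purely illustrative.

```python
def make_where(conds):
    """Build a WHERE clause from either a raw SQL string or a dict of
    column -> value equality conditions (the type-detection trick)."""
    if isinstance(conds, str):
        return conds  # caller passed a raw SQL clause through unchanged
    # dict: build "col1 = 'v1' AND col2 = 'v2'" with naive quoting
    return " AND ".join("%s = '%s'" % (col, val)
                        for col, val in conds.items())

def select_field(table, field, conds):
    """Rough analogue of getField -> selectField: here it only builds
    the query string, to keep the sketch self-contained."""
    return "SELECT %s FROM %s WHERE %s" % (field, table, make_where(conds))
```

With this shape, the old raw-clause call sites keep working while new callers can pass structured conditions, which is why the separate raw-SQL variants could be retired.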
-- Tim Starling
It has been decided that Klingon interwiki links should not work. I
have always hated this so-called compromise. But now I find that they
actually DO work, only they are shown as normal internal links rather
than as interwiki links. Isn't this contrary to the intention of the
decision? Now they are even MORE visible than normal interwiki links.
Furthermore, as I have said before, I find this so-called compromise
worse than both of the things it is supposed to be a compromise
between.
All in all, could we please get to treat Klingon interwiki links the
normal way again?
Andre Engels
Is the Go button really necessary?
I think it is highly confusing and cluttering. I would prefer to have just
Search. I have seen several MW newbies trying to figure out whether they
should click Go or Search.
If you think it should stay, its label must change to something more
intuitive. But the best thing would be to get rid of it in 1.3.8 or 1.4.
--
NSK
Admin of http://portal.wikinerds.org
Project Manager of http://www.nerdypc.org
Project Manager of http://www.adapedia.org
Project Manager of http://maatworks.wikinerds.org
Hi,
tonight at 4:00 am (MESZ) we fetched a "cur_table.sql.bz2" from
http://download.wikimedia.org/archives/de/ which contained only approx.
97'000 articles. The previous version contained 154'000 articles in the
cur-table.
Meanwhile - six hours later - the file has completely disappeared from
the site (Error 404).
I rolled back our mirror (http://lexikon.rhein-zeitung.de/) to the
previous version from Oct 13. What happened to the archive? Does this
relate to the "slave out of sync" problem discussed above?
Sincerely
jo
Are you considering adding a "History Pruning" feature in MW1.4? Doing it with
SQL isn't very easy.
History sometimes becomes very big and uses too much disk space. Why not get
rid of it?
In NerdyPC and Adapedia the authors' names and the changelog are included in
the article text, so that we can delete the history and free up disk space
whenever we want.
I just think this scheme is more efficient.
--
NSK
Hello ...
I am wondering if there are any plans or discussions about enhancing
the security of the wiki on a user/group basis, for example to let an
editor protect his pages from being viewed or modified by users other
than the ones he designates (with a special tag or something) ... I
think this is a feature that is not very useful on wikipedia sites, but
it is on my wikis, which are used mainly for project development.
thanks, regards
--
Problems with Windows - Reboot.
Problems with Linux - Be root.
Vic (aKa CoffMan)
vic(a)wickle.com
wickle(a)gmail.com