Since the update to version 1.19.2, links to our local files (in the form file:///) no longer work: there is no error message, and the browser (IE) does not start any action, although the line $wgUrlProtocols[] = "file:///" is in LocalSettings.php. What can I do?
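(One thing to check, offered as a sketch rather than a confirmed fix: the conventional whitelist entry uses two slashes, with the third slash belonging to the path. And if MediaWiki does render the link, IE's security zones may silently refuse to follow file:// links from an http page, which would match the "no action" symptom.)

```php
# LocalSettings.php (sketch): register the scheme with two slashes;
# wikitext links then carry the third, e.g. [file:///C:/docs/a.txt].
$wgUrlProtocols[] = "file://";
```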
I installed the Cite files from SVN and added the "require_once" line to
the LocalSettings.php file. I tried a simple reference in the Sandbox, but
nothing seemed to happen: the <ref> tags are still there and
<references /> is empty.
What do I need to do?
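(For comparison, a minimal setup of that era, assuming the checkout landed in extensions/Cite; if Special:Version does not list Cite afterwards, the require_once path is wrong or the line is never reached.)

```php
# LocalSettings.php (sketch): path assumes the SVN checkout was placed
# in extensions/Cite. Check Special:Version to confirm it loaded.
require_once( "$IP/extensions/Cite/Cite.php" );
```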
--
D. E. (Steve) Stevenson
(Almost emeritus) Associate Professor
Director, Institute for Modeling and Simulation Applications.
Clemson University
steve at clemson dot edu
Anyone who has ever looked into the glazed eyes of a soldier dying on the
battlefield will think hard before starting a war. -Otto von Bismarck,
statesman (1815-1898)
I have a wiki running an old installation of version 1.16.4. I'd very
much like to be able to track releases using Git as described in the
manual; however, I'm confused about how to do this. The manual says:
"If using Git, export the files into a clean location. Replace all
existing files with the new versions, preserving the directory
structure. The core code is now up to date."
What does "export the files" mean? Does this mean to clone the core
repo somewhere first?
Is it really as simple as replacing files with new versions? What
about the hidden files that designate my directory as a clone of core,
do I copy those too?
I'm sure I'm making this more complicated than it really is, but I
really want to be able to upgrade in the future with a simple git
pull.
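(A sketch of what "export" usually means with Git: produce a clean tree without the .git metadata. The directory names and the branch are assumptions; the remote URL is the Gerrit one from this era.)

```shell
# Clone core once, somewhere outside the web root:
git clone https://gerrit.wikimedia.org/r/mediawiki/core.git mediawiki-core
cd mediawiki-core
git checkout REL1_19        # whichever release branch you want to track

# "Export the files" = write out a clean tree with no .git directory:
git archive REL1_19 | tar -x -C /tmp/mediawiki-export

# Then copy /tmp/mediawiki-export over the wiki, preserving
# LocalSettings.php, images/, and any locally installed extensions.
```

If instead you keep the live wiki itself as a clone (including the hidden .git directory), future upgrades really are a `git pull` plus `php maintenance/update.php`; the export route just keeps Git metadata out of the web root.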
Thanks
Bill
According to phpMyAdmin, my MySQL server, which runs only MediaWiki,
accumulated
Handler_read_rnd 280.3 k
Handler_read_rnd_next 367.8 M
in two days.
How can I reduce these two values?
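(For what it's worth: Handler_read_rnd_next counts rows read during full table scans, so a large value usually means frequent unindexed queries. A sketch of how to confirm that, assuming shell access to the MySQL client:)

```shell
# Watch the counters and log the offending statements:
mysql -e "SHOW GLOBAL STATUS LIKE 'Handler_read_rnd%';"
mysql -e "SET GLOBAL slow_query_log = ON;"
mysql -e "SET GLOBAL log_queries_not_using_indexes = ON;"
# ...then run EXPLAIN on whatever shows up in the slow query log
# and add indexes accordingly.
```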
Hello Friends at Media Wiki,
We have a web site using MediaWiki 1.19.1 where one can look up words.
We have entered test words in Sanskrit into the database. A few use the
letter 'c' as the first letter (for lookup purposes), followed by the
Sanskrit letters of the word; some are Sanskrit letters only. When I query
the database for words starting with 'c', it returns the records and
displays the si_title field, which includes the test words that start with
the letter 'c' and whose remaining letters are Sanskrit.
The query is as follows:
$letter = mysql_real_escape_string($_GET['letter']);
$result = mysql_query(
    "SELECT si_page, si_title FROM saa_searchindex" .
    " WHERE LEFT(si_title, 1) = '$letter' ORDER BY si_title");
I put a Sanskrit letter on the page ( code shown below) as a link to run
the same query as above and it picks up and displays all words starting
with that Sanskrit letter.
printf('<a href="http://192.168.0.248:8087/skins/AlphaSearch.php?letter=' .
"\xE0\xA4\x85" .'" class="myclass">%1$s</a> ', "\xE0\xA4\x85");
When i do either of these queries all the words found are put in a table
on the web page as links.
while ( $row = mysql_fetch_array($result) ) {
    $word = $row["si_title"];
    $page = $row["si_page"];
    printf('<td><a href="http://192.168.0.248:8087/index.php?search=' .
        $word . ' " &go=GO HTTP/1.1">%s</a></td> ', $word);
}
Then, by clicking on the link, that word is looked up and displayed. In
English this works fine. For Sanskrit words the link displays entirely in
Sanskrit characters, but clicking on the link never finds the Sanskrit
word in the database. The Sanskrit character I put on the page as a link
was inserted via its UTF-8 byte code. When a user types in a Sanskrit
letter or word, or when the Sanskrit word is displayed as above, it cannot
be found by the MediaWiki query.
Any ideas on how to solve this?
Also, can anyone tell me where the actual SQL query behind
index.php?search= lives in the MediaWiki code files?
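(Two things stand out in the link code above, offered as a sketch rather than a diagnosis: the href string is malformed, since ' " &go=GO HTTP/1.1">' closes the quote early and leaves junk inside the tag, and the raw UTF-8 bytes of $word are pasted into the URL unencoded. Percent-encoding makes the round trip through the browser lossless; the host and port are just the ones from the messages above.)

```php
<?php
// "\xE0\xA4\x85" is the UTF-8 encoding of the Devanagari letter A.
$word = "\xE0\xA4\x85";

// Percent-encode the bytes before placing them in a URL:
$href = 'http://192.168.0.248:8087/index.php?search=' . rawurlencode($word);
echo $href, "\n";
// prints ...index.php?search=%E0%A4%85

// PHP decodes it back to the identical bytes on the receiving end:
var_dump(rawurldecode(rawurlencode($word)) === $word);
```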
Thank you so much for your help, Markandeya
Hi,
Lots of extensions require me to issue a shell command:
php maintenance/update.php
...but I don't have shell access to my mediawiki installation; I only
have FTP access.
So far I haven't needed to deploy any extensions that required this,
but now I am wanting to deploy several extensions that have this
requirement in their installation procedure.
How do people typically deal with this?
Why does this script need to be run from the shell and not from an
administrative UI?
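(For what it's worth, one common answer from this era: since MediaWiki 1.17 there is a web updater at /mw-config/index.php on your wiki, which runs the same schema updates as maintenance/update.php and needs no shell access. It authenticates with the upgrade key the installer wrote into LocalSettings.php; a sketch, with a placeholder value:)

```php
# LocalSettings.php (sketch): the web updater at /mw-config/ prompts
# for this key before it will run. The value below is a placeholder;
# your installer generated the real one.
$wgUpgradeKey = "0123456789abcdef";
```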
Thanks.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Dear Friends of MediaWiki,
Problem: I can't get any search results from my Sanskrit word lookup using
index.php, as below.
printf('<a href="http://192.168.0.248:8087/index.php?search=' . $word . ' "
&go=GO HTTP/1.1">%s</a> ', $word);
It queried from the same display of Sanskrit words as described below.
Using my own query, described below, it works; using index.php and its
search, it returns no results.
I found I could make a query that worked with the displayed Sanskrit words
looked up in the database. That lookup retrieved the records by searching
for words starting with a specific letter, which I supplied as UTF-8 code,
i.e. "\xE0\xA4\x85" for a Sanskrit letter. So, by clicking on the displayed
Sanskrit words, I could run a query that returned the word and its
definition.
"SELECT si_page, si_title FROM saa_searchindex WHERE si_title = '" .
$_GET['search'] . "' order by si_title ");
$search comes from
printf('<tr><td><a href="
http://192.168.0.248:8087/skins/mySearch.php?search=' . $word . ' " &go=GO
HTTP/1.1">%s</a></td>', $word);
$word comes from the query searching words starting with a specific letter.
while ( $row = mysql_fetch_array($result) ) {
$word = $row["si_title"];
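(A sketch of a link that hands the word to MediaWiki's own search rather than a raw index.php?search= URL: Special:Search with go=Go jumps straight to an exact title match when one exists. rawurlencode and htmlspecialchars are stock PHP functions; the host and port are from the messages above. It may also matter that si_title in the searchindex table holds a normalized form of the title, so exact comparisons against raw user input can fail even when encoding is right.)

```php
<?php
// Any si_title value from the earlier query; the Devanagari letter A here.
$word = "\xE0\xA4\x85";

// Hand the word to MediaWiki's own search machinery. Percent-encode the
// URL parameter and HTML-escape the visible text.
printf('<td><a href="http://192.168.0.248:8087/index.php?title=Special:Search&amp;search=%s&amp;go=Go">%s</a></td>',
    rawurlencode($word), htmlspecialchars($word));
```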
So, rationalwiki.org has been *much* faster and more usable with a
coupla squids and a load-balancer in front of the
Apache/Lucene/database node. (We could probably cope with just one
squid, but Trent wanted to experiment.) The nodes are all Ubuntu 10.04
Linodes, the software manually kept up to date.
Our problem now is that the Apache box sometimes ... just goes nuts,
fills memory with Apache processes, then it goes into swap, then the
oom-killer comes out to play and we have to work out what it's killed
or (quicker) reboot the node.
We have had occasional load spikes, where the load-balancer sees
someone or something hammering it at 300 hits/sec or so, but they
*don't* always coincide with Apache going nuts. The squids don't show
any excess load during these episodes either.
If we happen to catch it when it's in swap but before oom-killer comes
out, apache2ctl restart brings things back to normality.
The Apache node has 4GB memory, about 3GB of that being free/cache in
normal operation.
We have NO IDEA what is happening or why. It last happened around
three days ago. Since then it's been lovely, but it always is until it
falls over. Clues welcomed.
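(Not a diagnosis, but a common guard for exactly this failure mode: cap Apache so its worst case fits in RAM, so a runaway stays slow instead of swapping and waking the oom-killer. The numbers below are assumptions; measure your actual per-child PHP memory first.)

```apache
# apache2.conf (prefork MPM sketch): with roughly 50 MB per PHP-enabled
# child and about 3 GB to spare, cap at about 60 workers, and recycle
# children so any leaks cannot accumulate.
<IfModule mpm_prefork_module>
    MaxClients            60
    ServerLimit           60
    MaxRequestsPerChild 1000
</IfModule>
```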
- d.