hello,
all clusters (s1, s2 and s3) are now available. a new column, 'server', has
been added to the toolserver.wiki table, saying which cluster each database
is on. you can then connect to sql-s# (e.g. sql-s2 for s2) to use that
database.
the toolserver database and user databases are still on 'sql'.
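for php tools, a minimal sketch of picking the right host might look like the following (the helper name and surrounding code are my illustration; the toolserver.wiki table, its 'server' column, and the sql-s# aliases are as described above):

```php
<?php
// Hypothetical helper: build the host alias for a given cluster number.
function host_for_cluster($server) {
    return 'sql-s' . $server;
}

// Sketch only: query toolserver.wiki on 'sql' to find the cluster a
// database lives on, then connect to the matching sql-s# host.
// (Credentials come from your own account as before.)
// $link = mysql_connect('sql');
// $row  = mysql_fetch_row(mysql_query(
//     "SELECT server FROM toolserver.wiki WHERE dbname = 'dewiki_p'", $link));
// $db   = mysql_connect(host_for_cluster($row[0]));

echo host_for_cluster(2); // prints "sql-s2"
```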
some people have asked to replicate user databases or commons between the two
servers. i'll be looking next at whether this is feasible.
- river.
Hello all,
as I read in the latest Debian news email, the Debian project plans to remove
php4 from testing and unstable in the future [1]. Because hemlock runs
testing, php4 will be removed there too.
I have no idea how long that will take (normally, Debian moves rather
slowly). So if one (or more) of your tools still uses php4, now is a good
time to port it to php5.
Sincerely,
DaB.
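As one concrete example of the kind of change porting involves (my illustration, not from the newsletter): php5 introduced the unified __construct() method and property visibility, where php4 used a constructor named after the class and "var":

```php
<?php
// php4 style (deprecated in php5):
//   class Tool { var $name; function Tool($name) { $this->name = $name; } }

// php5 style:
class Tool {
    private $name;

    public function __construct($name) {
        $this->name = $name;
    }

    public function getName() {
        return $this->name;
    }
}

$t = new Tool('example-tool');
echo $t->getName(); // prints "example-tool"
```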
[1] http://www.us.debian.org/News/weekly/2007/06/index.en.html
hello,
enwiki (s1) is now available on the new db server. the hostname you can use
to connect is "sql-s1" (i have also added aliases for sql-s2 and sql-s3).
user/password is the same; your existing account should work. please tell me
if there are any problems.
later today, i will reinstall zedler and reimport s2 and s3 (user databases
will be backed up first). the files in hemlock:/mnt/aux0 will not be backed
up. during that time, these clusters will be completely unavailable, but
given the current state, i don't think that's so bad. hopefully this will
only take a few hours in total.
- river.
Hello all,
I'm in the process of building my first tool on ts, a text copyright
violation checker similar to copyscape.com. However, it is currently so
resource-intensive that I'm worried I'll crash the toolserver with it. Basically,
it is a PHP script doing the following:
- 7 includes from disk (this could be reduced to three)
- 12 HTTP requests to external servers (although all requests are
fully cached with 24 hour expiration)
- 12 disk reads (of above requests) regardless of caching
- 300-350 (approx) regexps (mostly preg_replace() calls)
- A whole lot of iteration.
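For reference, the 24-hour disk cache described above can be kept quite cheap; here is a minimal sketch of what I mean (the helper name and file layout are illustrative, not my actual code), using filemtime() to test expiry:

```php
<?php
// Fetch a URL through a simple disk cache with a 24-hour expiry.
// $dir and the helper name are illustrative placeholders.
function cached_fetch($url, $dir, $ttl = 86400) {
    $file = $dir . '/' . md5($url) . '.cache';
    if (file_exists($file) && (time() - filemtime($file)) < $ttl) {
        return file_get_contents($file);   // cache hit: one disk read
    }
    $body = file_get_contents($url);       // cache miss: one HTTP request
    if ($body !== false) {
        file_put_contents($file, $body);   // refresh the cache
    }
    return $body;
}
```

Bundling all twelve cached responses into a single serialized file would also cut the twelve per-run disk reads down to one.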
A fully cached script execution runs in about 1.5 seconds, but the moment
the script has to go back to the web, that increases to anywhere from 10 to
15 seconds. On one of my tests, I even got a "*Fatal error*: Allowed memory
size of 16777216 bytes exhausted (tried to allocate 3492940 bytes)".
Compared to many of the other tools on the server, I was afraid this could
be simply too resource intensive. Can anyone tell me if this would be
appropriate for the toolserver, or if I should host it somewhere else?
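On the memory error: the 16777216 bytes in that message is PHP's default 16M memory_limit. A script can raise its own limit at runtime, though whether a higher ceiling is acceptable on the toolserver is a policy question for the admins, not a technical one (the value below is just an example):

```php
<?php
// Raise this script's memory ceiling from the 16M default.
// Ask the admins before relying on this on a shared server.
ini_set('memory_limit', '32M');
echo ini_get('memory_limit'); // prints "32M"
```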
-- Draicone