Jimmy Wales wrote:
Lars Aronsson wrote:
We did discuss this before, and someone guessed that appending a
line to a file would take too much time, but I don't think anybody
tested this. The append to the file could be made conditional, so that
only accesses that took more than 2 (or 5) seconds are logged.
In my opinion, this would work just fine. At bomis, which is run on
fastcgi/perl scripts cruder than you can possibly imagine, I
frequently hand-tune things by writing logfiles in this fashion. At
least in perl/fastcgi, appending loglines to a file is fast enough
that it plays only the very tiniest role in the overall performance of
the site.
It's a very useful tool for getting a mental picture of what's going on.
--Jimbo
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)ross.bomis.com
http://ross.bomis.com/mailman/listinfo/wikitech-l
I agree: logfile writes will normally be buffered in RAM by the OS disk
cache unless the file is explicitly sync'd, so the overhead is perhaps
one disk hit per 5 seconds, at an average cost of a seek, half a
rotation, and a single block write: perhaps 15 ms in every 5 s, or 0.3%
of total disk performance.
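The distinction between the cheap cached append and the explicit sync can
be made concrete; a minimal sketch, assuming Python (the function name and
`sync` flag are mine, not from the thread):

```python
import os

def append_line(path, line, sync=False):
    """Append one log line to a file.

    Without sync, the write lands in the OS page cache and is flushed
    lazily -- the caller pays almost nothing. With sync, flush and
    fsync force the physical disk write described above.
    """
    with open(path, "a") as f:
        f.write(line)
        if sync:
            f.flush()
            os.fsync(f.fileno())
```

Calling this with `sync=False` on every request is the cheap case; only
the kernel's periodic flush actually touches the disk.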
Using Apache's log rotation system is good too: having, say, 14 logs and
rotating them daily will keep two weeks of logs. At 100,000 hits/day and
80 chars/log entry, each logfile is 8 Mbytes, and the entire set of logs
will only take 112 Mbytes on disk: hardly noticeable on modern systems
with tens of gigabytes of spare disk space.
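Python's standard library offers an equivalent of this daily-rotation
scheme, for what it's worth; the sketch below just restates the arithmetic
above (the file name is illustrative, and the handler is an analogue of
Apache's rotation, not the same mechanism):

```python
import logging
import logging.handlers

# Rotate at midnight; the current file plus 13 old ones = 14 daily logs,
# i.e. two weeks of history.
handler = logging.handlers.TimedRotatingFileHandler(
    "access.log", when="midnight", backupCount=13)
logger = logging.getLogger("access")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Sizing check from the figures above: 100,000 hits/day at 80 bytes/entry.
per_day_mb = 100_000 * 80 / 1_000_000   # 8 MB per daily logfile
total_mb = per_day_mb * 14              # 112 MB for the whole set
```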
Neil