Hi,
I just uploaded a logo with the new wiki-sphere for the Volapük wiki. How do we make it go to the corner? Not that it's misbehaving or anything; it's just hanging at the bottom of the page right now...
The Latin wiki also has the new "VicipaediA" logo, but as I found out, the page is locked and I don't have admin privileges there. I've uploaded it to my own domain at http://bowks.net/wiki/la/vicipaedia.png along with other images that also belong in the wiki database...
Could someone grant me adminship on la.wiki? I've been there since the wiki-dot-com days as User:ILVI, and I'm the one who designed the look of the present homepage at la.wiki.
With thanks and regards,
Jay B.
"Timwi" <timwi(a)gmx.net> schrieb:
> Andre Engels wrote:
>
> > I checked, and on de: there are now over 100
> > indefinitely blocked IPs blocked by Proxy blocker
>
> Then you should probably unblock them. I unblocked all the
> indefinitely blocked IPs on en after it was switched to banning them only
> for 7 days. I would have thought the other Wikipedias would do the same.
Who, me? I'm not a sysop on de:. And expecting people on all those Wikipedias to conclude that they should unblock because a mailing-list message said the way of blocking was going to change is rather naive. There are enough Wikipedias where the admins don't even know what Proxy blocker is, let alone that its workings have changed, let alone that the change forces them to take action.
Andre Engels
Geoffrin is physically installed and presumably will go into service
when the developers (Brion, esp.) get the time. Brion had mentioned
possibly this weekend, but of course *I* think he should take his
time. :-)
So now we have a pretty sweet setup, but let me know: what are our
next needs? When should I start thinking about shopping? What will
we want to get?
Random guesses are discouraged -- I think the most productive recommendations will be based on specific, empirical information about bottlenecks or points of failure that have formed or will soon form.
I know coronelli has a stability problem, and we have money in the
bank. Perhaps we could take coronelli out of rotation?
--Jimbo
I noticed another problem with Proxy blocker: It seems that every time
it blocks an IP, a new userid is created - just check
http://de.wikipedia.org/w/wiki.phtml?title=Spezial:Listusers&limit=250&offs….
Apart from that, it would be good to have Proxy blocker unblock all the addresses it has blocked 'indefinitely'. Those blocks still exist on many Wikipedias. Proxy blocker is doing the great majority of blocks at the moment, by the way. Again looking at de:, I find:
* over 100 indefinite blocks by Proxy blocker
* 66 one-week blocks by Proxy blocker
* 1 normal sysop block (for 24 hours).
Andre Engels
"Timwi" <timwi(a)gmx.net> schrieb:
> Jens Frank wrote:
>
> > Does it make sense to block these permanently?
>
> At least on en, Proxy Blocker no longer blocks anyone permanently. They
> are only blocked for 7 days. After that period, the IP is re-assessed
> when it tries to make its next edit.
>
> This also means that someone who has a dynamic dial-up IP and is blocked
> by Proxy Blocker, can simply dial in again after securing (or disabling)
> their proxy, and they will have a different IP that won't be blocked.
>
> Personally, I think this should be reduced to 24 hours, as dynamic IPs
> can be recycled pretty soon, and Wikipedia is growing enormously in
> popularity.
I think it would be good to automatically unblock all those 'indefinite' blocks by Proxy blocker. I checked, and on de: there are now over 100 indefinitely blocked IPs blocked by Proxy blocker (plus 66 one-week blocks by Proxy blocker, plus 1 one-day block by a normal sysop - which might well be considered out of balance, by the way).
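To illustrate, a one-off cleanup could be as simple as the sketch below. I'm assuming blocks live in an ipblocks-style table where an empty ipb_expiry means 'indefinite' and where the proxy blocker's entries can be recognised by their reason string; the real schema and reason text would need checking first.

import MySQLdb

# Sketch only: bulk-remove the 'indefinite' proxy-blocker blocks.
# Table and column names are assumptions, not verified schema.
conn = MySQLdb.connect(host="localhost", user="wikiadmin",
                       passwd="secret", db="dewiki")
cur = conn.cursor()

# An empty ipb_expiry is assumed here to mean 'blocked indefinitely'.
cur.execute("SELECT ipb_address FROM ipblocks"
            " WHERE ipb_reason LIKE %s AND ipb_expiry = %s",
            ("%proxy%", ""))
doomed = [row[0] for row in cur.fetchall()]
print("About to unblock %d addresses" % len(doomed))

cur.execute("DELETE FROM ipblocks"
            " WHERE ipb_reason LIKE %s AND ipb_expiry = %s",
            ("%proxy%", ""))
conn.commit()
conn.close()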
Andre Engels
I just tried the category feature, and found it broken - again. This must be *at least* the third time I left intact code and found it broken. Please, be a little more careful next time, OK?
I fixed it and put it back in CVS - working, but probably ugly.
Also, I found that "brokenlink" seems to amass multiple identical links each time one saves a page with broken links. I have added DISTINCT to the queries for now, but we'll have to do something about that; see the sketch below.
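DISTINCT only hides the duplicates at query time. The cleaner fix is probably to clear a page's rows before re-inserting them on save (or to put a unique index on the table). A minimal sketch of that pattern - the column names bl_from and bl_to are my guess at the schema:

import MySQLdb

def save_broken_links(conn, page_id, broken_titles):
    # Replace, rather than append to, this page's broken-link rows.
    cur = conn.cursor()
    cur.execute("DELETE FROM brokenlinks WHERE bl_from = %s", (page_id,))
    # set() also removes duplicates within a single save.
    for title in set(broken_titles):
        cur.execute("INSERT INTO brokenlinks (bl_from, bl_to)"
                    " VALUES (%s, %s)", (page_id, title))
    conn.commit()

A unique index over (bl_from, bl_to) would then catch anything that still slips through, at the database level.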
Magnus
Hello.
It seems that one of our trusted users was blocked by Proxy blocker even though his IPs are not open proxies. The IPs the user reported to me are as follows:
220.146.24.126
220.146.22.87
220.146.22.10
I will unblock these addresses, but is it really effective if I do that? I am afraid that the blocker will re-block those addresses as soon as he starts editing. Can I do anything? Or is there anything the user should do? I would appreciate any suggestions.
Thanks for your attention,
Tomos
geoffrin will be coming up very soon on *.237.
226-234 are the 'real' numbers for suda, bart-zwinger.
235 and 236 are being used as aliases, right?
237 looks empty to me.
"Brion Vibber" <brion(a)pobox.com> schrieb:
> On Apr 2, 2004, at 00:53, Jimmy Wales wrote:
> > It shouldn't run more than once per day at first. I'm not sure what
> > their goals are with respect to how often they would *like* to receive
> > it, but daily is a fine start.
>
> It would take hours just to run a complete dump, which would be the
> equivalent of a sizeable fraction of our total daily page views. (Best
> case might be 100ms per page for 240,000 pages =~ 6 hours 40 minutes)
>
> If we're going to run something like this daily, some sort of
> incremental updates are a must, though we can probably get away with
> stuffing the saved data per page in a database or such and slurping it
> back out fairly quickly.
What about having a daily table of all pages that have been changed, removed, or added? We could read that table when the new version is made, and only those pages would need to be in the XML dump. (For slower search engines we should keep the dumps around for a while, so that anyone who doesn't download every day can still catch up on several days at once.) A search engine would then have to do one complete spidering (either by itself or through the XML feed), but after that the XML feed can be much smaller; a rough sketch follows.
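To make the idea concrete, here is how the incremental feed could work. The changelog table, its columns, and the fetch_text helper are all made up for illustration; the cur table names are from memory and should be checked too.

import MySQLdb
from xml.sax.saxutils import escape

def fetch_text(conn, title):
    # Hypothetical helper: current wikitext of a page, by title.
    cur = conn.cursor()
    cur.execute("SELECT cur_text FROM cur WHERE cur_title = %s", (title,))
    row = cur.fetchone()
    return row[0] if row else ""

def write_incremental_dump(conn, since, out):
    # Dump only pages touched since the last run, plus deletions.
    cur = conn.cursor()
    cur.execute("SELECT cl_title, cl_action FROM changelog"
                " WHERE cl_timestamp >= %s", (since,))
    out.write("<dump>\n")
    for title, action in cur.fetchall():
        if action == "removed":
            out.write('  <deleted title="%s"/>\n'
                      % escape(title, {'"': "&quot;"}))
        else:  # 'changed' or 'new': ship the full current text
            out.write("  <page>\n    <title>%s</title>\n" % escape(title))
            out.write("    <text>%s</text>\n"
                      % escape(fetch_text(conn, title)))
            out.write("  </page>\n")
    out.write("</dump>\n")

A search engine that skipped a day would simply ask for a larger 'since' window, which is the same keep-them-for-a-while idea as above.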
As another issue, what do we do with the international aspect? My proposal would be to have separate XML feeds for the larger Wikipedias, and a single combined one for all of the smaller ones, with the cut-off determined by the size of the files in the feed.
Andre Engels