On Tue, Sep 9, 2014 at 8:00 AM, Daniel Kinzler <daniel(a)brightbyte.de> wrote:
> On 09.09.2014 13:45, Nikolas Everett wrote:
>> All those options are less good than just updating the cache, I think.
> Indeed. And that *sounds* simple enough. The issue is that we have to be
> sure to update the correct cache key, the exact one the OutputPage object
> in question was loaded from. Otherwise, we'll be updating the wrong key,
> and will read the incomplete object again, and try to update again, and
> again, on every page view.
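The read-then-write-to-the-wrong-key loop described above can be sketched as a toy (this is hypothetical illustration, not MediaWiki code; the key functions and cache shape are all invented):

```python
# Toy model of the failure mode: the key computed at read time and the key
# computed at write time disagree, so the incomplete entry is never replaced
# and the expensive work is redone on every page view.

cache = {}

def key_at_read(title):            # key as computed when the page is loaded
    return f"pcache:{title}:v1"

def key_at_write(title):           # subtly different key computed at save time
    return f"pcache:{title}:v2"

def view_page(title):
    entry = cache.get(key_at_read(title))
    if entry is None or entry.get("incomplete"):
        entry = {"html": f"rendered {title}", "incomplete": False}
        cache[key_at_write(title)] = entry   # wrong key: stale entry survives
        return entry, True                   # True = did the expensive work
    return entry, False

# Seed an incomplete entry under the read key, then view the page twice.
cache[key_at_read("Foo")] = {"html": "partial", "incomplete": True}
_, worked_first = view_page("Foo")
_, worked_second = view_page("Foo")
```

Both views end up redoing the work, because the "fixed" object is stored under a key nobody reads from.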
> Sadly, the mechanism for determining the parser cache key is quite
> complicated and rather opaque. The approach Katie tries in I1a11b200f0c
> looks fine at a glance, but even if I can verify that it works as expected
> on my machine, I have no idea how it will behave on the stranger wikis on
> the live cluster. Any ideas who could help with that?
No, not really. My only experience with the parser cache was accidentally
polluting it with broken pages one time.
I suppose one option is to be defensive about reusing the key. I mean, if
you record the exact key used to fetch from the parser cache and you had a
cache hit, then you know a put under that same key will be overwriting
_something_ -- namely the entry you actually read.
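A minimal sketch of that defensive pattern (invented names, not the real ParserCache API): have the fetch hand back the exact key it used, and only put under that same key.

```python
# Hypothetical cache wrapper whose fetch() returns the key it actually used,
# so a later save() is guaranteed to target the entry that was read.

class ParserCache:
    def __init__(self):
        self._store = {}

    def fetch(self, title):
        key = f"pcache:{title}:v1"       # stand-in for the real key logic
        return key, self._store.get(key)

    def save(self, key, value):
        self._store[key] = value

pc = ParserCache()
pc.save("pcache:Foo:v1", {"html": "rendered Foo"})  # pretend a parse stored this

key, entry = pc.fetch("Foo")
if entry is not None:                    # only put back on a cache hit
    entry = dict(entry, note="amended")
    pc.save(key, entry)                  # same key the hit came from

_, again = pc.fetch("Foo")
```

The point is simply that the write path never recomputes the key, so it cannot diverge from the read path.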
Another thing - I believe uncached calls to the parser are wrapped in pool
counter acquisitions to make sure no two processes spend duplicate effort.
You may want to acquire that to make sure anything you do that is heavy
doesn't get done twice.
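A rough single-process analogue of that pool-counter idea (names invented; the real PoolCounter coordinates across servers, which this does not): one worker does the expensive parse while concurrent requesters wait and reuse its result.

```python
# Single-flight sketch: the first thread to acquire the per-title lock does
# the work; everyone else blocks, then finds the result already cached.
import threading

cache = {}
locks = {}
locks_guard = threading.Lock()
parse_count = 0

def get_rendered(title):
    global parse_count
    if title in cache:
        return cache[title]
    with locks_guard:
        lock = locks.setdefault(title, threading.Lock())
    with lock:                       # stand-in for pool counter acquisition
        if title not in cache:       # re-check after acquiring the lock
            parse_count += 1
            cache[title] = f"rendered {title}"   # the expensive work
    return cache[title]

threads = [threading.Thread(target=get_rendered, args=("Foo",))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Despite eight concurrent requests, the re-check under the lock means the heavy step runs exactly once.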
Once you start talking about that, it might just be simpler to invalidate
the whole entry...
Another option:
Kick off some kind of cache invalidation job that _slowly_ invalidates the
appropriate parts of the cache, something like how the Varnish cache is
invalidated on template change. That gives you marginally more control
than randomized invalidation.
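Such a slow invalidation job might look roughly like this (a hedged sketch, not the MediaWiki job queue; function and parameter names are invented): walk the affected keys in small batches with a pause between batches, so the backend is not flooded with simultaneous re-parses.

```python
# Batched, throttled invalidation: drop entries a batch at a time, sleeping
# between batches so re-render load is spread out rather than spiking.
import time

def slow_invalidate(cache, keys, batch_size=100, delay=0.0):
    batches = 0
    for i in range(0, len(keys), batch_size):
        for key in keys[i:i + batch_size]:
            cache.pop(key, None)      # drop entry; next view re-renders it
        batches += 1
        time.sleep(delay)             # throttle between batches
    return batches

cache = {f"pcache:Page{n}": "html" for n in range(250)}
done = slow_invalidate(cache, list(cache.keys()), batch_size=100)
```

In production the delay (or a rate limiter) would be tuned to what the parsers can absorb; here it is zero just to keep the sketch fast.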
Nik