Hello, we have a running rendering queue, but Mapnik is rendering nothing and we see no queries on PostgreSQL.
The problem seems to start after rendering some tiles of the power map at zoom level 6, which in my eyes can't be the cause.
I only noticed this because I had pushed 17,000 tiles into the queue to render them systematically, but that doesn't seem to be the reason for the problems we have now.
Greetings Tim
Solved. I did everything as described in TS-1133 (shared memory) and restarted Tirex.
Now it works really fast.
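For reference, the cleanup amounted to something like the following; a minimal sketch, assuming stale SysV shared-memory segments were what TS-1133 is about (the segment ID and the init-script name are placeholders):

# List SysV shared-memory segments with owner and size
ipcs -m
# Remove a stale segment left behind by the crashed process (ID from ipcs)
ipcrm -m 123456
# Restart the render daemon
/etc/init.d/tirex-master restart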
Greetings Tim
On 08.08.2011 22:40, Tim Alder wrote:
Hello, we have a running rendering queue, but Mapnik is rendering nothing and we see no queries on PostgreSQL.
The problem seems to start after rendering some tiles of the power map at zoom level 6, which in my eyes can't be the cause.
I only noticed this because I had pushed 17,000 tiles into the queue to render them systematically, but that doesn't seem to be the reason for the problems we have now.
Greetings Tim
On 09.08.2011 21:55, Tim Alder wrote:
Solved. I did everything as described in TS-1133 (shared memory) and restarted Tirex.
Now it works really fast.
That usually means that Tirex crashed really hard and had no chance to clean up. Looking at the syslog and the munin memory graphs would help in locating the reason for these crashes.
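A quick sketch of where to start looking (log locations differ per system: /var/adm/messages on Solaris, /var/log/syslog on Debian):

# Recent tirex messages around the time rendering stopped
grep -i tirex /var/log/syslog | tail -n 50
# An out-of-memory kill would also show up here
grep -iE 'out of memory|oom' /var/log/syslog | tail -n 20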
Peter
Hello, overnight we rendered 23,000 metatiles. Speed: 10,000 tiles in 3.5 h. I'm happy with that. The buffer hit rate was really good during this time.
This morning I restarted the DB-updating process (diff import incl. expire.rb). This process can take a while. We can't see the DB replag on munin, so we are flying blind. :-(
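Until there is a munin graph, the replag can at least be read off the replication state file by hand; a minimal sketch, assuming a GNU userland and a hypothetical path to the osmosis working directory:

#!/bin/sh
# Replag = now minus the timestamp in the osmosis replication state file
STATE=/path/to/diff-import/state.txt                  # hypothetical location
TS=$(sed -n 's/^timestamp=//p' "$STATE" | tr -d '\\') # strip escaped colons
LAST=$(date -u -d "$TS" +%s)  # GNU date; older versions may need 'T'/'Z' replaced
NOW=$(date -u +%s)
echo "replag: $(( (NOW - LAST) / 3600 )) hours"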
Greetings Tim
Quoting Peter Körner osm-lists@mazdermind.de:
On 09.08.2011 21:55, Tim Alder wrote:
Solved. I did everything as described in TS-1133 (shared memory) and restarted Tirex.
Now it works really fast.
That usually means that Tirex crashed really hard and had no chance to clean up. Looking at the syslog and the munin memory graphs would help in locating the reason for these crashes.
Peter
On 10.08.2011 13:07, Tim Alder wrote:
Hello, overnight we rendered 23,000 metatiles. Speed: 10,000 tiles in 3.5 h. I'm happy with that. The buffer hit rate was really good during this time.
Nice!
This morning I restarted the DB-updating process (diff import incl. expire.rb). This process can take a while. We can't see the DB replag on munin, so we are flying blind. :-(
We're now seeing it here: http://toolserver.org/~mazder/tirex-replag/
Peter
On 10.08.2011 13:42, Peter Körner wrote:
We're now seeing it here: http://toolserver.org/~mazder/tirex-replag/
I also opened access to the log files:
http://toolserver.org/~mazder/tirex-replag/logs/
You can see that it is still importing:
http://toolserver.org/~mazder/tirex-replag/logs/run.log
Peter
Can we switch to hourly updates? We gained 3 hours today, but we still have 20 days of replag ahead of us.
Additionally, we could also stop rendering or dirty-rendering until we are up to date.
Greetings Tim
Quoting Peter Körner osm-lists@mazdermind.de:
On 10.08.2011 13:42, Peter Körner wrote:
We're now seeing it here: http://toolserver.org/~mazder/tirex-replag/
I also opened access to the log files:
http://toolserver.org/~mazder/tirex-replag/logs/
You can see that it is still importing:
http://toolserver.org/~mazder/tirex-replag/logs/run.log
Peter
On 10.08.2011 18:07, Tim Alder wrote:
Can we switch to hourly updates? We gained 3 hours today, but we still have 20 days of replag ahead of us.
I switched the maxInterval from 6 hours to 24 hours. I don't see any point in switching to daily diff files, because downloading and merging the 6 hours' worth of diffs only takes 30 seconds.
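For the archives: that setting lives in the osmosis replication working directory's configuration.txt and is given in seconds. A sketch of the change (the baseUrl line is illustrative, not necessarily what ptolemy uses):

# configuration.txt in the osmosis replication working directory
# maxInterval is in seconds: 6 h = 21600, 24 h = 86400
baseUrl=http://planet.openstreetmap.org/minute-replicate
maxInterval=86400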
Additionally, we could also stop rendering or dirty-rendering until we are up to date.
We could certainly do that. I tried tirex-rendering-control but it seems to hang:
osm@ptolemy:~$ tirex-rendering-control --config tirex/etc/tirex/ --debug --stop
Buckets cmdline: (), all: (missing, dirty, bulk, background), live: (missing), want: (dirty, bulk, background)
sending: bucket=dirty id=tirex-rendering-control.13304 type=stop_rendering_bucket
So I think I'd better kill Tirex and disable the watchdog.
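Roughly like this; a sketch assuming a stock Tirex install's daemon names and a cron-based watchdog (both assumptions for ptolemy):

# Stop the Tirex daemons; fall back to a plain kill if the scripts hang too
/etc/init.d/tirex-backend-manager stop || pkill -f tirex-backend-manager
/etc/init.d/tirex-master stop || pkill -f tirex-master
# Then disable whatever restarts them, e.g. comment out the watchdog cron entry
crontab -e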
Peter
On 08/10/2011 10:07 AM, Tim Alder wrote:
Can we switch to hourly updates? We gained 3 hours today, but we still have 20 days of replag ahead of us.
May I suggest updating osm2pgsql? Currently running is revision 21201, which is over a year old, and a bunch of stuff has changed since then. Although most of it is irrelevant to the Toolserver, there are at least two commits that might speed things up considerably on the diff-processing front.
r25070: Make use of prepared geometries from geos-3.1+. These dramatically speed up the polygon tests used by the advanced multipolygon handling code.
r26292: Implement pgsql_nodes_get_list to speed up way and relation processing
If I remember correctly, the speed-up in the multipolygon handling did help quite a bit.
At the current rate it looks like it will take over a week to catch up.
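For anyone who hasn't seen the r25070 technique: a prepared geometry indexes one polygon once, so that many subsequent predicate tests against it become cheap, which is exactly the access pattern in the multipolygon code. A minimal sketch against the GEOS C API (illustrative only, not osm2pgsql's actual code):

#include <stdio.h>
#include <geos_c.h>

/* GEOS requires a message handler; we just swallow notices/errors here */
static void msg_handler(const char *fmt, ...) { (void)fmt; }

int main(void)
{
    initGEOS(msg_handler, msg_handler);

    /* One polygon tested against many candidates: prepare it once, then
       every GEOSPreparedContains() call reuses its internal index. */
    GEOSGeometry *poly =
        GEOSGeomFromWKT("POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))");
    const GEOSPreparedGeometry *prep = GEOSPrepare(poly);

    for (int i = 0; i < 20; i++) {
        char wkt[64];
        snprintf(wkt, sizeof wkt, "POINT(%d 5)", i);
        GEOSGeometry *pt = GEOSGeomFromWKT(wkt);
        printf("%s inside: %d\n", wkt, GEOSPreparedContains(prep, pt));
        GEOSGeom_destroy(pt);
    }

    GEOSPreparedGeom_destroy(prep);
    GEOSGeom_destroy(poly);
    finishGEOS();
    return 0;
}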
Kai
Additionally, we could also stop rendering or dirty-rendering until we are up to date.
Greetings Tim
Quoting Peter Körner osm-lists@mazdermind.de:
On 10.08.2011 13:42, Peter Körner wrote:
We're now seeing it here: http://toolserver.org/~mazder/tirex-replag/
I also opened access to the log files:
http://toolserver.org/~mazder/tirex-replag/logs/
You can see that it is still importing:
http://toolserver.org/~mazder/tirex-replag/logs/run.log
Peter
On 12.08.2011 17:37, Kai Krueger wrote:
On 08/10/2011 10:07 AM, Tim Alder wrote:
Can we switch to hourly updates? We gained 3 hours today, but we still have 20 days of replag ahead of us.
May I suggest updating osm2pgsql? Currently running is revision 21201, which is over a year old, and a bunch of stuff has changed since then. Although most of it is irrelevant to the Toolserver, there are at least two commits that might speed things up considerably on the diff-processing front.
I tried that and got it to compile, but it doesn't run properly:
osm@ptolemy:~/src/osm2pgsql-intarray$ ./osm2pgsql
ld.so.1: osm2pgsql: fatal: libpq.so.5: open failed: No such file or directory
Killed
Any ideas?
Peter
Looks like the wrong pgsql development headers and libs are installed.
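On Solaris, that ld.so.1 error usually just means libpq.so.5 isn't on the runtime search path; a minimal sketch of checking and working around it (the library path below is a guess, not the Toolserver's actual layout):

# Show which shared libraries resolve and which are missing
ldd ./osm2pgsql
# Point the runtime linker at the directory holding libpq.so.5
# (the path is a guess; adjust to the local PostgreSQL install)
export LD_LIBRARY_PATH=/opt/ts/postgresql/lib:$LD_LIBRARY_PATH
./osm2pgsql --help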
2011/8/15 Peter Körner osm-lists@mazdermind.de
On 12.08.2011 17:37, Kai Krueger wrote:
On 08/10/2011 10:07 AM, Tim Alder wrote:
Can we switch to hourly updates? We gained 3 hours today, but we still have 20 days of replag ahead of us.
May I suggest updating osm2pgsql? Currently running is revision 21201, which is over a year old, and a bunch of stuff has changed since then. Although most of it is irrelevant to the Toolserver, there are at least two commits that might speed things up considerably on the diff-processing front.
I tried that and got it to compile, but it doesn't run properly:
osm@ptolemy:~/src/osm2pgsql-intarray$ ./osm2pgsql
ld.so.1: osm2pgsql: fatal: libpq.so.5: open failed: No such file or directory
Killed
Any ideas?
Peter
Hi
On 12.08.2011 17:37, Kai Krueger wrote:
r25070: Make use of prepared geometries from geos-3.1+. These dramatically speed up the polygon tests used by the advanced multipolygon handling code.
r26292: Implement pgsql_nodes_get_list to speed up way and relation processing
We're now running r26543 (intarray branch); let's see if it is faster.
Peter
On 08/15/2011 12:58 PM, Peter Körner wrote:
Hi
On 12.08.2011 17:37, Kai Krueger wrote:
r25070: Make use of prepared geometries from geos-3.1+. These dramatically speed up the polygon tests used by the advanced multipolygon handling code.
r26292: Implement pgsql_nodes_get_list to speed up way and relation processing
We're now running r26543 (intarray branch); let's see if it is faster.
Unfortunately, the first batch didn't look too promising; it doesn't seem to be any faster. :-( The majority of the time still seems to be spent processing relations, with a single relation taking multiple seconds to process. I will try to figure out at some point which stage of relation processing takes so long, although that would presumably require rebuilding osm2pgsql with some extra instrumentation.
The second batch just crashed some 4 hours into processing and has restarted, so that won't make it any faster either...
Going over pending ways
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
pending_ways failed: server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
(7)
Error occurred, cleaning up
Somehow the PostgreSQL server crashed.
Earlier, when connecting directly to the database and trying to run some test queries, I noticed some odd error messages. I can't remember the exact wording, but it was something along the lines of failing to write a temporary transaction log file. I'm not sure how reproducible that error is, or whether it is connected to the crash in osm2pgsql, but it may give some hints.
Kai
Peter
On 16.08.2011 12:04, Kai Krueger wrote:
On 08/15/2011 12:58 PM, Peter Körner wrote:
Hi
On 12.08.2011 17:37, Kai Krueger wrote:
r25070: Make use of prepared geometries from geos-3.1+. These dramatically speed up the polygon tests used by the advanced multipolygon handling code.
r26292: Implement pgsql_nodes_get_list to speed up way and relation processing
We're now running r26543 (intarray branch); let's see if it is faster.
Unfortunately, the first batch didn't look too promising; it doesn't seem to be any faster. :-( The majority of the time still seems to be spent processing relations, with a single relation taking multiple seconds to process. I will try to figure out at some point which stage of relation processing takes so long, although that would presumably require rebuilding osm2pgsql with some extra instrumentation.
I don't see a problem with that. We're free to add debug output and recompile.
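A minimal sketch of what such instrumentation could look like: wall-clock timing around the suspected stages, logging only the slow relations (the stage function names are illustrative stand-ins, not osm2pgsql's actual ones):

#include <stdio.h>
#include <sys/time.h>

/* Wall-clock milliseconds since an arbitrary epoch */
static double now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

/* Illustrative wrapper: time each stage of handling one relation
   and report only the slow cases. */
void process_relation(int id)
{
    double t0 = now_ms();
    /* fetch_member_ways(id);   -- stage 1: pull member ways from the DB */
    double t1 = now_ms();
    /* build_multipolygon(id);  -- stage 2: geometry assembly */
    double t2 = now_ms();

    if (t2 - t0 > 1000.0)
        fprintf(stderr, "relation %d: fetch %.0f ms, build %.0f ms\n",
                id, t1 - t0, t2 - t1);
}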
Earlier, when connecting directly to the database and trying to run some test queries, I noticed some odd error messages. I can't remember the exact wording, but it was something along the lines of failing to write a temporary transaction log file. I'm not sure how reproducible that error is, or whether it is connected to the crash in osm2pgsql, but it may give some hints.
I guess the PostgreSQL logs could tell us about that, but unfortunately I don't know where to look for them (hint: it's not /var/log ;)
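If a superuser connection still works, the server can tell us itself; a sketch (the database name is a placeholder):

# Ask the server where it lives and where it logs
psql -U postgres -d osm -c 'SHOW data_directory;'
psql -U postgres -d osm -c 'SHOW log_destination;'   # stderr / syslog / ...
psql -U postgres -d osm -c 'SHOW log_directory;'     # relative to data_directory unless absolute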
Peter