Hello all,
As already announced after the last maintenance, the next maintenance will
be on
Wednesday, 7 December between 19:00 and 1:00 UTC.
The roots will collect their planned tasks at [1] until Sunday night. If you
have something for us to do (such as a software update), please open a bug report
in JIRA by Sunday noon and make sure to add the label "maintaince-window".
The roots also plan to finish the Apache configuration by Wednesday, but
there will be no switchover yet, to give you all some time for testing (I will
send an email with more details when the time is right).
Sincerely,
DaB.
[1] https://wiki.toolserver.org/view/Admin:Next_maintenance
--
Userpage: [[:w:de:User:DaB.]] — PGP: 2B255885
Hi
My shell/python script that extracts the locations of parking icons
from the database no longer works:
/home/kayd/parkingicons/generate-parking-icons.sh
The error message is:
psycopg2.InternalError: transform: couldn't project point (2.79772e+06
8.45105e+06 0): failed to load NAD27-83 correction file (-38)
HINT: PostGIS was unable to transform the point because either no grid
shift files were found, or the point does not lie within the range for
which the grid shift is defined. Refer to the ST_Transform() section of
the PostGIS manual for details on how to configure PostGIS to alter this
behaviour.
To narrow down the problem, please try this SQL statement:
osm_mapnik=> SELECT
ST_Y(ST_Transform(ST_line_interpolate_point(way,0.5),4326)) FROM
planet_line WHERE osm_id=27720543;
ERROR: transform: couldn't project point (1.12026e+06 6.38855e+06 0):
failed to load NAD27-83 correction file (-38)
HINT: PostGIS was unable to transform the point because either no grid
shift files were found, or the point does not lie within the range for
which the grid shift is defined. Refer to the ST_Transform() section of
the PostGIS manual for details on how to configure PostGIS to alter this
behaviour.
This query worked before, of course; the failure appears to be an installation
problem with the new postgres/postgis version:
http://gis.stackexchange.com/questions/13696/postgis-st-transform-failed-to…
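For anyone debugging this, it may help to first check which projection
definitions the database is actually using. A sketch, assuming the SRIDs
involved are the ones implied by the query above (Spherical Mercator 900913
and WGS84 4326):

```sql
-- Which proj library version was PostGIS built against?
SELECT postgis_proj_version();

-- Inspect the projection definitions involved in the failing transform.
-- If an entry references NAD27 grid shift files (+nadgrids=...), that
-- would explain the "failed to load NAD27-83 correction file" error.
SELECT srid, proj4text FROM spatial_ref_sys WHERE srid IN (900913, 4326);
```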
Kind regards,
Kay
Hello everyone,
With the osm rendering database up-to-date again, and no selective
expiry of tiles happening at the moment, I would like to do a global expiry
of all tiles at some point. Unfortunately, however, we are still experiencing
some performance problems, particularly with low-zoom rendering.
These issues are likely twofold: a) the performance of the
database is still not ideal, and b) stylesheets are less efficient
with database requests than they should be.
As an example of a) and b) combined, a single query from the
hikebike map at zoom level 7 took over 18 minutes to complete, and
many queries are necessary to render a tile. This perhaps explains
why a lot of low-zoom tiles don't complete successfully.
I am still trying to see if anything can be done about a), but given
that these issues have been around since pretty much the beginning, I am
not sure how much we can really improve things here.
With respect to b), we will need the help of all stylesheet authors to
optimize their stylesheets as much as possible and limit
unnecessary database access. The hikebike map style, as an example,
unfortunately seems particularly problematic, but I think other
stylesheets suffer from similar issues. For example, the
hikebike tiles at zoom levels below 9 are still timing out, despite
the rendering timeout having been raised to an unsustainable one-hour
period. With single tiles taking over an hour to render, rendering 200+
styles is clearly not going to work, so we need to optimize
these stylesheets.
In general, one should limit the number of database accesses as
much as possible and also filter out as much data as one can on
the db side. In addition, there are a few tips one can follow.
At low zooms, the biggest problems are access to the planet_line and
planet_polygon tables.
- At zoom levels 7-8 and lower, accessing the planet_line table should
probably be avoided at all costs, as it is simply too slow. All (or
hopefully most) of the information needed at low zooms should be in the
planet_roads table. Queries on planet_roads should be much faster, as that
table is only 3.6 GB in size rather than the 26 GB of planet_line.
- There is no equivalent of planet_roads for polygons. However, I
have now created a conditional (partial) spatial index on the planet_polygon
table for the condition "where building is null", which semi-partitions
the table. Currently nearly 80% of all polygons are building
polygons, so with a total size of 18 GB for planet_polygon, limiting the
search to non-buildings can speed up queries significantly. In
order for styles to take advantage of this index, however, they will
need to add the condition "building is null and ([previous
where condition])". If all stylesheet authors could check whether they can
add this where clause to any low-zoom access to the polygon table, that
would be very helpful.
- At very low zooms, e.g. 0-4, one might not want to
access the osm data at all and instead use the shapefile data, such as
the coastlines and the Natural Earth [1,2] shapefiles that live in the
~osm/data/world_boundaries/ directory.
- Potentially it might be better to combine several database layers into
a single query at low zoom levels, to reduce the load on the database.
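The tips above can be sketched as SQL. Table and column names follow the
queries elsewhere in this thread; the specific tag values ('motorway',
'forest', 'water', ...) are hypothetical examples, not taken from any
actual stylesheet:

```sql
-- Low zoom (z <= 8): query the small planet_roads table instead of planet_line.
SELECT way, highway
FROM planet_roads
WHERE highway IN ('motorway', 'trunk', 'primary');

-- Polygon access at low zoom: prepend "building is null" so the planner
-- can use the new partial spatial index on planet_polygon.
SELECT way, landuse
FROM planet_polygon
WHERE building IS NULL AND (landuse = 'forest');

-- Combining two layers into a single query with UNION ALL,
-- instead of two separate round trips to the database.
SELECT way, 'water' AS layer
FROM planet_polygon
WHERE building IS NULL AND ("natural" = 'water')
UNION ALL
SELECT way, 'forest' AS layer
FROM planet_polygon
WHERE building IS NULL AND (landuse = 'forest');
```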
Other optimizations to the styles would of course also be helpful, and
once these styles are improved, it will hopefully be possible to do an
expiry again.
Kai
P.S. I had to disable a bunch of styles in tirex as they were causing
trouble, including regular crashes of mapnik. These styles are all of
the qa/qai styles, the germany style, the powermap, shapenames,
surveillance and the bw-noicon styles. Once they are fixed, they can be
reactivated.
[1] http://en.wikipedia.org/wiki/Natural_Earth
[2] http://www.naturalearthdata.com/
Hi.
I'm working on the look-and-listen-map (toolserver project lalm), a web
map portal for the blind and visually impaired.
It is not a rendering application like the several mapnik stylesheets
running on the toolserver, but I agreed with the admins that, with
respect to the toolserver's capacity, it might be worthwhile to use the
mapnik style nevertheless.
Today I finally got database access and looked deeper into the schema.
Most queries in my application rely on the osm_id, which is present
(and indexed) in all tables, so that's not a problem.
Some queries will later require geometric lookups, but there I can
combine the postprocessed tables (planet_nodes, planet_line,
planet_polygon, ...) for filtering with the non-processed ones for
fetching the values, so that's okay, too.
My problem lies where mapnik does ugly tricks for rendering: I have a
node id and want to get all ways this node is part of.
That would be possible by querying planet_ways, as planet_ways contains a
field nodes, which is an integer array; but as far as I know there is no
performant way to ask the database whether an array field contains a
particular element.
Currently my code is based on a different database schema, but I need a
query with the following characteristics against the
osm_mapnik schema:
IN: node_id (int)
OUT: a list of way_ids whose corresponding ways contain the node
with id node_id.
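A sketch of such a query, assuming planet_ways has an id column alongside
the nodes integer array described above (the column names and the example
node id are assumptions about this schema). The @> containment operator is
standard PostgreSQL array syntax; to make it perform well, a GIN index on
nodes (e.g. via the intarray extension) would be needed:

```sql
-- All ways whose nodes array contains the given node id
SELECT id
FROM planet_ways
WHERE nodes @> ARRAY[123456789];

-- Equivalent, but generally unable to use an index:
SELECT id
FROM planet_ways
WHERE 123456789 = ANY (nodes);
```

Without such an index, both forms fall back to a sequential scan over the
whole table, which matches the performance concern above.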
A second question is whether errors would occur when I combine, e.g.,
planet_ways with planet_line in a query and a matching row in
planet_ways is pending. Usually the intermediate tables are only
accessed by the import tool, so I would be doing unconventional things
here; am I right? If the planet_ways row is pending, the data in this
table is not in sync with the data in planet_line.
regards
Peter
Hello everyone,
as you all probably know, the osm_mapnik database has not been updated
for a while now. About two months ago the database seems to have
become corrupted, which in turn has unfortunately prevented diffs from
being applied.
In order to get things up-to-date again and working more smoothly than
before, the database needs to be freshly imported. During this re-import
period the database will not be available for use.
As a full import will likely take several days plus an additional couple
of days for the diffs to catch up, there will unfortunately be a bit of
an extended downtime for the database. During this time postgresql will
likely be updated, and I would also like to use the opportunity to see if
the database can be optimized further to give better performance than before.
To try and minimize disruption, I would like to ask: who, or which tools,
are currently using the osm_mapnik database? Will the downtime be an
issue for anyone?
In return, we will hopefully have an up-to-date database again that can
be kept up-to-date.
Kai
Scripto is an alternative to the ProofreadPage extension used
by Wikisource. It is based on MediaWiki but also on OpenLayers,
the software used to zoom and pan in OpenStreetMap.
The only website I have seen that uses Scripto is the U.K.
War Department papers site, and in many ways it is clumsier
than ProofreadPage. But there might be a few ideas worth
picking up. Take a look.
The software is described at http://scripto.org/
As for reference installations, they mention
http://wardepartmentpapers.org/transcribe.php
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
Hi everyone,
as mentioned in my previous mail, we were going to re-import the osm
planet database for mapnik rendering, after it stopped updating in
September. This is now complete and the database is up-to-date again
with a lag of typically around 5 minutes behind the main osm database (
http://munin.toolserver.org/OSM/ptolemy/replication_delay2.html ).
At the same time there was a switch to postgresql 9.1, but all tools and
renderings should be working fine with the new database.
If anybody notices anything that doesn't work as intended, please let us
know to see if it can be fixed.
Not everything is working perfectly yet, but hopefully at least no worse
than before. The things not working correctly yet are:
1) Low-zoom tile rendering. These tiles are still very slow to render and
many of them still time out, even though I have raised the
timeout to 25 minutes. Not only does this clog up one of the 3 rendering
slots for 25 minutes, it also means the tile isn't rendered
fresh and is still considered "dirty", so the system puts those tiles into
the queue for rendering again immediately, clogging up the queue once
more. The current status of the queues can be seen either at
http://toolserver.org/~mazder/tirex-status/?short=1&extended=0&refresh=1
or http://munin.toolserver.org/OSM/ptolemy/tirex_status_queued_requests.html
2) Tile expiry. Although the db is updated every 5 minutes, tiles are
not currently being expired. For one, there are some bugs in
the software (osm2pgsql) used to identify the tiles to expire. But it is
also not yet clear how performance will hold up when trying to expire 250+
styles. We'll need to see what is feasible, but I suspect it will be
something like expiring the 10 or so most-used styles with the db update
and then periodically expiring the other styles with however much capacity
is still available.
We'll have to see how stable everything is and then move on from there.
Kai