Hi,
I'm sorry about the state of the OSM Toolserver. I said I would fix it, but I didn't. However, I will finally be having a look at this tomorrow (or possibly Thursday). Before I do that, there are a couple of things I'd like to address.
Firstly, PostgreSQL. On cassini, some users had either root access or access to the postgres user. I don't feel comfortable doing this on ptolemy (and it would also mean using separate database passwords, etc. just for that server). However, I can give out full (owner) access to the 'osm' database.
Rather than give this to a single user, I think it makes sense to create a multi-maintainer project to handle OSM databases. An MMT[0] is a role account that multiple users have access to. The 'osm' MMT will have owner access to the osm database on ptolemy. Since this does not confer any additional privileges, and the OSM data is not private, we can easily add people to the MMT (Peter, Tim?), and these people will be responsible for ensuring the database is available.
This is better than the current situation where only I have access, since I'm not particularly familiar with either OSM or the PostgreSQL extensions it uses. However, final responsibility for the OSM database (as well as software installation/changes) will still lie with me.
If this sounds acceptable to everyone, I will drop the current outdated osm database on ptolemy, and create a new, empty database and the MMT. People who want access to the MMT should let me know (by replying to this mail with your TS username). Those people can then start on the process of importing the database and setting up replication.
Secondly: renderd/mod_tile. I'm open to suggestions here. Currently, I plan to run renderd and mod_tile on ptolemy, which means only I will have access. However, there are quite a few portability issues with the OSM software which I've had to fix. I would appreciate it if someone with commit access to OSM's SVN repository would be able to work with me to integrate these patches, to make things easier in future.
I will wait until PostgreSQL is working well before starting on renderd.
Regards, River.
[0] https://wiki.toolserver.org/view/Multi-maintainer_projects
On 02/02/2010 05:21 PM, River Tarnell wrote:
Hi,
I'm sorry about the state of the OSM Toolserver. I said I would fix it, but I didn't. However, I will finally be having a look at this tomorrow (or possibly Thursday). Before I do that, there are a couple of things I'd like to address.
Great, good to see things moving along again :-)
...
Secondly: renderd/mod_tile. I'm open to suggestions here. Currently, I plan to run renderd and mod_tile on ptolemy, which means only I will have access. However, there are quite a few portability issues with the OSM software which I've had to fix. I would appreciate it if someone with commit access to OSM's SVN repository would be able to work with me to integrate these patches, to make things easier in future.
I can try and see if I can help here. I have OSM SVN commit access and I have done some patches to mod_tile and renderd in the past. However, it also wouldn't be hard to get OSM commit access yourself if you want; as far as I know, Tom Hughes (the OSM admin) hands out an account to anyone who asks. I don't have any experience with anything other than Linux though, so I don't know how much help I can be with portability issues.
Kai
Maps-l mailing list Maps-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/maps-l
Great, good to see things moving along again :-)
Hey ho, let's go! :D
If this sounds acceptable to everyone, I will drop the current outdated osm database on ptolemy, and create a new, empty database and the MMT. People who want access to the MMT should let me know (by replying to this mail with your TS username). Those people can then start on the process of importing the database and setting up replication.
I'm "mazder" and I'd like to import the dump and implement the replication. It would be cool if someone with more experience with the render tools could set up the render chain. Maybe Kai?
River Tarnell wrote:
Secondly: renderd/mod_tile. I'm open to suggestions here. Currently, I plan to run renderd and mod_tile on ptolemy, which means only I will have access.
We'll need a workflow for how toolserver users can test their stylesheets (I'd suggest doing this on the login servers. We'd need mapnik on these servers -- I can supply you with a very easy interface to the rendering stack [1]) and for how the stylesheets can then be distributed to the toolserver (maybe via JIRA?)
However, there are quite a few portability issues with the OSM software which I've had to fix. I would appreciate it if someone with commit access to OSM's SVN repository would be able to work with me to integrate these patches, to make things easier in future.
Kai Krueger wrote:
I can try and see if I can help here. I have OSM SVN commit access and I have done some patches to mod_tile and renderd in the past. However, it also wouldn't be hard to get OSM commit access yourself if you want; as far as I know, Tom Hughes (the OSM admin) hands out an account to anyone who asks. I don't have any experience with anything other than Linux though, so I don't know how much help I can be with portability issues.
Would you also like to do the setup of renderd/mod_tile? Do you have experience with tile expiration?
This was a big problem on cassini, because we have over 200 styles, so the expiration runs 200 times as long as on the OSM live server -- which is longer than the time until the next diff import (1 minute).
Peter
[1]http://svn.toolserver.org/svnroot/mazder/mapnik-in-a-box/tools/osm-render
On 02/03/2010 09:01 AM, Peter Körner wrote:
I'm "mazder" and I'd like to import the dump and implement the replication. It would be cool if someone with more experience with the render tools could set up the render chain. Maybe Kai?
I can at least try to answer questions about the render tools and help set them up. But as setting up mod_tile needs root, I suspect River will want to do that?
Would you also like to do the setup of renderd/mod_tile? Do you have experience with tile expiration?
Tile expiration is where it gets a bit murky. There are several different ways currently to do the tile expiration, all of which seem to have different merits and issues. So it might be necessary to play around a bit to see what works best for this setup.
This was a big problem on cassini, because we have over 200 styles, so the expiration runs 200 times as long as on the OSM live server -- which is longer than the time until the next diff import (1 minute).
The 200 styles might be a problem at the moment, as I don't think any of the current scripts were designed for that many styles. This would need to stat/touch a lot of files which might take a while. On the other hand, I suspect that most of the 200 styles won't have that many tiles in them, so there might be a bunch of optimizations that can be done using the meta-tile filepath hierarchy.
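For reference, the meta-tile path layout works roughly as sketched below (a hedged reconstruction of mod_tile's scheme as commonly described, not code from this thread; details may differ from the versions in use here). Tiles are grouped into 8x8 meta tiles, and the x/y coordinates are packed four bits at a time into five directory components, which is why spatially close tiles share leading directories and whole subtrees can be pruned per style:

```python
# Sketch of a mod_tile-style meta-tile path (an approximation, not code from
# this thread).  Tiles are grouped into 8x8 meta tiles, and x/y are packed
# four bits at a time into five directory components, so tiles that are close
# on the map share leading directories.
METATILE = 8

def xyz_to_meta_path(style, z, x, y):
    # Round down to the meta tile containing (x, y).
    x &= ~(METATILE - 1)
    y &= ~(METATILE - 1)
    parts = []
    for _ in range(5):
        # Interleave 4 bits of x and 4 bits of y per path component.
        parts.append(str(((x & 0x0F) << 4) | (y & 0x0F)))
        x >>= 4
        y >>= 4
    # Components come out least-significant first, so reverse them.
    return "/".join([style, str(z)] + parts[::-1]) + ".meta"

print(xyz_to_meta_path("osm-de", 12, 2200, 1343))
# -> osm-de/12/0/0/133/147/136.meta
```

The grouping into 8x8 blocks is what makes per-style optimization plausible: one stat per meta tile covers 64 plain tiles.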
Kai
Peter Körner:
I'm "mazder" and I'd like to import the dump and implement the replication.
Okay. I've started reinstalling ptolemy with our current Solaris image. Once that is done, I'll drop the existing database and create a new one owned by 'osm'. You should have access to the 'osm' MMT; see https://wiki.toolserver.org/view/Multi-maintainer_projects for details on how to use it. Once this is done you can start importing the database.
We'll need a workflow for how toolserver users can test their stylesheets
I know very little about renderd, so I suggest someone else comes up with a specific proposal for this, which I can implement.
- river.
On 02/04/2010 12:16 PM, River Tarnell wrote:
Okay, the database is now empty and ready to be imported.
Great!
Peter, would it be possible to add a few more tags to the osm2pgsql style?
addr:postcode, population, height and maxspeed might be useful tags to have in addition. Also some more of the wikipedia links might be useful, such as wikipedia:de, wikipedia:en, wikipedia:es, and wikipedia:fr.
I assume you will otherwise use the style at https://svn.toolserver.org/svnroot/mazder/planet-import/wikimedia.extended.s... ?
Thanks,
Kai
Also add tags related to crisis mapping like earthquake, refugee, and the humanitarian osm tags which cover health facilities and such
With the tags we can make maps for Wikipedia pages about Haiti and such
Aude
Sent from my iPhone
On Feb 4, 2010, at 7:35 AM, Kai Krueger kakrueger@gmail.com wrote:
Hello, using more than one Wikipedia language would be cool. The connection of Wikipedia and OSM should be a central point on our servers.
We have a big collection of Wikipedia POIs (>600,000), and my idea is that we should transfer it semi-automatically to OSM objects. There are some reasons why I think a fully automatic solution is not possible:
* WP collects only point coordinates; it's not possible to transfer them to line and area objects in OSM.
* We would get duplicate entries if we simply copied the points to OSM.
* There are legal concerns, because it is no secret that WP gets some coordinates from Google Earth images.
So a tool or a JOSM plugin that lets you drag the WP coordinates onto an existing OSM object would be nice; that way you wouldn't take the coordinates directly (legal concerns). This sounds like a lot of manual, stupid, duplicated work, but it would make the data much easier to use. We can't stop OSM from linking to Wikipedia, so we should work on making it go in the right direction.
I believe it would not be useful to flood the OSM database with links to all WP languages; the English link and one alternative WP link should be enough. But on our servers we should have a database with all languages, filled up with the help of the interwiki links. Then we could provide a map with Wikipedia objects in each language, and we could perhaps also link from Wikipedia to these objects, which is interesting for long objects like rivers.
If a column for each of the >200 languages would be too much, I'd say the 20 most popular languages would be nice: en,de,nl,ru,fr,it,ja,es,ca,pl,pt,sv,da,cs,fi,no,eo,zh,sk,tr.
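A hypothetical sketch of what adding those columns amounts to (the helper and exact formatting are my assumptions, not the actual import script): osm2pgsql's .style file has one line per column, in the form "osmtype  tag  datatype  flags", so the per-language columns can be generated mechanically:

```python
# Hypothetical helper (not the script actually used for this import):
# generate extra osm2pgsql .style lines for per-language columns.  The
# .style format is one line per column: "osmtype  tag  datatype  flags".
LANGS = "en de nl ru fr it ja es ca pl pt sv da cs fi no eo zh sk tr".split()

def extra_style_lines(langs):
    lines = []
    for lang in langs:
        lines.append("node,way  name:%s  text  linear" % lang)
        lines.append("node,way  wikipedia:%s  text  linear" % lang)
    return lines

print(len(extra_style_lines(LANGS)))  # 2 columns per language -> 40
```

Scaled to all 278 Wikipedia languages, this is where the 556 extra columns mentioned later in the thread come from.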
Greetings Kolossos
If a column for each of the >200 languages would be too much, I'd say the 20 most popular languages would be nice: en,de,nl,ru,fr,it,ja,es,ca,pl,pt,sv,da,cs,fi,no,eo,zh,sk,tr.
I think it's worth a try :) I added the requested tags, and the file is getting huge now..
https://svn.toolserver.org/svnroot/mazder/planet-import/wikimedia.extended.s...
I'll start the import during the day and keep you updated here.
Peter
On 02/04/2010 03:41 PM, Tim Alder wrote:
I believe it would not be useful to flood the OSM database with links to all WP languages; the English link and one alternative WP link should be enough. But on our servers we should have a database with all languages, filled up with the help of the interwiki links. Then we could provide a map with Wikipedia objects in each language, and we could perhaps also link from Wikipedia to these objects, which is interesting for long objects like rivers.
I think in that respect it might be worth mentioning some recent efforts to link OpenStreetMap objects to external pages such as Wikipedia, in case they are not already known. OpenLinkMap ( http://olm.openstreetmap.de/ ) takes the wikipedia= and url= tags from OSM objects and presents them as HTML links on a slippy map.
As far as I know, they have also used the wikipedia:de, wikipedia:fr, ... tags there. And on the German OSM mailing list ( http://lists.openstreetmap.org/pipermail/talk-de/2010-January/062166.html and the following topic ), there have been some discussions of how to link the various different language articles, and whether that should use interwiki links or something else.
Currently there seem to be somewhere around 50,000 wikipedia= tags in the OSM database, another one to two thousand each for wikipedia:de, wikipedia:fr and wikipedia:es, and a lot less for other languages.
I am not sure the use of those tags has fully stabilized yet; OpenLinkMap, which was only presented a few days ago, is I think the first larger effort to use them. So there is potential for Wikipedia users to chip into the discussion and see what makes most sense from their point of view.
It might be easier to link from OSM to Wikipedia than the other way round, as the textual keys of Wikipedia articles probably lend themselves better to cross-referencing than the changing numeric IDs of OSM objects.
Kai
Kai Krueger wrote:
I am not sure the use of those tags has fully stabilized yet; OpenLinkMap, which was only presented a few days ago, is I think the first larger effort to use them. So there is potential for Wikipedia users to chip into the discussion and see what makes most sense from their point of view.
I'm also not sure what the absolutely right way is to link both systems in both directions. Let's use the OSM wiki for this discussion: http://wiki.openstreetmap.org/wiki/Talk:Key:wikipedia#Another proposed syntax (the better one)
Some questions that I have: Is one link to one Wikipedia language perhaps really enough, or not? Or should we mark all plain wikipedia= tags as a bug, because we can't know which Wikipedia they link to? The definition that the tag links to the English Wikipedia is also only one week old and comes from me [1]. And what can we do with Wikipedia articles that contain more than one coordinate, like e.g. [2]?
It might be easier to link from OSM to Wikipedia than the other way round, as the textual keys of Wikipedia articles probably lend themselves better to cross-referencing than the changing numeric IDs of OSM objects.
I don't want to use the OSM IDs in Wikipedia, but if we have the Wikipedia tags in the OSM DB, we can easily link to these objects. Perhaps we could use a template and the page name, and then use our database, which is filled up with the interwiki links.
Greetings Kolossos
[1]http://wiki.openstreetmap.org/index.php?title=Key:wikipedia&diff=prev&am... [2]http://de.wikipedia.org/wiki/Liste_der_Brunnen_und_Wasserspiele_in_Dresden
River Tarnell wrote:
Okay, the database is now empty and ready to be imported.
The import has started on willow, reading in the planet file as of 2010/02/03. The diff import will also run on willow, pushing data to ptolemy. I used 2GB of RAM as cache (we used 10GB on cassini) -- I hope this is OK on these public machines. I'll keep you updated on the progress of the import / diffs.
Peter
Hi,
On 02/08/2010 11:34 AM, Peter Körner wrote:
River Tarnell wrote:
Peter Körner:
I used 2GB of RAM as cache (we used 10GB on cassini) -- I hope this is OK on these public machines.
If you use more than 1GB, you will probably find slayerd will kill your process.
I re-started the process with 1GB cache.
It looks like this is not entirely ideal. Although I don't know what the optimal cache size is, 1GB seems far too little; my guess would be more like 10 - 15GB for the full planet. On the OSM tile server an import takes about 8 - 15 hours if I am not mistaken, whereas the current import on ptolemy has been running for nearly 3 days, even though ptolemy is probably a faster DB server than the OSM tile server, which only has four disks.
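A rough back-of-the-envelope check of the cache-size guess (my own assumed numbers, not figures from the thread): the osm2pgsql node cache stores one fixed-point lat/lon pair per node, so the raw coordinate data alone for a full planet is several gigabytes, before any allocation overhead:

```python
# Back-of-the-envelope estimate (assumed numbers, not from the thread): the
# osm2pgsql node cache stores one lat/lon pair per node, so caching the whole
# planet needs at least node_count * 8 bytes, plus allocation overhead for
# the blocked sparse structure -- which is why practical estimates run higher.
node_count = 550_000_000      # planet had roughly 550M nodes in early 2010
bytes_per_node = 8            # two 32-bit fixed-point coordinates
raw_gb = node_count * bytes_per_node / 2**30
print(round(raw_gb, 1))       # ~4.1 GB of raw coordinate data alone
```

With only 512MB of cache, at most an eighth or so of the nodes can be resident, so most way-building lookups fall through to the database.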
Would it perhaps be possible to have osm2pgsql run on ptolemy instead (and then increase the cache size)? I think that would also make sense, given it is part of the "database replication" process, although I guess it would shift the burden of setting everything up to River.
Kai
Kai Krueger wrote:
It looks like this is not entirely ideal.
With 1GB the process was killed, so I had to run it with 512 MB.. ^^
Although I don't know what the optimal cache size is, 1GB seems far too little; my guess would be more like 10 - 15GB for the full planet. On the OSM tile server an import takes about 8 - 15 hours if I am not mistaken, whereas the current import on ptolemy has been running for nearly 3 days, even though ptolemy is probably a faster DB server than the OSM tile server, which only has four disks.
But we have over 556 additional tags (name:xx and wikipedia:xx for all 278 Wikipedia languages). That's the main reason the import takes so much longer. On cassini, where we only imported the name:xx tags and used 10GB of cache, it took around 2 days.
Peter
On 02/11/2010 09:07 AM, Peter Körner wrote:
But we have over 556 additional tags (name:xx and wikipedia:xx for all 278 Wikipedia languages). That's the main reason the import takes so much longer. On cassini, where we only imported the name:xx tags and used 10GB of cache, it took around 2 days.
Although I don't have many facts to base this on, my guess would be that the extra fields don't add too much extra time, as simply writing out database fields to disk doesn't seem like the bottleneck. The problem, I think, is building the geometry of ways and relations. As the OSM data only has node references in the ways section, for each node in a way osm2pgsql needs to look up the node's lat/lon pair to build linestrings. This is where the osm2pgsql cache comes in: it stores this information in a very efficient way, saving a lot of DB access if the hit ratio is reasonable. With the DB on a different machine, the extra latency won't help either. With 512MB, the hit ratio is probably fairly low, as even with much smaller extracts such as the UK you need 1 - 2GB of cache to achieve reasonable performance.
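The blocked sparse array described above can be illustrated with a toy version (an illustration of the idea only, not osm2pgsql's actual implementation; block size and fixed-point scale are my assumptions). Node ids are split into a block index and an offset, and a block is only allocated once some node id in its range is stored, so dense id ranges cost roughly 8 bytes per node while empty ranges cost nothing:

```python
# Toy blocked sparse node cache (an illustration, not osm2pgsql's real code):
# maps node id -> (lat, lon) with O(1) lookup.  Ids are split into a block
# index and an offset; blocks are allocated lazily.
import array

BLOCK_BITS = 13                 # 8192 nodes per block (assumed size)
BLOCK_SIZE = 1 << BLOCK_BITS

class NodeCache:
    def __init__(self):
        self.blocks = {}        # block index -> array of int32 pairs

    def store(self, node_id, lat, lon):
        blk = node_id >> BLOCK_BITS
        if blk not in self.blocks:
            # Two int32 slots per node: fixed-point lat and lon.
            self.blocks[blk] = array.array("i", [0] * (2 * BLOCK_SIZE))
        off = (node_id & (BLOCK_SIZE - 1)) * 2
        b = self.blocks[blk]
        b[off] = round(lat * 1e7)      # fixed-point, ~1cm resolution
        b[off + 1] = round(lon * 1e7)

    def get(self, node_id):
        blk = self.blocks.get(node_id >> BLOCK_BITS)
        if blk is None:
            return None          # cache miss -> would fall back to SQL
        off = (node_id & (BLOCK_SIZE - 1)) * 2
        return blk[off] / 1e7, blk[off + 1] / 1e7

cache = NodeCache()
cache.store(123456789, 50.73, 12.717)
print(cache.get(123456789))
```

A real implementation also needs to distinguish "never stored" from "stored as zero" within an allocated block; the sketch glosses over that.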
How far along has it come? Osm2pgsql should give the number of nodes, ways and relations it has processed so far. If it is nearly done, then we don't have to worry too much about this.
Kai
How far along has it come? Osm2pgsql should give the number of nodes, ways and relations it has processed so far. If it is nearly done, then we don't have to worry too much about this.
It's already processing the relations, which is of course the most expensive part. You can monitor the progress with
tail -c 100 -f /home/project/o/s/m/osm/import.log
on any of the login servers. It's currently at: Processing: Node(541873k) Way(40262k) Relation(235k)
Peter
On 02/11/2010 11:07 AM, Peter Körner wrote:
It's already processing the relations, which is of course the most expensive part. You can monitor the progress with
tail -c 100 -f /home/project/o/s/m/osm/import.log
on any of the login servers. It's currently at: Processing: Node(541873k) Way(40262k) Relation(235k)
OK, that isn't too bad then. At a maximum of 397k relations (the highest relation id), it should finish sometime today. It might still be worth thinking about moving osm2pgsql for a future import, but it's best to get everything working first the way it is.
Kai
It might still be worth thinking about moving osm2pgsql for a future import, but it's best to get everything working first the way it is.
It's more important to think about where the diff import should run from. I'd suggest trying it from willow, but we should keep thinking about maybe shifting it to ptolemy.
Peter
Peter Körner:
It's more important to think about where the diff import should run from. I'd suggest trying it from willow, but we should keep thinking about maybe shifting it to ptolemy.
Does importing one minute's worth of diffs really take so much memory that 1GB isn't enough? From what I remember of osm2pgsql, it took at least several minutes to use that much memory.
- river.
River Tarnell schrieb:
Does importing one minute's worth of diffs really take so much memory that 1GB isn't enough? From what I remember of osm2pgsql, it took at least several minutes to use that much memory.
No, I don't think it will. As I said, I'd try it from willow, but I can imagine that e.g. expiring tiles won't be as easy when osm2pgsql does not run on the same machine as mod_tile.
Peter
On 02/11/2010 12:52 PM, Peter Körner wrote:
Does importing one minute's worth of diffs really take so much memory that 1GB isn't enough? From what I remember of osm2pgsql, it took at least several minutes to use that much memory.
My guess would be that for diffs it won't need that much memory, as the cache won't be effective during diff imports anyway. The cache stores the lat/lon values of nodes in a blocked sparse array, at the id-th entry of the table. When building ways and relations, osm2pgsql will either look up each node in O(1) from the cache or need an SQL query over the network. It is heavily optimized for the initial import; on diffs the cache will mostly not be populated anyway. So I guess the memory doesn't matter in this case, but the extra latency of the network vs. a local process might. On the other hand, on a local LAN that will presumably be less than a single disk seek, so not really an issue either.
No, I don't think it will. As I said, I'd try it from willow, but I can imagine that e.g. expiring tiles won't be as easy when osm2pgsql does not run on the same machine as mod_tile.
Expiring tiles is a different matter, but osm2pgsql doesn't expire the tiles directly, so that shouldn't affect where it has to run. Instead it spits out a textual list of tiles that should be considered expired, which then needs to be handled separately. The actual expiry script will most likely need to run on the same machine as the (meta)tiles are stored, though, as it needs to touch and stat quite a lot of files -- especially with the 200 different language styles. But if the meta tile directory is writable from the login servers, we can experiment with which setup works best before moving it over to ptolemy.
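The expiry step described above can be sketched as follows (a hedged sketch: the style names and the flat z/x/y path layout are placeholders for brevity; the real store hashes coordinates into subdirectories, and the real scripts differ). The key points are that osm2pgsql's expiry output is a list of "z/x/y" lines, that deduplicating to 8x8 meta tiles shrinks the work, and that the per-style loop is what multiplies the cost by ~200:

```python
# Hedged sketch (layout and names are assumptions, not the real scripts):
# turn osm2pgsql's "z/x/y" dirty-tile list into per-style meta-tile touches.
import os

STYLES = ["osm-en", "osm-de", "osm-fr"]     # stand-ins for the ~200 styles

def expired_meta_tiles(expire_lines):
    metas = set()
    for line in expire_lines:
        z, x, y = (int(v) for v in line.split("/"))
        metas.add((z, x & ~7, y & ~7))      # one meta tile covers 8x8 tiles
    return metas

def touch_expired(tile_dir, expire_lines, now):
    metas = expired_meta_tiles(expire_lines)
    for style in STYLES:                    # this loop is the 200x cost
        for z, x, y in metas:
            path = os.path.join(tile_dir, style, "%d/%d/%d.meta" % (z, x, y))
            if os.path.exists(path):
                # Set mtime into the past so renderd re-renders on next hit.
                os.utime(path, (now, now))

tiles = ["14/8800/5400", "14/8801/5401", "14/8808/5400"]
print(len(expired_meta_tiles(tiles)))       # 3 tiles -> 2 meta tiles
```

Because most of the 200 styles will have few rendered tiles, the `os.path.exists` check (or walking only the directories that exist) is where the optimizations mentioned earlier would pay off.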
Kai
On Thu, Feb 11, 2010 at 13:46, Kai Krueger kakrueger@gmail.com wrote:
Expiring tiles is a different matter, but osm2pgsql doesn't expire the tiles directly, so that shouldn't affect where it has to run. Instead it spits out a textual list of tiles that should be considered expired, which then needs to be handled separately. The actual expiry script will most likely need to run on the same machine as the (meta)tiles are stored, though, as it needs to touch and stat quite a lot of files -- especially with the 200 different language styles. But if the meta tile directory is writable from the login servers, we can experiment with which setup works best before moving it over to ptolemy.
Note also that osm.org doesn't use osm2pgsql for expiring tiles; they have a custom script that doesn't look at relations. The reason for it is that some edits to large relations can expire a lot of tiles, IIRC.
I can't recall where the script is but it's some ruby script in OSM SVN.
Note also that osm.org doesn't use osm2pgsql for expiring tiles; they have a custom script that doesn't look at relations. The reason for it is that some edits to large relations can expire a lot of tiles, IIRC.
I can't recall where the script is but it's some ruby script in OSM SVN.
Yes, I already adapted it for our multi-style environment on cassini and placed it at [1], but on cassini it was very slow with this many styles.
Peter
[1] https://svn.toolserver.org/svnroot/mazder/diff-import/tile_expiry/
On 02/11/2010 05:26 PM, Peter Körner wrote:
Yes, I already adapted it for our multi-style environment on cassini and placed it at [1], but on cassini it was very slow with this many styles.
Do you know what the bottleneck was? Was it DB access to generate the list of tiles, CPU speed running the ruby script, or filesystem performance touching a huge number of files? Presumably ptolemy is running a different filesystem, so the latter might behave quite differently. At first, while we get everything else running reliably, we can just touch the global planet-import timestamp every couple of days, expiring all tiles at once. I suspect there are still some optimizations possible that might be sufficient, but we will need to see what performance is like on ptolemy first.
Kai
Hello, it seems that the ptolemy import is ready now. Fine.
But if I test it with the following command: "select osm_id,name,ST_asKML(ST_Transform(way,4326)) from planet_osm_point where way && ST_Transform(ST_SetSRID(ST_MakeBox2D(ST_Point(12.717,50.73),ST_Point(13.117,50.93)),4326),900913) AND "amenity" like 'fountain' order by name LIMIT 1000;"
I got the following error:
"ERROR: permission denied for relation spatial_ref_sys
CONTEXT: SQL statement "SELECT proj4text FROM spatial_ref_sys WHERE srid = 900913 LIMIT 1""
I hope somebody can repair this permission problem. Thanks. Kolossos
River Tarnell wrote:
Tim Alder
I hope somebody can repair this permission problem.
This is fixed, but the query still doesn't work. This looks like a PostGIS issue that someone who knows more about it will need to investigate.
Sure? I still get the error. I guess it has something to do with the ownership of the spatial_ref_sys table. Is it set to osm?
echo "ALTER TABLE spatial_ref_sys OWNER TO osm;" |\
    psql -d osm_mapnik -h sql-mapnik -U postgres
must be executed as the current owner of spatial_ref_sys. The permissions of geometry_columns are already correct:
osm_mapnik=> \z spatial_ref_sys
                Access privileges for database "osm_mapnik"
 Schema |      Name       | Type  | Access privileges
--------+-----------------+-------+-------------------
 public | spatial_ref_sys | table |
(1 row)

osm_mapnik=> \z geometry_columns
                 Access privileges for database "osm_mapnik"
 Schema |       Name       | Type  |    Access privileges
--------+------------------+-------+-------------------------
 public | geometry_columns | table | {osm=arwdxt/osm,=r/osm}
(1 row)
Peter
Peter Körner:
Sure? I still get the error. I guess it has something to do with the ownership of the spatial_ref_sys table. Is it set to osm?
Okay, this is fixed again. I previously fixed it, then it got unfixed when I recreated the table for TS-427.
- river.
River Tarnell wrote:
Peter Körner:
Sure? I still get the error. I guess it has something to do with the ownership of the spatial_ref_sys table. Is it set to osm?
Okay, this is fixed again. I previously fixed it, then it got unfixed when I recreated the table for TS-427.
The query works now. I'll set up the diff import with 1-hour windows today. It will then take some days to catch up; I'll then lower the window size to 5 minutes and then down to one minute.
We should start a new thread about setting up Mapnik, mod_tile & Co.
Peter
Peter Körner wrote:
River Tarnell wrote:
Peter Körner:
Sure? I still get the error. I guess it has something to do with the ownership of the spatial_ref_sys table. Is it set to osm?
Okay, this is fixed again. I previously fixed it, then it got unfixed when I recreated the table for TS-427.
The query works now. I'll set up the diff import with 1-hour windows today. It will then take some days to catch up; I'll then lower the window size to 5 minutes and then down to one minute.
To give you just a rough estimate: it takes ~2 hours to import a 6-hour window. We are now at 2010-02-02 00:00, so to catch up until today it will take around 5 days. To catch those 5 days it will take another 2 days, so all cumulated we'll have an up-to-date database in 8 or 9 days.
Peter
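Peter's back-of-envelope figure works out as a geometric series: importing a 6-hour window in ~2 hours is a 3x speedup over real time, and since new data keeps arriving while you catch up, a backlog of B days needs about B/(speedup-1) days of wall-clock time to clear. A sketch with the thread's numbers (the 13-day backlog follows from the 2010-02-02 vs. 2010-02-15 dates):

```shell
# Catch-up estimate for the diff import, assuming a constant 3x
# speedup (6 hours of diffs imported in ~2 hours wall-clock).
backlog_days=13   # DB at 2010-02-02, today is 2010-02-15
speedup=3
# While clearing B days of backlog at 'speedup' rate, B/speedup days
# of new data arrive, and so on; the series sums to B / (speedup - 1).
catchup_days=$(( (backlog_days + speedup - 2) / (speedup - 1) ))  # ceiling
echo "estimated catch-up: ${catchup_days} day(s)"
```

This gives about 7 days; Peter's 8-9 day figure presumably also accounts for overhead growing as the window size shrinks.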
On 02/15/2010 05:59 PM, Peter Körner wrote:
Peter Körner wrote:
River Tarnell wrote:
Peter Körner:
Sure? I still get the error. I guess it has something to do with the ownership of the spatial_ref_sys table. Is it set to osm?
Okay, this is fixed again. I previously fixed it, then it got unfixed when I recreated the table for TS-427.
The query works now. I'll set up the diff import with 1-hour windows today. It will then take some days to catch up; I'll then lower the window size to 5 minutes and then down to one minute.
To give you just a rough estimate: it takes ~2 hours to import a 6-hour window. We are now at 2010-02-02 00:00, so to catch up until today it will take around 5 days. To catch those 5 days it will take another 2 days, so all cumulated we'll have an up-to-date database in 8 or 9 days.
Great, but again it seems rather slow. I have no direct experience with dealing with a full planet (I don't have powerful enough hardware myself), but from what I have heard from others, it should be more like 1-2 hours to apply a daily diff, not 8 hours. Especially not with such a fast DB server as ptolemy. So it looks like there might be some opportunity for tuning? Might it be worth moving osm2pgsql over to run on the same host to see if that helps in any way? It would perhaps be good to see what the performance bottleneck is, if there is any way to find out.
Kai
Peter
So we're nearly done; we're somewhere around one hour behind the main DB.
Great, but again it seems rather slow.
The problem is that the "Going over pending ways / relations" phase takes too long for small window sizes.
Importing 2 minutes of changes takes around 15 seconds for the "Processing: Node(k) Way(k) Relation(k)" phase and 5 minutes for the "Going over pending.." phase. That's just too long.
I'm constantly adjusting the window size to find the best match between runtime and window size, but as I'd like to have a 1-minute window, as we have on cassini, we'll need to tweak a little somewhere.
Peter
Peter Körner schrieb:
So we're nearly done; we're somewhere around one hour behind the main DB.
You can use /home/project/o/s/m/osm/tools/replag -h
to get the current replag behind the main db (it just compares the value in the state.txt with the current time, so no API or magic involved).
It's currently somewhere between 2 and 15 minutes.
Peter
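The computation the replag tool performs can be sketched as a small shell function. The state.txt line format with backslash-escaped colons is osmosis' convention, and GNU date is assumed for 'date -d' (on the Solaris hosts that means putting /opt/ts/gnu/bin first in $PATH):

```shell
# Print replication lag in seconds, given the path to an osmosis
# state.txt containing a line like:
#   timestamp=2010-02-22T12\:00\:00Z
# The optional second argument overrides "now" (epoch seconds), which
# makes the function easy to test. Requires GNU date for 'date -d'.
replag_seconds() {
    ts=$(sed -n 's/^timestamp=//p' "$1" | tr -d '\\' \
         | sed 's/T/ /; s/Z/ UTC/')
    now=${2:-$(date -u +%s)}
    echo $(( now - $(date -u -d "$ts" +%s) ))
}
```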
Peter Körner:
You can use /home/project/o/s/m/osm/tools/replag -h
% ~osm/tools/replag -h
date: illegal option -- utc
date: illegal option -- date
usage: date [-u] mmddHHMM[[cc]yy][.SS]
       date [-u] [+format]
       date -a [-]sss[.fff]
date: illegal option -- utc
usage: date [-u] mmddHHMM[[cc]yy][.SS]
       date [-u] [+format]
       date -a [-]sss[.fff]
/home/project/o/s/m/osm/tools/replag: line 9: - : syntax error: operand expected (error token is "- ")
/home/project/o/s/m/osm/tools/replag: line 13: [: -gt: unary operator expected
/home/project/o/s/m/osm/tools/replag: line 14: [: -gt: unary operator expected
second(s)
% PATH=/opt/ts/gnu/bin:$PATH ~osm/tools/replag -h
2 minute(s)
%
You might not want to rely on a default $PATH in shell scripts ;-)
- river.
You might not want to rely on a default $PATH in shell scripts ;-)
No, I might not. I'm still learning.
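For anyone else hitting the same trap: Solaris /usr/bin/date doesn't understand the GNU options, so a script can pin its own PATH at the top instead of inheriting the caller's. A minimal sketch (the /opt/ts/gnu/bin prefix is Toolserver-specific):

```shell
#!/bin/sh
# Pin PATH so GNU userland tools are found first, regardless of how
# the script is invoked. /opt/ts/gnu/bin is the Toolserver's GNU tree;
# the trailing /usr/bin:/bin keeps the script working elsewhere.
PATH=/opt/ts/gnu/bin:/usr/bin:/bin
export PATH

date -u +%Y-%m-%dT%H:%M:%SZ
```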
The import is unbelievably fast now. What a difference!
P.S. Peter, is there a reason you dropped the maxInterval number in the osmosis configuration down to 15 minutes? It can probably stay larger, so that in case the diff import ever falls way behind it can catch up in larger steps. Also, it is probably possible to drop the interval of calling osmosis down to one minute now if you wish.
I dropped the maxInterval to test how the speed of the import correlates with the size of the chunks (i.e. the factor between the interval size and the runtime). I lowered the calling interval to once a minute now and returned the maxInterval to its original value of 6 hours.
Peter
On 22/02/10 12:39, Peter Körner wrote:
So we're nearly done; we're somewhere around one hour behind the main DB.
Great, but again it seems rather slow.
The problem is that the "Going over pending ways / relations" phase takes too long for small window sizes.
Importing 2 minutes of changes takes around 15 seconds for the "Processing: Node(k) Way(k) Relation(k)" phase and 5 minutes for the "Going over pending.." phase. That's just too long.
The "Going over pending.." phase, amongst other things, executes a SELECT id FROM planet_osm_ways WHERE pending; which seems to require a seq scan on planet_osm_ways.
However, a single seq scan on ways seems to take on the order of 8 minutes in my test.
Likewise, seq scans on the other tables also take very long. A seq scan on planet_osm_point takes about a minute, on planet_osm_polygon about 3 minutes, and on planet_osm_line it seems to take over ten minutes for a single seq scan. Given that for low zoom levels (before the spatial indices kick in) there are several queries per tile requiring full table scans, this doesn't look good for rendering performance. I don't have any comparison numbers from other servers that have the full planet loaded to see if that is normal, but perhaps we can do some comparisons with cassini?
Kai
P.S. there does seem to be an index on "pending" but trying to run some queries from psql, it didn't seem to be used.
I'm constantly adjusting the window size to find the best match between runtime and window size, but as I'd like to have a 1-minute window, as we have on cassini, we'll need to tweak a little somewhere.
Peter
Kai Krueger:
The "Going over pending.." phase, amongst other things, executes a SELECT id FROM planet_osm_ways WHERE pending; which seems to require a seq scan on planet_osm_ways.
Can this not be fixed by just creating an index on pending?
- river.
The "Going over pending.." phase, amongst other things, executes a SELECT id FROM planet_osm_ways WHERE pending; which seems to require a seq scan on planet_osm_ways.
Can this not be fixed by just creating an index on pending?
Kai wrote: P.S. there does seem to be an index on "pending" but trying to run some queries from psql, it didn't seem to be used.
Peter
Peter Körner:
The "Going over pending.." phase, amongst other things, executes a SELECT id FROM planet_osm_ways WHERE pending; which seems to require a seq scan on planet_osm_ways.
Can this not be fixed by just creating an index on pending?
Kai wrote: P.S. there does seem to be an index on "pending" but trying to run some queries from psql, it didn't seem to be used.
Has the table been ANALYZEd since it was imported?
- river.
P.S. there does seem to be an index on "pending" but trying to run some queries from psql, it didn't seem to be used.
Has the table been ANALYZEd since it was imported?
Only if the osm2pgsql tool did it. It contains a function, pgsql_analyze, for this, but I'm unsure if/when it gets called.
Peter
Peter Körner:
Has the table been ANALYZEd since it was imported?
Only if the osm2pgsql tool did it. It contains a function, pgsql_analyze, for this, but I'm unsure if/when it gets called.
osm_mapnik=# explain analyze select id from planet_osm_ways where pending;
                                                           QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on planet_osm_ways  (cost=0.00..1662990.31 rows=22160666 width=4) (actual time=496448.422..496448.422 rows=0 loops=1)
   Filter: pending
 Total runtime: 496448.455 ms
This suggests that the table statistics are very out of date (22m rows expected vs 0 actual). I suggest ANALYZEing the table and trying the diff import again.
- river.
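A sketch of River's suggestion in the thread's own echo-into-psql style (connection details copied from the spatial_ref_sys fix earlier; this needs a live server, so it is only illustrative here):

```shell
echo "ANALYZE planet_osm_ways;" |\
    psql -d osm_mapnik -h sql-mapnik -U postgres
```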
On 22/02/10 17:10, River Tarnell wrote:
Peter Körner:
Has the table been ANALYZEd since it was imported?
Only if the osm2pgsql tool did it. It contains a function, pgsql_analyze, for this, but I'm unsure if/when it gets called.
osm_mapnik=# explain analyze select id from planet_osm_ways where pending;
                                                           QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on planet_osm_ways  (cost=0.00..1662990.31 rows=22160666 width=4) (actual time=496448.422..496448.422 rows=0 loops=1)
   Filter: pending
 Total runtime: 496448.455 ms
This suggests that the table statistics are very out of date (22m rows expected vs 0 actual). I suggest ANALYZEing the table and trying the diff import again.
osm2pgsql should do an ANALYZE during the initial import, though not during diff imports. From those numbers it looks like (I'd need to check the code) the ANALYZE was done between the import part and the going-over-pending-ways part, which would explain the high number of pending ways it expects to see. At the end of going over pending ways, I presume there shouldn't be any pending ways left, which is why you would see an actual row count of 0. The diff imports (which create and then clear pending ways again) are presumably run in a transaction, so you can't see these with a simple query outside the transaction (which may well have distorted my testing too). But a minutely diff shouldn't be close to 22m rows.
It is worth running another analyze though to see if that helps.
Kai
osm_mapnik=# explain analyze select id from planet_osm_ways where pending;
                                                           QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on planet_osm_ways  (cost=0.00..1662990.31 rows=22160666 width=4) (actual time=496448.422..496448.422 rows=0 loops=1)
   Filter: pending
 Total runtime: 496448.455 ms
After an ANALYZE it does an Index Scan, but this is not really faster:
                                                          QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using planet_osm_ways_idx on planet_osm_ways  (cost=0.00..9.82 rows=1 width=4) (actual time=587600.044..587600.044 rows=0 loops=1)
 Total runtime: 587600.088 ms
(2 rows)
Peter
Peter Körner:
After an ANALYZE it does an Index Scan, but this is not really faster:
I've replaced the partial index with a standard index on pending, for both planet_osm_ways and planet_osm_rel. This seems to improve the query speed:
osm_mapnik=# explain analyze select id from planet_osm_ways where pending;
                                                          QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using planet_osm_ways_pending on planet_osm_ways  (cost=0.00..12.84 rows=1 width=4) (actual time=0.248..0.248 rows=0 loops=1)
   Index Cond: (pending = true)
   Filter: pending
 Total runtime: 0.263 ms
(4 rows)
Does this improve the diff import time at all?
- river.
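Roughly, the change River describes might have looked like this, in the thread's psql style. The original index name is an assumption (planet_osm_ways_idx appears in the earlier EXPLAIN output), and it needs a live server:

```shell
echo "DROP INDEX planet_osm_ways_idx;
CREATE INDEX planet_osm_ways_pending ON planet_osm_ways (pending);" |\
    psql -d osm_mapnik -h sql-mapnik -U postgres
```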
On 22/02/2010 21:00, River Tarnell wrote:
Peter Körner:
After an ANALYZE it does an Index Scan, but this is not really faster:
I've replaced the partial index with a standard index on pending, for both planet_osm_ways and planet_osm_rel. This seems to improve the query speed:
osm_mapnik=# explain analyze select id from planet_osm_ways where pending;
                                                          QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using planet_osm_ways_pending on planet_osm_ways  (cost=0.00..12.84 rows=1 width=4) (actual time=0.248..0.248 rows=0 loops=1)
   Index Cond: (pending = true)
   Filter: pending
 Total runtime: 0.263 ms
(4 rows)
Does this improve the diff import time at all?
Looking a little closer at the timing of log statements appearing, it appears as if the pending calls are no longer the problem. Nevertheless, talking to Peter and Jon Burgess on IRC last night, it appears as if we are still seeing some general performance issues. On some queries there seems to be up to a factor of 10 difference between ptolemy and yevaud under some conditions. As an example, the query "explain analyze select count(*) from planet_osm_roads where route='ferry';" took 55 seconds on yevaud compared to 150 seconds on ptolemy. In the cache-hot case, i.e. running the statement immediately again, the time on yevaud went down to 8 seconds, whereas on ptolemy it still stayed at 105 seconds. So there appears to be a much smaller effect of caching. There were other queries that showed similar results too.
One potentially large difference between the two setups is the 500+ extra columns introduced by the name:* and wikipedia:* tags in the style. But at least the raw table sizes don't show that much difference: tables on ptolemy are about 10-20% larger. The planet_osm_roads table used in the query above, for example, is 1.3 GB compared to 1.1 GB.
Jon wanted to post the postgres config of yevaud at some point to see if any of the tunings done on yevaud might help performance on ptolemy.
Kai
Kai Krueger:
As an example, the query "explain analyze select count(*) from planet_osm_roads where route='ferry';" took 55 seconds on yevaud compared to 150 seconds on ptolemy. In the cache-hot case, i.e. running the statement immediately again, the time on yevaud went down to 8 seconds, whereas on ptolemy it still stayed at 105 seconds.
Okay, I found the cause of this: the /sql filesystem is mounted with 'cio' enabled. This massively increases performance for MySQL, so we normally use it everywhere, but it seems that it disables the OS page cache, which obviously kills PostgreSQL performance. I remounted the /sql filesystem without it, and performance seems much better:
osm_mapnik=# explain analyze select count(*) from planet_osm_roads where route='ferry';
                                                          QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=259824.01..259824.02 rows=1 width=0) (actual time=31596.203..31596.203 rows=1 loops=1)
   ->  Seq Scan on planet_osm_roads  (cost=0.00..259824.00 rows=1 width=0) (actual time=1143.908..31596.075 rows=24 loops=1)
         Filter: (route = 'ferry'::text)
 Total runtime: 31597.265 ms
(4 rows)

osm_mapnik=# explain analyze select count(*) from planet_osm_roads where route='ferry';
                                                          QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=259825.23..259825.24 rows=1 width=0) (actual time=1989.096..1989.097 rows=1 loops=1)
   ->  Seq Scan on planet_osm_roads  (cost=0.00..259825.23 rows=1 width=0) (actual time=65.551..1988.990 rows=24 loops=1)
         Filter: (route = 'ferry'::text)
 Total runtime: 1989.513 ms
(4 rows)
(First is cold, second is hot.)
I kept cio on /sql/pg_xlog because the page cache shouldn't be needed there.
- river.
I remounted the /sql filesystem without it, and performance seems much better
Anyway, we're at a replag of 1 minute (the current achievable minimum) and it's staying there. The runtime for a 60-second interval is ~9 seconds, so the replication issue seems solved for now. Next thing is rendering.
From the error messages (see below) I'd guess that Mapnik is out of date.
nightshade: UserWarning: Unknown child node in 'Map'. Expected 'Style' or 'Layer' but got 'FontSet'
willow: UserWarning: Failed to parse CSS parameter 'stroke-width'. Expected type float but got '1' in LineSymbolizer in style 'highway-area-casing'
River, would it be possible to install an SVN HEAD version of Mapnik and see if this works? Or at least the current release, 0.7 (on willow 0.6.1 is installed; for nightshade I'm unable to tell).
osm@willow:~/tools/render$ python
>>> import mapnik
>>> mapnik.mapnik_version()
601
Peter
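If the packed integer is unfamiliar: Mapnik encodes its version as major*100000 + minor*100 + patch, so the 601 above is 0.6.1. A throwaway decoder:

```shell
# Decode Mapnik's packed integer version number
# (major*100000 + minor*100 + patch, so 601 -> 0.6.1).
mapnik_version_string() {
    v=$1
    echo "$(( v / 100000 )).$(( v % 100000 / 100 )).$(( v % 100 ))"
}
```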
Peter Körner:
willow: UserWarning: Failed to parse CSS parameter 'stroke-width'. Expected type float but got '1' in LineSymbolizer in style 'highway-area-casing'
I had this error when I set up renderd on ptolemy last time. I'm sure I fixed it though, because I actually had it working at some point. I'll have a look at my renderd source and see if I can find the fix.
River, would it be possible to install a svn HEAD version of mapnik and see if this works?
It's possible to install it (file a request in JIRA), but I don't think it will fix this error.
- river.
On 23/02/10 12:18, Peter Körner wrote:
I remounted the /sql filesystem without it, and performance seems much better
Anyway, we're at a replag of 1 minute (the current achievable minimum) and it's staying there. The runtime for a 60-second interval is ~9 seconds, so the replication issue seems solved for now. Next thing is rendering.
From the error messages (see below) I'd guess that Mapnik is out of date.
nightshade: UserWarning: Unknown child node in 'Map'. Expected 'Style' or 'Layer' but got 'FontSet'
Mapnik on nightshade seems to be some form of 0.5, which is indeed too old for the current OSM style. 0.5, I think, doesn't support the XML includes.
willow: UserWarning: Failed to parse CSS parameter 'stroke-width'. Expected type float but got '1' in LineSymbolizer in style 'highway-area-casing'
Yes, that is exactly the error I was seeing too. I "solved" the expected-float error by adding whitespace around the value, although I don't know if it solved the error or just suppressed the warning. But that doesn't seem like the correct fix. It might give a hint at what is wrong though.
But if River has figured it out before, then we should be able to get it working again. So with the performance issues solved and other things being set up, it looks like we are making some good progress :-)
River, would it be possible to install an SVN HEAD version of Mapnik and see if this works? Or at least the current release, 0.7 (on willow 0.6.1 is installed; for nightshade I'm unable to tell).
Eventually it will probably be useful to move to 0.7. I have heard that there are some plans to move the osm tile server to 0.7 at some point to then use features in the style file that require 0.7, but at the moment I think it is still running 0.6.1. So I don't think it is a priority to move to 0.7 just now.
Kai
osm@willow:~/tools/render$ python
import mapnik mapnik.mapnik_version()
601
Peter
willow: UserWarning: Failed to parse CSS parameter 'stroke-width'. Expected type float but got '1' in LineSymbolizer in style 'highway-area-casing'
I can't find the string "Expected type float" in the Mapnik source, neither in 0.6.1 nor in trunk, so I'd guess it comes from some external lib (some boost parser/lexer lib, I'd guess), so it would be good to look at this a little closer.
Maybe comparing the versions with cassini's may help? Although cassini does not run the current OSM stylesheet.
Peter
Kai Krueger schrieb:
Yes, that is exactly the error I was seeing too.
Okay, so rendering simple Shapefiles works (on willow):
mazder@willow:~$ OSM=/home/project/o/s/m/osm/ $OSM/tools/render \
    --style $OSM/data/mapnik/shape.xml
but even with a very small stylesheet (shapefiles & place names) it stops working:
mazder@willow:~$ OSM=/home/project/o/s/m/osm/ $OSM/tools/render \
    --style $OSM/data/mapnik/simple.xml
rendering bbox (-180, -85, 180, 85) in style /home/project/o/s/m/osm//data/mapnik/shape-names.xml to file map which is of type png in size 800x600
Traceback (most recent call last):
  File "/home/project/o/s/m/osm//tools/render", line 128, in <module>
    main()
  File "/home/project/o/s/m/osm//tools/render", line 81, in main
    mapnik.load_map(m, style)
UserWarning: Could not create datasource. No plugin found for type 'postgis' in layer 'placenames-large'
mazder@willow:~$
It seems Mapnik was compiled before PostGIS was available and so doesn't know about the postgis plugin. Maybe a simple recompile would solve this issue. On this run we should add the Cairo bindings to support rendering to PDF, SVG & co.
Peter
Hello,
may I ask what the current status on the rendering stack on the toolserver is?
From a discussion on IRC a while back, I heard that the problem with the style file not being accepted by Mapnik has been solved and that mod_tile was successfully rendering tiles on ptolemy that could be accessed under e.g. http://toolserver.org/tiles/6/31/19.png (which indeed worked for me for a while). So that all sounds great. However, at the moment it is unfortunately no longer accessible, and trying to contact that address just hangs and eventually times out. Accessing the mod_tile status page on ptolemy also just hangs, which may suggest that something happened to the Apache process. Is this a one-off problem (or even intended?) that a simple restart will fix, or are there compatibility problems with the current mod_tile and Solaris?
Otherwise, if it is stable, it would be good to be able to test whether the performance of the current setup is comparable to other mod_tile setups, or if more tuning is necessary. Otherwise, moving some of the maps (such as e.g. http://cassini.toolserver.org/tiles/hikebike/ ) that are still on cassini over to ptolemy might be a good idea, especially as cassini seems to be suffering some severe performance problems of its own (see http://cassini.toolserver.org/munin/toolserver.org/cassini.toolserver.org.ht... ), where a performance of 1-2 metatiles per second would be expected rather than the ~0.01-0.1 that it currently achieves.
Thanks,
Kai
Btw, the Mapnik on willow still seems to suffer from the original problem of not accepting the style files and breaks with the errors saying it expects floats.
Kai Krueger wrote:
may I ask what the current status on the rendering stack on the toolserver is?
To me it seems that getting OSM into Wikipedia is not desired.
I opened tickets for all this a while back and I don't see anything happening.
https://jira.toolserver.org/browse/TS-558 https://jira.toolserver.org/browse/TS-559
It's too sad to have these really powerful servers sitting unused.
Peter
Btw, the Mapnik on willow still seems to suffer from the original problem of not accepting the style files and breaks with the errors saying it expects floats.
I recall River trying to work on this a long while back on #mapnik irc. It seems to be a problem specific to boost property_tree ('ptree') on solaris.
Mapnik uses boost ptree for building an object tree during XML parsing. To make debugging this harder, Mapnik keeps a copy of boost property_tree in its source directory and builds against this locally. The reason for this is that Mapnik has been using ptree since before it was formally available in the boost sources.
In Mapnik trunk (aka Mapnik2, which will likely be the Mapnik 0.8.0 release), we have removed the local copy and are using the ptree available within official boost sources. This required a number of changes in Mapnik, as ptree function names changed a bit once it was included. I recall River may have tried building Mapnik trunk, but I'm not sure if it solved this problem on Solaris.
If you guys are interested in going the Mapnik2 route on willow to try to get past this ptree problem, just be warned that Mapnik2/trunk is potentially introducing backward-incompatible changes in XML parsing. Consider me a resource to soften this issue, and follow http://trac.mapnik.org/wiki/Mapnik2 for more details.
Dane
Kai Krueger:
may I ask what the current status on the rendering stack on the toolserver is?
The software, i.e. mod_tile and renderd, is working. It's not online at the moment because it was only a test version running under my account. I'm currently considering the best way to make it available permanently (probably running under the OSM account).
As there are a couple of people here in Berlin who are interested in OSM, I'd like to organise some kind of discussion/workshop tomorrow to discuss where the OSM/Toolserver stuff is going. At least Avar and Tim (Kolossos) expressed an interest in that... if anyone else here would like to join in, please find me tonight (on the boat) or tomorrow morning at the venue so we can sort something out.
- river.
On 04/14/2010 09:07 PM, River Tarnell wrote:
Kai Krueger:
may I ask what the current status on the rendering stack on the toolserver is?
The software, i.e. mod_tile and renderd, is working. It's not online at the moment because it was only a test version running under my account. I'm currently considering the best way to make it available permanently (probably running under the OSM account).
Thanks for the update; together with the tickets in JIRA about ptolemy that I hadn't seen before, it makes much clearer what's going on and where things stand.
As there are a couple of people here in Berlin who are interested in OSM, I'd like to organise some kind of discussion/workshop tomorrow to discuss where the OSM/Toolserver stuff is going.
That would be great. A good understanding of what the core objectives are and thus which parts are needed would be very helpful in focusing efforts.
Just as ideas, some things that might be interesting to start with would be, e.g., to move the embedded OSM map in the German geohack from using the OSMF servers to Wikimedia's servers. As far as I know, the load generated on the OSM tile server by the geohack isn't particularly large, so I don't think it is a priority from OSMF's point of view, but it might still be a good test for the toolserver setup, and it would be nice if other countries' geohacks would follow and also directly include an OSM map. Another potentially nice use would be to include the OSM maps in the Wiki Mini Atlas as an additional source.
Both of those might be easier first steps than getting the maps plugin into a Wikipedia.
But it will be interesting to hear what conclusions people will come to during the workshop.
Kai
At least Avar and Tim (Kolossos) expressed an interest in that... if anyone else here would like to join in, please find me tonight (on the boat) or tomorrow morning at the venue so we can sort something out.
- river.
Maps-l mailing list Maps-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/maps-l
directly include an OSM map. Another potentially nice use would be to include the OSM maps in the Wiki Mini Atlas as an additional source.
Yeah, I've been thinking about that for quite a while now, and even had a test version of WMA. The main issue is the different map projection (Mercator rather than lat-lon), which makes it necessary to modify my label server.
On 02/15/2010 08:59 AM, Peter Körner wrote:
We should start a new thread about setting up Mapnik, mod_tile & Co.
Now that the initial import is finished, it might be good to get the ball rolling on the rendering side. The fact that it still hasn't fully caught up hopefully shouldn't matter too much, as it will later be updating constantly to keep up with the minutely diffs.
I guess there are three things that would be good to get working initially. mod_tile, the static export script, and the osm-render tool for testing styles on the login-servers.
In all three cases mapnik is needed. It looks like mapnik is already installed on nightshade, so potentially the osm-render tool might already work with that? It looks like the version installed is 0.5 though, and I think the current version of the OSM style sheets needs a newer version, possibly 0.6.1. I don't know how far back you would have to go for them to be compatible with version 0.5 again. Nevertheless, it would be nice, if possible, to update to version 0.6.1 or 0.7.
The export script[1], which is used for the static images in the Maps wiki plugin, is a simple CGI script in Python, so it should hopefully be fairly easy to get working, as long as the Python mapnik bindings are installed on ptolemy. Later on, the export script will presumably need a proxy in front of it (it should already set the correct HTTP headers to support that), as unlike mod_tile it has no caching of its own and renders all requests on the fly.
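A minimal sketch of the kind of caching headers such a CGI script needs to emit for a fronting proxy to be able to cache its output (the function name and the one-day max-age are illustrative assumptions, not what the real export script does):

```python
import time
from email.utils import formatdate

def cgi_headers(max_age=86400):
    """CGI response headers that let an upstream proxy cache a rendered
    image. The one-day max_age is only an illustrative value."""
    expires = formatdate(time.time() + max_age, usegmt=True)
    return (
        "Content-Type: image/png\r\n"
        f"Cache-Control: public, max-age={max_age}\r\n"
        f"Expires: {expires}\r\n"
        "\r\n"
    )
```

A CGI script would write these headers to stdout before the PNG body; the proxy then serves repeat requests without hitting the renderer at all.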
That leaves mod_tile and renderd. Will it require a lot of work to port them over to run on ptolemy? What can be done to help with that?
Kai
[1] http://svn.wikimedia.org/viewvc/mediawiki/trunk/tools/osm-tools/cgi-bin/
Peter
Kai Krueger:
In all three cases mapnik is needed. It looks like mapnik is already installed on nightshade, so potentially the osm-render tool might already work with that? It looks like the version installed is 0.5 though and I think the current version of the OSM style sheets need a newer version, possibly 0.6.1.
willow and ptolemy have 0.6.1. I would suggest not using nightshade, as its software and configuration is quite likely to be different to the Solaris systems (and the import is already running on willow). It's also easier to install/upgrade to specific versions of software on Solaris.
- river.
On 02/20/2010 05:35 PM, River Tarnell wrote:
Kai Krueger:
In all three cases mapnik is needed. It looks like mapnik is already installed on nightshade, so potentially the osm-render tool might already work with that? It looks like the version installed is 0.5 though and I think the current version of the OSM style sheets need a newer version, possibly 0.6.1.
willow and ptolemy have 0.6.1. I would suggest not using nightshade, as its software and configuration is quite likely to be different to the Solaris systems (and the import is already running on willow). It's also easier to install/upgrade to specific versions of software on Solaris.
Wonderful.
I am trying to see how far I can get with generating some images on willow then. osm-render seems to require the python cairo module, which it can't find. generate_image.py[1] looks like it works better, but gives me some errors complaining about the style sheet. Not sure where that comes from yet, but I'll figure it out :-)
One more thing that I noticed is that the coastlines used with mapnik are not stored in the postgis database but come from a separate set of shapefiles[2]. As these are 500 MB in size, they brought me over my quota limit. Is there a data directory where these could be put so that everyone can access them?
Thanks,
Kai
[1] http://trac.openstreetmap.org/browser/applications/rendering/mapnik/generate... [2] http://wiki.openstreetmap.org/wiki/Mapnik#World_Boundaries
- river.
Kai Krueger:
osm-render seems to require the python cairo module, which it can't find.
You can file requests to have additional software installed in JIRA[0].
Is there a data directory where these could be put so that everyone can access them?
Shared data like this should probably be placed in the osm project directory, so everyone can access it.
- river.
[0] https://jira.toolserver.org/browse/TS
One more thing that I noticed is that the coastlines used with mapnik are not stored in the postgis database but come from a separate set of shapefiles[2]. As these are 500 MB in size, they brought me over my quota limit. Is there a data directory where these could be put so that everyone can access them?
They're already there, stored in /home/project/o/s/m/osm/data/world_boundaries
Peter
On 02/20/2010 08:02 PM, Peter Körner wrote:
One more thing that I noticed is that the coastlines used with mapnik are not stored in the postgis database but come from a separate set of shapefiles[2]. As these are 500 MB in size, they brought me over my quota limit. Is there a data directory where these could be put so that everyone can access them?
They're already there, stored in /home/project/o/s/m/osm/data/world_boundaries
Is it possible to make /home/project/o/s/m/osm world readable/searchable? I can access the files, but I couldn't browse through the directory to check whether they are already available.
Peter
Is it possible to make /home/project/o/s/m/osm world readable/searchable? I can access the files, but I couldn't browse through the directory to check whether they are already available.
Are you not working as "osm"? Try ttyallow osm && become osm on willow (on nightshade ttyallow does not work; see [1])
Peter
[1] https://wiki.toolserver.org/view/Multi-maintainer_projects
Peter Körner:
Is it possible to make /home/project/o/s/m/osm world readable/searchable?
Are you not working as "osm"?
If there's data there that's useful to other users, it should probably be world readable anyway.
- river.
I am trying to see how far I can get with generating some images on willow then. osm-render seems to require the python cairo module, which it can't find. generate_image.py[1] looks like it works better, but gives me some errors complaining about the style sheet. Not sure where that comes from yet, but I'll figure it out :-)
osm-render is just a modification of generate_image.py, but we should get the cairo bindings running anyway to support PDF & SVG generation.
The export script[1], which is used for the static images in the Maps wiki plugin is a simple cgi script in python, so hopefully should be fairly easy to get working, as long as the python mapnik bindings are installed on ptolemy.
Should it run on ptolemy, or would wolfsbane be better?
Peter
Kai Krueger schrieb:
I am trying to see how far I can get with generating some images on willow then. osm-render seems to require the python cairo module
I fixed /home/project/o/s/m/osm/tools/render/osm-render to work without the cairo module, but it complains about errors in the style XML, located in /home/project/o/s/m/osm/data/osm_mapnik/osm.xml.
mazder@willow:~$ /home/project/o/s/m/osm/tools/render/osm-render
rendering bbox (-180, -85, 180, 85) in style /home/project/o/s/m/osm/data/osm_mapnik/osm.xml to file map which is of type png in size 800x600
/home/project/o/s/m/osm/data/osm_mapnik/inc/entities.xml.inc:9: parser warning : PEReference: %layers; not found
%layers;
^
Traceback (most recent call last):
  File "/home/project/o/s/m/osm/tools/render/osm-render", line 128, in <module>
    main()
  File "/home/project/o/s/m/osm/tools/render/osm-render", line 81, in main
    mapnik.load_map(m, style)
UserWarning: XML document not well formed: Entity 'layer-admin' not defined in file '/home/project/o/s/m/osm/data/osm_mapnik/osm.xml' at line 6503
mazder@willow:~$
I don't have the time to fix this atm, maybe you can. If all goes well, it should produce a map.png in your working dir, containing a world overview. I think the paths to the shoreline files still have to be changed.
Peter
On 23/02/2010 09:59, Peter Körner wrote:
Kai Krueger schrieb:
I am trying to see how far I can get with generating some images on willow then. osm-render seems to require the python cairo module
I fixed /home/project/o/s/m/osm/tools/render/osm-render to work without the cairo module, but it complains about errors in the style XML, located in /home/project/o/s/m/osm/data/osm_mapnik/osm.xml.
mazder@willow:~$ /home/project/o/s/m/osm/tools/render/osm-render
rendering bbox (-180, -85, 180, 85) in style /home/project/o/s/m/osm/data/osm_mapnik/osm.xml to file map which is of type png in size 800x600
/home/project/o/s/m/osm/data/osm_mapnik/inc/entities.xml.inc:9: parser warning : PEReference: %layers; not found
%layers;
^
Traceback (most recent call last):
  File "/home/project/o/s/m/osm/tools/render/osm-render", line 128, in <module>
    main()
  File "/home/project/o/s/m/osm/tools/render/osm-render", line 81, in main
    mapnik.load_map(m, style)
UserWarning: XML document not well formed: Entity 'layer-admin' not defined in file '/home/project/o/s/m/osm/data/osm_mapnik/osm.xml' at line 6503
mazder@willow:~$
I don't have the time to fix this atm, maybe you can. If all goes well, it should produce a map.png in your working dir, containing a world overview. I think the paths to the shoreline files still have to be changed.
If you are using the current style sheet of OpenStreetMap, then they use xml includes to specify the location of the database, the font directory and the directory containing the world boundaries. In the osm svn (mapnik subdirectory), you will find the inc directory in addition to the osm.xml. This contains the includes. You will need to copy over all of the *.template.inc to *.inc and substitute those values with the appropriate ones for the environment. E.g. the database host being sql-mapnik.
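A sketch of that copy-and-substitute step in Python, assuming hypothetical placeholder names (HOST, WORLD_BOUNDARIES); the real *.template.inc files in the mapnik directory define their own placeholders and values:

```python
import glob
import os

def instantiate_templates(inc_dir, values):
    """Copy every *.template.inc in inc_dir to the matching *.inc,
    filling in %(NAME)s placeholders with environment-specific values."""
    for template in glob.glob(os.path.join(inc_dir, "*.template.inc")):
        with open(template) as f:
            text = f.read()
        for name, value in values.items():
            text = text.replace("%(" + name + ")s", value)
        with open(template.replace(".template.inc", ".inc"), "w") as f:
            f.write(text)

# Hypothetical toolserver values, e.g. the database host alias:
# instantiate_templates("inc", {
#     "HOST": "sql-mapnik",
#     "WORLD_BOUNDARIES": "/home/project/o/s/m/osm/data/world_boundaries",
# })
```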
I had a go at this over the weekend, to adapt the style file to the toolserver environment. So far without luck though, as mapnik kept complaining that the style sheet is broken. The issue I saw was that the style sheet wasn't being parsed correctly, giving errors like CssParameter expects value of type float but found '1.0', or CssParameter expected to see a value of [ round, something, something else ] but found 'round'.
From what I have seen, float values require a whitespace character before and after the value. Colour values of the form #999 must not have whitespace before or after. And I haven't found the magical whitespace combination to get the enumerations working at all.
This is very strange though, as I don't see any of this on my Linux box, which makes me wonder whether the Solaris mapnik/boost libraries handle parsing differently in some way.
I'll have another look at it though later.
Kai
Peter
Yes, I already adapted it for our multi-style environment on cassini and placed it at [1], but on cassini it was very slow with that many styles.
Do you know what the bottleneck was? Was it DB access to generate the list of tiles, CPU speed running the Ruby script, or filesystem performance to touch a huge number of files?
I guess it was the filesystem, because the script only queries the DB once, no matter how many styles you expire.
Presumably ptolemy is running a different filesystem, so the latter might behave quite differently. At first we can just touch the global planet-import timestamp though every couple of days expiring all tiles at once while we get everything else running reliably. I suspect there are still some optimizations possible that might be sufficient, but we will need to see what performance is like on ptolemy first.
Yes, we'll do this. Maybe it would be enough to touch the lower 4 or 5 zoom levels on a per-minute or per-5-minute basis and leave the rest for a weekly expire-all event.
Peter
On 02/15/2010 08:17 AM, Peter Körner wrote:
Yes, I already adapted it for our multi-style environment on cassini and placed it at [1], but on cassini it was very slow with that many styles.
Do you know what the bottleneck was? Was it DB access to generate the list of tiles, CPU speed running the Ruby script, or filesystem performance to touch a huge number of files?
I guess it was the filesystem, because the script only queries the DB once, no matter how many styles you expire.
Even on yevaud, the osm tile server, which only has a single style, the tile expiry might be causing some issues. Although not definite, the high system (rather than user) load seen on that server during "editing rush-hour" (i.e. when a lot of tiles need expiring) is probably caused by those scripts. The combination of using Ruby and spawning an external program (touch) for each expired tile doesn't strike me as efficient. So I would be curious to know whether, and by how much, moving over to a C-based program (e.g. [1]) using utime would help. I currently have no way to benchmark this on a full planet, but perhaps we can test the two options on the toolserver once the rendering stack is set up.
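The comparison can be sketched like this (the actual expiry scripts are Ruby and C; this is only a Python illustration of the fork/exec-per-file cost versus setting the mtime in-process):

```python
import os
import subprocess
import time

def expire_with_touch(paths):
    """One external 'touch' process per tile: a fork/exec for every file,
    which is the overhead the current approach pays."""
    for p in paths:
        subprocess.call(["touch", p])

def expire_with_utime(paths, when=None):
    """Reset each tile's mtime in-process: one utime() syscall per file,
    no process spawning."""
    t = time.time() if when is None else when
    for p in paths:
        os.utime(p, (t, t))
```

With millions of expired tiles per run, the per-file fork/exec is exactly what a C program calling utime() directly avoids.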
Presumably ptolemy is running a different filesystem, so the latter might behave quite differently. At first we can just touch the global planet-import timestamp though every couple of days expiring all tiles at once while we get everything else running reliably. I suspect there are still some optimizations possible that might be sufficient, but we will need to see what performance is like on ptolemy first.
Yes, we'll do this. Maybe it would be enough to touch the lower 4 or 5 zoom levels on a per-minute or per-5-minute basis and leave the rest for a weekly expire-all event.
Do you mean high zoom (e.g. zoom levels 12-18) or low zoom (z0-z6)? It seems more reasonable to expire the high zoom levels, since changes are more visible there and they are much faster to render, as they contain less data. On the osm tile server, low zoom tiles currently don't get expired at all, other than through a full reimport, so they can be months out of date. I wouldn't go that far, but rerendering low zoom tiles once a week in the background (e.g. with render_list) would probably be sufficient.
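A quick back-of-the-envelope on tile counts (counts only; per-tile render cost varies with data density) shows why a weekly background rerender of the low zoom levels is cheap:

```python
def tiles_in_range(zmin, zmax):
    """Total number of slippy-map tiles across zoom levels zmin..zmax
    (each level z has 4**z tiles)."""
    return sum(4 ** z for z in range(zmin, zmax + 1))

low = tiles_in_range(0, 6)     # 5,461 tiles: trivial to rerender weekly
high = tiles_in_range(12, 18)  # ~91.6 billion tiles: only expire on demand
```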
Peter
Kai
[1] http://trac.openstreetmap.org/browser/applications/utils/mod_tile/render_exp...
So I would be curious to know whether, and by how much, moving over to a C-based program (e.g. [1]) using utime would help. I currently have no way to benchmark this on a full planet, but perhaps we can test the two options on the toolserver once the rendering stack is set up.
I think this is what the Toolserver is for.
Presumably ptolemy is running a different filesystem, so the latter might behave quite differently. At first we can just touch the global planet-import timestamp though every couple of days expiring all tiles at once while we get everything else running reliably. I suspect there are still some optimizations possible that might be sufficient, but we will need to see what performance is like on ptolemy first.
Yay, we'll to this. Maybe it would be enough to touch the lower 4 or 5 zoom levels on a per-minute or per-5-minute basis and leave the rest for a weekly expire-all event.
Do you mean high zoom (e.g. zoom levels 12-18) or low zoom (z0-z6)? It seems more reasonable to expire the high zoom levels, since changes are more visible there and they are much faster to render, as they contain less data.
This is what I tried to say.
On the osm tile server, low zoom tiles currently don't get expired at all, other than through a full reimport, so they can be months out of date. I wouldn't go that far, but rerendering low zoom tiles once a week in the background (e.g. with render_list) would probably be sufficient.
Once a week is okay, but we'll have to keep in mind that localisation of countries, islands, cities and co. is still a major task that massively affects the low (0-6) zoom levels.
Peter
River Tarnell schrieb:
Okay. I've started reinstalling ptolemy with our current Solaris image.
Once that is done, I'll drop the existing database and create a new one owned by 'osm'. You should have access to the 'osm' MMT; see https://wiki.toolserver.org/view/Multi-maintainer_projects for details on how to use it. Once this is done you can start importing the database.
What's the name of the database? I tried to connect to it from nightshade with
osm@nightshade:~$ psql -hptolemy
psql: FATAL: database "osm" does not exist
We'll need a workflow for how toolserver users can test their stylesheets
I know very little about renderd, so I suggest someone else comes up with a specific proposal for this, which I can implement.
As I said: Test your style via the osm-render-tool on the login-servers, which is as simple as
osm-render --style /path/to/style.xml --bbox 7.9,50.17,8.65,49.83
after it's ready to be tested on a real tile server, submit it via a JIRA ticket and install it on ptolemy (we can write a script or at least step-by-step instructions for that).
Peter
Peter Körner:
What's the name of the database?
"osm_mapnik" (and "osm_api", but we don't plan to use that one until we have more disk space).
Also, use the server name 'sql-mapnik' to connect. That way we can move the database later without having to change all the scripts.
after it's ready to be tested on a real tile server, submit it via ticket to jira and install it on ptolemy
Okay, that's fine, as long as "test it on the login servers" doesn't need any action from admins.
- river.
River Tarnell schrieb:
Since this does not confer any additional privileges, and the OSM data is not private, we can easily add people to the MMT (Peter, Tim?), and these people will be responsible for ensuring the database is available.
I'm not the right person for this job, because I have too little PostgreSQL experience. Greetings, Tim alias Kolossos