PoC using PostGIS Raster extension with Terrain Tiles, an AWS open dataset of elevation tiles.
Figure 1. Preview application using Leaflet
We are using Terrain Tiles, an open dataset from AWS; more details here
If you want to produce all the S3 URIs for a specific zoom level, you can use a command like the one below.
aws s3 ls --no-sign-request s3://elevation-tiles-prod/v2/geotiff/15/ --recursive | awk '{print $4}' | sed 's/^/\/vsis3\/elevation-tiles-prod\//' > elevation-tiles-prod.txt
Postgres/PostGIS is extended with raster capabilities, using GDAL under the hood. More information here.
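If you run your own Postgres instead of the docker setup in this repo, raster support and out-db access have to be switched on first. A minimal sketch (session-level settings shown here; in practice you would put them in postgresql.conf):
-- Enable PostGIS and the raster extension (shipped separately since PostGIS 3)
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS postgis_raster;
-- Allow reading out-db rasters and let GDAL use all of its drivers
SET postgis.enable_outdb_rasters = true;
SET postgis.gdal_enabled_drivers = 'ENABLE_ALL';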
Examples that use aws s3 ls
export PGPASSWORD='postgres';
export AWS_NO_SIGN_REQUEST=YES
export GTIFF_DIRECT_IO=YES
# Copy geotiff locally
aws s3 cp --no-sign-request s3://elevation-tiles-prod/v2/geotiff/15/0/10850.tif 10850.tif
# Check geotiff info
gdalinfo /vsis3/elevation-tiles-prod/v2/geotiff/15/0/10850.tif
# Create an SQL file that also creates the table
raster2pgsql -s 3857 -I -M -R -t 256x256 /vsis3/elevation-tiles-prod/v2/geotiff/15/0/10850.tif elevation_tiles_prod > inject_data_with_creating_table.sql
# Create an SQL file that only appends the data (no table creation)
raster2pgsql -a -s 3857 -R -t 256x256 -k -Y /vsis3/elevation-tiles-prod/v2/geotiff/15/0/10850.tif elevation_tiles_prod > inject_data.sql
# Import the raster straight into the Postgres DB
raster2pgsql -a -s 3857 -R -t 256x256 -k -Y /vsis3/elevation-tiles-prod/v2/geotiff/15/0/10850.tif elevation_tiles_prod | psql -h localhost -p 5432 -U postgres -d postgres
# Create a txt file with all the import commands
aws s3 ls --no-sign-request s3://elevation-tiles-prod/v2/geotiff/1/ --recursive | awk '{print "raster2pgsql -a -s 3857 -R -t 256x256 -k -Y /vsis3/elevation-tiles-prod/" $4 " elevation_tiles_prod | psql -h localhost -p 5432 -U postgres -d postgres"}' >> inject_tiles_into_db_for_specific_zoom_level.txt
# Generate and run the import commands for a whole zoom level
aws s3 ls --no-sign-request s3://elevation-tiles-prod/v2/geotiff/2/ --recursive | awk '{print "raster2pgsql -a -s 3857 -R -t 256x256 -k -Y /vsis3/elevation-tiles-prod/" $4 " elevation_tiles_prod | psql -h localhost -p 5432 -U postgres -d postgres"}' | bash
Or we can use a small script to automate the procedure. Set the min and max elevation tiles in the script get_elevation_tiles_path.sh
./get_elevation_tiles_path.sh
Finally, execute this to insert the data
export PGUSER=postgres
export PGPASSWORD=postgres
export AWS_NO_SIGN_REQUEST=YES
export GTIFF_DIRECT_IO=YES
ulimit -n `ulimit -Hn`
parallel --jobs 600 -a tiles.txt raster2pgsql -a -s 3857 -R -t 256x256 -k -Y {} elevation_tiles_prod | psql -h localhost -p 5432 -U postgres -d postgres
For zoom level 7 we imported 16,369 of the 16,384 tiles in 20 minutes; zoom level 8 took 1 hour and 30 minutes.
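To double-check what landed in the table, a couple of quick queries (assuming the default raster column name rast created by raster2pgsql):
-- Number of imported tiles
SELECT count(*) FROM elevation_tiles_prod;
-- Where one out-db band points to, plus its basic metadata
SELECT ST_BandPath(rast) AS geotiff_path, (ST_MetaData(rast)).*
FROM elevation_tiles_prod
LIMIT 1;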
If you have your own infrastructure set up and want to inject the data, all you need to do is run these SQL scripts.
You can simply skip the SQL scripts here and instead rename the file 13_add_data.sha to 13_add_data.sh, specify the MAX_ZOOM_LEVEL and the S3 path, and when you run docker compose up it will inject your own dataset.
docker compose up
docker compose down -v --rmi local
We support Tile Map Service (TMS) and statistical endpoints backed by raster processing of the DEM (elevation). More information here: https://postgis.net/docs/RT_reference.html#Raster_Processing_DEM
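To give an idea of what such an endpoint runs against PostGIS, here is a rough sketch of a {z}/{x}/{y} elevation tile query (illustrative only; the actual queries live in the repository and may differ):
-- Render one TMS tile: clip the out-db raster to the tile envelope (EPSG:3857),
-- apply a built-in color ramp and encode the result as PNG
WITH tile AS (
  SELECT ST_TileEnvelope(9, 0, 0) AS geom  -- {z}/{x}/{y}
)
SELECT ST_AsPNG(
         ST_ColorMap(
           ST_Union(ST_Clip(t.rast, tile.geom)),
           'pseudocolor'))
FROM elevation_tiles_prod t, tile
WHERE ST_Intersects(t.rast, tile.geom);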
To check the available endpoints, please check out the elevation-tiles-controller.
cd springboot
mvn spring-boot:run
Here the possibilities are endless, since for every layer type (elevation, slope, tri, tpi, hillshade, aspect), besides the TMS {z}/{x}/{y} path, you can specify the colormap by passing it in pg format (the color maps used by PostGIS, in particular those for the ST_ColorMap function). The best color ramps I found are here -> cpt-city
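As a rough illustration of how a layer type and a colormap map onto the PostGIS functions (not the exact repository queries; the custom ramp values below are made up):
-- Slope layer rendered with a built-in ramp
SELECT ST_AsPNG(ST_ColorMap(ST_Slope(rast, 1, '32BF'), 'pseudocolor'))
FROM elevation_tiles_prod LIMIT 1;
-- Elevation rendered with a custom ramp passed in pg format ("value R G B A" per line)
SELECT ST_AsPNG(ST_ColorMap(rast, '100% 255 255 255 255
50% 120 120 120 255
0% 0 100 0 255
nv 0 0 0 0'))
FROM elevation_tiles_prod LIMIT 1;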
For this benchmarking, we used oha pointed at localhost. For example:
oha "http://localhost:8080/api/v1/elevation/0/0/0?colormap=pseudocolor"
We ran the docker cluster on an EC2 instance in region us-east-1 (the same region as the S3 bucket). We used 1000 requests at zoom level 9, picking the x and y tile indices at random from the range [0...255].
oha -n 1000 --rand-regex-url "http://localhost:8080/api/v1/elevation/9/([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])"
Figure 3. Single tile elevation request
# no regex here due to the `filter`, but the number of connections to run concurrently = 100 (default 50) and the number of parallel requests to send over HTTP/2 = 10 (default 1)
oha -n 1000 -c 100 -p 10 "http://localhost:8080/api/v1/elevation/9/0/0?colormap=pseudocolor"
Figure 4. Single tile elevation rendering with a colormap
oha -n 1000 http://localhost:8080/api/v1/elevation-statistics/9/0/1
Figure 5. Single elevation tile statistics request
# no regex here due to the `filter`, but the number of connections to run concurrently = 100 (default 50) and the number of parallel requests to send over HTTP/2 = 10 (default 1); the zoom level is 1 and the extent is the whole world, clipped by 1 degree.
oha -n 1000 -c 100 -p 10 "http://localhost:8080/api/v1/elevation-statistics/1/-178,-89,178,89"
Figure 6. Elevation statistics from 4 tiles (zoom level 1), for a specific extent
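Under the hood this kind of request boils down to ST_SummaryStats over the tiles that intersect the requested extent; a hedged sketch of such a query (the zoom-level filtering of the endpoint is omitted):
-- Elevation statistics (count, mean, stddev, min, max) for a lon/lat extent
WITH bbox AS (
  SELECT ST_Transform(ST_MakeEnvelope(-178, -89, 178, 89, 4326), 3857) AS geom
)
SELECT (stats).count, (stats).mean, (stats).stddev, (stats).min, (stats).max
FROM (
  SELECT ST_SummaryStats(ST_Union(ST_Clip(t.rast, b.geom))) AS stats
  FROM elevation_tiles_prod t, bbox b
  WHERE ST_Intersects(t.rast, b.geom)
) s;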
oha -n 1000 --rand-regex-url "http://localhost:8080/api/v1/slope/9/([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])"
Figure 7. Slope request of single tile
From Figures 3-7 we can conclude that PostGIS Raster using out-db rasters stored in S3 works and performs very well.
Tip: for more reliable benchmarking it would have been better to have our own clone of terrain-tiles in S3.
As the title says, this is a proof of concept, which means you can experiment more with the final setup of the infrastructure and with postgis.gdal_vsi_options (you can find it here). All you need, after you set up the data in the DB, is to create your own application in any language/framework you want and use these native queries, as we do in the ElevationTilesProdRepository class here.
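For example, playing with postgis.gdal_vsi_options could look like the line below (the specific options are only examples of knobs worth trying, not a recommended configuration):
-- Space-separated KEY=VALUE pairs handed to GDAL when it opens the /vsis3/ GeoTIFFs
SET postgis.gdal_vsi_options = 'AWS_NO_SIGN_REQUEST=YES VSI_CACHE=TRUE GDAL_HTTP_MAX_RETRY=3';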
HAVE FUN!