Introduction Recently we partnered with folks from the University of Akron to help determine how accurate UAS are compared to traditional mapping methods. Given the current difficulty of flying commercially in the National Airspace, this partnership gave us a unique opportunity to fly inside their Field House. This controlled space had a lot of […]
Posted by smathermather on August 10, 2016
Posted by smathermather on July 21, 2016
Guest blog post today from Brandon Garman (@brandon_garman):
After seeing Mapzen’s blog post on Sphere Maps, I wanted to try my hand at it. I downloaded the GitHub repository, set up a Python simple server, and ‘BOOM’ had my own instance running (Mapzen makes this process very easy). So next I pulled out my Wacom tablet, opened up Photoshop, and began painting in circles with different colors and brushes. After I had a few results I was happy with, I loaded each one to see how I did. Not so well, it turns out! I need to spend a bit more time practicing and fine-tuning before I can get any results half as nice as Mapzen’s examples.
Instead of hunkering down and refining my painted circles, I took a look at some of the non-hand-drawn examples that came in the repo (glitch, yinyang, checkerboard). Since all that needs to be uploaded is an image file, I decided to find some images of my own. PLANETS!! What better image to use to style terrain on Earth than other planets! So I picked a few images of our planetary neighbors, and I have to say I’m very happy with some of the results.
This is probably my favorite result (the moon is a close second though, see below).
As a cartographer, seeing an all blue hillshade is jolting, but oddly amusing. Still an excellent result though.
There are some great colors and contrast here; however, the result was a little blah.
Until I realized that the majority of the dark colors were in the top left, which meant the lighting was backwards from what is typical. So I rotated the sphere image 180 degrees and the result was much better, close to that of Mars.
EUROPA – one of Jupiter’s moons
Not too bad, but it has the same inverted look as the moon did.
Here it is rotated 180 degrees.
Obviously I was going to use Earth. The color scheme here is really great, but the level of detail in the data at this scale makes it look pixelated and hard to read.
I tried blurring the image a bit to smooth out some of the choppiness of the colors and it helped.
I love this color scheme as well, but it has the same problems as the Earth image did.
I blurred this image as well. It helped a bit. Still a great color scheme though.
The best part about sphere maps is that the only limitation is your creativity. Creating great-looking maps is very easy. Creating the most useless but entertaining maps is just as easy, and is sometimes more fun…
Posted by smathermather on June 30, 2016
This is a post about OpenDroneMap, an open source project I am a maintainer for. ODM is a toolchain for post-processing drone imagery to create 3D and mapping products. It’s currently in beta and under pretty heavy development. If you’re interested in contributing to the project, head over here. The Problem So for most of the […]
Posted by smathermather on June 17, 2016
The Problem We have to filter out the roads and ditches without removing streams that cross roads or follow them closely. I’m going to use PostGIS to find the intersection of the stream lines data with a buffered roads polygon. If the intersected line is less than 50% of the length of the stream line, […]
Posted by smathermather on May 29, 2016
Here was the problem that needed to be solved last week (we have a few similar problems in upcoming projects, so this was an exciting thing to try): we needed to use PostGIS to access data in a SQL Server database. The SQL Server database backs the web site in question, the underlying content management system, etc., so no, removing SQL Server isn’t really an option at this stage. Obviously PostGIS is a requirement too.
Before I go further: I used tds_fdw as the foreign data wrapper. The limitations are as follows: it is a read-only connection, and it only works with Sybase and SQL Server, as it uses the tabular data stream (TDS) protocol for communicating between client and server. This is not the most generic solution available. Next time I’ll try ogr_fdw, which is more generic, as it can connect to other databases and other data types. Another advantage of ogr_fdw is that we can use IMPORT FOREIGN SCHEMA. Regina Obe and Leo Hsu warn, though, to limit this to 150 tables or so for performance reasons.
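For reference, a minimal sketch of what the ogr_fdw route might look like, assuming GDAL’s MSSQLSpatial driver and placeholder server/database names (untested here; the SQL is written to a file rather than executed, so the sketch stands alone):

```shell
# Hypothetical ogr_fdw setup; the server name, datasource string, and
# database names are placeholders, not values from this post.
cat > ogr_fdw_sketch.sql <<'SQL'
CREATE EXTENSION IF NOT EXISTS ogr_fdw;

CREATE SERVER beach_ogr
  FOREIGN DATA WRAPPER ogr_fdw
  OPTIONS (datasource 'MSSQL:server=name_or_ip;database=non-spatial-database',
           format 'MSSQLSpatial');

-- ogr_fdw exposes the special remote schema "ogr_all", so tables can be
-- imported without hand-writing each column definition.
IMPORT FOREIGN SCHEMA ogr_all
  LIMIT TO (vw_CMPBeachClosures)
  FROM SERVER beach_ogr INTO public;
SQL

# On a machine with PostGIS and ogr_fdw installed, this would be run with:
# psql -d your_database -f ogr_fdw_sketch.sql
echo "wrote $(wc -l < ogr_fdw_sketch.sql) lines of SQL"
```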
With the limitations listed above, this is how we built the thang:
DROP SERVER beach_fdw CASCADE;

-- Create the server connection with FOREIGN DATA WRAPPER
CREATE SERVER beach_fdw
  FOREIGN DATA WRAPPER tds_fdw
  OPTIONS (servername 'name_or_ip', port '1433',
           database 'non-spatial-database', tds_version '7.1',
           msg_handler 'notice');

-- We map the postgres user to the user that can read the table we're interested in
CREATE USER MAPPING FOR postgres
  SERVER beach_fdw
  OPTIONS (username 'user', password 'password');

-- Create the actual foreign table connection
CREATE FOREIGN TABLE beach_closures (
  AutoNumber int NOT NULL,
  Title varchar NOT NULL,
  StartDate timestamp without time zone NOT NULL,
  WaterQuality varchar NOT NULL,
  Latitude varchar NOT NULL,
  Longitude varchar NOT NULL,
  BeachStatus varchar NOT NULL,
  ClosureColor varchar NOT NULL)
  SERVER beach_fdw
  OPTIONS (schema_name 'schema_name', table_name 'vw_CMPBeachClosures');

-- Now we create a spatial view using our longitude and latitude
CREATE VIEW v_beach_closures AS
  SELECT
    AutoNumber, Title, StartDate, WaterQuality, Latitude,
    Longitude, BeachStatus, ClosureColor,
    ST_SetSRID(ST_MakePoint(Longitude::numeric, Latitude::numeric), 4326) AS geom
  FROM beach_closures;
Voila! A nice little PostGIS-enabled view of a SQL Server view or table!
Posted by smathermather on May 1, 2016
Guest post from my colleague Patrick Lorch, who wrote up what we did the other day in order to view a whole bunch of tiled images in a directory in QGIS (I did some mild editing to his post; mistakes are mine). The great thing about the approach is that it is generalizable to most tools that use GDAL for their raster API. This is part of a series. You can view this series in reverse with this post:
Building virtual datasets from a bunch of tiffs
What do you do when someone gives you aerial images stored as tiles in different directories representing different zoom levels? The goal is to make them easy to use as base layers in QGIS. The answer is to reference them in a virtual dataset (VRT).
gdalbuildvrt is the ticket
First make lists of tiffs
If the directory structure is something like this:
total 8047
drwxr-xr-x 7 pdl    4096 Apr 29 14:22 ./
drwxr-xr-x 5 pdl    4096 Apr 27 22:10 ../
drwxr-xr-x 2 pdl 1310720 Apr 22 21:37 0/
drwxr-xr-x 2 pdl  393216 Apr 22 22:54 1/
drwxr-xr-x 2 pdl   98304 Apr 28 14:44 2/
drwxr-xr-x 2 pdl   32768 Apr 28 14:44 3/
drwxr-xr-x 2 pdl    8192 Apr 28 14:44 4/
Then you first need a set of list files listing tiffs in each directory.
ls 0/*.tif > list0.txt
ls 1/*.tif > list1.txt
ls 2/*.tif > list2.txt
ls 3/*.tif > list3.txt
ls 4/*.tif > list4.txt
Now make the VRTs
gdalbuildvrt -input_file_list list0.txt aerial_2015_0.vrt
gdalbuildvrt -input_file_list list1.txt aerial_2015_1.vrt
gdalbuildvrt -input_file_list list2.txt aerial_2015_2.vrt
gdalbuildvrt -input_file_list list3.txt aerial_2015_3.vrt
gdalbuildvrt -input_file_list list4.txt aerial_2015_4.vrt
Now you can open these in QGIS depending on what zoom level you need.
These VRTs may now be loaded as ordinary rasters in QGIS or whatever you please. In this case, we retiled with multiple resample levels (see this post for more info), so we’ll have to define max/min ranges at which the different image collections are visible.
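The per-level list/VRT command pairs above can also be written as a loop. A minimal sketch, using a fabricated two-directory tree so it runs anywhere, and echoing the gdalbuildvrt call rather than running it (drop the echo on a machine with GDAL installed):

```shell
# Fabricate a tiny stand-in for the 0/..4/ zoom-level layout above.
mkdir -p demo/0 demo/1
touch demo/0/a.tif demo/0/b.tif demo/1/c.tif

cd demo
for z in 0 1; do
  # One list file per zoom-level directory, as in the post.
  ls "$z"/*.tif > "list$z.txt"
  # Echoed dry run; remove `echo` to actually build the VRT.
  echo gdalbuildvrt -input_file_list "list$z.txt" "aerial_2015_$z.vrt"
done
cd ..
```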
Thanks for the write-up, Pat!
Posted by smathermather on April 15, 2016
Subdivision of geographic data is a panacea for problems you didn’t know you had.
Maybe you deal with vector data, so you pre-tile your vector data to ship to the browser to render: you’re makin’ smaller data. Maybe you use cutting-edge PostGIS, so you apply ST_Subdivide to keep your data smaller than the database page size, like Paul Ramsey describes here. Smaller’s better. Or perhaps you are forever reprojecting your data in strange ways, across problematic boundaries, or need to buffer in an optimum coordinate system to avoid distortion. Regardless of the reason, smaller is better.
Maybe you aren’t doing vector work, but this time raster. What’s the equivalent tiling process? I wrote about this for GeoServer almost 5 (eep!) years ago now (with a slightly more recent follow up) and much of what I wrote still applies:
- Pre-tile your raw data in modest chunks
- Use GeoTIFF so you can use its internal data structures to have even smaller tiles inside your tiles
- Create pyramids / pre-summarized data as tiles too.
Fortunately, while these posts were written for GeoServer, they apply to any tiler. Pre-process with gdal_retile:
gdal_retile.py -v -r bilinear -levels 4 -ps 6144 6144 -co "TILED=YES" -co "BLOCKXSIZE=256" -co "BLOCKYSIZE=256" -s_srs EPSG:3734 -targetDir aerial_2011 --optfile list.txt
Let’s break this down a little:
First we choose our resampling method for our pyramids (bilinear). Lanzcos would also be fine here.
Next we set the number of resampling levels (-levels 4). This will depend on the size of the dataset.
Next we specify the pixel and line size of the output GeoTIFF. This can be pretty large. We probably want to avoid a size that forces the use of BigTIFF (i.e., files over 4GB).
-ps 6144 6144
Now we get into the GeoTIFF data structure: we internally tile the tifs and make the tiles 256×256 pixels. We could also choose 512. We’re just aiming to have our internal tile size near the size we are going to send to the browser.
-co "TILED=YES" -co "BLOCKXSIZE=256" -co "BLOCKYSIZE=256"
Finally, we specify our coordinate system (this is State Plane Ohio), our output directory (which needs to be created ahead of time), and our input file list.
-s_srs EPSG:3734 -targetDir aerial_2011 --optfile list.txt
That’s it. Now you have a highly optimized raster dataset. Pretty much any geospatial solution that uses GDAL can leverage this work to make for very fast rendering of raster data to a tile cache. If space is an issue, apply compression options that match your use case.
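For RGB aerials, for example, JPEG compression inside the tiled GeoTIFFs is a common space saver (use DEFLATE or LZW if you need lossless). A sketch built on the gdal_retile call above; the command is assembled and echoed rather than executed, so it runs without GDAL installed:

```shell
# Same gdal_retile invocation as above, with GeoTIFF compression creation
# options added. COMPRESS, PHOTOMETRIC, and JPEG_QUALITY are standard GTiff
# creation options; tune quality to your tolerance for artifacts.
cmd='gdal_retile.py -v -r bilinear -levels 4 -ps 6144 6144
  -co TILED=YES -co BLOCKXSIZE=256 -co BLOCKYSIZE=256
  -co COMPRESS=JPEG -co PHOTOMETRIC=YCBCR -co JPEG_QUALITY=85
  -s_srs EPSG:3734 -targetDir aerial_2011 --optfile list.txt'
echo "$cmd"
```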
Posted by smathermather on April 12, 2016
Last year I really enjoyed attending and presenting at North Carolina GIS in Raleigh. As many of you know, Free and Open Source Software for Geospatial North America (FOSS4GNA, alleged by some to rhyme with “lasagna”) will be in Raleigh this year, in a short few weeks.
I highly encourage you to go. First of all, it’s FOSS4GNA, so lots of free and open source geospatial goodness. But Raleigh and NCGIS in general are a hot spot of open source geospatial activity. Last year’s North Carolina GIS was a mini FOSS4G, mixed with the standard, uhhh, not-so-open-source crowd. I came expecting (most respectfully) a form of (name your favorite state GIS conference). What I saw was that, but also more than 20 talks on FOSS Geo topics. Don’t believe me? Here’s the list:
Open-ing the Future of NOAA GIS | Speaker: Tony LaVoi
The Rise of 3D GIS on the Web | Speaker: Patrick Cozzi
plas.io and Greyhound: Point Clouds in Your Browser | Speaker: Howard Butler
Open Source, Open Discussion | Speakers: Ralph Dell GISP, Randal Hale, Jason Hibbets, Dr. Helena Mitasova GISP
How to Build Fat Polygons | Speaker: Skip Daniels
How to Use GitHub to Hire Your Next Analyst | Speakers: Dave Michelson, Cameron Carlyle
QGIS for the Desktop | Speaker: Randal Hale
GRASS7: New Features and Tools for Spatio-Temporal Analytics and Visualization | Speaker: Dr. Helena Mitasova
MapLoom: A New Web-client With Versioned Editing (GeoGit) Integration | Speakers: Syrus Mesdaghi, Tyler Garner
Quality of Life Dashboard | Speaker: Tobin Bradley
Defaulting to Open (at least trying to…) | Speaker: Justin Greco
Using Geospatial Applications to Build ForWarn | Speaker: Bill Hargrove
National Map Corps: Crowdsourcing Map Features for the USGS | Speaker: Silvia Terziotti
Wake County Open Data: Where Will It Take You? | Speakers: Carter Vickery, Bill Scanlon
FOSS and Web Mapping | Speaker: Ashley Hanes, A-B Tech CC
National Park Service GIS Data + OpenStreetMap = Places of Interest
Spatial Analysis of Wildfire Occurrences in North Carolina Using the R Project for Statistical Computing | Speaker: David Jones
Point Cloud in Your Pocket | Speaker: Stephen Mather
The Unknowns: An IT Professional’s Guide to Open Source | Speaker: Paul Ramsey
Open Data? Show Me the Money! | Speaker: Blake Esselstyn
Exploring Spatial Data Infrastructure in an Open Source World | Speaker: Jacqueline Lowe, UNC-A
Open Source Geospatial Foundation (OSGeo) | Speaker: Doug Newcomb
Open Data Kit (ODK) An Exciting, Free, and Open-Source Field Data Collection Alternative | Speaker: Eric Wilson
But lest you think this is a new thing now that FOSS is up-and-coming in the geo world: in 2013, NCGIS played host to a dozen open source geospatial presentations, and it has been on this trend since at least 2001.
I recommend checking out some of the amazing FOSS Geo work endemic to NC State while you are there. If you don’t know Helena Mitasova from the Open Source Geospatial Research and Education Laboratory, you should. I’m hoping she and her students will have on display their Tangible Landscape, which ties a sandbox into real-time GRASS DEM processing (flow accumulation, viewsheds, fire modeling, etc.):
Finally, I’d like to really briefly address what is on the minds of many as they think of North Carolina these days: North Carolina’s HB2 legislation. The FOSS4GNA organizers have addressed this bill, and communicated their position and their accommodations. The response, quoted in part below, speaks for itself:
As we shared shortly after HB2 was passed, it was too late to relocate and/or cancel the conference. We are very grateful to the good people who recognized that boycotting FOSS4G NA hurts a very inclusive conference and community. The fact you are coming means a great deal, and we do not take it for granted. Thank you!
After talking with so many people in the last few weeks, it is very clear that our LGBT attendees, including myself, do not stand alone.
Here are some of the things we have put in place to help ensure all of our attendees are safe & welcome:
- There will be 4 gender neutral restrooms, and the venue is updating their signs to clearly state they are gender neutral restrooms.
- For those interested, we will be encouraging people to make donations to the ACLU, who is suing the State of North Carolina because of HB2.
- Our code of conduct is in place, and will be enforced by staff.
- There is a map that lists trans friendly restrooms in the area. Huge thanks to Emily Waggoner for creating it!
- Our sponsors have taken a stance against HB2.
I love this community, and I am so happy to see these responses. Please come and join us. It will be an amazing conference.
Post script: Thanks to Doug Newcomb for the history lesson on NCGIS. I hope to share more of his info in future posts.
Posted by smathermather on March 27, 2016
Great news on OpenDroneMap. We now have a branch that has MVS-Texturing integrated, thanks to continuing work by Spotscale, and of course continuing integration work by @dakotabenjamin.
The MVS-Texturing branch isn’t fully tested yet, nor fully integrated, but the initial results are promising. MVS-Texturing handles the problem of choosing the best photos for a given facet on a textured model, in order to do a great job texturing a complex scene. This bears the promise of vastly improved textured models and very nice orthophotos. It seems an ideal drop-in for the texturing limitations of OpenDroneMap. From the project site:
Our method addresses most challenges occurring in such reconstructions: the large number of input images, their drastically varying properties such as image scale, (out-of-focus) blur, exposure variation, and occluders (e.g., moving plants or pedestrians). Using the proposed technique, we are able to texture datasets that are several orders of magnitude larger and far more challenging than shown in related work.
When we apply this approach to one of our more difficult datasets, which was taken during a partially cloudy part of the day…
we get very promising results:
Posted by smathermather on March 19, 2016
I finally got PDAL properly compiled with Point Cloud Library (PCL) baked in. Word to the wise: Clang is what the makers are using to compile. The PDAL crew were kind enough to revert the commit which broke GCC support, but why swim upstream? If you are compiling PDAL yourself, use Clang. (Side note: the revert to support GCC was really helpful for ensuring we could embed PDAL into OpenDroneMap without any compiler changes for that project.)
With a compiled version of PDAL with the PCL dependencies built in, I can bypass using the Docker instance. When I was spawning tens of threads of Docker and then killing them, recovery was a problem (it would often hose my Docker install completely). I’m sure there’s some bug to report there, or perhaps spawning 40 Docker threads is ill-advised for some grander reason, but regardless, running PDAL outside a container has many benefits, including simpler code. If you recall our objectives with this script, we want to:
- Calculate relative height of LiDAR data
- Slice that data into bands of heights
- Load the data into a PostgreSQL/PostGIS/pgPointCloud database.
The control script without docker becomes as follows:
# 
#!/bin/bash

# readlink gets us the full path to the file. This is necessary for docker
readlinker=`readlink -f $1`
# returns just the directory name
pathname=`dirname $readlinker`
# basename will strip off the directory name and the extension
name=`basename $1 .las`

# PDAL must be built with PCL.
# See http://www.pdal.io/tutorial/calculating-normalized-heights.html
pdal translate "$name".las "$name".bpf height --writers.bpf.output_dims="X,Y,Z,Intensity,ReturnNumber,NumberOfReturns,ScanDirectionFlag,EdgeOfFlightLine,Classification,ScanAngleRank,UserData,PointSourceId,HeightAboveGround"

# Now we split the lidar data into slices of heights, from 0-1.5 ft, etc.
# on up to 200 feet. We're working in the Midwest, so we don't anticipate
# trees much taller than ~190 feet
for START in 0:1.5 1.5:3 3:6 6:15 15:30 30:45 45:60 60:105 105:150 150:200
do
  # We'll use the height classes to name our output files and tablename.
  # A little cleanup is necessary, so we're removing the colon ":".
  nameend=`echo $START | sed s/:/-/g`

  # Name our output
  bpfname=$name"_"$nameend.bpf

  # Implement the height range filter
  pdal translate $name.bpf $bpfname -f range --filters.range.limits="HeightAboveGround[$START)"

  # Now we put our data in the PostgreSQL database.
  pdal pipeline -i pipeline.xml --writers.pgpointcloud.table='pa_layer_'$nameend --readers.bpf.filename=$bpfname --writers.pgpointcloud.overwrite='false'
done
We still require our pipeline XML in order to set our default options, as follows:
<?xml version="1.0" encoding="utf-8"?>
<Pipeline version="1.0">
  <Writer type="writers.pgpointcloud">
    <Option name="connection">
      host='localhost' dbname='user' user='user' password='password'
    </Option>
    <Option name="table">54001640PAN_heightasz_0-1.5</Option>
    <Option name="compression">dimensional</Option>
    <Filter type="filters.chipper">
      <Option name="capacity">400</Option>
      <Reader type="readers.bpf">
        <Option name="filename">54001640PAN_heightasz_0-1.5.bpf</Option>
      </Reader>
    </Filter>
  </Writer>
</Pipeline>
And as before, we can use parallel to make this run a lot faster:
find . -name '*.las' | parallel -j20 ./pdal_processor.sh
For the record, I found out through testing that my underlying host only has 20 processors (though more cores). There’s no point in running more processes than that.
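Rather than hard-coding 20, the job count can also be derived from the machine; a small sketch (nproc is GNU coreutils and reports logical CPUs, so reduce it if you want one job per physical processor):

```shell
# Derive a parallel job count from the host instead of hard-coding it.
jobs=$(nproc)
# Dry run: print the command we would hand to parallel.
echo "find . -name '*.las' | parallel -j$jobs ./pdal_processor.sh"
```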
So, when point clouds get loaded, they’re broken up into “chips”, or collections of points. How many chips do we have so far?
user=# SELECT COUNT(*) FROM "pa_layer_0-1.5";
  count
----------
 64413535
(1 row)
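As a sanity check, that chip count lines up with the default chipper capacity of 400 points per patch set in the pipeline XML above; a quick back-of-the-envelope in shell arithmetic:

```shell
# 64,413,535 patches x 400 points per patch recovers the total
# point count quoted later in the post (~25.8 billion points).
patches=64413535
points_per_patch=400
total=$(( patches * points_per_patch ))
echo "$total"   # prints 25765414000
```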
Now, how many rows is too many in a PostgreSQL database? Answer:
In other words, a typical state’s worth of LiDAR (Pennsylvania or Ohio, for example) is not too large to store, retrieve, and analyze. If you’re in California or Texas, or have super-dense data that’s been flown recently, you will have to provide some structure in the form of partitioning your data into separate tables based on, e.g., geography. You could also modify your “chipper” size in the XML file. I have used the default of 400 points per patch (for about 25,765,414,000 points total), which is fine for my use case, as I then do not exceed 100 million rows once the points are chipped: