Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

OpenDroneMap — the future that awaits (part 삼)

Posted by smathermather on October 27, 2015

Two posts precede this one: ODM — the future that awaits and ODM — the future that awaits (part 이).

Ben Discoe has a good point on the first post, specifically:

As I see it, the biggest gap is not in smoother uploading or cloud processing in the cloud. The biggest gap is Ground Control Points. Until there’s a way to capture those accurately at a prosumer price point, we are doomed to a patchwork of images that don’t align, which is useless for most purposes, like overlaying other geodata.

Ben’s right, of course. If drone data are produced, analyzed, and used in isolation, especially while prosumer- and consumer-grade drones lack verifiable ground control, they can’t reliably be combined with other geodata.

The larger framework that I’m proposing here sidesteps those issues in two ways:

  1. Combine drone data with other data from the start. Drones are a platform and a choice. The best available open aerial imagery should always be used in a larger mosaic. If Landsat is the best you’ve got… use it. If a local manned flight has better data… use it. If an existing open dataset from a photogrammetric / engineering company is available… use it. And if the drone data gets you those extra pixels… use it. But if you don’t have ground control (which you likely don’t), tie it into the larger mosaic. Use that mosaic as the consistency check.
  2. The above isn’t always practical. Perhaps the existing data are really old, or too low in resolution. Maybe the campaign is so big and the other data sources so poor that the above is impractical. In this case, internal consistency is key. Since OpenDroneMap now leverages OpenSfM, we have the option of incremental calculation of camera positions and sparse point clouds: if we have 1,000 images and need to add 50, we don’t have to reprocess the first 1,000 (see the sketch below).
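To make that incremental idea concrete, here’s a minimal sketch of the bookkeeping half, assuming the reconstruction.json layout that OpenSfM writes out (a list of reconstructions, each with a "shots" mapping keyed by image filename). The function name is mine, not an OpenSfM API call; the expensive steps then run only on the images it returns:

```python
import json

def images_still_to_add(reconstruction_path, all_images):
    """Return only the images not yet registered in an existing reconstruction."""
    with open(reconstruction_path) as f:
        reconstructions = json.load(f)            # assumed: a list of reconstructions
    registered = set()
    for rec in reconstructions:
        registered.update(rec.get("shots", {}).keys())   # "shots" assumed keyed by image filename
    return [img for img in all_images if img not in registered]

# e.g. 1,000 images already reconstructed, 50 new ones dropped into the project:
# only the 50 unregistered images need feature extraction, matching against the
# existing set, resection against the existing sparse points, and a constrained
# bundle adjustment that keeps the original cameras fixed.
```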


OpenDroneMap — the future that awaits (part 이)

Posted by smathermather on October 25, 2015

In my previous post, ODM — the future that awaits, I started to chart out OpenDroneMap beyond the toolchain. Here’s a bit more, in outline form; more narrative and breakdown to come. (This is the gist.)


Take OpenDroneMap from a simple toolchain to an online processing tool plus an open aerial dataset. This would be distinct from, and complementary to, OpenAerialMap:

  1. Explicitly engage and provide a platform for drone enthusiasts to contribute imagery in return for processing services.
  2. Address and serve:
    • Aerial imagery
    • Point clouds
    • Surface models
    • Terrain models
  3. Additionally, as part of a virtuous circle, digitizing to OSM from this aerial imagery would refine the surface models and thus the final aerial imagery.
    • More on this later: in short, digitizing OSM features against this dataset would result in 3D photogrammetric breaklines, which would in turn refine the quality of the surface and terrain models and the aerial imagery (see the sketch after this outline).
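As a toy sketch of that idea, here is what draping a digitized footprint onto a surface model might look like. The function name, grid layout, and nearest-cell sampling are all illustrative; a real implementation would trace both the roof side and the ground side of each edge rather than a single height per vertex:

```python
import numpy as np

def footprint_to_breakline(footprint_xy, dsm, cell_size=1.0, origin=(0.0, 0.0)):
    """Drape a 2D building footprint onto a surface model to get a 3D breakline.

    footprint_xy: (x, y) vertices in the same local coordinates as the DSM grid.
    dsm: 2D array of surface heights; origin is the (x, y) of cell dsm[0, 0].
    """
    breakline = []
    for x, y in footprint_xy:
        col = int(round((x - origin[0]) / cell_size))
        row = int(round((y - origin[1]) / cell_size))
        breakline.append((x, y, float(dsm[row, col])))   # each vertex takes the surface height
    return breakline

# toy DSM: a 5 m tall building on flat ground, with a footprint traced around its roof
dsm = np.zeros((10, 10))
dsm[3:6, 3:6] = 5.0
print(footprint_to_breakline([(3, 3), (5, 3), (5, 5), (3, 5)], dsm))
```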

Outputs / Data products:

  • Aerial basemap
    • (ultimately with filter for time / season?)
  • Point cloud (see e.g. …)
  • Digital surface model (similar to Open Terrain)
  • Digital elevation model (in conjunction with Open Terrain)

Likely Software / related projects

Back of the envelope calculations — Mapping a city with drones

If ODM is to take submissions of large portions of data, data collection campaigns may come into play. Here are some back-of-the-envelope calculations for flying whole cities, albeit the medium-sized cities of San Francisco and Cleveland. This ignores the time needed for all sorts of things, including coordinating with local air traffic control. As such, these are best-case estimates.

| Drone | Flight height | Pixel size | Overlap | Area per flight | City | City area | Total flights | Total flight time |
|-------|---------------|------------|---------|-----------------|------|-----------|---------------|-------------------|
| E384  | 400 ft | 3 cm | 60% | 1.5 sq mi   | San Francisco | 50 sq mi | 33   | 66 hours  |
| E384  | 400 ft | 5 cm | 90% | 0.5 sq mi   | San Francisco | 50 sq mi | 100  | 200 hours |
| E384  | 400 ft | 3 cm | 60% | 1.5 sq mi   | Cleveland     | 80 sq mi | 54   | 108 hours |
| E384  | 400 ft | 5 cm | 90% | 0.5 sq mi   | Cleveland     | 80 sq mi | 160  | 320 hours |
| Iris+ | 400 ft | 3 cm | 60% | 0.078 sq mi | San Francisco | 50 sq mi | 640  | 213 hours |
| Iris+ | 400 ft | 5 cm | 90% | 0.026 sq mi | San Francisco | 50 sq mi | 1920 | 640 hours |
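If you want to rerun these numbers, the arithmetic is trivial. A small sketch follows; the per-flight coverage and flight durations are the values implied by the table (roughly a two-hour E384 flight and a twenty-minute Iris+ flight), not manufacturer specifications:

```python
import math

def campaign_estimate(city_area_sq_mi, per_flight_sq_mi, hours_per_flight):
    """Back-of-the-envelope flight count and total airtime for mapping a city."""
    flights = math.ceil(city_area_sq_mi / per_flight_sq_mi)
    return flights, flights * hours_per_flight

# San Francisco, E384 at 3 cm / 60% overlap: ~1.5 sq mi per roughly 2-hour flight
print(campaign_estimate(50, 1.5, 2.0))      # -> (34, 68.0); the table rounds to 33 / 66
# San Francisco, Iris+ at 3 cm / 60% overlap: ~0.078 sq mi per roughly 20-minute flight
print(campaign_estimate(50, 0.078, 1 / 3))  # -> (642, 214.0); the table rounds to 640 / 213
```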


OpenDroneMap — the future that awaits

Posted by smathermather on October 24, 2015

Do you recall this 2013 post on GeoHipster?

Screen shot of geohipster write-up

Later on, I confessed my secret to making accurate predictions:

screen shot of 2014 predictions

In all this, however, we are only scratching the surface of what is possible. After all, while we have a solid start on a drone imagery processing toolchain, we still have gaps. For example, when you are done producing your imagery with ODM, how do you add it to OpenAerialMap? There’s no direct, automatic workflow here; there isn’t even a guide yet.

Screenshot of openaerialmap

And then once this is possible, is there a hosted instance of ODM to which I can just post my raw imagery, and the magical cloud takes care of the rest? Not yet. Not yet.

So, this is the dream. But the dream is bigger and deeper:

I remember first meeting Liz Barry of PublicLab at CrisisMappers in New York in 2014. She spoke about how targeted (artisanal?) PublicLab projects are. They aren’t trying to replace Google Maps one flight at a time, but focus on specific problems and documenting specific truths in order to empower community. She said it far more articulately and precisely, of course, with all sorts of sociological theory and terms woven into the narrative. I wish I had been recording.

Then, Liz contrasted PublicLab with OpenDroneMap. OpenDroneMap could map the world. OpenDroneMap could piece together from disparate parts all the pixels for the world:

  • At a high resolution (spatial and temporal)
  • For everywhere we can fly
  • One drone, balloon, and kite flight at a time
  • And all of it pushed into a common and public dataset, built on open source software that is commonly shared and developed.

Yes. Yes it could, Liz. Exactly what I was thinking, but trying hard to focus on the very next steps.

This future ODM vision (the “How do we map the world with ODM?” question) relies on a lot of different communities and technologies, from PublicLab’s MapKnitter, to Humanitarian OpenStreetMap Team’s (HOT’s) OpenAerialMap / OpenImageryNetwork, to Knight Foundation / Stamen’s Open Terrain, plus work by Howard Butler’s team on point clouds in the browser (Greyhound, PDAL, etc.).

Over the next while, I am going to write more about this and the specifics of where we are now in ODM, but I wanted to let you all know that while we fight for better point clouds and smoother orthoimagery, the longer vision is still there. Stay tuned.


Reflections on Goldilocks, Structure from Motion, near scale remote sensing, and the special problems therein

Posted by smathermather on October 19, 2015

Goldilocks and getting your reflection just right…

I have been reading a bit about drone remote sensing of agricultural fields. On one hand, it’s amazing, world-changing technology. On the other hand, some part of all of it is bunk. What do I mean? Well, applying techniques created for continent-scale analyses may not scale down well. For one, all those clever techniques (like the Normalized Difference Vegetation Index, as well as its non-normalized siblings) rely heavily on two things: 1) being, on average, right over a large area; and 2) painting with such a broad brush as to be difficult to confirm or refute.
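For reference, the index in question is nothing more than band arithmetic. Here’s a minimal numpy version; the toy values are illustrative, not real reflectances:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from co-registered NIR and red bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-9)   # guard against divide-by-zero

# toy 2x2 patch of reflectance values; denser, healthier vegetation scores higher
print(ndvi([[0.5, 0.6], [0.4, 0.1]], [[0.1, 0.1], [0.2, 0.1]]))
```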

There. I said it.

Ok, tangible example: you fly a drone over your ag field, stitch the images together, calculate a vegetation index of your choice, and you get a nice map of productivity, or plant stress, or whatever it is that some vendor is selling. One problem: which camera view do you use for each spot on the ground?

Diagram of reflectance gradient on leaf.

I call this the Goldilocks problem in remote sensing: directional reflectance effects messing with what you hope are absolute(ish) reflectance values:

If you use the forward image (away from the sun), you are going to get a hot spot because the light from the sun reflects more strongly in this direction. If you take the image in line with the sun, you are going to get something a little too dark, because of lack of backscatter. But if you use the image just above, you’ll get something just right.

Fix this problem (or only fly on cloudy days), and you are going to eliminate a lot of bias in your data. Long term, addressing this when there are adequate images is on my mental wish list for texturing in OpenDroneMap. By the way, the big kids with satellites at their command have to deal with this too. They call it all sorts of things, but “Bidirectional Reflectance Distribution Function”, or BRDF, is a common moniker.
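One crude way to act on “use the image just above” during texturing would be to prefer, for each ground point, the camera that sees it closest to straight down. A real fix would model the sun position and the BRDF; this is just a sketch, assuming z-up local coordinates and made-up image names:

```python
import numpy as np

def pick_camera(ground_point, camera_positions):
    """Pick the camera whose view of a ground point is closest to straight down."""
    best, best_angle = None, np.inf
    for cam_id, cam_pos in camera_positions.items():
        view = np.asarray(ground_point, dtype=float) - np.asarray(cam_pos, dtype=float)
        view /= np.linalg.norm(view)                   # unit ray from camera to ground point
        off_nadir = np.degrees(np.arccos(np.clip(-view[2], -1.0, 1.0)))  # 0 deg = straight down
        if off_nadir < best_angle:
            best, best_angle = cam_id, off_nadir
    return best, best_angle

# two hypothetical exposures 120 m up; the one nearly overhead wins
cams = {"img_001.jpg": (0.0, 0.0, 120.0), "img_002.jpg": (40.0, 0.0, 120.0)}
print(pick_camera((5.0, 0.0, 0.0), cams))   # -> ('img_001.jpg', ~2.4 degrees)
```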

Meshing — Why do we build a mesh after we build a point cloud?

OK, another problem I have been giving some thought to. In my previous post, I addressed some of the issues with point cloud density, as well as appropriate (as opposed to generic) meshing techniques. We take a point cloud (exhibit A):

Dense point cloud

And we convert it to a mesh:

Dense Mesh

As we established yesterday, if we look too closely at the mesh, it’s disappointing:

Un-textured mesh of buildings

And so I asserted that the problem is that we aren’t dealing with different types of objects in different ways when building a mesh. I stand by that assertion.

But… why are we doing a point cloud independently of the mesh? Why not build them at the same time? Here, maybe these crude and inaccurate figures will help me communicate the idea:

Diagram of leaf with three camera observations

Diagram of building roof with 3 camera observations

Why aren’t we building that whole surface, rather than just the points that we find as we go? Is this something that a technique like LSD-SLAM can help with? We would have to establish gradient cut-offs for deciding where the roof line ends and, e.g., the ground begins, but that seems a reasonable compromise. (Perhaps while that’s happening we detect / region-grow that geometry, classify it as roof, and wrap it in a set of breaklines.)

The advantage here is that if we build the structure of the mesh directly from the images, then when we texture the mesh, we don’t have to make any guesses about which cameras to use. More importantly, we are making minimal a priori assumptions about structures when building the mesh; I think this will lead to superior vegetation meshes. One disadvantage is that we can’t guarantee our mesh is ever complete, and it will likely never be continuous, but hopefully, as a trade-off, it becomes a much better approximation of structure, which will help its use in, e.g., generating orthophotos.
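To make the “gradient cut-offs” idea from above a bit more concrete, here is a toy sketch that flags steep height changes in a surface raster as candidate breakline locations. The function name and cutoff value are illustrative, and a real version would also need to separate the roof side of each edge from the ground side:

```python
import numpy as np

def breakline_candidates(dsm, cell_size=1.0, slope_cutoff=1.0):
    """Flag cells whose height gradient exceeds a cutoff, e.g. roof edges."""
    dz_dy, dz_dx = np.gradient(dsm, cell_size)   # height change per unit distance
    slope = np.hypot(dz_dx, dz_dy)               # gradient magnitude (rise over run)
    return slope > slope_cutoff

# toy DSM: a 5 m tall flat roof sitting on flat ground; only the edges get flagged
dsm = np.zeros((6, 6))
dsm[2:4, 2:4] = 5.0
print(breakline_candidates(dsm))
```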

Too abstract? Too dumb? IDK. Curious what you think.


OpenDroneMap — Improvements Needed

Posted by smathermather on October 18, 2015

Talking about the future sometimes requires critiquing the present. The wonderful thing about an open source project is that we can be quite open about limitations and discuss ways forward. OpenDroneMap is a really interesting and captivating project… and there’s more work to do.

To understand what work needs to be done, we need to understand OpenDroneMap, and structure from motion in general. Some of the limitations of ODM are specific to its maturity as a project. Some are shared with the commercial, closed-source industry leaders. I’ll highlight each as I do the walkthrough.

A simplified version of the Structure from Motion (SfM) workflow as it applies to drone image processing is as follows:

Find features & Match features –> Find scene structure / camera positions –> Create dense point cloud –> Create mesh –> Texture mesh –> Generate orthophoto and other products

This misses some steps, but gives the major themes. Let’s visualize these as drawings and screenshots. (In the interest of full disclosure, the screenshots are from a closed-source solution so that I can demonstrate the problems endemic across all software I have tested to date.)

Diagrams / screenshots of the toolchain parts:

Find features & Match features –> Find scene structure


Create Dense Point Cloud


Create mesh


Texture mesh


And then generate orthophoto and secondary products (no diagram)

Problem space:

Of these, let’s highlight in bold known deficiencies in ODM:

Find features & Match features –> Find scene structure / camera positions –> **Create dense point cloud** –> **Create mesh** –> Texture mesh –> Generate orthophoto and other products

(These highlights assume that the new texturing engine that’s being written will address deficiencies there; time and testing will tell. They also assume that the inclusion of OpenSfM in the toolchain fixes the scene structure / camera issues. That assumption also requires more testing.)

Each portion of the pipeline depends on the one before it: if, for example, the camera positions are poor, the point cloud won’t be great, and the texturing will be very problematic. If the dense point cloud isn’t as dense as possible, features will be lost, and the mesh, textured mesh, orthophoto, and other products will be degraded as well. For example, see these two different densities of point clouds:

More sparse point cloud


Less sparse (dense) point cloud

It becomes clear that the density and veracity of that point cloud lay the groundwork for the remainder of the pipeline.

ODM Priority 1: Improve density / veracity of point cloud

So what about the mesh issues? The meshing process for ODM and its closed-source siblings (with possible exceptions) is problematic. Take, for example, this mesh of a few buildings:

Textured mesh of building

The problems with this mesh become quite apparent when we view the un-textured counterpart:

Un-textured mesh of buildings

We can see many issues with this mesh. This is a problem with all the drone image processing tools I have tested to date: geometric surfaces are not treated as planar, and meshing processes treat vegetation, ground, and the built environment equally, and thus don’t model any of them well.

ODM Priority 2: Improve meshing process

Priority 2 is a difficult space. It probably requires automated or semi-automated classification of the point cloud and/or input imagery, and while simple in the case of buildings, it may be quite complicated in the case of vegetation. Old-school photogrammetry would have hand-digitized hard and soft breaklines for built environments. How we handle this for ODM is an area we have yet to explore.


I am optimistic that ODM’s Find features & Match features –> Find scene structure / camera positions step is much improved with the integration of OpenSfM (please comment if you’ve found otherwise and have test cases to demonstrate it). I am hopeful that the upcoming Texture mesh –> Generate orthophoto improvements will be a good solution. Where we need to improve in the near future is the Create dense point cloud step. Where every piece of software I have tested needs improvement, closed source and open source alike, is the Create mesh step.


Geocoding from Structure from Motion — Integrating MicroMappers and OpenDroneMap

Posted by smathermather on October 17, 2015

There are many automated solutions that solve really interesting problems, but at this point in time, it is the semi-automated solutions that really fascinate me. These are solutions that combine the subtlety and intelligence of the human mind with the scale of automation.

In this context, I have been thinking a lot about OpenDroneMap: which parts of the toolchain should be automated and improved, and where can a human touch help? Mostly I have been thinking about how ODM can be improved by a human touch, especially where such interpretation can aid in creating better 3-dimensional structure. Recently, though, as I thought about MicroMappers, I coined (I think I coined it) the phrase “geocoding from structure from motion”.

Imagine, if you will, a system that allows a small army of volunteers to easily code images with information in order to help make sense of the world through those images. Imagine this system is tied to aerial imagery and used in a humanitarian crisis. Put these things together and you have MicroMappers.

Screen shot of micromappers video

So, what if anything digitized or circled were automatically geocoded in 3D space, based on the 3D information derived from Structure from Motion? By geocoded, I don’t mean geocoded to the location of the camera, but to the location of the feature itself in 3D space, derived from the implicit 3D information in the video combined with the GPS position of the camera.
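Here’s a minimal sketch of the geometry involved, assuming the Structure from Motion step has already produced a camera pose and a per-pixel depth for the annotated still. The matrices and numbers are illustrative, not output from ODM or MicroMappers:

```python
import numpy as np

def geocode_pixel(u, v, depth, K, R, cam_center):
    """Back-project an annotated pixel (u, v) into a 3D world point.

    depth: distance along the camera's optical axis (from the dense matching step).
    K: 3x3 intrinsics; R: world-to-camera rotation; cam_center: camera position in
    world coordinates (e.g. a local projection of the GPS tag).
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera coordinates
    ray_cam *= depth / ray_cam[2]                        # scale so its z equals the depth
    return cam_center + R.T @ ray_cam                    # rotate into the world frame, add offset

# toy example: a nadir-looking camera 100 m up; the annotation lands ~10 m east of it
K = np.array([[3000.0, 0.0, 2000.0], [0.0, 3000.0, 1500.0], [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]])
print(geocode_pixel(2300, 1500, 100.0, K, R, np.array([0.0, 0.0, 100.0])))  # -> [10.  0.  0.]
```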

OpenDroneMap and a host of other tools already generate 3D info from image inputs. Below is a video of LSD-SLAM, a technique that does this in real time, which might make it a little clearer what this magic is.

(Enter, selfish side of this thought)

If a workflow like this worked out, then we could geocode anything by marking up individual stills in an image series. Further, the information we derive from this markup could then be used to help classify and improve other Structure from Motion outputs (like digital surface models, digital elevation models, etc.). Finally, even before OpenDroneMap is feature-complete as a drone imagery processing tool, we would have an easy-to-use tool for deriving good-enough secondary products, i.e. geocoding, with the primary products slated for improvement being the orthophoto, mesh, and point cloud.

Postscript: this is not a funded project, but an interesting thought experiment. I’ll have a future-of-OpenDroneMap (as I see it) post up here shortly.


Humanitarian UAV Experts Meeting — first blush.

Posted by smathermather on October 13, 2015

UAViators, MIT Lincoln Labs, UNOCHA, and others organized and hosted the UAViators Experts Meeting on MIT’s campus this weekend. It was a remarkable event, if only for the thoughtfulness and knowledge base of the people in the room. The meeting brought together UAV operators, manufacturers, humanitarians, and a few folks at the intersection of these.

For me, it was revelatory with respect to all the non-mapping-specific drone applications, from the advancement of technologies for last-mile logistics to basic tactical / observational applications.

What was also interesting was some of the insights into regulatory issues and questions that are on the horizon.

I gave a short presentation on OpenDroneMap, and so much of my time was spent thinking and listening in order to understand the potential application of OpenDroneMap in the humanitarian and development space. This extends, a bit, my understanding of its current and potential role in environmental applications.

More soon!


Korean Drumming at FOSS4G Seoul

Posted by smathermather on October 3, 2015

Korean Drumming at FOSS4G Seoul:


Pictures from my last few weeks.

Posted by smathermather on October 3, 2015


Mini-series on Korean words, part 4: Apologies

Posted by smathermather on October 3, 2015

In order to function at even a basic level in a given society (which I do not yet do in the South Korean context), it is good to know the basic words of courtesy: the equivalents of “Excuse me”, “Pardon me”, “Nice to meet you”, “Hello”, “Goodbye”, etc.

Today we’ll talk about how to say “I’m sorry.” Between talking across cultural / language / expectation differences, and just spending time with individuals you might not know well, being able to apologize is a very important tool in the toolkit.

Hangul for “I’m sorry”.

Mian (mee-ahn) is the root of one way of apologizing in Korean. Often you’ll be saying this formally, so mianheyo (미안해요) is what you would say to apologize. If you don’t need the formal register, you’ll usually say mianhe (미안해).

For more comprehensive coverage of apologies (plus pronunciation!), see Sweet and Tasty TV:


