That’s a really good question. Unfortunately it’s usually the wrong question to ask.

The right questions are:

  1. What accuracy do I need?

  2. How much effort is required to achieve that accuracy compared to alternative methods?

Why? Because photogrammetry can achieve almost any accuracy you desire, provided you make the pixel size small enough. Of course, if you make the pixel size smaller, you need more images to cover the same area, which means more effort is required — which is why the second question is important.
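The tradeoff above is easy to quantify: each image covers an area proportional to the square of the object pixel size, so halving the pixel size roughly quadruples the number of images. Here's a minimal sketch of that arithmetic; the sensor resolution and overlap figures are hypothetical, not values from any particular project:

```python
import math

def images_needed(area_m2, pixel_size_mm, sensor_px=(4000, 3000), overlap=0.6):
    """Rough estimate of how many images cover `area_m2` at a given
    object pixel size (mm per pixel on the object's surface).

    Each image covers (width_px * pixel_size) x (height_px * pixel_size)
    on the object; overlap between images reduces the net new area
    contributed by each shot.
    """
    w_m = sensor_px[0] * pixel_size_mm / 1000.0
    h_m = sensor_px[1] * pixel_size_mm / 1000.0
    net_area_m2 = w_m * h_m * (1.0 - overlap)
    return math.ceil(area_m2 / net_area_m2)

# Halving the object pixel size roughly quadruples the image count:
print(images_needed(100.0, 1.0))  # 21 images at 1 mm/pixel
print(images_needed(100.0, 0.5))  # 84 images at 0.5 mm/pixel
```

The quadratic growth in image count is exactly why the second question matters: the effort scales much faster than the accuracy.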

But how accurate can it be?

OK, there are circumstances where the limits to accuracy might be important, so in Part 1 I’ll look at what the limits actually are.

Fundamentally, there are limits to how small the size of a pixel on the surface of an object can be — you can’t resolve features smaller than the wavelength of light, and that’s about half a micron (depending on colour). This is why we have scanning electron microscopes instead of just making optical microscopes with stronger and stronger lenses. The smallest object pixel sizes I’ve used, with conventional lenses and normal cameras, have been 5–10 times larger than that.


The most accurate published result with our software is 5 µm in plan and 15 µm in depth, using a pair of 6 megapixel Canon EOS 300D digital SLRs with macro lenses. (More on “plan” and “depth” later.) That used an object pixel size of around 10 µm. The application was measuring denture wear, so the small area covered by an image at that object pixel size wasn’t a problem, and the accuracy required was high.
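It can be useful to express those figures in pixel units rather than microns, since accuracy-as-a-fraction-of-pixel-size is what stays roughly constant when you scale a setup up or down. Using the numbers above (5 µm plan, 15 µm depth, ~10 µm object pixels):

```python
# Published accuracies from the text, expressed as fractions of the
# object pixel size. The pixel size (~10 µm) is as stated above.
object_pixel_um = 10.0
plan_accuracy_um = 5.0
depth_accuracy_um = 15.0

print(plan_accuracy_um / object_pixel_um)   # 0.5 pixels in plan
print(depth_accuracy_um / object_pixel_um)  # 1.5 pixels in depth
```

So the plan accuracy was about half a pixel and the depth accuracy about one and a half pixels — ratios you can then apply to a different object pixel size to predict accuracy at another scale.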

Another area where high accuracy and levels of detail can be important is creating digital models of archaeological artefacts for heritage mapping, like the one shown below: a 15 cm (6 inch) high cuneiform cone. The first image shows a 26 mm × 14 mm portion of the cone’s surface model as a wireframe. The point density is just over 1,000 points per square mm:

Wireframe of a portion of a cuneiform cone model generated by 3DM Analyst

The next image shows the base of the cone as a colourised point cloud:

Point cloud

To “prove” that the point cloud was really a point cloud and not a textured mesh I had to render it at a high resolution and then resample it down to a lower resolution, so that the points would become smaller than a pixel in size and therefore a little “transparent”. Even when rendered at a 50 µm pixel size, using one pixel per 3D point left the surface completely opaque, because the average point spacing was only 30 µm!
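As a sanity check (mine, not from the original project data), the average spacing follows directly from the stated density: a square grid of just over 1,000 points per square mm implies a spacing of about 30 µm, which is indeed smaller than a 50 µm render pixel:

```python
import math

# Average point spacing implied by a density of 1,000 points/mm^2,
# assuming the points were roughly evenly distributed.
points_per_mm2 = 1000.0
spacing_um = 1000.0 / math.sqrt(points_per_mm2)  # µm between points

print(round(spacing_um, 1))        # 31.6 µm average spacing
print(spacing_um < 50.0)           # True: every 50 µm pixel holds a point
```

With at least one point landing in every render pixel, the rendered surface has no gaps — hence the opaque result described above.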

Finally, here we have the camera positions used to capture the cone. The cone was placed on a turntable and rotated while the camera remained stationary. It looks like the camera is going around the cone because the data is shown in the cone’s reference frame. The green dots are points on the cone detected automatically by the software so it can determine the position and orientation of the camera at the time each image was captured:

The camera positions used to capture the cone

So, if you really do need detail and accuracy, and the area to be captured is not too large, then ~1,000 points per square mm accurate to ~15 microns can be achieved using inexpensive off-the-shelf digital cameras and lenses.

Otherwise, you need to estimate the accuracy you can achieve with a particular setup so you can minimise the effort required (by capturing as much area in each image as you can) while still delivering the accuracy specified. In Part 2 I’ll look at how you can predict the accuracy of the data in a photogrammetric project and then give some real-life examples to illustrate the effort required.