This article covers how to get the best results from the photogrammetry process.
To get the most out of this guide, please refer to the main Photogrammetry page for the core theory behind achieving a successful result.
The key goal is to provide the software with enough features and consistent information that it can successfully align your photos and triangulate them into 3D spatial data.
Quality is a basic necessity for a good result: if you foresee too much variation in the overall lighting and environment, reconsider the photogrammetry subject.
Variation in photos is required to build a complete result: if you foresee being unable to take many photos from different perspectives and distances, reconsider the photogrammetry subject.
Feature points, as mentioned above, are required; they are influenced by the subject's geometry and materiality.
Overlap, in addition to feature points, is required for the alignment process: ensure that photos overlap so that feature points can be correlated across images to align them.
The following guide (in no particular order) identifies qualities that affect the software's ability to align photos and detect depth. Some subjects naturally have qualities that detract from or improve alignment and depth processing. Recommendations for dealing with each quality are provided to help improve the process.
These alerts signify a quality that is difficult to overcome; if any apply to your photogrammetry subject, we recommend that you do not proceed with it. You are always welcome to book a consultation with us to discuss and plan your approach.
A dynamic environment, such as a moving crowd or changing weather, means there is too much variation for the software to identify matching features across the photo set.
TIP: The background environment can be a great tool to help with photo alignment, as it provides context. This is especially helpful for objects with few geometric or visual features.
For the software to easily recognise features for alignment, the photo set should be consistent. Very dark or overexposed parts of the subject can lead to incorrect alignment, and it is common for these areas to go missing entirely because the software has trouble reading depth from them. Depth is best read from subtle, gradient-like shifts in shadow.
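One quick way to act on the consistency advice is to screen the photo set for exposure outliers before processing. The sketch below assumes you have one mean-luminance value per photo (most image tools report this in their histogram view); the function name and the 25% tolerance are illustrative choices, not part of this guide.

```python
from statistics import mean

def flag_exposure_outliers(brightness, tolerance=0.25):
    """Return indices of photos whose mean brightness strays far from
    the set's average.

    brightness: one mean-luminance value (0-255 scale) per photo.
    tolerance: allowed fractional deviation from the set's mean.
    """
    m = mean(brightness)
    return [i for i, b in enumerate(brightness)
            if abs(b - m) > tolerance * m]

# Photo 2 is far darker than the rest and may align poorly or go missing:
print(flag_exposure_outliers([128, 120, 60, 132]))  # [2]
```

Reshooting or excluding flagged photos is usually cheaper than debugging a misaligned reconstruction afterwards.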
It is best to reconsider the subject for photogrammetry if you are unable to take varied perspectives from many different distances.
Perspectives: A varied set of angles provides more information and produces a more complete 3D model. Like the human eye, the camera only captures what is in its direct line of sight, so take photos of everything you want included; this also makes photo alignment more efficient.
Proximity: If there is too much obstruction in the area due to crowds or other objects, it may hinder your ability to take photos from a variety of proximities. Photos ranging from far away to mid-range will improve alignment.
Motion is usually inherent to the subject and so is hard to control; examples include kinetic sculptures or objects that light up dynamically.
It is best to reconsider the subject for photogrammetry if it cannot be made static and does not have a static phase. Capturing a dynamic subject with photogrammetry is not ideal.
Depending on what can be captured in frame, it is ideal to maximise the number of distinct geometric features.
Large amounts of featureless geometry, such as a blank plaster wall, will not provide enough feature points for the software to align the photos.
Repeated elements, such as the facade of a skyscraper, can confuse the program because their feature points are very similar. Usually this is not a problem, as lighting and small inconsistencies or imperfections can act as feature points.
Photogrammetry, and digital reconstruction technologies in general, struggle to capture extremely thin or small subjects, especially if the subject consists primarily of fine details. Objects smaller than 500mm will struggle to retain any detail, and objects thinner than 500mm will struggle to have their thickness captured.
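The size limits above can be turned into a quick pre-shoot check. This sketch simply encodes the 500mm thresholds stated in this guide; the constant and function names are hypothetical, and real feasibility also depends on camera, lens, and lighting.

```python
# Thresholds taken from the guide's stated figures (both 500 mm).
MIN_DETAIL_MM = 500      # subjects smaller than this struggle to retain detail
MIN_THICKNESS_MM = 500   # dimensions thinner than this struggle to be captured

def subject_warnings(width_mm, height_mm, depth_mm):
    """Return warnings for subject dimensions below the guide's thresholds."""
    dims = (width_mm, height_mm, depth_mm)
    warnings = []
    if max(dims) < MIN_DETAIL_MM:
        warnings.append("subject may be too small to retain detail")
    if min(dims) < MIN_THICKNESS_MM:
        warnings.append("thinnest dimension may not be captured")
    return warnings

# A large but thin panel: overall size is fine, thickness is not.
print(subject_warnings(2000, 1500, 40))
```

Running the check while planning a shoot makes it easy to spot subjects, such as thin panels or small ornaments, that are likely to lose geometry in reconstruction.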
Details like carpet texture or hair are considered very thin; the resulting 3D model will not generate each individual strand, only the overall shape. There are no techniques to help with this.
If you need something in your model, you need to have a photo of it.
Visual Features refer to the surface quality or materiality of the object. A wall covered in graffiti is the perfect example of optimal Visual Features compared to a blank brick wall.
Not all the qualities of Visual Features need to be maximised for good results, as any one of them can provide enough information for alignment. Usually, variation from light and environment, along with surface imperfections, is enough to offset a lack of visual features.
Because transparent objects allow light to pass through, a masking agent that covers the surface can be used to make them recognisable to the software. A removable or cleanable matte spray paint is ideal, as it can be applied in an even coating. Masking tape is a cheap alternative.
If you are unable to make the subject opaque, we recommend choosing a different subject altogether.