Point Clouds to Meshes
This workflow applies mainly to point clouds captured using terrestrial scanners. This article only outlines the rough workflow and thinking, and very briefly explains how to use one example suite of software - feel free to use your own preferred software for any step.
This first part introduces terminology and concepts relevant to producing meshes from point clouds. Skip ahead for the practical steps.
Meshes can be generated by a variety of algorithms, but the best results use the points and their normals to estimate the surface.
Meshes are a discrete representation of 3d geometry: points connected by edges define 'faces.' They are not a continuous surface, but an approximation composed of smaller pieces. Read More:
Meshes 101
Normals in 3d pipelines refer to the 'direction' that the elements in a mesh are pointing. In the image below, you can see that the normals represent the perpendicular outward direction of each face.
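If it helps to make this concrete, here is a tiny sketch in Python with NumPy (not part of this article's toolchain, just an illustration): the normal of a triangular face is the cross product of two of its edge vectors, scaled to unit length.

```python
import numpy as np

# Three corners of a hypothetical triangular face lying flat on the ground.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])

n = np.cross(b - a, c - a)     # perpendicular to the face
n = n / np.linalg.norm(n)      # make it unit length
print(n)                       # [0. 0. 1.] - the face 'points' straight up
```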
So how does this work with our Point Clouds? Points are merely an XYZ coordinate where the laser hit, plus a colour - where are the faces to tell us the normals?
Normals need to be estimated for a point cloud: each point is compared with its neighbours to find the best 'face' to fit against it, giving an approximated normal. (There are other normal estimation algorithms, each with their own pros and cons.)
In the below example, we can see that the normals are all correctly pointing outwards.
Normals can also be visualised in RGB, where each colour channel represents the respective xyz component of the normal vector.
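To show roughly what happens under the hood, here is a hedged sketch using the open-source Open3D library in Python - an assumption on my part, not the software this workflow uses. It fits a plane to each point's neighbourhood to estimate normals, then colours the points by their normals to get the RGB visualisation described above. The file name is a placeholder, and Open3D may not read .e57 directly, so convert a scan first if needed.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # placeholder file; convert from .e57 first if needed

# Estimate a normal per point by fitting a plane to its neighbours
# (within a 10 cm radius, using at most 30 neighbours - tune for your data).
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Visualise normals as RGB: map each xyz component from [-1, 1] into [0, 1].
normals = np.asarray(pcd.normals)
pcd.colors = o3d.utility.Vector3dVector((normals + 1.0) / 2.0)
o3d.visualization.draw_geometries([pcd])
```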
When creating a mesh with correct normals, the normals help the meshing algorithm determine what is outside and inside, allowing it to accurately predict the overall surface continuity. The result is as one would expect:
In the 3d pipeline, normals can point 'inside' or 'outside' - vertices, edges, and faces are abstract elements, so their normals can arbitrarily face either way.
It is up to the user to point them in the correct direction. Heh.
However, objects in real life are not infinitely small points or infinitely thin faces - they have thickness. Normals in real life will always point 'outside,' so it is good practice to align with reality. But sometimes you want 'inward' pointing normals - take, for example, a 3d scan of a room. Technically you are looking at:
The 'inward' normals when outside the room
...and correct 'outward' normals when you're inside the room.
Normals can be easily flipped or inverted once calculated.
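In the same Open3D sketch as above (again an assumption, not this article's toolchain), flipping amounts to negating the vectors:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # placeholder file
pcd.estimate_normals()

# Invert every normal in one go.
pcd.normals = o3d.utility.Vector3dVector(-np.asarray(pcd.normals))
```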
It is trivial for a computer to estimate normals, but very common for normal continuity to go wrong.
When estimating normals, there's a 50/50 chance of guessing the correct orientation - this becomes a problem especially where there are disconnects in the point cloud dataset, leading to sections or parts having flipped normals.
With incorrect normals, the mesh algorithm does not know how to preserve surface continuity, leading to breaks and surfaces turning inside out.
One can manually isolate points facing the wrong direction and flip their normals, but this is tedious and time-consuming - it is best to get it right from the start. There are also algorithms that try to align normals and fix these discrepancies, but they cannot be relied on for every dataset.
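Those alignment algorithms generally work by propagating an orientation from point to point across a neighbourhood graph. A minimal Open3D sketch, assuming the library and a placeholder file name - it helps, but it can still guess wrong across gaps in the data, which is exactly why this article leans on scanner metadata instead:

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # placeholder file
pcd.estimate_normals()

# Propagate a consistent orientation over each point's 30 nearest neighbours.
# Disconnected sections can still end up flipped relative to each other.
pcd.orient_normals_consistent_tangent_plane(k=30)
```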
Therefore, it is important for a good mesh to have correct normals. The following process covers the steps that guarantee correct normals for most 3d scanning applications, where we can rely on metadata to assist.
Complex natural objects like trees will largely remain unaccounted for.
Compatible filetypes include, but are not limited to: .e57. Terrestrial Scanners will save their scanner location as a sensor/camera component inside the scan.
Do not combine scans together or edit them - run normal estimation on each scan individually.
Use the normal estimation method that is available to you - this depends on the dataset. The methods below are ordered from most viable to least; for example, iPad exports can only use 2 or 3.
Use 'scanning grids'. Data coming out of our terrestrial scanners should be a 'structured dataset', where the scan data is correlated with the image used to capture it. It is a more robust version of 2.
Use the 'sensor location' to orient the normals. As normals have a 50/50 chance of facing the right way, help the algorithm along by using the scanner location: if the scanner picked a point up, that point has to be on the 'outside' of the object - eliminating any 50/50 guessing.
Add a manual scan origin and orient normals towards that origin.
Edit, clean, and merge scans.
Generate the mesh.
Reproject colour.
Sensor location can be confirmed within a scan.
For Normal Estimation, use Normals > Compute Normals:
Pick the most appropriate fitting algorithm:
Plane - Works best on noisy datasets but will smooth out any sharp or small details.
Triangular - Will preserve sharp and small details, but noisy datasets can lead to weird results.
Quadric - Works best for smooth or curved datasets.
Use scanning grids, where possible.
Otherwise orient towards the sensor (see the sketch after these steps).
Add this manually if there is no sensor metadata in the dataset.
With sunlight on, you should see points with normals facing away from the screen turn black. Use this to confirm normal estimation.
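If you are scripting this rather than using a GUI, here is a hedged Open3D sketch of methods 2 and 3: orient each scan's normals towards the scanner position, or towards a manually chosen origin when there is no sensor metadata. The scanner position below is a placeholder - read the real one from your scan's sensor/camera metadata, and run this per scan, before merging.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_01.ply")  # placeholder: one scan at a time
pcd.estimate_normals()

# Method 2: every point the scanner saw must face the scanner,
# so point the normals towards the recorded scanner position.
scanner_position = np.array([1.2, 0.5, 1.8])  # placeholder - take this from the scan metadata
pcd.orient_normals_towards_camera_location(camera_location=scanner_position)

# Method 3: no sensor metadata? Use a manually chosen origin in the same call,
# e.g. a point roughly where the scanner stood.
```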
Clean scans now or after the merge. Merge all of the scans in preparation for meshing.
Use Poisson Mesh Reconstruction to generate a mesh, enabling the option to output density as a scalar field.
Octree Depth/Resolution only determines the final level of detail - increasing or decreasing it has no effect on the accuracy of the eventual mesh.
Use the density scalar to remove the large interpolated areas of the mesh that are generated automatically.
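For reference, both settings mentioned above (the octree depth and the density scalar) also appear when scripting the reconstruction. A minimal Open3D sketch, with placeholder file names and a placeholder density cut-off:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("merged_scans.ply")  # placeholder: cleaned, merged cloud
pcd.estimate_normals()  # in the real workflow, normals come from the earlier per-scan step

# Poisson reconstruction returns the mesh plus a per-vertex density value;
# low density = the surface was interpolated far from any real points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
densities = np.asarray(densities)

# Trim the blobby interpolated regions by dropping the lowest-density vertices.
low_density = densities < np.quantile(densities, 0.05)  # placeholder threshold
mesh.remove_vertices_by_mask(low_density)

o3d.io.write_triangle_mesh("mesh.ply", mesh)
```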
The mesh at this point has no colour except for vertex colour; the following steps cover how to transfer point cloud colour to the mesh.
Export mesh.
If you only require the mesh, stop here; the mesh can then be used in other CAD software.
If you need to re-colour the mesh, export the point cloud dataset for use later.
A Note on Vertex Colours
Vertex colours are how point cloud-based meshes get coloured, which means you 'lose' colour detail as mesh detail decreases. If you intend to colour your mesh using the point cloud, you should prepare a higher-density version of the mesh.
The alternative to vertex colours is to eventually make an image texture from the point cloud colour, or from a different scanning method like photogrammetry.
If required, simplify the mesh:
InstantMeshes for a more controlled model if you need to do more mesh modelling or sculpting.
Meshlab for faster/easier results. Use Filters > Remeshing > Simplification: Quadric Edge Collapse Decimation. Default settings are okay, but you may opt to preserve the boundary and/or heavily simplify planar surfaces.
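The same quadric edge collapse idea is also available in script form; a hedged Open3D sketch with a placeholder triangle budget:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mesh.ply")  # placeholder: the dense Poisson mesh

# Quadric decimation down to a target triangle count - pick a budget that
# keeps enough density for the vertex-colour transfer later on.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
o3d.io.write_triangle_mesh("mesh_simplified.ply", simplified)
```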
UV Mapping
Use your preferred 3D software to UV map (except Rhino).
One quick way is to use Blender; a scripted version of these steps follows the list below.
Import your object.
Select the object.
Go into the object's Edit Mode.
[A] to select everything
In the menu above, UV > Smart UV Project
Export the object again
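The same steps can be scripted from Blender's Scripting tab using its Python API (bpy). The object name and export path below are placeholders, and the export operator shown is the one in Blender 3.2+ (older versions use bpy.ops.export_scene.obj):

```python
import bpy

obj = bpy.data.objects["MyMesh"]            # placeholder - use your imported object's name
bpy.context.view_layer.objects.active = obj
obj.select_set(True)

bpy.ops.object.mode_set(mode='EDIT')        # go into Edit Mode
bpy.ops.mesh.select_all(action='SELECT')    # [A] - select everything
bpy.ops.uv.smart_project()                  # UV > Smart UV Project
bpy.ops.object.mode_set(mode='OBJECT')

bpy.ops.wm.obj_export(filepath="mesh_uv.obj")  # export the object again (Blender 3.2+)
```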
Point Cloud Colour to Mesh
Open both mesh and point clouds in Meshlab.
Filters > Texture > Transfer Vertex Attributes to Texture (1 or 2 meshes)
Select appropriate source and target. Give texture a name and size.
Export mesh and the texture will be saved as well.
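Meshlab bakes the colour into an image texture using the mesh's UVs. If vertex colours are enough, or you just want to see the idea behind the transfer, here is a hedged Open3D sketch that copies each mesh vertex's colour from its nearest point in the cloud - the underlying concept, not the Meshlab filter itself:

```python
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mesh_uv.ply")       # placeholder file names
pcd = o3d.io.read_point_cloud("merged_scans.ply")

# For every mesh vertex, look up the nearest point in the cloud and copy its colour.
tree = o3d.geometry.KDTreeFlann(pcd)
cloud_colors = np.asarray(pcd.colors)
vertex_colors = np.zeros((len(mesh.vertices), 3))
for i, v in enumerate(np.asarray(mesh.vertices)):
    _, idx, _ = tree.search_knn_vector_3d(v, 1)
    vertex_colors[i] = cloud_colors[idx[0]]

mesh.vertex_colors = o3d.utility.Vector3dVector(vertex_colors)
o3d.io.write_triangle_mesh("mesh_coloured.ply", mesh)
```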
Photogrammetry to Mesh
Align both datasets in your preferred photoscanning software.
Transfer photos to mesh texture.