Point Clouds to Meshes

This workflow applies mainly to point clouds captured using terrestrial scanners. This article also only outlines the rough workflow and thinking; it very briefly explains how to use an example suite of software, but feel free to use your own preferred software for any step.

Introduction

This first part introduces terminology and concepts relevant to producing meshes from point clouds. Skip ahead for the practical steps.

Meshes can be generated by a variety of algorithms, but the best results use both the points and their normals to estimate the surface.

Meshes

Meshes are a discrete representation of 3D geometry that uses points and edges to define 'faces'. They are not a continuous surface, but an approximation composed of smaller pieces. Read More:

Meshes 101
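
As a minimal illustration, a triangle mesh can be stored as an array of vertex positions and an array of faces that index into it. The sketch below is a hypothetical example, not tied to any particular file format:

```python
import numpy as np

# Four corner points of a unit square in the XY plane.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
])

# Two triangular faces, each defined by three vertex indices.
# Together they approximate the square as a discrete surface.
faces = np.array([
    [0, 1, 2],
    [0, 2, 3],
])
```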

Normals

Normals in 3D pipelines refer to the 'direction' that the elements in a mesh are pointing in. In the image below, you can see that the normals represent the perpendicular outward direction of each face.
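
For a single triangular face, this perpendicular direction can be computed as the normalised cross product of two of the face's edges. A minimal sketch:

```python
import numpy as np

def face_normal(v0, v1, v2):
    """Perpendicular direction of the triangle (v0, v1, v2).

    The winding order v0 -> v1 -> v2 decides which side counts as 'outward'.
    """
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)

# A triangle lying flat in the XY plane: its normal points straight up (+Z).
print(face_normal(np.array([0.0, 0.0, 0.0]),
                  np.array([1.0, 0.0, 0.0]),
                  np.array([0.0, 1.0, 0.0])))  # -> [0. 0. 1.]
```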

Normals and Points

So how does this work with our point clouds? A point is merely an XYZ coordinate where the laser hit, plus a colour - so where are the faces to tell us the normals?

Normals need to be estimated for a point cloud: each point is compared with its neighbours to find the plane that best fits them, giving an approximated normal. (There are other algorithms for normal estimation, each with their own pros and cons.)
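
As an illustration of this neighbourhood plane-fitting approach, Open3D exposes it directly. This is only a sketch; the file name is a placeholder, and the scan is assumed to have been converted to a format Open3D can read:

```python
import open3d as o3d

# Load a single scan (placeholder file name).
pcd = o3d.io.read_point_cloud("scan_01.ply")

# Fit a plane to each point's 30 nearest neighbours and take that plane's
# perpendicular as the point's estimated normal.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
```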

In the example below, we can see that the normals are all correctly pointing outwards.

Normals can also be visualised in RGB, where each colour channel represents the respective XYZ component of the normal vector.

When a mesh is created from points with correct normals, the normals tell the algorithm what is outside and what is inside, allowing it to accurately work out where the surface is continuous. The result is as one would expect:

Bad Normals

However, it is very common for this normal estimation to go wrong - especially where there are disconnects in the point cloud dataset. The estimated normal may lie along the correct axis but be flipped, i.e. it faces inside instead of outside.

In practice this means that, when estimating normals without any extra information, each normal has roughly a 50/50 chance of facing the right direction.

With incorrect normals, the meshing algorithm does not know how to preserve surface continuity, leading to breaks and surfaces turning inside out as it tries to accommodate the flipped normals.

One can manually isolate points facing the wrong direction and flip the normals but this is tedious and time-consuming - it is best to get it right from the start.

Priorities

Therefore, it is important for a good mesh to have correct normals. The following process covers the steps that guarantee correct normals for most scanning applications, though trees will largely remain unaccounted for.


Workflow

  1. Compatible filetypes include, but are not limited to, .e57. Terrestrial scanners save their scanner location as a sensor/camera component inside the scan.

  2. Do not combine scans together or edit them - run normal estimation on each scan individually.

  3. Use the normal estimation method that is available to you - this depends on the dataset. The options below are ordered from most viable to least; a scripted sketch of steps 2 to 4 follows this list.

    1. Use 'scanning grids'. Data coming out of our terrestrial scanners should be a 'structured dataset', meaning the scan data is correlated with the image used to capture it. This is a more robust version of option 2.

    2. Use the 'sensor location' to orient the normals. As normals have a 50/50 chance of facing the right way, help the algorithm along by using the scanner location: if the scanner has picked up a point, that point must be on the 'outside' of the object, which eliminates the 50/50 guess.

    3. Add a manual scan origin and orient the normals towards it.

  4. Merge scans and generate the mesh.

  5. Reproject colour.
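
A rough scripted version of steps 2 to 4, using Open3D as one possible tool. The file names and scanner coordinates are placeholders; in practice the sensor location comes from each scan's sensor/camera component:

```python
import open3d as o3d

# Placeholder inputs: one file per scan, plus each scan's scanner position.
scans = {
    "scan_01.ply": (0.0, 0.0, 1.8),
    "scan_02.ply": (5.2, 3.1, 1.8),
}

merged = o3d.geometry.PointCloud()
for path, scanner_xyz in scans.items():
    pcd = o3d.io.read_point_cloud(path)

    # Steps 2-3: estimate normals per scan, then orient them towards the
    # scanner location so the 'outside' is never a 50/50 guess.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
    pcd.orient_normals_towards_camera_location(camera_location=scanner_xyz)

    # Step 4: merge the oriented scans before meshing.
    merged += pcd

o3d.io.write_point_cloud("merged_scans.ply", merged)
```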


Example Workflow

CloudCompare - Point Cloud Processing & Mesh Generation

  1. Sensor location can be confirmed within a scan - it appears as a child of the scan in CloudCompare's DB tree.

  2. For normal estimation, use Edit > Normals > Compute:

    1. Use scanning grids, where possible.

    2. Otherwise, use the sensor location.

  3. With sun light turned on, you should see points whose normals face away from the screen turn black. Use this to confirm the normal estimation.

  4. Merge all of the scans in preparation for meshing.

  5. Clean your scans.

  6. Use Poisson Surface Reconstruction to generate a mesh, enabling output of the density as a scalar field.

  7. Use the density scalar field to remove the large interpolated areas of the mesh that the algorithm generates automatically (see the sketch after this list).

  8. At this point the mesh has no colour other than vertex colour; the following steps cover how to transfer point cloud colour to the mesh.

  9. Export mesh.

    1. If you only require the mesh, stop here; it can then be used in other CAD software.

    2. If you need to re-colour the mesh, export the point cloud dataset as well for use later.
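
For reference, the same Poisson-plus-density idea can be scripted with Open3D instead of the CloudCompare GUI. This is only a sketch; the file names and the 5% density cut-off are placeholders to tune:

```python
import numpy as np
import open3d as o3d

merged = o3d.io.read_point_cloud("merged_scans.ply")  # normals already oriented

# Poisson surface reconstruction; 'densities' records how well each vertex
# is supported by real points (low density = interpolated area).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    merged, depth=9)

# Drop the most weakly supported vertices, i.e. the large interpolated
# patches that Poisson creates to close the surface.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```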

Mesh Simplification

  1. If required, simplify the mesh:

    1. Use InstantMeshes for a more controlled model if you need to do further mesh modelling or sculpting.

    2. Use Meshlab for faster/easier results: Filters > Remeshing > Simplification: Quadric Edge Collapse Decimation. Default settings are okay, but you may opt to preserve the boundary and/or apply heavier simplification of planar surfaces.
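
If you prefer to script this step, Open3D offers comparable quadric decimation (this is not the Meshlab filter itself, just an alternative sketch; the file names and target face count are placeholders):

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan_mesh.ply")

# Quadric edge-collapse decimation down to a target number of triangles.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
simplified.compute_vertex_normals()

o3d.io.write_triangle_mesh("scan_mesh_simplified.ply", simplified)
```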

UV Mapping

  1. Use your preferred 3D software to UV map (except Rhino).

  2. One quick way is to use Blender (a scripted version of these steps follows the list).

    1. Import your object.

    2. Select the object.

    3. Go into the object's Edit Mode.

    4. Press [A] to select everything.

    5. In the menu above, use UV > Smart UV Project.

    6. Export the object again.
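
The same Blender steps can also be run as a script (a sketch assuming Blender 3.2 or newer, which ships the built-in OBJ importer/exporter used below; file names are placeholders):

```python
import bpy

# Import the mesh; the imported object arrives selected.
bpy.ops.wm.obj_import(filepath="scan_mesh_simplified.obj")
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# Enter Edit Mode, select everything and run Smart UV Project.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')

# Export the UV-mapped object again.
bpy.ops.wm.obj_export(filepath="scan_mesh_uv.obj")
```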

Point Cloud Colour to Mesh

  1. Open both mesh and point clouds in Meshlab.

  2. Filters > Texture > Transfer: Vertex Attributes to Texture (1 or 2 meshes).

  3. Select the appropriate source and target, then give the texture a name and size.

  4. Export the mesh; the texture will be saved alongside it.
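
If you would rather avoid the Meshlab GUI, the core idea - copying each mesh vertex's colour from the nearest point in the cloud - can be sketched with Open3D and a KD-tree. Note this produces per-vertex colour rather than a baked texture, and the file names are placeholders:

```python
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan_mesh_uv.obj")
pcd = o3d.io.read_point_cloud("merged_scans.ply")

# For every mesh vertex, find the nearest point in the cloud and copy its colour.
tree = o3d.geometry.KDTreeFlann(pcd)
cloud_colours = np.asarray(pcd.colors)
vertex_colours = []
for v in np.asarray(mesh.vertices):
    _, idx, _ = tree.search_knn_vector_3d(v, 1)
    vertex_colours.append(cloud_colours[idx[0]])

mesh.vertex_colors = o3d.utility.Vector3dVector(np.asarray(vertex_colours))
o3d.io.write_triangle_mesh("scan_mesh_coloured.ply", mesh)
```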
