Point Clouds to Meshes

This workflow applies mainly to point clouds captured using terrestrial scanners. This article only outlines the rough workflow and thinking; it very briefly explains how to use an example suite of software, but feel free to use your own preferred software for any step.

Introduction

This first part introduces terminology and concepts relevant to producing meshes from point clouds. Skip ahead for the practical steps.

Meshes can be generated by a variety of algorithms, but the best results use the points and their normals to estimate the surface.

Meshes

Meshes are a discrete representation of 3D geometry that uses points connected by edges to define 'faces'. They are not a continuous surface, but an approximation composed of smaller pieces. Read more:

Meshes 101

Normals

Normals in 3D pipelines refer to the 'direction' that the elements of a mesh are pointing in. In the image below, you can see the normals represent the perpendicular, outward direction of each face.

Normals and Points

So how does this work with our point clouds? A point is merely an XYZ coordinate that the laser hit, plus a colour - so where are the faces to tell us the normals?

Normals need to be estimated for a point cloud: each point is compared with its neighbours to find the best 'face' to fit through them, giving an approximated normal. (There are other normal estimation algorithms, each with their own pros and cons.)
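
As a rough illustration of that neighbour-fitting idea, here is a minimal sketch using the open-source Open3D library in Python (the file path and neighbour count are placeholders, not values prescribed by this workflow):

```python
import open3d as o3d

# Load a point cloud (placeholder path; .e57 usually needs converting to
# .ply/.pcd first, e.g. via CloudCompare).
pcd = o3d.io.read_point_cloud("scan.ply")

# Fit a small plane to each point's 30 nearest neighbours; the plane's
# perpendicular becomes that point's estimated normal.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30)
)
print(f"Estimated {len(pcd.normals)} normals")
```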

In the below example, we can see that the normals are all correctly pointing outwards.

Normals can also be visualised in RGB, where each colour channel represents the respective XYZ component of the normal vector.
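
A minimal sketch of producing that visualisation yourself, again assuming Open3D (the path and neighbour count are placeholders):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")   # placeholder path
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30)
)

# Map each normal's XYZ components (-1..1) to RGB (0..1): normals pointing
# along +X render red, +Y green, +Z blue.
normals = np.asarray(pcd.normals)
pcd.colors = o3d.utility.Vector3dVector((normals + 1.0) * 0.5)
o3d.visualization.draw_geometries([pcd])
```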

When a mesh is created from points with correct normals, the normals tell the algorithm what is outside and what is inside, allowing it to accurately predict the overall surface continuity. The result is as one would expect:

Inside or Outside?

In the 3D pipeline, normals can point 'inside' or 'outside' - vertices, edges, and faces are abstract elements, so their normals can arbitrarily face either way.

It is up to the user to point them in the correct direction. Heh.

However, objects in real life are not infinitely small points or infinitely thin faces - they have thickness. Normals in real life always point 'outside', so it is good practice to align with reality. But sometimes you want 'inward'-pointing normals - take, for example, a 3D scan of a room. Technically you are looking at:

  • The 'inward' normals when outside the room

  • ...and correct 'outward' normals when you're inside the room.

Normals can be easily flipped or inverted once calculated.
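
Flipping is simply a sign change on every normal vector; for example, a sketch with Open3D (placeholder path; assumes the file already stores normals):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

# Negate every normal vector: 'outside' becomes 'inside' and vice versa.
pcd.normals = o3d.utility.Vector3dVector(-np.asarray(pcd.normals))
```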

Bad Normals

It is trivial for a computer to estimate normals, but very common for their orientation to end up inconsistent across the cloud.

When estimating normals, there is a 50/50 chance of guessing the correct orientation. This becomes a problem especially where there are disconnects in the point cloud dataset, leading to sections or parts having flipped normals.

With incorrect normals, the mesh algorithm does not know how to preserve surface continuity, leading to breaks and surfaces turning inside out.

One can manually isolate points facing the wrong direction and flip their normals, but this is tedious and time-consuming - it is best to get it right from the start. There are also algorithms that try to align normals to fix these discrepancies, but they are not always reliable.
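
Open3D, for instance, exposes one such alignment algorithm; a sketch is below (the neighbour count is a placeholder), though as noted it can still fail across disconnected sections:

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")   # placeholder path
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30)
)

# Propagate a consistent orientation across each point's 10 nearest
# neighbours (a minimum-spanning-tree approach). Disconnected clusters
# can still end up flipped relative to each other.
pcd.orient_normals_consistent_tangent_plane(10)
```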


Priorities

A good mesh therefore needs correct normals. The following process covers the steps that guarantee correct normals for most 3D scanning applications, where we can rely on metadata to assist.

Complex natural objects like trees will largely remain unaccounted for.


Workflow

  1. Compatible filetypes include, but are not limited to, .e57. Terrestrial scanners save their scanner location as a sensor/camera component inside the scan.

  2. Do not combine scans together or edit them - run normal estimation on each scan individually.

  3. Use the best normal estimation method available to you - this depends on the dataset (for example, iPad exports can only use 2. or 3.). The options below are ordered from most viable to least viable.

    1. Use 'scanning grids'. Data coming out of our terrestrial scanners should be a 'structured dataset', where the scan data is correlated with the image used to capture it. This is a more robust version of 2.

    2. Use the 'sensor location' to orient the normals. As normals have a 50/50 chance of facing the right way, help the algorithm along by using the scanner position: if the scanner picked a point up, that point must be on the 'outside' of the object, eliminating any 50/50 guessing (see the sketch after this list).

    3. Add a manual scan origin and orient normals towards that origin.

  4. Edit, clean, and merge scans.

  5. Generate the mesh.

  6. Reproject colour.
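
A sketch of step 3.2 using Open3D, assuming the sensor position has been read from the scan's metadata (the path and coordinates below are placeholders):

```python
import numpy as np
import open3d as o3d

# One individual, unedited scan (placeholder path).
pcd = o3d.io.read_point_cloud("scan_01.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30)
)

# Placeholder sensor position - in practice, read this from the scan's
# sensor/camera metadata (e.g. the scanner location stored in the .e57).
sensor_location = np.array([0.0, 0.0, 1.8])

# Flip any normal pointing away from the scanner: if the scanner saw the
# point, that side of the surface must be the 'outside'.
pcd.orient_normals_towards_camera_location(camera_location=sensor_location)
```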


Example Workflow

CloudCompare - Point Cloud Processing & Mesh Generation

  1. Sensor location can be confirmed within a scan.

  2. For Normal Estimation, use Normals > Compute Normals:

    1. Pick the most appropriate fitting algorithm:

      1. Plane - Works best on noisy datasets but will smooth out any sharp or small details

      2. Triangulation - Will preserve sharp and small details, but noisy datasets can lead to weird results.

      3. Quadric - Works best for smooth or curved datasets.

    2. Use scanning grids, where possible.

    3. Otherwise orient towards the sensor.

      1. Add this manually if there is no sensor metadata in the dataset.

  3. With the Sun light display option on, you should see points with normals facing away from the screen turn black. Use this to confirm the normal estimation.

  4. Clean scans now or after the merge. Merge all of the scans in preparation for meshing.

  5. Use Poisson Surface Reconstruction to generate a mesh, enabling the density output as a scalar field (a scripted equivalent of steps 5 and 6 is sketched after this list).

    1. Octree Depth/Resolution determines the final detail level only - increasing or decreasing it has no effect on the accuracy of the eventual mesh.

  6. Use the density scalar to remove the large interpolated areas of the mesh that are automatically generated.

  7. At this point the mesh has no colour except vertex colour; the following steps cover how to transfer point cloud colour to the mesh.

  8. Export mesh.

    1. If you only require the mesh, stop here; the mesh can then be used in other CAD software.

    2. If you need to re-colour the mesh, export the point cloud dataset for use later.
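
Steps 5 and 6 can also be scripted; below is a sketch using Open3D's Poisson reconstruction and density output (the depth value and the 2% cut-off are placeholder choices, not recommendations from this workflow):

```python
import numpy as np
import open3d as o3d

# Cleaned, merged point cloud with oriented normals (placeholder path).
pcd = o3d.io.read_point_cloud("merged_scans.ply")

# Poisson surface reconstruction; 'densities' is the per-vertex support,
# i.e. the density scalar mentioned above. depth=10 is a placeholder.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10
)

# Drop the least-supported vertices - the large interpolated areas Poisson
# invents to close the surface. The 2% cut-off is a placeholder threshold.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.02))

o3d.io.write_triangle_mesh("mesh.ply", mesh)
```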

A Note on Vertex Colours

Vertex colours are how point cloud-based meshes get coloured, which means that you 'lose' colour detail with less mesh detail. If you intend to colour your mesh using the point cloud, you should prepare a higher-density version of the mesh.

The alternative to vertex colours is to eventually create an image texture from the point cloud colour, or from a different scanning method like photogrammetry.

Mesh Simplification

  1. If required, simplify the mesh:

    1. InstantMeshes for a more controlled model if you need to do more mesh modelling or sculpting.

    2. MeshLab for faster/easier results. Use Filters > Remeshing, Simplification and Reconstruction > Simplification: Quadric Edge Collapse Decimation. Default settings are okay, but you may opt to preserve the boundary and/or enable planar simplification. (The same decimation can also be scripted - see the sketch after this list.)
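
The same quadric decimation is also available in scripting libraries; for example, a minimal Open3D sketch (the paths and target triangle count are placeholders):

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mesh.ply")   # placeholder path

# Quadric edge collapse decimation down to a placeholder target of
# 100,000 triangles.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
o3d.io.write_triangle_mesh("mesh_simplified.ply", simplified)
```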

UV Mapping

  1. Use your preferred 3D software to UV map (except Rhino).

  2. One quick way is to use Blender; the same steps can also be scripted (see the sketch after this list).

    1. Import your object.

    2. Select the object.

    3. Go into the object's Edit Mode.

    4. [A] to select everything

    5. In the menu above, UV > Smart UV Project

    6. Export the object again
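
A sketch of running those Blender steps as a script, assuming a recent Blender (3.2+) and an OBJ file, with placeholder paths:

```python
import bpy

# Import the mesh (Blender 3.2+ operator; older versions use
# bpy.ops.import_scene.obj). Paths are placeholders.
bpy.ops.wm.obj_import(filepath="mesh_simplified.obj")

# Make the imported object active, then unwrap it in Edit Mode.
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')   # [A] - select everything
bpy.ops.uv.smart_project()                 # UV > Smart UV Project
bpy.ops.object.mode_set(mode='OBJECT')

# Export the object again, now with UVs.
bpy.ops.wm.obj_export(filepath="mesh_uv.obj")
```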

Texturing

Point Cloud Colour to Mesh

  1. Open both the mesh and the point cloud in MeshLab.

  2. Filters > Texture > Transfer: Vertex Attributes to Texture (1 or 2 meshes)

  3. Select the appropriate source and target, then give the texture a name and size.

  4. Export the mesh and the texture will be saved as well. (A simpler scripted alternative that produces vertex colours instead of a texture is sketched below.)
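
If vertex colours are enough for your purposes, a nearest-neighbour transfer can be scripted instead of the MeshLab filter above; a sketch with Open3D (note this writes vertex colours, not an image texture, and the paths are placeholders):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("merged_scans.ply")       # placeholder paths
mesh = o3d.io.read_triangle_mesh("mesh_simplified.ply")

# For every mesh vertex, copy the colour of the nearest point in the cloud.
kdtree = o3d.geometry.KDTreeFlann(pcd)
cloud_colors = np.asarray(pcd.colors)
vertex_colors = np.zeros((len(mesh.vertices), 3))

for i, vertex in enumerate(np.asarray(mesh.vertices)):
    _, idx, _ = kdtree.search_knn_vector_3d(vertex, 1)
    vertex_colors[i] = cloud_colors[idx[0]]

mesh.vertex_colors = o3d.utility.Vector3dVector(vertex_colors)
o3d.io.write_triangle_mesh("mesh_coloured.ply", mesh)
```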

Photogrammetry to Mesh

  1. Align both datasets in your preferred photoscanning software.

  2. Transfer photos to mesh texture.
