Point Clouds to Meshes


This workflow applies mainly to point clouds captured using terrestrial scanners. This article only outlines the rough workflow and the thinking behind it, and very briefly explains how to use an example suite of software - feel free to use your own preferred software for any step.

Introduction

This first part introduces terminology and concepts relevant to producing meshes from point clouds. Skip ahead for the practical steps.

Meshes can be generated by a variety of algorithms, but the best results use both the points and their normals to estimate the surface.

Meshes

Meshes are a discrete representation of 3D geometry: points connected by edges define 'faces'. They are not a continuous surface, but an approximation composed of smaller pieces. Read more: Meshes 101.

Normals

Normals in 3D pipelines refer to the 'direction' that the elements of a mesh are pointing. In the image below, you can see that the normals represent the perpendicular outward direction of each face.

Normals and Points

So how does this work with our point clouds? Each point is merely an XYZ coordinate where the laser hit, plus a colour - so where are the faces that tell us the normals?

Normals need to be estimated for a point cloud: each point is compared with its neighbours to find the best 'face' to fit against it, giving an approximated normal. (There are other normal-estimation algorithms as well, each with its own pros and cons.)
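As an aside, here is a minimal scripted sketch of this estimation step, assuming the open-source Open3D Python library and a placeholder filename - it is not part of the software suite used in this guide, just an illustration of the concept.

```python
import open3d as o3d

# "scan_01.ply" is a placeholder for a single exported scan.
pcd = o3d.io.read_point_cloud("scan_01.ply")

# Fit a 'face' (plane) to each point's 30 nearest neighbours; its
# perpendicular direction becomes that point's estimated normal.
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))

# Visual check: normals are drawn as short line segments.
o3d.visualization.draw_geometries([pcd], point_show_normal=True)
```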

In the below example, we can see that the normals are all correctly pointing outwards.

Normals can also be visualised in rgb, each colour represents the respective xyz vector of the normal.

When a mesh is created from correct normals, the normals tell the algorithm what is outside and what is inside, allowing it to accurately predict the overall surface continuity. The result is as one would expect:

Inside or Outside?

In the 3D pipeline, normals can point 'inside' or 'outside' - vertices, edges and faces are abstract elements, so their normals can arbitrarily face either way.

It is up to the user to point them in the correct direction.

However, objects in real life are not infinitely small points or infinitely thin faces; they have thickness. Normals in real life always point 'outside', so it is good practice to align with reality. But sometimes you do want inward-pointing normals - take, for example, a 3D scan of a room. Technically you are looking at:

  • The 'inward' normals when outside the room

  • ...and correct 'outward' normals when you're inside the room.

Normals can be easily flipped or inverted once calculated.
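In a scripted pipeline, for example, flipping is just a sign change on every normal vector - a minimal sketch, again assuming Open3D and a placeholder file:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_01.ply")  # placeholder file
pcd.estimate_normals()

# Invert every normal: 'inside' becomes 'outside' and vice versa.
pcd.normals = o3d.utility.Vector3dVector(-np.asarray(pcd.normals))
```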

Bad Normals

It is trivial for a computer to estimate normals, but very common for normal continuity to go wrong.

When estimating normals, there is a 50/50 chance of guessing the correct orientation. This becomes a problem especially where there are disconnects in the point cloud dataset, leading to sections or parts ending up with flipped normals.

With incorrect normals, the mesh algorithm does not know how to preserve surface continuity, leading to breaks and surfaces turning inside out.

One can manually isolate points facing the wrong direction and flip their normals, but this is tedious and time-consuming - it is best to get it right from the start. There are also algorithms that try to re-align normals and fix these discrepancies, but they are not always reliable.
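One such alignment algorithm propagates a consistent orientation between neighbouring points. A minimal sketch, assuming Open3D and a placeholder file - note that it can still pick the wrong side for disconnected regions:

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_01.ply")  # placeholder file
pcd.estimate_normals()

# Propagate orientation through a graph of each point's 30 nearest
# neighbours; this often fixes patchy flips, but offers no guarantees
# across disconnected parts of the cloud.
pcd.orient_normals_consistent_tangent_plane(30)
```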


Priorities

Therefore, a good mesh depends on having correct normals. The following process covers steps that guarantee correct normals for most 3D scanning applications, where we can rely on scan metadata to assist.

Complex natural objects like trees will largely remain unaccounted for.


Workflow

  1. Compatible filetypes include, but are not limited to, .e57. Terrestrial scanners will save their scanner location as a sensor/camera component inside the scan.

  2. Do not combine scans together or edit them - run normal estimation on each scan individually.

  3. Use the normal-estimation method that is available to you - this depends on the dataset (for example, iPad exports can only use options 2 or 3). The options below are ordered from most viable to least.

    1. Use 'scanning grids'. Data coming out of our terrestrial scanners should be a 'structured dataset', meaning the scan data is correlated with the image used to capture it. This is a more robust version of option 2.

    2. Use the 'sensor location' to orient the normals. As estimated normals have a 50/50 chance of facing the right way, help them along by using the scanner position: if the scanner picked a point up, that side must be the 'outside' of the object - eliminating any guessing by the algorithm (see the sketch after this list).

    3. Add a manual scan origin and orient normals towards that origin.

  4. Edit, clean, and merge scans.

  5. Generate the mesh.

  6. Reproject colour.
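To make option 2 above concrete, the sketch below orients estimated normals towards the scanner position in script form. It assumes the open-source Open3D library, a placeholder filename and a placeholder scanner position - in practice the position comes from the scan's sensor metadata.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_01.ply")  # placeholder: one unedited scan
pcd.estimate_normals()

# Every point was seen by the scanner, so pointing each normal towards the
# scanner position guarantees it faces the 'outside' of the surface.
scanner_position = np.array([0.0, 0.0, 0.0])  # placeholder; read from the scan's sensor metadata
pcd.orient_normals_towards_camera_location(camera_location=scanner_position)
```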


Example Workflow

CloudCompare - Point Cloud Processing & Mesh Generation

  1. Confirm that the sensor location has been imported within each scan.

  2. For Normal Estimation, use Normals > Compute Normals:

    1. Pick the most appropriate fitting algorithm:

      1. Plane - Works best on noisy datasets but will smooth out any sharp or small details

      2. Triangular - Will preserve sharp and small details but noisy datasets can lead to weird results.

      3. Quadric - Works best for smooth or curved datasets.

    2. Use scanning grids, where possible.

    3. Otherwise orient towards the sensor.

      1. Add this manually if there is no sensor metadata in the dataset.

  3. With the Sun light display option on, points whose normals face away from the screen turn black. Use this to confirm the normal estimation.

  4. Clean scans now or after the merge. Merge all of the scans in preparation for meshing.

  5. Use Poisson Surface Reconstruction to generate a mesh, enabling the option to output density as a scalar field.

    1. Octree depth/resolution determines only the final level of detail - increasing or decreasing it has no effect on the accuracy of the eventual mesh.

  6. Use the density scalar field to remove the large interpolated areas that the reconstruction automatically generates across gaps in the data (see the sketch after this list).

  7. At this point the mesh has no colour other than vertex colour; the following steps cover how to transfer the point cloud colour onto the mesh.

  8. Export mesh.

    1. If you only require the mesh, stop here; the mesh can then be used in other CAD software.

    2. If you need to re-colour the mesh, also export the point cloud dataset for use later.
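For reference, steps 5 and 6 (Poisson reconstruction plus density-based trimming) look roughly like this when scripted with Open3D; the octree depth and the 5% density threshold are assumptions to tune for your own scan.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("merged_scans.ply")  # placeholder: merged cloud with oriented normals

# Poisson reconstruction returns the mesh plus a per-vertex 'density' value;
# low density marks areas interpolated far from any real points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

# Trim the lowest-density 5% of vertices (assumed threshold - adjust per scan).
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```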

A Note on Vertex Colours: vertex colours are how point-cloud-based meshes get coloured, which means you 'lose' colour detail as mesh detail decreases. If you intend to colour your mesh using the point cloud, prepare a higher-density version of the mesh.

The alternative to vertex colours is to eventually produce an image texture from the point cloud colour, or from a different scanning method such as photogrammetry.

Mesh Simplification

  1. If required, simplify the mesh:

    1. Use Meshlab for faster/easier results: Filters > Remeshing, Simplification and Reconstruction > Simplification: Quadric Edge Collapse Decimation. Default settings are okay, but you may opt to preserve the boundary and/or apply heavier simplification to planar surfaces. (A scripted alternative is sketched after this list.)
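If you prefer a scripted route, the same family of quadric edge-collapse decimation is also available in Open3D; a minimal sketch with an assumed target face count:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan_mesh.ply")  # placeholder mesh from the previous steps

# Keep roughly a quarter of the faces (assumption - adjust to taste).
target = max(len(mesh.triangles) // 4, 1)
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)

o3d.io.write_triangle_mesh("scan_mesh_simplified.ply", simplified)
```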

UV Mapping

  1. Use your preferred 3D software to UV map (except Rhino).

  2. One quick way is to use Blender.

    1. Import your object.

    2. Select the object.

    3. Go into the object's Edit Mode.

    4. Press [A] to select everything.

    5. In the menu above the viewport, choose UV > Smart UV Project.

    6. Export the object again.
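The same Blender steps can also be run as a script from the Scripting tab - a minimal sketch, assuming the scan mesh is already imported and selected as the active object:

```python
import bpy

obj = bpy.context.active_object              # the imported scan mesh
bpy.ops.object.mode_set(mode='EDIT')         # enter Edit Mode
bpy.ops.mesh.select_all(action='SELECT')     # [A] - select everything
bpy.ops.uv.smart_project()                   # UV > Smart UV Project
bpy.ops.object.mode_set(mode='OBJECT')       # back to Object Mode before exporting
print(f"UV unwrapped: {obj.name}")
```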

Texturing

Point Cloud Colour to Mesh

  1. Open both the mesh and the point cloud in Meshlab.

  2. Filters > Texture > Transfer: Vertex Attributes to Texture (1 or 2 meshes).

  3. Select appropriate source and target. Give texture a name and size.

  4. Export the mesh; the texture will be saved alongside it.

Photogrammetry to Mesh

  1. Align both datasets in your preferred photogrammetry software.

  2. Transfer photos to mesh texture.

Consider remeshing with InstantMeshes for a more controlled model if you need to do more mesh modelling or sculpting.

Figure captions:
  • Normals visualised as vectors, and front/back or outside/inside as coloured faces.
  • Look for discontinuous colours along a surface that you would expect to be of a similar colour.
  • Some of the normals are pointing inwards.
  • Example of meshing the above point cloud with incorrect (in this case, discontinuous) normals.