Analysis Examples


The following examples are basic explanations of how to get started with the datasets. They demonstrate some core techniques, including single-branch versus multi-branch versions for working with multiple limbs.

These examples primarily use colouring as a preview, but the underlying values, like any data, can be used to drive design or other forms of representation.


Liberally use panels to check the flow of data and make sure you understand its contents!

Datasets can get large, so pay careful attention to data structures. Where a component's inputs or outputs need to be grafted or flattened, do this BEFORE linking anything further downstream.


Movement-Action-Poses

Analysis of data intrinsic to the body can begin with comparing frame-to-frame data, using either body joints or body curves.


Speed is distance over time. Distance comes from directly comparing a joint's position in one frame to its position in the next; time comes from the Kinect's recording framerate of 30 fps.
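Outside of Grasshopper, the same calculation can be sketched in a few lines of Python. The only inputs are the joint positions per frame and the 30 fps framerate; the coordinates below are made up for illustration:

```python
import math

FPS = 30  # the Kinect's recording framerate

def frame_speeds(positions, fps=FPS):
    """Speed per frame: distance between consecutive joint positions x framerate."""
    speeds = []
    for p0, p1 in zip(positions, positions[1:]):
        speeds.append(math.dist(p0, p1) * fps)  # units per second
    return speeds

# Hypothetical [25] FOOT_RIGHT positions over three frames (metres)
foot_right = [(0.00, 0.0, 0.1), (0.02, 0.0, 0.1), (0.05, 0.0, 0.1)]
print(frame_speeds(foot_right))  # approximately [0.6, 0.9]
```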

The input is a single body joint; in the example above, [25] FOOT_RIGHT.

  1. Set up the data for comparison.

  2. Calculate speed.

  3. Optionally adjust the values for readability, or to better match the Rhino file's units.

  4. Remap the values - choose whatever is most appropriate as the source domain S. For example, for relative speed we can use the bounds of the values themselves; if we are only interested in speeds within 2-5 m/s, that range can be the source domain, and the clipped values output C may then be the more relevant one. (A sketch of this step follows below.)

    1. This is also where other types of analysis could be done instead - comparatives, for example.

A. Preview the values as text. B. Preview the colours and geometry in Rhino.
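As a rough Python sketch of step 4, including the clipped behaviour of the C output (the 2-5 m/s domain is just the example range from above):

```python
def remap(value, src_min, src_max, tgt_min=0.0, tgt_max=1.0, clip=True):
    """Remap a value from a source domain S to a target domain; with clip=True
    this behaves like the clipped values output C of [Remap Numbers]."""
    if clip:
        value = max(src_min, min(src_max, value))
    t = (value - src_min) / (src_max - src_min)
    return tgt_min + t * (tgt_max - tgt_min)

speeds = [1.2, 2.0, 3.5, 6.0]                 # m/s, e.g. from the speed step above
print([remap(s, 2.0, 5.0) for s in speeds])   # [0.0, 0.0, 0.5, 1.0]
```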


Spatial Interaction

How does the body act on the space around it? The following examples cover how to work with body and spatial data in the form of point clouds.


It is recommended that you build your scripts on low-resolution datasets where possible, so you can experiment with the different data structures required.

Firstly, working on an entire point cloud when it is not necessary just adds processing time. So, optionally, extract only the affected region.

Take care to keep track of (2), where you end up with two point clouds: inside and outside the region. These will likely be merged back together later on.
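A plain-Python sketch of this split, with a hypothetical axis-aligned box standing in for the region (the Grasshopper definition would test containment against your region geometry instead). Both groups are kept so they can be merged back later:

```python
def split_by_box(points, box_min, box_max):
    """Split a point cloud into points inside and outside an axis-aligned box,
    keeping both groups so they can be recombined after the analysis."""
    inside, outside = [], []
    for p in points:
        if all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max)):
            inside.append(p)
        else:
            outside.append(p)
    return inside, outside

cloud = [(0.1, 0.2, 0.0), (2.0, 0.5, 0.3), (0.4, 0.1, 0.2)]
inside, outside = split_by_box(cloud, (0, 0, 0), (1, 1, 1))
print(len(inside), len(outside))  # 2 1
```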


One way to generate heatmaps is to use attractor logic. The following examples work through heatmapping foot positions.

  1. Prepare your datasets: the region to work on (the inside-region example above) and the joint data. In this example, only one joint is used.

  2. Point attractor setup - since the body-joint data tree defaults to joints per frame, each resulting branch is essentially the distances for one frame. Manipulate the data tree to extract the point cloud as if it were in one frame.

  3. Remap the values for use in the preview (sketched below). This is a generic remap, but it is where you can consider what the values mean: a gradient of influence, a bounds of interaction, etc.
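A minimal Python sketch of this single-frame attractor logic. The linear falloff parameter is an assumption for illustration, not something from the original definition:

```python
import math

def attractor_weights(points, attractor, falloff=1.0):
    """Distance from each point-cloud point to one frame's joint position,
    remapped to a 0-1 weight (1 = at the attractor, 0 = at or beyond the
    falloff distance, which is an illustrative assumption)."""
    weights = []
    for p in points:
        d = math.dist(p, attractor)
        weights.append(max(0.0, 1.0 - d / falloff))
    return weights

cloud = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(attractor_weights(cloud, (0.0, 0.0, 0.0)))  # [1.0, 0.5, 0.0]
```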

This results in a single frame only, and a not very representative display, since the entire region is shaded. Let's address these two points next.


In (2), one way to collapse the data - so that all the joint positions are considered - is to use [Sort List]. Its default behaviour is to sort values from smallest to largest; we can think of it this way:

  • The result of [Flip Matrix] is that each point in the point cloud carries a list of its distances to the body joint in every frame.

  • [Sort List] sorts these distances.

  • [List Item] with index 0 then picks the smallest of those distances - effectively the frame in which the joint came closest to that point. By doing so the tree structure is collapsed and we are left with one value per point-cloud point, summarising how the point cloud is affected overall.
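Outside Grasshopper, this [Flip Matrix] then [Sort List] then [List Item] chain collapses to taking a minimum per point; a minimal sketch with made-up coordinates:

```python
import math

def closest_approach(points, joint_per_frame):
    """For each point-cloud point, the smallest distance to the joint across
    all frames -- the plain-Python equivalent of [Flip Matrix] -> [Sort List]
    -> [List Item] with index 0."""
    return [min(math.dist(p, j) for j in joint_per_frame) for p in points]

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
joint = [(0.5, 0.0, 0.0), (2.0, 0.0, 0.0)]  # hypothetical joint positions per frame
print(closest_approach(cloud, joint))       # [0.5, 0.5]
```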

As before in (3), this is a generic remap that can be expanded upon.

In (4), [Interpolate] is used to transition between the values. This component expects a particular data structure:

  • Merge both colours so that each branch corresponds to one point in the point cloud, holding its two colour values: original and adjusted. (This will work with any list.)

  • The input parameter t is the interpolation amount across each branch of input data D: t = 0 sits at the first item [0] of the list, t = 1 at the last item.

  • To invert the effect, we have to subtract the values from the graph mapper (i.e. use 1 - t); alternatively, swap the order of the colours or change the remap method.
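Each merged branch holds two colours (original and adjusted), and the component blends between them branch by branch. The same interpolation sketched in plain Python for one such pair:

```python
def lerp_colour(c0, c1, t):
    """Blend two RGB colours: t = 0 returns c0 (the first item), t = 1 returns c1."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

original, adjusted = (255, 0, 0), (0, 0, 255)
print(lerp_colour(original, adjusted, 0.0))   # (255, 0, 0)
print(lerp_colour(original, adjusted, 0.25))  # (191, 0, 64)
```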


If working with multiple joints - for example both feet, [21] FOOT_LEFT and [25] FOOT_RIGHT - we can adjust the above as follows:

The main changes are in (1): this method uses a polyline per joint over time (two polylines here) to maintain the same type of data structure as before.
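A rough Python sketch of the multi-joint case. It keeps discrete joint positions rather than the polyline shortcut used in the definition, but the collapse is the same as before, except the minimum is now taken across every joint as well as every frame (coordinates are illustrative):

```python
import math

def closest_approach_multi(points, joints_per_frame):
    """Smallest distance per point-cloud point across all frames and all joints.
    joints_per_frame: one list of joint positions (e.g. both feet) per frame."""
    return [
        min(math.dist(p, j) for frame in joints_per_frame for j in frame)
        for p in points
    ]

cloud = [(0.0, 0.0, 0.0)]
frames = [[(1.0, 0.0, 0.0), (0.2, 0.0, 0.0)],   # frame 0: FOOT_LEFT, FOOT_RIGHT
          [(3.0, 0.0, 0.0), (0.5, 0.0, 0.0)]]   # frame 1
print(closest_approach_multi(cloud, frames))    # [0.2]
```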

The following method instead uses the joints as the original point data, if required. It reverses the data structure used so far between the point-cloud points and the body data, as highlighted:


Spatial Interaction with Proxy

'Attaching' helpers allows for proxies between the body data and the spatial data. Examples include sight cones, objects, or a body structure. The same types of analysis as above can then be applied.


A simple form of attachment may just use the body data to drive 3-point operations. A plane can be defined from at least three points, and that plane can also be used for orientation. Here is a simple/naive approach to sight, using tracked points on the face.
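A minimal sketch of the 3-point idea in plain Python, assuming three hypothetical tracked face points (nose and both eyes); the plane's normal then serves as a naive sight direction:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def plane_normal(origin, p1, p2):
    """Unit normal of the plane through three points -- a naive sight direction
    when the points are tracked on the face."""
    n = cross(sub(p1, origin), sub(p2, origin))
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

# Hypothetical tracked face points (metres)
nose, eye_l, eye_r = (0.0, 0.0, 1.6), (-0.03, 0.05, 1.65), (0.03, 0.05, 1.65)
print(plane_normal(nose, eye_l, eye_r))
```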

[Screenshots accompany the steps above; in the multi-joint example, the main areas of change are grouped in pink.]