Usage


Installation

Install the following SDKs to their default locations. These should be under C:/Program Files; the apps only work when the SDKs are installed there.


Sensor SDK

Download and run the installer: Azure Kinect SDK 1.4.1.exe


Body Tracking SDK

Download and install the latest version of the MSI: Azure Kinect Body Tracking SDK download (Microsoft Learn).
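To confirm both SDKs actually landed in their default locations, you can check for their core DLLs. A minimal sketch, assuming the default install paths (the version number in the Sensor SDK path may differ from yours):

```python
from pathlib import Path

# Default install locations (assumed; adjust the version number to match your install)
sensor_dll = Path(r"C:\Program Files\Azure Kinect SDK v1.4.1"
                  r"\sdk\windows-desktop\amd64\release\bin\k4a.dll")
body_dll = Path(r"C:\Program Files\Azure Kinect Body Tracking SDK"
                r"\sdk\windows-desktop\amd64\release\bin\k4abt.dll")

for dll in (sensor_dll, body_dll):
    status = "found" if dll.exists() else "MISSING"
    print(f"{dll.name}: {status} ({dll.parent})")
```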


Apps

Use the Recording app to save data streams from the Kinect. Use the Extract Data app to extract body-tracking and point-cloud data from those recordings; the Kinect does not need to be connected for extraction.

This workflow assumes that there is no need for live processing of data.

Modes

The Kinect features various modes:

For body-tracking results, use NFOV unbinned or WFOV 2x2 binned.

| Mode | Resolution | FoI | FPS | Operating range |
| --- | --- | --- | --- | --- |
| NFOV unbinned | 640x576 | 75°x65° | 0, 5, 15, 30 | 0.5 - 3.86 m |
| NFOV 2x2 binned (SW) | 320x288 | 75°x65° | 0, 5, 15, 30 | 0.5 - 5.46 m |
| WFOV 2x2 binned | 512x512 | 120°x120° | 0, 5, 15, 30 | 0.25 - 2.88 m |
| WFOV unbinned | 1024x1024 | 120°x120° | 0, 5, 15 | 0.25 - 2.21 m |
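For context, the Recording and Extract Data apps are packaged Python scripts built on ibaiGorordo's pyazurekinect (see Downloads below). If you are scripting your own capture, selecting a mode looks roughly like the following. A minimal sketch, assuming the pykinect_azure package is installed and the SDKs are in their default locations:

```python
import pykinect_azure as pykinect  # pip install pykinect_azure

# Load the Sensor and Body Tracking SDK libraries from their default locations
pykinect.initialize_libraries(track_body=True)

# Start from the default configuration and pick a body-tracking-friendly depth mode
device_config = pykinect.default_configuration
device_config.depth_mode = pykinect.K4A_DEPTH_MODE_NFOV_UNBINNED  # or K4A_DEPTH_MODE_WFOV_2X2BINNED
device_config.color_resolution = pykinect.K4A_COLOR_RESOLUTION_720P
device_config.camera_fps = pykinect.K4A_FRAMES_PER_SECOND_30

device = pykinect.start_device(config=device_config)
body_tracker = pykinect.start_body_tracker()
```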

On Colour

The Recording app has two built-in colour resolutions: 720p and 1080p. If file size or processing power is a limitation, use 720p; the file-size reduction is significant, with little loss of colour information.

Downloads

Extract Data ⧉

Recording ⧉

These are packaged Python scripts that use a modified version of ibaiGorordo's pyazurekinect module. For more information on the underlying Sensor SDK, see https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/develop/docs/usage.md

Using the Device

Physical Setup

The device uses the depth data stream with a pretrained body-tracking model to identify bodies. Essentially, body tracking relies on having a clear view of the subject.

Your silhouette, certain materials and the angle of the capture device will all affect accuracy.

Generally, front-on angles with no occlusion work best. Steeper side views, where the body occludes itself, will likely lose accuracy.

  1. Use the kit's stand or optional tripod to position the Kinect.

  2. Connect the device to your laptop/computer:

    1. Connect via USB-C for both data and power; OR

    2. Connect via USB for data, and power the device separately.

Software Setup

Ensure adequate storage is available on your laptop/computer. Recording files grow very quickly; consider splitting recordings into shorter takes so large files don't overwhelm your system. (For reference, a 5-second stream is 1-2 GB.)
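A quick way to check free space before a session; a minimal sketch, assuming recordings are saved to the C: drive:

```python
import shutil

# Check free space on the drive that holds the Recording app
total, used, free = shutil.disk_usage("C:\\")
print(f"Free space: {free / 1e9:.1f} GB")

# A 5-second stream is roughly 1-2 GB, so budget ~0.2-0.4 GB per second
planned_seconds = 30
print(f"Worst-case estimate for {planned_seconds}s: ~{planned_seconds * 0.4:.0f} GB")
```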

Ensure adequate processing overhead: close any unnecessary apps.

  1. Search for Kinect Viewer in your Start Menu.

    1. If it fails to start, try the alternative data/power connection described above.

  2. Open the device and start it to confirm everything runs smoothly, and try out the different settings.


Recording

The Recording app opens three windows:

  • Console that provides feedback and status messages.

  • Small GUI for starting/stopping recordings.

  • Preview of the captured point cloud and body-tracking data.

  1. Run the app and pick a mode to start in.

    1. You can adjust the framerate and colour resolution afterwards, but you must restart the app to change mode.

  2. Refer to the Console for feedback and status.

  3. Use the Preview to adjust your device location and suitability of the data captured, especially for the body tracking result.

  4. Start the recording from the GUI.

    1. Starting a recording will stop the preview.

    2. The recording takes a few seconds to warm up; check the output between recordings and adjust your timing accordingly.

  5. Stop the recording from the GUI.

  6. Close the GUI window with the X to shut everything down properly.

  7. Recordings are saved automatically, named with the date and time, to the same folder as the app.

    1. These recordings are multi-track MKV movie files containing colour, depth and the device configuration.

Large Files

For recordings with very large file sizes, use a tool to split them into chunks first! When extracting data, computers can easily run out of memory.

Microsoft recommends using MKVToolNix: MKVToolNix Downloads – Matroska tools for Linux/Unix and Windows

Use the portable version for ease of use.

  1. Open mkvtoolnix-gui.exe.

  2. Open the Multiplexer tab.

  3. In the Input sub-tab, use [Add Source Files] (bottom) to add your recordings. Verify that you see multiple tracks (COLOR, DEPTH etc.).

  4. Use the Output sub-tab's Splitting menu.

    1. Split mode: After output duration.

    2. Set the duration in seconds, e.g. 2s or 10s.

  5. Adjust destination file path + name.

  6. Add to job queue.

  7. In the Job Queue tab, go to Job Queue > Start all pending jobs.
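If you prefer to script the split, mkvmerge (the command-line tool bundled with MKVToolNix) can do the same job. A minimal sketch, assuming a recording named output.mkv (hypothetical filename) next to the script:

```python
import subprocess

# --split 10s ends each chunk after ~10s of output;
# mkvmerge numbers the results chunks-001.mkv, chunks-002.mkv, ...
subprocess.run(
    [
        "mkvmerge",
        "-o", "chunks.mkv",   # output name template (hypothetical)
        "--split", "10s",     # split duration, mirroring the GUI setting above
        "output.mkv",         # your recording (hypothetical filename)
    ],
    check=True,
)
```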


Extraction

The Get Data app runs through the following process:

  1. You input the recording file (same folder) and the start frame.

  2. The recording plays until the provided start frame.

  3. After the start frame, each frame is processed into body-tracking and point-cloud data.

Throughout the process, the following windows are generated:

  • Input prompts.

  • Console that provides feedback and status messages.

  • Small GUI for controlling the processing.

  • Preview of the extraction.

  1. Ensure the recording file is in the same folder as the Get Data app.

  2. Run the app.

  3. Use the first window to input details for the extraction:

    1. Browse for your recording.

    2. Input a starting frame as an integer. The video runs at 30 frames per second, so to start at the 2-second mark, use 2 × 30 = frame 60.

    3. If the file or frame is invalid, you will be prompted again. (A frame number beyond the end of your recording is still valid; the extraction will simply never reach it.)

  4. A preview of the recording will launch and advance to your starting frame, where extraction begins.

  5. Refer to the Console for feedback and status.

  6. Use the GUI to advance one frame at a time, or toggle auto-advance.

    1. Temporal smoothing can be applied, smoothing the body-tracking result with reference to previous frames. This helps with jittery data, but not where the body is detected incorrectly altogether.

  7. Close the GUI window with the X to save and shut everything down properly.

  8. Extracted data is saved automatically to the same folder, named after the recording filename plus bodyframes.csv and pointcloud.e57.
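To sanity-check the exports in Python, both files can be opened with common libraries. A minimal sketch, assuming pandas and pye57 are installed; the filenames are hypothetical and the CSV columns depend on what the app exports:

```python
import pandas as pd  # pip install pandas
import pye57         # pip install pye57

# Hypothetical output names following the pattern described above
bodies = pd.read_csv("recording_bodyframes.csv")
print(bodies.columns.tolist())  # inspect the exported joint/frame fields
print(bodies.head())

cloud = pye57.E57("recording_pointcloud.e57")
scan = cloud.read_scan(0, ignore_missing_fields=True)  # dict of numpy arrays
print(scan.keys())  # typically cartesianX/Y/Z, plus colour channels if present
```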


Troubleshooting

No Device Detected

Try the alternative connection method for power.

Device Failed to Open

Turn the Kinect off and on at the power source, or unplug and re-plug it.

Image Preview Looks Skewed

The preview's aspect ratio can be freely adjusted and is independent of the extracted data.

No Bodies Detected

You should be able to see the body-tracking result in the Recording preview. Certain clothing can throw off body tracking, though our understanding of exactly why is inconclusive. Consider clothing with a clearer silhouette, the capture angle, and any occlusion.

Other Issues

Come speak to NExT Lab!
