Body Tracking

Microsoft's Azure Kinect is a sensor/camera unit that captures depth, colour and IR data and includes an IMU, making it well suited to general computer vision processing.

Body tracking is one application of computer vision, allowing designers to explore the many ways an understanding of the body can inform design contexts. A pre-trained body tracking model is available, so we can tap into it to get started quickly.


Access

Under Construction.

In the meantime, please speak to NExT Lab about using the device. The device itself can be borrowed from the Media Hub.


Kit

The kit includes:

  • Azure Kinect Device

  • Stand - for placing it on a flat surface

  • USB-C Power and Data Cable (1m)

  • (Optional) Tripod

  • (Optional) USB-C to USB-A Cable

  • (Optional) Separate power supply + adapter

The device can run from a single USB-C cable (under 1.5m) for both data and power. For more reliable operation, longer cable runs, or computers without a USB-C port, ask for the optional USB-C to USB-A cable and power adapter as well.


Getting Started

NExT Lab has built a simple workflow that you can use to quickly get spatial and body tracking data from the sensor. This Knowledge Base covers only that workflow.

For specifications and usage, see the Usage page.

DIY

If you are developing your own applications, the device is well documented in Microsoft's knowledge base. Two software packages are provided (a minimal sketch of how they fit together follows the list below):

  • Sensor SDK for accessing the device's sensor (depth, colour, orientation etc.) data streams.

  • Body Tracking SDK, which runs a pre-trained computer vision model to track bodies from the depth data stream.
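
As an illustrative sketch rather than a definitive implementation, the C outline below shows how the two official SDKs are typically combined: the Sensor SDK opens the device and pulls depth captures, and the Body Tracking SDK consumes each capture and returns skeleton joints. The configuration values are example choices only, and error handling is kept to a minimum.

```c
// Minimal sketch: open the device, grab one capture, and read the first
// tracked body's head joint. Assumes the Sensor SDK and Body Tracking SDK
// are installed; configuration values are examples only.
#include <stdio.h>
#include <k4a/k4a.h>
#include <k4abt.h>

int main(void)
{
    k4a_device_t device = NULL;
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
    {
        printf("Could not open the Azure Kinect.\n");
        return 1;
    }

    // Body tracking only needs the depth stream; colour is left off here.
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    k4a_device_start_cameras(device, &config);

    // The tracker needs the sensor calibration to interpret the depth data.
    k4a_calibration_t calibration;
    k4a_device_get_calibration(device, config.depth_mode,
                               config.color_resolution, &calibration);

    k4abt_tracker_t tracker = NULL;
    k4abt_tracker_configuration_t tracker_config = K4ABT_TRACKER_CONFIG_DEFAULT;
    k4abt_tracker_create(&calibration, tracker_config, &tracker);

    // Feed one sensor capture into the tracker and pop the resulting body frame.
    k4a_capture_t capture = NULL;
    k4a_device_get_capture(device, &capture, K4A_WAIT_INFINITE);
    k4abt_tracker_enqueue_capture(tracker, capture, K4A_WAIT_INFINITE);
    k4a_capture_release(capture);

    k4abt_frame_t body_frame = NULL;
    k4abt_tracker_pop_result(tracker, &body_frame, K4A_WAIT_INFINITE);

    size_t num_bodies = k4abt_frame_get_num_bodies(body_frame);
    printf("Bodies detected: %zu\n", num_bodies);

    if (num_bodies > 0)
    {
        // Each skeleton is a fixed array of joints with positions in millimetres.
        k4abt_skeleton_t skeleton;
        k4abt_frame_get_body_skeleton(body_frame, 0, &skeleton);
        k4a_float3_t head = skeleton.joints[K4ABT_JOINT_HEAD].position;
        printf("Head position (mm): %.0f %.0f %.0f\n",
               head.xyz.x, head.xyz.y, head.xyz.z);
    }

    // Release handles and shut the device down.
    k4abt_frame_release(body_frame);
    k4abt_tracker_shutdown(tracker);
    k4abt_tracker_destroy(tracker);
    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```

Building a sketch like this requires linking against the k4a and k4abt libraries; the community integrations mentioned below expose the same pipeline from higher-level environments.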

Other forms of integration are also available and maintained by the community, such as an SDK wrapper for Python and plugins for Unreal Engine.
