Usage
Install the following SDKs to their default locations. These default to C:/Program Files; the apps only work when this is the case.
Sensor SDK: download and install the installer, Azure Kinect SDK 1.4.1.exe.
Body Tracking SDK: download and install the latest version of the MSI.
Use the Recording app to save data streams from the Kinect. Use the Get Data app to extract body-tracking and point-cloud data from the recordings, without needing the Kinect.
This workflow assumes that there is no need for live processing of data.
The Kinect features various depth modes (see the table below). For body-tracking results, use NFOV unbinned or WFOV 2x2 binned.
Mode                   Resolution   FoV         FPS             Operating range
NFOV unbinned          640x576      75°x65°     0, 5, 15, 30    0.5 - 3.86 m
NFOV 2x2 binned (SW)   320x288      75°x65°     0, 5, 15, 30    0.5 - 5.46 m
WFOV 2x2 binned        512x512      120°x120°   0, 5, 15, 30    0.25 - 2.88 m
WFOV unbinned          1024x1024    120°x120°   0, 5, 15        0.25 - 2.21 m
The Recording app has two built-in colour resolutions: 720p and 1080p. If file size or processing power is a limitation, use 720p; the file-size reduction is significant without much loss in colour information.
These are packaged Python scripts. They use a modified version of ibaiGorordo's pyKinectAzure module; for more information:
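For orientation, here is a minimal sketch of how the upstream pykinect_azure package selects a depth mode and colour resolution. The packaged apps use a modified version of the module, so exact names may differ; treat this as illustration, not the apps' code.

```python
import pykinect_azure as pykinect

# Load the Sensor SDK and Body Tracking SDK from their default install locations.
pykinect.initialize_libraries(track_body=True)

# A body-tracking-friendly configuration: NFOV unbinned depth, 720p colour, 30 fps.
device_config = pykinect.default_configuration
device_config.depth_mode = pykinect.K4A_DEPTH_MODE_NFOV_UNBINNED
device_config.color_resolution = pykinect.K4A_COLOR_RESOLUTION_720P
device_config.camera_fps = pykinect.K4A_FRAMES_PER_SECOND_30

device = pykinect.start_device(config=device_config)
body_tracker = pykinect.start_body_tracker()
```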
Technical Specifications
The device uses the depth data stream with a pretrained body-tracking model to identify bodies. Essentially, body tracking relies on having a clear view of the subject.
Your silhouette, certain materials and the angle of the capture device will all affect accuracy. Generally, front-on angles with no occlusion work best; from steeper side views, where the body occludes itself, accuracy will likely drop.
Use the kit's stand or the optional tripod to position the Kinect.
Connect the device to your laptop/computer:
Connect via USB-C for both data and power
OR connect via USB for data, powering the device separately.
Ensure adequate storage is available on your laptop/computer. Recording files grow very quickly; consider splitting recordings into shorter takes so large files do not overwhelm your system. (For reference, a 5-second stream is 1-2 GB.)
Ensure adequate processing overhead: close any unnecessary apps.
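As a rough storage budget, the figure above works out to about 0.2-0.4 GB per second. A small planning sketch; the rates are assumptions derived from that figure and will vary with depth mode, colour resolution and framerate:

```python
# Rough storage budget, assuming ~0.2-0.4 GB per second of recording
# (derived from the "5 seconds is 1-2 GB" figure above; actual rates
# vary with depth mode, colour resolution and framerate).
def estimate_gb(seconds: float) -> tuple[float, float]:
    return 0.2 * seconds, 0.4 * seconds

low, high = estimate_gb(60)
print(f"A 60 s take needs roughly {low:.0f}-{high:.0f} GB of free space.")
```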
Search for Azure Kinect Viewer in your Start Menu.
If it cannot start, you may have to try the other data/power connection above.
Open the device and start it to ensure it is running smoothly; try out the different settings.
The Recording app opens 3 windows:
Console that provides feedback and status messages.
Small GUI for starting/stopping recordings.
Preview of the captured point cloud and body-tracking data.
Run the app and pick a mode to start in.
You can adjust the framerate and colour resolution afterwards, but you must restart the app to change modes.
Refer to the Console for feedback and status.
Use the Preview to adjust your device location and suitability of the data captured, especially for the body tracking result.
Start the recording from the GUI.
Starting a recording will stop the preview.
The recording takes a few seconds to warm up; check the output between recordings and adjust your timing accordingly.
Stop the recording from the GUI.
Close the GUI window with the X to shut everything down properly.
Recordings are saved automatically, named with the date and time, to the same folder as the app.
These recordings are multi-track MKV movie files containing colour, depth and the device configuration.
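If you ever need to script a capture outside the packaged Recording app, the upstream pyKinectAzure recording example looks roughly like the sketch below. The record and record_filepath parameters are upstream names, not the packaged app's interface, so verify them against the module you have:

```python
import pykinect_azure as pykinect
from datetime import datetime

pykinect.initialize_libraries()

device_config = pykinect.default_configuration
device_config.depth_mode = pykinect.K4A_DEPTH_MODE_NFOV_UNBINNED

# Mirror the packaged app's behaviour: name the recording with the date and time.
filename = datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".mkv"

# record / record_filepath follow the upstream pyKinectAzure examples.
device = pykinect.start_device(config=device_config, record=True, record_filepath=filename)

for _ in range(150):  # ~5 seconds at 30 fps
    device.update()   # each capture is written into the MKV

device.close()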
For recordings with very large file sizes, use a tool to split them into chunks first! Computers can easily run out of memory when extracting data.
Microsoft recommends using MKVToolNix:
Use the portable version for ease of use.
Open mkvtoolnix-gui.exe.
Open the Multiplexer tab.
In the Input sub-tab, use [Add Source Files] (bottom) to add your recordings. Verify that you see multiple tracks (COLOR, DEPTH etc.).
Use the Output sub-tab's Splitting menu.
Split mode: After output duration
Set the duration in seconds, e.g. 2s or 10s.
Adjust destination file path + name.
Add to job queue.
In the Job Queue tab, use Job Queue > Start all pending jobs.
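The same split can be scripted with mkvmerge, the command-line tool bundled with MKVToolNix, skipping the GUI steps above. A minimal sketch; the filenames and 10-second chunk length are placeholders:

```python
import subprocess

# Split recording.mkv into ~10-second chunks; mkvmerge numbers the
# outputs recording-split-001.mkv, recording-split-002.mkv, and so on.
subprocess.run(
    ["mkvmerge", "-o", "recording-split.mkv", "--split", "10s", "recording.mkv"],
    check=True,
)
```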
The Get Data app runs through the following process:
Input the recording file (in the same folder) and the start frame.
The recording plays through to the provided start frame.
From the start frame on, each frame is processed into body-tracking and point-cloud data (sketched in code after the list of windows below).
Throughout the process, the following windows are generated:
Input prompts.
Console that provides feedback and status messages.
Small GUI for controlling the processing.
Preview of the extraction.
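Under the hood, the extraction loop resembles the sketch below, based on the upstream pyKinectAzure playback examples. start_playback, playback.update and the capture argument to the body tracker are upstream names, and the packaged app is a modified build, so treat this as orientation rather than the app's exact code:

```python
import pykinect_azure as pykinect

pykinect.initialize_libraries(track_body=True)

# start_playback / playback.update follow the upstream pyKinectAzure
# playback examples; the packaged (modified) build may differ.
playback = pykinect.start_playback("recording.mkv")
body_tracker = pykinect.start_body_tracker()

START_FRAME = 60  # e.g. the 2-second mark at 30 fps
frame = 0
while True:
    ret, capture = playback.update()
    if not ret:
        break                      # end of the recording
    frame += 1
    if frame < START_FRAME:
        continue                   # play through to the start frame
    body_frame = body_tracker.update(capture=capture)
    # ...append this frame's joints to the CSV and points to the E57...
```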
Ensure the recording file is in the same folder as the Get Data app.
Run the app.
Use the first window to input details for the extraction:
Browse for your recording.
Input a starting frame as an integer; the video should be at 30 frames per second. For example, to start at the 2-second mark, multiply 2 × 30 to start at frame 60.
If the file or frame is invalid, you will be prompted again. (Note: a frame number beyond the end of your recording still counts as valid; the extraction will simply never reach it.)
A preview of the recording will launch and advance to your starting frame, where extraction will begin.
Refer to the Console for feedback and status.
Use the GUI to advance one frame at a time, or turn on/off auto advance.
Temporal smoothing can be applied, where the body-tracking result is smoothed with reference to previous frames. This helps with jittery data, but not where the body is detected incorrectly altogether (a post-hoc alternative is sketched after these steps).
Close the GUI window with the X to save and shut everything down properly.
Extracted data is saved automatically to the same folder, named after the initial recording: filename + bodyframes.csv and filename + pointcloud.e57.
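If the built-in smoothing is not enough, the saved CSV can also be smoothed after the fact. Below is a minimal pandas sketch; the column names (frame, joint, x, y, z) and the input filename are assumptions, so check the header of your own bodyframes.csv first:

```python
import pandas as pd

# Column names (frame, joint, x, y, z) and the filename are assumptions;
# check the header of your own bodyframes.csv before running this.
df = pd.read_csv("recording_bodyframes.csv")

# Exponentially weighted average per joint across frames: a simple form
# of temporal smoothing that damps jitter, but cannot repair frames
# where the body was mis-detected outright.
for axis in ("x", "y", "z"):
    df[axis] = df.groupby("joint")[axis].transform(lambda s: s.ewm(alpha=0.5).mean())

df.to_csv("recording_bodyframes_smoothed.csv", index=False)
```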
Try the alternative connection method for power.
Turn the Kinect on/off at the power source, or un-plug/re-plug it.
The preview's aspect ratio can be freely adjusted and is independent of the extracted data.
You should be able to see the body-tracking result from the Recording app. Certain clothing can throw off body tracking, but our understanding of exactly why is inconclusive; consider clothing with a clearer silhouette, the capture angle and any occlusion.
Other Issues
Come speak to NExT Lab!