
TalenTrack

1 Preface

TalenTrack is a markerless 3D body pose tracking system based on deep learning algorithms and image analysis only. It requires no markers, transmitters or other equipment to be carried by the tracked persons, and no calibration of the individual bodies. Tracking is done in real time by a number of cameras with overlapping fields of view: the system identifies persons in the video, recognizes the human body and estimates the positions of its joints. These joint positions are then located in 3D space and sent to a graphics engine.

1.2 System Specifications

Number of persons tracked:

  • up to 4 persons

Base area of tracking:

  • 5x3 meters (dependent on camera setup)

Tracked body parts (position of main joints / nucleus of):

  • Arms: shoulder, elbow, wrist

  • Legs: hip, knee, ankle

  • Torso: pelvis, hip center

  • Head: general position

Tracking data format (UDP stream to one or multiple graphics engines):

  • Proprietary TalenTrack Sender for all joints in one stream

  • TrackMen Camera sender for one single joint per sender

  • FreeD sender for one single joint per sender

Tracking data delay (depends on engine components and application):

  • around 1 frame if each TalenTrack camera has its own engine

  • around 3 frames if 3 TalenTrack cameras are used per engine

2 Hardware Setup

2.1 Components and Wiring

TalenTrack is available in two setup configurations:

  1. Multi Machine Setup: Multiple cameras, each with one engine to process its data. This setup provides the lowest tracking data delay of only one frame

  2. Single Machine Setup: Three cameras connected to a single engine that processes all the data, resulting in a tracking data delay three times higher than in the multi machine setup. Two or three single machine setups can be combined to use six or nine cameras for TalenTrack, which enlarges the tracking area

The basic components remain the same for both setups but the network setup will be different, as explained in the dedicated sections below.

2.1.1 Workstation

The workstation requirements can be found in the System Requirements page.

2.1.2 Cameras

The tracking cameras are provided by Pixotope. They use the following connections:

  • GigE Vision protocol ethernet video connection

  • Power over Ethernet (PoE) power supply

2.1.3 Software

TalenTrack runs in a dedicated Linux environment on dedicated machines only. The installation of the operating system (OS) and programs, as well as the software configuration, is performed by Pixotope support before on-site installation.

The setup can also be performed remotely. In that case, the user performs the guided OS installation following a dedicated manual; this enables the engine to start up the Linux environment and enables remote access for tech support.

2.2 Multi Machine Setup

The multi machine setup employs one workstation per tracking camera, reducing the delay caused by image acquisition and processing to one frame.

2.2.1 Controls

The workstations are interconnected via LAN. Workstation 1 is defined as the Master, the others are Clients. The Master controls the other workstations using application menu commands and may use the remote control software NoMachine or a VNC viewer for advanced work on the client workstations.

Thus, Workstation 1 is the only computer that may need a display and mouse/keyboard.

2.2.2 Video and Reference

Workstation 1 receives an undelayed video input from one of the studio cameras. It is used both for pre-visualization and as a reference/genlock signal. The generally specified video card accepts video formats up to 1080p.

2.2.3 Network - multi machine setup

The network setup of the system comprises two connections for Client engines and three connections for the Master engine.

Illustration 1: Multi machine setup diagram

  1. Connection to the Graphics Network: This connection is exclusive to Workstation 1 and is used to:

    1. Send the talent tracking data to the graphics engine, and

    2. Receive the TrackMen camera tracking data for internal visualization tools.

  2. Connection to the Tracking Cameras: A gigabit ethernet direct connection between one workstation and the respective tracking camera, powered by a PoE injector.

  3. Connection between the Engines: If multiple TalenTrack engines are being used, they will have a dedicated network to exchange data.

2.3 Single Machine Setup

The single machine setup uses only one workstation for every three tracking cameras. This triples the delay compared to the multi machine setup.

Because three video streams must be transmitted over one cable, the single machine setup variants listed below use 10 Gbit ethernet connections. In the diagrams below, the following colors are used for the connection types:

  1. Red: 10 Gbit (RJ45)

  2. Blue: 1 Gbit (RJ45)

  3. Green: SDI (BNC)

2.3.1 Video and Reference

The workstation receives an undelayed video input from one of the studio cameras. It is used both for pre-visualization and as a reference/genlock signal. The generally specified video card accepts video formats up to 1080p.

2.3.2 Network - single machine setup


Illustration 2: Single machine setup diagram for 3 cameras

The network setup of the system comprises:

  1. Connection to the Tracking Network: This connection is used to:

    1. Send the talent tracking data to the graphics engine, and

    2. Receive the TrackMen camera tracking data for internal visualization tools.

  2. Connection to the Tracking Cameras:

    1. A 10 Gbit ethernet connection on the workstation, connecting to a 10 Gbit PoE switch which connects and powers all cameras.

2.3.3 Six-Cam Single Machine Setup

Two single machine setups with three TalenTrack cameras each can be combined. The connection is established with a 1 Gbit switch between the two workstations. Each workstation needs a 10 Gbit PoE switch to connect to its three cameras, and a 10 Gbit network port.


Illustration 3: Six-Cam Single Machine setup diagram

2.3.4 Nine-Cam Single Machine Setup

Three single machine setups with three TalenTrack cameras each can be combined. In contrast to the six-cam setup, the nine-cam setup needs a 10 Gbit capable switch between the three workstations. Each workstation needs a 10 Gbit PoE switch to connect to its three cameras, and a 10 Gbit network port. In addition, the server workstation needs a second 10 Gbit port to connect to the 10 Gbit tracking network.


Illustration 4: Nine-Cam Single Machine setup diagram

2.4 Camera Installation

A minimum of three cameras with different perspectives is required to create sufficient 3D data to locate a person's joints in 3D space.

The number of cameras needed is determined by the area that needs to be covered and by the application of the system. For example, a precise position estimate of a person's nucleus is best calculated from a perspective high above, while individual joints are resolved at higher precision in a smaller area from a shallower angle.

Their positioning should adhere to the general rules below, unless specified otherwise for the individual use case. The actual lens settings and exact orientation will be adjusted in 4.2 Camera Settings with the interface up and running.

  1. All cameras must have an overlapping field of view with neighboring cameras

  2. All cameras must be able to fully see a dedicated calibration target at the same time as a predefined central “Master” camera

Illustration 3: Exemplary camera distribution from above, indicating overlapping fields of view

  3. The cameras should look down on the target area at an angle of around 30°

  4. The cameras should be at a height of around 4 meters

  5. The cameras should have a distance of around 6 m to the target area center

Illustration 4: General advice for camera installation height
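These recommendations are roughly consistent with simple geometry (a generic relation, not taken from the manual): for a mounting height h above the floor and a horizontal distance d to the target area center, the downward viewing angle is approximately

$$\theta \approx \arctan\!\left(\frac{h}{d}\right), \qquad \arctan\!\left(\frac{4\ \text{m}}{6\ \text{m}}\right) \approx 34^{\circ},$$

close to the advised 30° (slightly shallower when measured relative to the talent rather than the floor). Conversely, for a given mounting height, d ≈ h / tan(30°) ≈ 1.7 · h gives a starting point for the camera distance.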

Illustration 5: TalenTrack cameras lowered on poles

3 Interface

3.1 System Interface

3.1.1 Overview

Illustration 6: TalenTrack interface of the Master engine

Task bar: The task bar shows a number of quick access menus and symbols. Most notable is the Tracking menu next to the openSUSE start menu on the far left, where the key applications for the tracking system are quickly accessible.

Illustration 7: Task bar with open Tracking menu

System Log: The System Log displays messages related to the Linux system and to all tracking applications. The majority of these messages are for debugging purposes only, but support may occasionally request information from them.

AppManager: The AppManager lists tracking-related programs and offers the correct way to stop and restart them. It can be brought up by clicking the colorful camera symbol in the right part of the task bar. Programs listed in green are currently running, programs listed in orange are currently (re-)starting, and programs listed in red are currently off.

Illustration 8: AppManager with TalenTrack running and “Show All Apps” active (right)

To show programs that are not currently running, right-click the AppManager icon and check the Show All Apps option. Depending on what programs have been started, a different combination of apps may show green.

The Apps can be recognized as follows:

  • License Manager: Checks license validity

  • Allow remote access: Registers at a server in Cologne and allows a supporter to log in remotely via NoMachine

  • System Log Window: Displays messages related to the Linux system and to all tracking applications

  • TalenTrack Lens Calibration: Tool for calibrating the optical characteristics of the individual tracking cameras

  • TalenTrack Client: Observer running on the local machine, processing the feed of one tracking camera

  • TalenTrack Client Group: Master engine program to start the TalenTrack Clients on all engines

  • TalenTrack GUI front end: TalenTrack main configuration window, responsible for timing and sending the data

  • TalenTrack Offset Calculator: Tool for calibrating the positions of the cameras in the studio

  • TalenTrack Server: Observer collecting the data from all Clients and merging it, to forward the information to the TalenTrack front end

  • TalenTrack Streamer: Streams the camera locally connected to the Master engine, for calibration purposes outside of the actual tracking

  • TalenTrack Streamer Group: Master engine program to start the TalenTrack Streamer on all engines

  • VNC Server: Allows remote access through the LAN via a VNC viewer

Observer: There are two Observer versions, the Server and the Client, as outlined above.

Observer Client: The Observer Client, running on both the Client and the Master engines, processes the input of an individual tracking camera and generates 2D information from it. It shows the live video in the Camera’s view tab and draws a wireframe skeleton graphic on each person it recognizes.

Illustration 9: Observer Client with open Camera’s view

Illustration 10: Observer Server

Observer Server: The Observer Server collects all Observer Client data streams and combines the 2D information about the recognized persons into 3D information, which it forwards to the TalenTrack GUI front end.

The Server shows all incoming tracking camera videos, including a wireframe skeleton for recognized persons and a grid indicating the coordinate system of the system. Clicking a video in the list on the left selects it for the large view in the main part of the window.

TalenTrack GUI front end: The front end is the configuration program that takes the 3D joint data of the tracked persons and sends it to the graphics engine, and it also provides some pre-visualization tools.

Illustration 11: TalenTrack GUI Front End

TalenTrack front end video: This video is an input from a studio camera. It can be used to quickly check whether the signal is available as the reference source. If the studio camera is tracked by a TrackMen camera tracking system, the view can also paint a wireframe skeleton in 3D space for verifying the generated talent tracking data.

Illustration 12: TalenTrack Front End Video

3.1.2 Accessing client engines

In a Multi-Machine setup, the Observer Clients run on dedicated computers, and some early setup configurations or troubleshooting may require accessing these machines individually.

Illustration 13: Find NoMachine in the Menu and open NoMachine window

If the clients do not have their own displays and HID, they can be accessed by remote desktop connection from a machine in the same LAN via the NoMachine software:

  1. Open the Menu and enter “nomachine” to quickly find the software, and launch it.

  2. In the window that opens, all local connections will be listed with their hostnames. Double-click to access the desired machine.

  3. In the login prompt, enter the following credentials:

User: tracking

Password: tmg51105k

4 Configuration and Calibration

4.1 Setting the reference input

Illustration 14: Access a client by double-clicking it in the list

TalenTrack is meant for professional live broadcast and event environments and is thus triggered by a video reference source. To set the video input:

  1. Open the TalenTrack Front GUI window

  2. Scroll the menu on the right to the Camera section

  3. Open the Video Mode dropdown menu and select the correct video format of the input signal from the list

  4. Check the TalenTrack Video for live video

4.2 Camera Settings

Illustration 15: ObserverBU window with all camera views

Before calibration, the cameras should be adjusted to see the desired picture detail of the tracking area in good focus. After the 4.3.1 Camera calibration, the zoom or focus of the camera can no longer be changed without having to repeat that step. After 4.3.2 Spatial calibration, the position and orientation of the tracking camera may no longer be moved.

  1. Launch TalenTrack from the Tracking Menu and switch to the ObserverBU window

  2. All camera views are displayed on the left. Click the view of each camera to bring it to the large video window and perform the following steps:

  3. Adjust zoom and/or focus settings of the camera: loosen the two respective screws on the lens and carefully pull the rings until the field of view covers the desired part of the studio and the movement area of the talent is in focus

Illustration 16: Lens settings, from left to right: focus (1), iris (2), zoom (3)

  1. When content with the field of view and focus settings, carefully fasten the two screws

  2. Adjust the brightness: Loosen the iris screw and set the iris to an appropriate setting. The brightness can be manipulated to a great extent in the software, so the iris should be fairly open, to allow for a wide range

  3. When content with the iris settings, carefully fasten the screw

  4. Switch to the ObserverClientBU window. This may require a remote login to a client engine, see 3.1.2 Accessing client engines

  5. Click the gear icon in the upper right corner of the ObserverClientBU window to bring up the Settings.

  6. In the Settings window, switch to the Cam X tab. Adjust Exposure time (keep below 4000us to avoid motion blur) and Gain (keep below 15 to avoid image noise) until the video has a good overall brightness, with both dark and bright areas showing as much detail as possible

  7. Click Save

Note: It is not necessarily desirable to set the lens to the widest angle possible, as precision increases when a person fills more pixels in the video.

Illustration 17: Adjust brightness settings in the ObserverClientBU cam settings

4.3 Calibration

TalenTrack system calibration comprises two major steps, which are:

  • Camera lens calibration, which determines the respective field of view, distortion and other projection parameters of the image, enabling the system to perform image processing on it.

  • Spatial calibration, which aligns the coordinate system of TalenTrack with that of the virtual studio and also locates the cameras within that system, so their individual 2D data can be combined into 3D data.

Illustration 18: Select camera to calibrate

4.3.1 Camera calibration

Illustration 19: TalenTrack Calibration window; activate k0 through k3 and select Type: VioTrack R Slave

For the calibration of the individual camera lens characteristics, follow the steps below for each camera. These calibrations remain valid for as long as the mounting of the lens on the camera body remains untouched and the zoom and focus settings stay the same. They are independent of the calibrations done in 4.3.2 Spatial calibration but are a prerequisite for performing them.

Lens calibration

Illustration 20: Calibration board recognition

1. Open the Tracking menu and start TalenTrack Calibration

2. A window will pop up and ask which camera the calibration should be done for. Select one camera and press OK

3. Lens calibration will start, showing the live video of the tracking camera

4. Depending on lens type and/or zoom/focus settings, the number of necessary distortion parameters that are part of the calculation may vary. If not sure about the best setting for the particular setup, check k0, k1, k2 and k3 in the Distortion parameters section in the lower part of the window.

5. Select the calibration Type on the left: VioTrack R Slave

If the selection for Distortion parameters is not visible in the main window, it may have to be brought up in the settings. To do so, open the Settings... window, enable Show expert settings and switch to the Main tab. Find the Application mode, switch it to Full and restart the lens calibration program to apply.

6. Hold the small calibration board in the picture. If it is recognized correctly, blue and red dots and green lines are drawn on it

7. Take samples of the pattern by clicking Add Sample. The interface will draw a permanent green dot for each point it has recognized. Proceed to take samples with varying board positions until the full image is homogeneously covered in green dots. In doing so, mind the following best practices and the video tutorial:

  • Hold the board at varying angles towards the camera, to get proper depth information (see Illustration 22 for a good example of angling)

  • Hold the board at a distance from the camera at which it covers at least around 1/12 of the video image, to get proper information from the lines it provides

  • Don’t move the board while taking a sample, to avoid motion blur

  • Avoid sharp light spots or shadows on the board

  • Take at least 12 samples

Illustration 21: Example of taking samples

8. Check result: The Root Mean Square Error (RMSE) and Maximum error of the calculation are being displayed in the lower right corner of the window. The RMSE should be below 1 and the Maximum below 5.

9. If content with the result, press Save. If not content, revise the lens calibration by closing the program and starting over.

Illustration 22: Error display of the lens calibration

The calibration errors vary per lens setting and do not necessarily indicate the quality of the calibration. However, unusually high errors indicate that something went wrong during calibration (review the best practices in step 7) or that the distortion parameters setting should be revised (review step 4).
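For orientation, the RMSE displayed here is, under the usual definition for this type of calibration (an assumption, as the manual does not spell the formula out), the root mean square of the reprojection errors of all sampled calibration points, and the Maximum is the largest single error:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\lVert \hat{p}_i - p_i\rVert^{2}}, \qquad \mathrm{Maximum} = \max_{i}\,\lVert \hat{p}_i - p_i\rVert,$$

where $p_i$ is the detected image position of calibration point $i$, $\hat{p}_i$ its position reprojected through the calibrated lens model, and $N$ the number of sampled points. The thresholds above (RMSE below 1, Maximum below 5) would then be in pixels.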

4.3.2 Spatial calibration

Illustration 23: Offset calculation camera choice. Camera 2 is the Master camera in this example and thus not listed.

For the position calibration of the cameras, please follow the steps below. For an accurate 3D alignment of the talent tracking data, all cameras have to be calibrated towards the selected Master camera, and the cameras must not be moved afterwards. If individual cameras have been moved, their calibration may also be revised individually, without having to touch the other cameras. If the moved camera was the Master camera, however, all cameras have to be recalibrated towards it.

Offset calibration

Illustration 24: Offset Calculator window

  1. Make sure to have a valid lens calibration for all cameras, see 4.3.1 Camera calibration

  2. Open the Tracking menu and start TalenTrack Offset Calculator

  3. A window will pop up and ask which camera the calibration should be performed for. Select one camera and press OK. The selected camera will always be calibrated towards the Master camera, hence the Master camera itself is not listed

  4. The Offset Calculator will start, showing the live video of both the Master camera and the selected camera side by side

The offset calculation procedure is different from the lens calibration, even though parts of the GUI look similar. The goal here, however, is to calculate the camera position in relation to a pattern of known appearance and size. It is not the intention to analyze the field of view and cover the whole video with green dots as in the lens calibration.

  1. Put the large calibration board in view of both cameras. If the system is able to process it, green lines and red dots will be painted on it and green checkmarks will be displayed below each video window

  2. Take samples of the pattern by clicking Add Sample with varying perspectives on the board. The interface will draw a permanent green dot for each point it has recognized. Move the board around the area of the overlapping fields of view of the cameras. In doing so, mind the following best practices:

  • Vary the board position and angle as much as possible

  • The board should be static for taking a sample

  • Avoid sharp light spots or shadows on the board

  • Take at least 8 samples

Illustration 25: Offset calibration with proper board recognition and 8 samples taken

  1. When enough samples have been taken, click Get Offset Transformation and review the result: color-coded crosses will be displayed on the circles of the pattern. The crosses should be blue to green. Yellow and orange indicate a less precise result and red is the worst

  2. If the result shows a lot of orange and red crosses, try refining by taking additional samples from new perspectives. After these samples have been taken, click Get Offset Transformation again. If the result does not improve, start over from step 1. If a second run is still unsuccessful, it is likely that one or both lens calibrations should be revised

Illustration 26: Color code of the crosses on the calibration board after using Get Offset Transformation

  1. When content with the result, click Write for Observer

  2. Finish the process by closing the program, or continue with the next step, Coordinate system, before closing!

Coordinate system

Illustration 27: Aligning the coordinate system with a measurable real marking and measuring the offset.

In a single offset calibration run (see Offset calibration), the origin and orientation of the tracking system are defined by setting the Master Transformation. To do so:

  1. Do not close the Offset Calculator after an offset calibration

  2. Place the calibration board in a way that allows the Master Camera to derive its own position from it. This will define the origin and orientation of the TalenTrack coordinate system, which eventually has to match the camera tracking, so the tracking data appears in the correct location in the final composition

  • The three rings indicate the orientation: the axis through the two rings further apart from each other (indicated by a red line in the Offset Calculator video) will become the X axis. The axis through the other two rings becomes the Y axis (blue line)

  • The origin of the TalenTrack coordinate system is the lower left dot on the pattern, when the rings form an L (see Illustration 27)

  • The coordinate system can be moved in a later step in the ObserverBU but this transformation must rely on good measurements towards the camera tracking origin

Best practice: The board must be in the tracking cameras’ views, but it is possible that the virtual studio origin is not. Make sure to put the board on a point that can be found later on and that can be measured from. Examples are edges of stages, seams between floor coverings, walls or permanent markings on the ground, for both the origin and one orientation axis.

  1. When the board is well recognized by the Master, click Write Master Transformation

Illustration 28: Click Write Master Transformation

  1. Close the Offset Calculator and launch TalenTrack from the Tracking menu

  2. Bring up the ObserverBU window. The origin and orientation of the system will be indicated by a green grid and colored coordinate system in the individual camera views in the right panel. Check plausibility for all cameras to verify their offset calculation results

Illustration 29: Check calculation plausibility in the camera views of the ObserverBU

  1. Open the Preferences menu and select the Transformation option

  2. In the Transformation Editor window, enter the measured distances from the virtual studio origin to the board origin (Graphics to TalenTrack option active) or the other way round (Graphics to TalenTrack option inactive)

  3. Click Save to apply the changes

Illustration 30: Set the transformation to move the TalenTrack origin to the virtual studio origin

Tip: In the example in the picture, the origin was moved from the original dot to the edge of the board, which is easier to align with real objects. It also compensates for the thickness of the board and lowers the origin to the floor:

X: 0.09m (to the left side of the board)

Y: 0.15m (to the bottom of the board)

Z: 0.02m (thickness of the board)
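Conceptually, and assuming this step is a pure translation (the mapping direction depends on the Graphics to TalenTrack option), the Transformation Editor shifts every point by the entered offset vector:

$$p' = p + t, \qquad t = (X,\ Y,\ Z),$$

so that the TalenTrack origin and the virtual studio origin coincide after the shift. In the example above, t is measured along the board edges and its thickness.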

5 Using TalenTrack

5.1 Tracking persons

When started, TalenTrack will automatically recognize persons in view, based on their human appearance. It will then allocate their joints in each camera perspective. When a joint can be recognized in two or more camera views, the system will calculate its position in 3D space, in relation to the system’s origin, and connect multiple joints to a skeleton.

The system does not need to recognize the whole person but will also construct a skeleton for e.g. the upper body only.

5.1.1 Starting TalenTrack

Illustration 31: Starting TalenTrack from the Tracking menu

TalenTrack can be started in two ways:

  • Selecting TalenTrack from the Tracking menu or

  • Setting TalenTrack to autostart with the system launch by dragging and dropping the menu entry into the Autostart folder.

Both ways will automatically launch all programs necessary on all machines in the system, which will show up in the Master Engine interface as:

TalenTrack Client

TalenTrack Client Group

TalenTrack GUI front end

TalenTrack Server

Illustration 32: TalenTrack programs running, as seen in the App Manager

5.1.2 Reviewing the tracking

Illustration 33: TalenTrack tracking data visualizations

The tracking result can be pre-visualized in three ways:

1. In the ObserverBU: Select a camera view from the list on the left side of the ObserverBU window to bring it to the large view. A recognized person will have a colored wireframe skeleton drawn onto it. If the camera is contributing to the 3D position of the skeleton, the body will also be outlined by a bounding box rectangle.

2. In the TalenTrack Front GUI: The TalenTrack Front GUI shows a window with the coordinate system of colored axes and a grid for the virtual floor in it. The view can be navigated using the mouse and the sliders around it. Every skeleton with 3D position data will be drawn in this window, in correct relation to the coordinate system.

3. In the TalenTrack Video: The TalenTrack Video shows the video which is being used as the reference source for the system. It can be combined with corresponding TrackMen camera tracking data to see the final result as a video overlay, taking the camera position and perspective for the graphics composition into account. To receive the tracking data:

3.1 Create a sender in the camera tracking engine that sends to the graphics network IP address of the TalenTrack Master Engine.

3.2 Open the TalenTrack Front GUI and scroll the menu on the right to the Camera Receiver section.

3.3 Enter the port specified in the camera tracking sender.

Illustration 34: Camera Receiver port setting

5.1.3 Selecting an ID for a tracked person

Illustration 35: Selecting an ID for a person in the ObserverBU

Each person entering the tracking area will automatically be labeled with an ID which will be transmitted in the tracking data stream. This ID will always be the next available one. The system does not recognize and save individual bodies beyond the live tracking. A person re-entering the tracking area may thus receive a different ID than before. However, this behavior also enables the system to reliably recognize any person, regardless of their clothing or other factors defining their appearance.

To assign a selected ID for a person for dedicated applications:

1. Open the ObserverBU window.

2. Select a camera view that recognizes the person (skeleton drawn, rectangle around the person).

3. Click the rectangle area and select an ID from the dropdown menu

5.2 Sender settings

Illustration 36: TalenTrack sender settings

Sending the person tracking data to a graphics engine is handled by the TalenTrack GUI Front End. Depending on the application of the data and the implementation on the graphics engine side, this can be done in three different ways, as explained below. Multiple senders of various protocols can be used at the same time and send to multiple graphics engines.

5.2.1 TalenTrack sender

The native sender of TalenTrack sends the complete set of all skeletons and their joints to the specified receiving engine. To set up a sender (see the sketch after these steps for a quick receive-side check):

1. Set the Pan and Tilt axis to match the graphics engine’s coordinate system

2. Set the length unit used in the graphics engine’s coordinate system

3. Set the IP address of the graphics engine as Host and define a Port to send to

4. Optionally, define a maximum number of skeletons to be allowed to be sent to the graphics engine. To set no limit, enter 0
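Because the TalenTrack payload format is proprietary and not documented here, the following minimal sketch (port number and interface are example values, not documented defaults) only verifies on the receiving side that UDP packets from the sender actually arrive; it does not decode the joint data:

# Minimal UDP listener to confirm that TalenTrack sender packets arrive.
# The port must match the Port configured in the TalenTrack sender.
import socket

LISTEN_PORT = 4545  # example value only

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LISTEN_PORT))  # listen on all interfaces
print(f"Waiting for TalenTrack packets on UDP port {LISTEN_PORT} ...")

while True:
    data, addr = sock.recvfrom(65535)  # large enough for any UDP payload
    # The payload layout is proprietary, so only report size and origin.
    print(f"Received {len(data)} bytes from {addr[0]}:{addr[1]}")

If packets show up here but nothing arrives in the graphics engine, the network path is fine and the Host/Port configuration or the receiving implementation should be checked.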

5.2.3 TrackMen camera sender

Illustration 37: Camera Tracking sender and corresponding Camera Tracking Selector

If a graphics engine does not have an interface for the TalenTrack sender, the system can also send the positions of single joints as camera tracking data, since most graphics engines can receive this format. To set up a TrackMen camera tracking sender:

1. Set the IP address of the graphics engine as Host and define a Port to send to in the Camera Tracking Sender section.

2. Set the Options for data format to match the format expected by the graphics engine.

3. Select up to four streams to be transmitted per sender by choosing a Skeleton and the respective Joint position from the dropdown lists in the Camera Tracking Selector. These may be e.g. the two ankles of a skeleton, to know the position of a person’s feet, or the pelvis, to have a good estimation of the position of the person in the room.

A camera tracking sender naturally transmits only a single position. Thus, the system sends one stream per selection. The first selection is sent to the Port defined in the Camera Tracking Sender section; the following selections are sent to the subsequent port numbers. If, for example, Port 4548 is set, the top selection is sent to that port, the next one below it to port 4549, the next to 4550, and the last to 4551.
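Expressed as a short sketch (the base port 4548 and the joint choices are just the example values from above), each selection slot sends to the base port plus its slot index:

# Destination port per Camera Tracking Selector slot:
# slot 0 (top selection) -> base port, slot 1 -> base port + 1, and so on.
BASE_PORT = 4548  # the Port set in the Camera Tracking Sender section

selections = ["skeleton 1 / left ankle", "skeleton 1 / right ankle",
              "skeleton 1 / pelvis", "skeleton 2 / pelvis"]  # example choices

for slot, joint in enumerate(selections):
    print(f"{joint} -> UDP port {BASE_PORT + slot}")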

5.2.4 FreeD camera sender

Illustration 38: FreeD Sender

The FreeD protocol is widely implemented in various graphics engines as well and can be used as an alternative to the TrackMen camera sender. The sender is set up the same as the Camera Tracking Sender in 5.2.3 TrackMen camera sender but the sections are labeled FreeD Sender and FreeD Selector.

The ---ZOOM--- and ---FOCUS--- sections are part of the general sender module and can be ignored for this application.
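FreeD is a publicly documented protocol, so the single-joint streams can be inspected with a generic decoder. The sketch below assumes the standard 29-byte FreeD Type D1 message layout; this layout is not confirmed by this manual, so verify it against the actual stream. The joint position would then appear in the X/Y/Z fields:

# Sketch of a receiver for standard FreeD Type D1 packets (29 bytes each).
import socket

def s24(b):
    # Interpret 3 bytes as a big-endian signed 24-bit integer.
    value = int.from_bytes(b, "big")
    return value - (1 << 24) if value & 0x800000 else value

def decode_d1(packet):
    if len(packet) < 29 or packet[0] != 0xD1:
        raise ValueError("not a FreeD D1 packet")
    return {
        "camera_id": packet[1],
        "pan_deg":  s24(packet[2:5])  / 32768.0,   # angles in 1/32768 degree
        "tilt_deg": s24(packet[5:8])  / 32768.0,
        "roll_deg": s24(packet[8:11]) / 32768.0,
        "x_mm": s24(packet[11:14]) / 64.0,         # positions in 1/64 mm
        "y_mm": s24(packet[14:17]) / 64.0,
        "z_mm": s24(packet[17:20]) / 64.0,
        "zoom":  int.from_bytes(packet[20:23], "big"),
        "focus": int.from_bytes(packet[23:26], "big"),
    }

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 4548))  # example port; use the port set in the FreeD Sender
while True:
    data, _ = sock.recvfrom(1024)
    print(decode_d1(data))

Remember that with multiple selections, each joint arrives on its own consecutive port, as described in 5.2.3 TrackMen camera sender.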

5.2.5 Tracking data delay

Illustration 39: TalenTrack Front delay setting with (forced) “Timestamp too old” error in the System log

The tracking data transmission can be delayed if necessary to match rendering times and video throughput of the overall system. To adjust the tracking delay:

  1. Open the TalenTrack Front GUI and scroll the left menu to the Trigger Sender section

  2. Adjust the Delay (us) value to match the needs of the overall system

Note that this value cannot be lowered below a minimum value, to allow allocation of processing time for the system’s calculations!

  1. Check the system under on-air conditions, with a sufficient number of persons in the tracking area. Look for “Timestamp too old” error messages in the System log. If these appear, the Delay (us) has to be raised!
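As a rough conversion between frames and this microsecond value (generic arithmetic, not a value prescribed by the manual):

$$\text{Delay (us)} = \text{frames} \times \frac{10^{6}}{\text{frame rate (fps)}}$$

so one frame corresponds to 20,000 µs at 50 fps and to about 16,667 µs at 60 fps.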

6 Troubleshooting

Tracking camera doesn’t work

If a camera doesn’t connect correctly, its view will remain empty in the ObserverBU. Try restarting the TalenTrack Client Group from the App Manager. If the camera remains disconnected, try rebooting the camera itself by disconnecting the Power over Ethernet connection.

Persons are not recognized

If persons in the tracking area do not appear as skeletons in the TalenTrack Front GUI, or the recognition is unstable, check the various views in the ObserverBU as described in 5.1.2 Reviewing the tracking.

Check whether at least two cameras can recognize a decent number of joints of the same person, or whether the perspective and the person’s position block too many joints.

If multiple cameras detect a skeleton but it does not receive the 3D recognition bounding box, the system may have trouble aligning the joints from multiple views. Check the spatial calibration in 4.3.2 Spatial calibration; Illustration 29 provides a good hint. Cameras may have been accidentally moved, e.g. during work on the lighting, and need re-calibration.

Joints are not reliably recognized

If the joint tracking seems unstable, check the various views in the ObserverBU as described in 5.1.2 Reviewing the tracking.

Check whether the joints can be seen from multiple views in the respective position within the tracking area.

Check whether the person’s appearance has sufficient contrast towards the surroundings in the camera view. Increasing the brightness may help, see 4.2 Camera Settings.

If none of the above applies, but recognition seems unreasonably unstable, the camera calibration may be off. This is usually visible as a change in focus and may be caused by accidentally hitting the camera or by extreme temperature changes. The camera(s) should be re-calibrated as described in 4.3.1 Camera calibration.

Tracking data is dislocated in the composition

If the tracked joints do not appear in the correct position in the virtual space, make sure the coordinate systems of TalenTrack and the camera tracking system align correctly. Check the spatial calibration in 4.3.2 Spatial calibration, especially Illustration 29.

The coordinate system may have been put in the wrong place in the Coordinate system step or even the camera tracking may be incorrect.

Possibly, individual cameras have been accidentally moved, e.g. during work on the lighting, and need re-calibration.

No data sent to the graphics engine, data stream unstable

TalenTrack data generation and transmission is based on the video input reference. Check video input signal for instabilities.

Check 5.2.5 Tracking data delay and make sure the Trigger Sender Delay (us) is high enough not to cause errors. Send a ping to the graphics engine for a basic network test:

1. Open a Terminal Emulator from the Menu.

2. Type and confirm with Enter key:

ping <IP of the graphics engine>

3. If reachable, the terminal should show a timing with a low ms value, otherwise it will display “Destination Host Unreachable”
