Segmentation

The first stage in the processing pipeline processes the incoming video to find the segments of each image that correspond to the waveform of interest, ready for analysis. We achieve this using Ogglebox’s real-time region-based background extraction library, which compensates for noise and illumination variation across the video images.

Figure: input image frame from the video camera, and the same frame after segmentation.
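
As a rough illustration of this step (not Ogglebox’s actual API, which is not shown here), the sketch below uses OpenCV’s MOG2 background subtractor as a stand-in: an adaptive per-pixel background model absorbs gradual illumination change and sensor noise, leaving a foreground mask of the drawn waveform. The function name and parameter values are illustrative assumptions.

```python
# Illustrative sketch only: OpenCV's MOG2 subtractor stands in for
# Ogglebox's region-based background extraction library.
import cv2

def segment_frames(video_path: str):
    """Yield (frame, foreground_mask) pairs for each frame of the video."""
    capture = cv2.VideoCapture(video_path)
    # MOG2 adapts a per-pixel Gaussian mixture model over time, giving some
    # robustness to gradual illumination change and sensor noise.
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=200, varThreshold=25, detectShadows=False)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Morphological opening removes small speckle noise from the mask.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        yield frame, mask
    capture.release()
```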

Recognition and Tracking

To extract the waveform, we adopt an approach known as ‘tracking by recognition’. In each video frame, Ogglebox’s oriented line extraction algorithm is used to identify the vertical end stops across multiple possible poses of the head camera relative to the whiteboard. This allows us to compensate for the changing pose of the whiteboard as the user moves their head (and hence the video camera) around. Once the end stops are found, it is relatively simple to extract a coarse representation of the waveform the user has drawn between these bounds.

Figure: waveform extracted at coarse resolution.
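
To make the final extraction step concrete, the following sketch assumes the two vertical end stops have already been located (e.g. by an oriented line detector) and the ink has been segmented into a binary mask; it then samples a coarse 1-D waveform between the bounds by averaging the ink rows in each sampled column. All names and parameters are hypothetical, not taken from Ogglebox’s library.

```python
# Illustrative sketch only: coarse waveform sampling between two
# already-detected vertical end stops. Names and parameters are hypothetical.
import numpy as np

def extract_coarse_waveform(ink_mask: np.ndarray,
                            x_left: int, x_right: int,
                            n_samples: int = 64) -> np.ndarray:
    """Return a coarse 1-D waveform sampled between two vertical bounds.

    ink_mask  -- binary image (H x W), non-zero where ink was detected
    x_left    -- column of the left vertical end stop
    x_right   -- column of the right vertical end stop
    n_samples -- number of evenly spaced columns to sample
    """
    height = ink_mask.shape[0]
    columns = np.linspace(x_left, x_right, n_samples).astype(int)
    samples = np.full(n_samples, np.nan)
    for i, x in enumerate(columns):
        rows = np.flatnonzero(ink_mask[:, x])
        if rows.size:
            # Take the mean ink row in this column as the waveform value;
            # flip so larger values mean "higher" on the whiteboard.
            samples[i] = height - rows.mean()
    # Fill any empty columns by linear interpolation between neighbours.
    valid = ~np.isnan(samples)
    if valid.any():
        samples = np.interp(np.arange(n_samples),
                            np.flatnonzero(valid), samples[valid])
    return samples
```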