Using High-Level Synthesis, software-based algorithms that have data-dependent execution time can be easily accelerated in hardware, with deterministic results, writes Daniele Bagni
Video is rapidly becoming more significant in embedded systems, where the information gathered can be analysed, recorded or otherwise used for real-time control. As a result, the demand for higher quality video content is also rising.
However, image sensors remain prone to digital noise, which can be introduced at the acquisition or transmission stages.
Often the noise is random in nature; a bit-error or glitch in the analogue-to-digital conversion can cause a particularly vexing form of degradation known as ‘impulsive noise’. This is also known as ‘salt and pepper’ noise because it normally appears as either white or black specks in the image (Figure 1).
To combat this and other types of noise, either linear or non-linear spatial filters are often employed, which can identify pixels affected by noise and replace them based on the values of their surrounding pixels.
Linear filters are perhaps the most common and are also referred to as ‘mean filters’, as they replace the noisy pixel with the average value of the surrounding pixels. This is typically implemented as a low-pass operation and can ‘de-noise’ images very quickly. The drawback of this approach is that it often blurs the edges in the image.
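As a minimal sketch of the idea (the function and parameter names here are illustrative, not taken from any particular implementation), a 3×3 mean filter on an 8-bit greyscale image can be written in C as follows, with border pixels simply copied unchanged:

```c
#include <stdint.h>

/* Sketch: 3x3 mean filter on an 8-bit greyscale image.
 * Border pixels are copied unchanged for simplicity. */
void mean_filter_3x3(const uint8_t *src, uint8_t *dst, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (y == 0 || y == height - 1 || x == 0 || x == width - 1) {
                dst[y * width + x] = src[y * width + x]; /* copy border */
                continue;
            }
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)        /* accumulate 3x3 window */
                for (int dx = -1; dx <= 1; dx++)
                    sum += src[(y + dy) * width + (x + dx)];
            dst[y * width + x] = (uint8_t)(sum / 9); /* replace with mean */
        }
    }
}
```

Note how an isolated ‘salt’ pixel of value 255 in a flat region of value 10 is only averaged down to 37, not removed, while genuine edges in the window are smeared in exactly the same way: this is the blurring drawback described above.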
Consequently, in almost all scenarios, a non-linear filter is the preferred option, which is particularly effective at removing impulsive noise.
One popular form of non-linear filtering is the ‘median filter’, which can deliver excellent noise reduction with considerably less edge blurring than linear filters.
Just as with mean filtering, median filtering compares a pixel with its neighbours to decide whether it is noise or representative of the image.
Those identified as noise are replaced but instead of using the mean of the surrounding pixels, the median is used; as the new pixel will have the same value as one of its neighbouring pixels — and not a calculated figure — it results in more natural pixel values, which can be particularly apparent at the edge of the image and thus delivers much sharper images.
In order to calculate the median value for an array of 3×3 pixels (the so-called ‘sliding window’ centred on the noisy pixel), the values must first be sorted into numerical order.
The value in the middle of the sorted list replaces the noisy pixel. A larger ‘neighbourhood’ of pixels surrounding the noisy pixel delivers better results, but requires more pixel values to be sorted into ascending order.
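A straightforward software version of this step might look like the following sketch in C (names such as `median9` are illustrative): the nine window values are copied, sorted with an insertion sort, and the fifth-smallest value, the middle of the sorted list, is returned.

```c
#include <stdint.h>
#include <string.h>

/* Sketch: median of a 3x3 window. The nine pixel values are copied
 * and sorted ascending; the fifth-smallest (index 4) is the median. */
uint8_t median9(const uint8_t win[9])
{
    uint8_t v[9];
    memcpy(v, win, 9);
    for (int i = 1; i < 9; i++) {          /* insertion sort, ascending */
        uint8_t key = v[i];
        int j = i - 1;
        while (j >= 0 && v[j] > key) {
            v[j + 1] = v[j];
            j--;
        }
        v[j + 1] = key;
    }
    return v[4];                           /* middle of the sorted list */
}
```

Run on the same flat region with a 255 ‘salt’ impulse at its centre, the median is 10: the impulse is removed entirely rather than averaged in, and the replacement value is one that already exists among the neighbours.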
This sorting represents the most critical stage in a median filter, one that can consume considerable instruction cycles in a DSP or a media processor.
The process of sorting data sets is common in algorithms and many types of sorting routines are used, such as bubble sort, shell sort, merge sort and quick sort. This last example, quick sort, is, as the name suggests, typically the fastest algorithm for large data sets, while bubble sort is regarded as the simplest.
When they run in software, only one comparison is carried out at a time, making these algorithms inherently iterative.
However, the random nature of the values means the number of iterations needed to sort a given data set can be difficult to determine. Indeed, there is much debate about how to evaluate the complexity of a sorting algorithm, which is probably indicative of the need for an alternative approach.
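The point is easy to see in a bubble sort with the usual early-exit optimisation (a generic sketch, not any specific implementation): the number of passes it makes depends entirely on how disordered the input happens to be.

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch: bubble sort with early exit. The pass count depends on how
 * disordered the input is, so the run time is data-dependent. */
size_t bubble_sort(int *a, size_t n)
{
    size_t passes = 0;
    bool swapped = true;
    while (swapped) {
        swapped = false;
        for (size_t i = 1; i < n; i++) {
            if (a[i - 1] > a[i]) {         /* one comparison at a time */
                int t = a[i - 1];
                a[i - 1] = a[i];
                a[i] = t;
                swapped = true;
            }
        }
        passes++;                          /* varies with the input data */
    }
    return passes;
}
```

An already-sorted array finishes in a single pass, while a reversed array of n values needs n passes; for a real-time video pipeline, that spread in execution time is exactly the problem.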
Image processing in real-time has one overriding requirement: deterministic behaviour. The nature of software-based sorting algorithms makes determinism difficult to attain and as such it represents a good example of why some software algorithms should be ported to hardware.
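In hardware, the data-dependent loop can be replaced by a fixed sorting network, in which every input passes through the same sequence of compare-swap stages regardless of its values. The sketch below (a generic odd-even transposition network, offered as an illustration rather than the article’s exact implementation) sorts the nine window pixels in exactly nine fixed passes; because the comparison count never changes, the latency is deterministic, and an HLS tool can unroll or pipeline the loops into parallel comparator stages.

```c
#include <stdint.h>

static void cswap(uint8_t *a, uint8_t *b)   /* one compare-swap element */
{
    if (*a > *b) { uint8_t t = *a; *a = *b; *b = t; }
}

/* Sketch: median of nine via an odd-even transposition sorting network.
 * Nine fixed passes of compare-swaps sort the window for ANY input,
 * so the number of operations (and the latency) is deterministic. */
uint8_t median9_network(uint8_t v[9])
{
    for (int pass = 0; pass < 9; pass++) {  /* fixed pass count */
        int start = pass & 1;               /* alternate even/odd pairs */
        for (int i = start; i + 1 < 9; i += 2)
            cswap(&v[i], &v[i + 1]);        /* fixed compare-swap stage */
    }
    return v[4];                            /* median after sorting */
}
```

Unlike the software sorts above, this version performs the same work for a flat image, a noisy image or a worst-case input, which is precisely the deterministic behaviour a real-time video pipeline demands.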
For more detail: FPGAs can clean up video picture noise