Tektronix PQA500: Reference Picture Quality Measurement Analyzer for Video Processing Systems.

Objective, rather than subjective, picture quality measurement is vital to
efficient and repeatable video processing design and evaluation.

Picture Quality Any Time, Any Place
In the past, viewers were constrained to watch television
in front of a receiver in their home. Today, people watch
video at any time and in any place. A viewer can now
browse video on a desktop PC, watch a movie outdoors
on a mobile handheld device, or view HDTV (High
Definition Television) on a home theater system or in a
Digital Cinema within a well-controlled viewing
environment.
Powered by recent advances in video compression
technology, the number of ways of delivering content
to viewers is increasing. The video engineer is faced
with a challenging environment and is required to
maximize the picture quality for each viewing condition
across a wide variety of content – while maintaining
quality of service to the end viewer, a key differentiator.
This application note focuses on the issues video
engineers face when developing or evaluating an
algorithm, device, or video signal path, and on the
objective picture quality measurement solutions
Tektronix® provides to support those requirements.

Video Quality Measurement Standards
The international standard for measurement of subjective
picture quality is ITU-R BT.500. This standard defines a
variety of conditions to measure the picture quality of an
image, such as:
Display type
Viewing distance
Viewing environment
Viewer’s characteristics
These categories and others have been chosen to produce
consistent results over multiple viewer trials. The standard
has also defined specific methods for providing information
about the picture quality of an image. However, the
standard, as it exists today, does not cover emerging
developments in new formats – such as repurposing
applications from very high resolution video (for example,
D-Cinema master formats) down to much lower resolution
video for mobile reception, changing the display from CRT
to LCD, and so on. Designers have to focus on picture
quality under a variety of conditions in addition to those
defined in the standard. Changes due to each of these
viewing conditions can take a very long time for an
engineer to evaluate subjectively.
When a video design engineer starts the initial design
of any video processing engine, they have to consider
the target application and specifications upon which
their design will be based. Once the initial beta version
has evolved, the designer will start debugging the video
processing architecture to ensure that it meets the
design specification criteria. Debugging leads to improving
and optimizing for specific needs.
Similarly, engineers in charge of a video path (such as
from a head-end to a set-top box) must consider the
effect of all processing and compression/decompression
on the viewing experience, while making the best use of
bandwidth and controlling costs. Other applications
include studios that must ensure their content is
delivered as promised.
Whether designing or evaluating system components,
engineers must consider changes that can be made to
improve the picture quality and to optimize the video
output in order to fulfill their particular requirements.
Picture quality testing and documentation are required
at all stages.
Visual Testing Methodology
Historically, human testing has been performed to
subjectively verify the picture quality at the output of
each device. These picture quality evaluations would be
repeated at each stage of the development process.
However, there are known deficiencies in subjective
picture quality testing by human viewers. First, the results
are not objective: different people perceive overall
picture quality differently. In addition, subjective testing
is not repeatable. For instance, viewers may notice
more artifacts in an image early in the morning than
they would late at night. There are also issues of language
and description that make it difficult to communicate
with others about what matters most to the quality of
the image.
Using subjective methods, different evaluations can result –
even when the same person evaluates the same material.
Recording all these results to track improvements has also
not been an exact science…until now.
The ITU-R BT.500 standard mitigates these problems
somewhat by averaging the ratings from many viewers,
often around two dozen. Their data is averaged into a
score called the Differential Mean Opinion Score (DMOS).
However, for some applications, an even larger number
of viewers is required to obtain reliable, repeatable and
verifiable quality data, and the required time, expense
and use of resources is often impractical.
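The averaging step behind a DMOS-style score can be sketched in a few lines. This is a minimal illustration, not the PQA500's implementation; the rating arrays below are hypothetical examples on a 1-to-5 opinion scale, and the difference is taken between the panel's mean score for the reference sequence and for the processed sequence.

```python
# Sketch of a DMOS-style computation for ITU-R BT.500-style subjective
# testing. Rating values are hypothetical examples on a 1-5 scale.

def mean_opinion_score(ratings):
    """Average the ratings given by a panel of viewers (the MOS)."""
    return sum(ratings) / len(ratings)

def differential_mos(reference_ratings, processed_ratings):
    """Difference between reference and processed mean opinion scores.
    A smaller value means the processed video is closer to the reference."""
    return mean_opinion_score(reference_ratings) - mean_opinion_score(processed_ratings)

# Hypothetical panel of five viewers rating both sequences.
reference = [5, 4, 5, 4, 5]   # mean 4.6
processed = [3, 4, 3, 2, 3]   # mean 3.0
print(differential_mos(reference, processed))  # prints 1.6 (within float rounding)
```

In practice the panel is larger (often around two dozen viewers, as noted above), and outlier screening and scale normalization defined in BT.500 are applied before averaging.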