UT ECE professors Al Bovik and Joydeep Ghosh have received a grant from the National Science Foundation, entitled “Intelligent Autonomous Video Quality Agents,” to conduct interdisciplinary research into the theory and design of automatic “agents” that can learn to identify and assess distortions in video carried over computer networks. Determining the perceptual quality of video transmitted through complex networks and viewed on heterogeneous platforms, from cell phones to Internet-based television, is a difficult and key problem for our YouTube generation. It is also central to a variety of vision applications, including remote face detection, face recognition, and surveillance.

The project will design and build intelligent video “quality agents” that learn to determine perceptual video quality in heterogeneous networks and to assess its impact on decision tasks such as face detection and recognition, all without the benefit of reference videos. The research will draw on advanced statistical models of natural videos, perceptual principles, machine learning, and intelligent adaptive agent collectives to handle videos impaired by multiple distortion types simultaneously. Among the many anticipated applications are novel face-salient quality assessment agents and quality-aware face detection algorithms. Multiple cooperative video and face quality agents will be trained on mobile devices using active-learning-based feedback mechanisms.
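The no-reference approach described above builds on natural-scene-statistics models of the kind long studied in Bovik’s lab: pristine frames exhibit characteristic Gaussian-like statistics after local mean subtraction and contrast normalization, and distortions measurably perturb that shape. The sketch below is only an illustration of that idea; the function names, parameters, and summary statistics are assumptions for this example, not part of the funded project.

    # Illustrative sketch: mean-subtracted, contrast-normalized (MSCN) coefficients,
    # the kind of natural-scene-statistics feature used in no-reference quality models.
    # Names and parameter choices here are assumptions, not the project's method.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mscn_coefficients(frame, sigma=7 / 6, c=1.0):
        """MSCN coefficients of a grayscale frame (2-D array of pixel values)."""
        frame = frame.astype(np.float64)
        mu = gaussian_filter(frame, sigma)                      # local mean
        var = gaussian_filter(frame * frame, sigma) - mu * mu   # local variance
        std = np.sqrt(np.abs(var))                              # local contrast
        return (frame - mu) / (std + c)                         # normalize

    def gaussianity_features(frame):
        """Variance and excess kurtosis of the MSCN field; departures from the
        near-Gaussian statistics of pristine content hint at distortion
        without needing any reference frame."""
        m = mscn_coefficients(frame)
        v = m.var()
        kurt_excess = ((m - m.mean()) ** 4).mean() / (v ** 2) - 3.0
        return float(v), float(kurt_excess)

A learning-based quality agent would typically feed summary statistics like these, pooled over space and time, into a trained regressor or classifier rather than thresholding them directly.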