ALADDIN: Automated Low-Level Analysis and Description of Diverse Intelligence Video

Abstract: The Automated Low-Level Analysis and Description of Diverse Intelligence Video (ALADDIN) Program seeks to combine the state-of-the-art in video extraction, audio extraction, knowledge representation, and search technologies in a revolutionary way to create a fast, accurate, robust, and extensible technology that supports the multimedia analytic needs of the future. 

Contribution: UCF is a part of the SRI-Sarnoff team, and I have been leading the UCF team since summer 2012. I have investigated the benefits of using concepts, attributes, and objects in representing a video. I am also interested in the problem of event detection using only a few exemplars.

Evaluation of Tracking Algorithms on ISIS Video Data for the Wide Area Surveillance Project:

Abstract: The project is a part of the Wide-Area Surveillance (WAS) project being implemented by the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T). This project targets the development and evaluation of a crowd-tracking algorithm suitable for the ISIS context. ISIS is a camera system developed by Massachusetts Institute of Technology/Lincoln Laboratory (MIT/LL) and managed by Pacific Northwest National Laboratory (PNNL). ISIS consists of a 100-Mpixel sensor (an array of image servers and an associated hard-drive storage array). While the ISIS camera can collect a large volume of video data over a wide monitored area, it demands an effective crowd-tracking algorithm integrated with the ISIS software system that supports video viewing and analysis.

Contribution: The first method we proposed was a part-based greedy approach capable of detecting occluded parts of a person; it was published in CVPR 2012. The second approach was a global data-association method in which we introduced Generalized Minimum Clique Graphs to efficiently track each individual in the provided video sequences. The latter was published in ECCV 2012.

Visual analytics in multiple camera networks:

Abstract: The project is a part of the Visual Analytics project being implemented by the U.S. Army Research Laboratory and the U.S. Army Research Office (ARO). The project's targets include: 1) Detection and tracking of humans in multiple, disjoint camera videos, in potentially crowded scenarios, and learning of an adaptive appearance model for individual identification and subsequent target reacquisition. 2) Inference of object movement patterns, location of move-stop-move conditions, and entry and exit locations in unobserved regions between pairs of cameras with disjoint fields of view. 3) Learning of scene semantics, including spatial and temporal relationships between diverse types of entities observed in multiple cameras, where examples of such entities include camera fields of view, entry and exit regions, object types, and dominant paths.

Contribution: We proposed a new graph-theoretic approach (GMMCP) to solve the Multiple Object Tracking problem. It achieves strong performance on several benchmarks. The work was published in CVPR 2015.
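The core idea behind GMCP/GMMCP-style data association can be illustrated with a toy version of the underlying combinatorial problem: detections are grouped into clusters (one cluster per frame or temporal segment), and a track is formed by picking exactly one node per cluster so that the total pairwise affinity of the selected nodes is maximized. The brute-force sketch below is only meant to show this selection criterion; it is not the published solver (which uses efficient approximate optimization), and the `affinity` function is a hypothetical stand-in for an appearance/motion similarity score.

```python
from itertools import product

def gmcp_select(clusters, affinity):
    """Pick one node per cluster so that the sum of pairwise
    affinities over all selected pairs is maximized (a brute-force
    generalized-clique selection, for illustration only)."""
    best_score, best_pick = float("-inf"), None
    for pick in product(*clusters):          # one node from each cluster
        score = sum(affinity(a, b)
                    for i, a in enumerate(pick)
                    for b in pick[i + 1:])   # all pairs in the clique
        if score > best_score:
            best_score, best_pick = score, pick
    return best_pick, best_score
```

For example, with clusters of scalar "detections" and affinity defined as negative absolute difference, the selection naturally groups mutually similar detections into one track, which is exactly the intuition behind formulating tracking as a generalized clique problem.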

Who's Your Daddy?

Abstract: In this project, our goal is to bridge computer vision research with findings in anthropological studies to answer several key questions: 

-Do offspring resemble their parents?

-Do offspring resemble one parent more than the other?

-What parts of the face are more genetic?

-Do anthropological studies help learn better features?

Contribution: To answer these questions and address the problem of parent-offspring resemblance, we propose an algorithm that fuses the features and metrics discovered via gated autoencoders with a discriminative neural-network layer that learns the optimal, or what we call genetic, features for the task. For more information, please see the CVPR 2014 paper.
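The gated autoencoder at the heart of this approach models the *relation* between two images through multiplicative interactions: parent and offspring features are each linearly projected, multiplied element-wise to form factor units, and then mapped to a relation code. The numpy sketch below shows only this gating idea; the matrices `U`, `V`, `W`, the dimensions, and the `tanh` nonlinearity are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def gated_factors(x, y, U, V):
    """Factor units of a gated model: element-wise product of the
    projected parent features (U @ x) and offspring features (V @ y)."""
    return (U @ x) * (V @ y)

def relation_code(x, y, U, V, W):
    """Mapping units encoding the parent-offspring relation,
    computed from the multiplicative factor units."""
    return np.tanh(W @ gated_factors(x, y, U, V))

# Hypothetical sizes: 8-d face features, 16 factors, 4 mapping units.
rng = np.random.default_rng(0)
U = rng.standard_normal((16, 8))
V = rng.standard_normal((16, 8))
W = rng.standard_normal((4, 16))
```

Because the factor units depend on the *product* of the two projections, the code responds to how the two faces relate rather than to either face alone, which is what makes this family of models suited to resemblance tasks.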
