SELECTED PROJECTS
|
Project |01 X-Eye: A Novel Wearable Vision System
The goal of this project is to design a smart portable device that provides a gesture interface in a small form factor but with a large display, for photo capture and management. The wearable vision system is implemented on embedded hardware and achieves real-time processing. The computing core is an asymmetric dual-core SoC with an ARM core and a DSP core. A pico projector, which is physically small but can project a large screen, serves as the display device. For efficient memory management, a triple-buffering mechanism is designed; in addition, software functions are partitioned and pipelined for effective parallel execution. Gesture recognition proceeds by first classifying pixel colors, then extracting fingertips, and finally recognizing the user's gesture commands from the geometrical features of the fingertip shapes. Color is modeled by a Gaussian mixture model (GMM) trained with the expectation-maximization (EM) algorithm.
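As a rough illustration of the color-classification stage (a minimal sketch, not the project's actual code), classifying a pixel with a trained GMM reduces to evaluating a weighted sum of Gaussian densities and thresholding the result. The weights, means, variances, and threshold below are hypothetical stand-ins for EM-trained parameters:

```python
import math

# Hypothetical parameters for a 2-component GMM over a 1-D color
# feature (e.g. hue); real values would come from EM training.
WEIGHTS = [0.6, 0.4]
MEANS = [20.0, 35.0]
VARS = [25.0, 60.0]

def gmm_likelihood(x):
    """p(x) = sum_k w_k * N(x; mu_k, var_k) over the mixture components."""
    p = 0.0
    for w, mu, var in zip(WEIGHTS, MEANS, VARS):
        p += w * math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    return p

def is_hand_pixel(x, threshold=1e-3):
    """Label a pixel as hand color if its GMM likelihood clears a threshold."""
    return gmm_likelihood(x) > threshold
```

In practice the feature would be a 2-D or 3-D color vector with full EM-trained covariances; the 1-D form only shows the shape of the decision rule.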
The result of this project is a small portable device that enables anytime, anywhere interaction: it captures images and projects them onto any plane at sizes up to 42 inches. Users can also manage the on-device image database and select images through the gesture interface. When a gesture command is recognized, a corresponding sound integrated in the embedded system is played. |
|
Project |02 Moving Object Detection for Night Surveillance
Traditional background subtraction methods perform poorly at night. In this paper, a robust method is proposed for automatic visual surveillance in low-light environments, which suffer from low brightness, low contrast, and high noise levels. The method combines illumination compensation with illumination-invariant background subtraction to address the low-quality imagery of night surveillance. Experiments are conducted on several challenging videos captured under drastic illumination change at night. Experimental results demonstrate that the proposed approach significantly outperforms existing techniques for extracting moving objects at night. |
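To make the idea of illumination-compensated subtraction concrete, here is a deliberately simplified sketch (my own illustration, not the paper's algorithm): each frame is first rescaled to a reference mean brightness so that global illumination swings do not trigger false foreground, and only then differenced against a running-average background. The reference level, learning rate, and threshold are assumptions:

```python
REF_MEAN = 128.0   # reference brightness level (assumption)
ALPHA = 0.05       # background learning rate (assumption)

def compensate(frame):
    """Scale all pixels so the frame's mean brightness matches REF_MEAN."""
    mean = sum(frame) / len(frame)
    gain = REF_MEAN / max(mean, 1e-6)
    return [min(255.0, p * gain) for p in frame]

def subtract(frame, background, threshold=30.0):
    """Return (foreground mask, updated background) for a flat pixel list."""
    frame = compensate(frame)
    mask = [abs(p - b) > threshold for p, b in zip(frame, background)]
    # Update the background model only at pixels judged to be background.
    new_bg = [b if m else (1 - ALPHA) * b + ALPHA * p
              for p, b, m in zip(frame, background, mask)]
    return mask, new_bg
```

With this scheme a uniformly dimmed scene produces no foreground, while a locally bright object still stands out after normalization.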
|
Project |03 Single Image Defogging
This project presents an automatic method for defogging a single hazy image. To recover a foggy image, an accurate depth map is estimated by a multi-level depth estimation method that fuses depth maps computed with different patch sizes under the dark channel prior. A Markov random field (MRF), which labels the depth level of adjacent regions, is modeled to compensate for wrongly estimated regions. The airlight is automatically estimated as the deepest and smoothest region in the MRF labeling result. Accurate airlight estimation yields good restoration in terms of visibility and contrast without oversaturation. The algorithm is verified on a handful of foggy and hazy images. Experimental results demonstrate that our defogging method obtains a high-quality recovered image through an accurate depth map and airlight estimate.
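For reference, the single-patch dark-channel-prior recovery that this project builds on (He et al.'s standard formulation) can be sketched as follows; the project's multi-level patch fusion and MRF depth labeling are omitted here for brevity:

```python
OMEGA = 0.95   # haze retention factor, a standard choice
T0 = 0.1       # transmission lower bound to avoid division blow-up

def dark_channel(pixels, patch):
    """Min over RGB channels, then min over a patch of pixel indices."""
    return min(min(pixels[i]) for i in patch)

def transmission(pixels, patch, airlight):
    """t(x) = 1 - omega * dark_channel(I / A), channels normalized by airlight."""
    normalized = [[c / a for c, a in zip(p, airlight)] for p in pixels]
    return 1.0 - OMEGA * dark_channel(normalized, patch)

def recover(pixel, t, airlight):
    """Scene radiance per channel: J = (I - A) / max(t, T0) + A."""
    t = max(t, T0)
    return [(c - a) / t + a for c, a in zip(pixel, airlight)]
```

Fusing dark channels from several patch sizes trades off the halo artifacts of large patches against the noisiness of small ones, which is what the multi-level estimation above addresses.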
|
|
Project |04 Camera Anomaly Detection
The number of cameras has greatly increased due to demand for security, road monitoring, and home care. Clear images and a correct field of view (FOV) are essential for video surveillance, yet a large-scale system with a huge number of cameras is hard to maintain. This paper presents a camera anomaly detection method, based on holistic feature analysis over time in salient regions, for automatic online determination. The salient regions are constructed within a Markov random field (MRF) framework modeled on pixel-based accumulated movement. A handful of holistic features are extracted from the salient regions, and an online Kalman filter recursively smooths the uncertain features. A finite state machine is then designed for real-time event detection. The proposed method yields a robust solution that suppresses noise produced by real-world complexities. Experiments are conducted on a set of recorded videos simulating various challenging situations. The test results show that the proposed method is superior to other methods in precision rate, false alarm rate, and time complexity.
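The smoothing-then-decision chain can be pictured with a minimal sketch (assumptions throughout, not the paper's exact design): a 1-D Kalman filter smooths one noisy holistic feature, and a small finite state machine raises an alarm only after the smoothed score stays above a threshold for several consecutive frames, which is what suppresses transient noise:

```python
class Kalman1D:
    def __init__(self, q=1e-3, r=0.1):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process / measurement noise (assumed)

    def update(self, z):
        self.p += self.q                   # predict: variance grows
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct toward measurement
        self.p *= (1.0 - k)
        return self.x

class AnomalyFSM:
    """States NORMAL -> SUSPECT -> ALARM; needs `hold` consecutive hits."""
    def __init__(self, threshold=0.5, hold=3):
        self.threshold, self.hold = threshold, hold
        self.state, self.count = "NORMAL", 0

    def step(self, score):
        if score > self.threshold:
            self.count += 1
            self.state = "ALARM" if self.count >= self.hold else "SUSPECT"
        else:
            self.count, self.state = 0, "NORMAL"
        return self.state
```

The hold count plays the role of temporal hysteresis: a single noisy spike reaches at most SUSPECT and falls back to NORMAL on the next clean frame.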
|
|
Project |05 Intelligent Video Surveillance Cloud
Wide-area monitoring of a community or city is a very challenging engineering task due to its scale and its heterogeneity at the sensor, algorithm, and visualization levels. Multi-modal cameras and algorithms must be fused into a compact presentation so that a single operator can actively and effectively respond to anomalous events and hazards. This paper presents a distributed and scalable video surveillance system composed of intelligent surveillance components (ISCs) and visualization surveillance components (VSCs), divided by functional labor. The ISCs are high-level computer vision algorithms for behavioral analysis of humans and vehicles. The VSCs constitute a multi-tier subsystem that visualizes the fused results: messages, key frames, streaming videos, and geographic context information. The system helps the operator focus attention on events of interest gathered from distributed ISCs and presented by VSCs on map and three-dimensional homographic views. The robustness and effectiveness of the system have been demonstrated by a test run of real scenarios deployed on a campus.
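The ISC/VSC split can be pictured as a producer/consumer message flow. The sketch below is purely illustrative: the field names are hypothetical, chosen only to mirror the fused results the text mentions (event messages, key frames, geographic context), and the dispatcher stands in for a VSC tier routing events to views:

```python
# Hypothetical ISC event message; field names are assumptions mirroring
# the fused results the system visualizes (message, key frame, location).
def make_event(camera_id, event_type, keyframe_uri, lat, lon):
    return {"camera_id": camera_id, "event": event_type,
            "keyframe": keyframe_uri, "geo": {"lat": lat, "lon": lon}}

def dispatch(events, handlers):
    """Route each event dict to the handler registered for its type;
    unrecognized event types are silently ignored."""
    for ev in events:
        handlers.get(ev["event"], lambda e: None)(ev)
```

A real deployment would carry such messages over a message bus with streaming video referenced by URI rather than embedded, but the routing-by-type shape is the same.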
|
Others:
People Counting
|
Tripwire Event Detection (Intruder Detector)
|