Active-Pixel CMOS Image Sensors

Abstract--Active-pixel CMOS image sensors have many attractive features, such as low power consumption, integrated on-chip peripheral circuits, and nondestructive column-parallel readout. With integrated signal-processing circuits they offer high-speed parallel operation and low power consumption, features that meet the special requirements of CMOS image sensors. A number of sensors with integrated signal-processing circuits have been developed and adopted not only for video but also for machine-vision and security applications. However, these approaches almost always involve lowering the pixel density and increasing the chip size to accommodate the large circuits that the added functions require. The proposed system is therefore designed to detect motion on the basis of the difference between the current and the previous frame; while computing the difference, the system indicates the location of the motion. The system is also capable of detecting motion in the dark by reducing the infrared filtering on the CMOS sensor. It then alerts the user by switching on a light or sounding a buzzer.

Keywords: CMOS image sensors.

I. INTRODUCTION

In market-driven applications such as surveillance, automotive, and machine vision, there is increasing demand for imaging systems with real-time processing capabilities. In some cases these requirements are quite hard to fulfil through the conventional approach of a standard charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) camera linked to a digital signal-processing platform. Such systems are typically based on general-purpose architectures performing real-time image processing. Although their high computational power and flexibility are satisfactory for many applications, some low-level image-processing tasks can be executed more efficiently using ad-hoc image-processing capabilities embedded directly in the imager. Thanks to the great advantages of CMOS submicrometer technology, which allows ever smaller device feature sizes, some recent CMOS image sensors with integrated signal processing have been developed, following two main approaches: pixel-level and array-level processing.

II. PROPOSED IMPLEMENTATION

Figure 1.0 System Architecture (block diagram: Camera → Frame Buffer and Live Frame → Comparator → Central Processing Unit → Hardware Control Unit)

Figure 1.0 describes the proposed system architecture, in which the CMOS sensor captures frames at the maximum possible speed. These frames are transferred to the central processor as arrays of data and stored in a buffer exactly the size of a captured frame. The processing unit uses this buffer to find any changes in the currently captured view with respect to the buffered image. If the dissimilarity between the buffered image and the current captured image exceeds the defined threshold values, the system generates an alert signal [5]. Defining a threshold is essential because CMOS sensor output depends on environmental changes, so the system never obtains exactly the same result even when the camera and the scene are steady. The work therefore involves three tasks: designing a CMOS module interfaced with a controller board, developing software to capture the camera view and detect motion in the frames, and building microcontroller-based hardware to switch on the emergency alert system.
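The buffered-frame comparison described above can be sketched in Python. The document itself gives no code, so the function and parameter names below are illustrative; frames are modeled as same-sized 2D lists of 8-bit grayscale values, and the thresholds are assumed placeholder values:

```python
def motion_alert(prev, curr, pixel_thresh=30, count_thresh=10):
    """Compare the buffered frame with the live frame and report motion.

    prev, curr: same-sized 2D lists of 8-bit grayscale values.
    pixel_thresh: per-pixel difference required to count as "changed"
                  (absorbs sensor noise and small environmental drift).
    count_thresh: number of changed pixels required to raise an alert.
    Returns (alert, changed), where changed lists the (row, col)
    positions of the detected motion.
    """
    changed = []
    for r, (prow, crow) in enumerate(zip(prev, curr)):
        for c, (p, v) in enumerate(zip(prow, crow)):
            if abs(v - p) > pixel_thresh:
                changed.append((r, c))
    return len(changed) >= count_thresh, changed
```

The per-pixel threshold absorbs the sensor's sensitivity to environmental changes noted above, while the count threshold keeps a few isolated noisy pixels from triggering the alert on their own.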

III. RESEARCH METHODOLOGY TO BE EMPLOYED

The proposed system is divided into the following modules: video capturing to obtain the video frames, image processing to extract images from the frames and pixel information from the images, detection of color from the pixels, and finally control of the hardware.

III.1 IMAGE ACQUISITION

A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultrasonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (gray images or colour images), but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or nuclear magnetic resonance.

III.2 PRE-PROCESSING

Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data to ensure that it satisfies certain assumptions implied by the method. Examples are: re-sampling to ensure that the image coordinate system is correct; noise reduction to ensure that sensor noise does not introduce false information; contrast enhancement to ensure that relevant information can be detected; and scale-space representation to enhance image structures at locally appropriate scales.
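Two of the pre-processing steps named above, noise reduction and contrast enhancement, can be illustrated with minimal pure-Python sketches (function names and the 3x3 kernel size are assumptions, not from the original; images are 2D lists of grayscale values):

```python
def box_blur(img):
    """Noise reduction: 3x3 mean filter; borders are handled by
    averaging only the neighbours that exist."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            acc = n = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        acc += img[rr][cc]
                        n += 1
            out[r][c] = acc // n
    return out

def stretch_contrast(img, lo=0, hi=255):
    """Contrast enhancement: linear stretch of the value range
    onto [lo, hi]."""
    flat = [v for row in img for v in row]
    mn, mx = min(flat), max(flat)
    if mn == mx:
        return [[lo] * len(row) for row in img]  # flat image: nothing to do
    scale = (hi - lo) / (mx - mn)
    return [[lo + round((v - mn) * scale) for v in row] for row in img]
```

Both are sketches of the idea rather than production filters; a real pipeline would typically use a Gaussian kernel and histogram-based enhancement.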

III.3 FEATURE EXTRACTION

Image features at various levels of complexity are extracted from the image data. Typical examples of such features are lines, edges, and ridges, and localized interest points such as corners and blobs.
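A minimal sketch of how edge features can be extracted, assuming central-difference gradients on a 2D grayscale list (an assumption for illustration; the text does not prescribe an operator, and a real detector would use Sobel or similar kernels):

```python
def edge_strength(img):
    """Approximate edge magnitude |dI/dx| + |dI/dy| via central
    differences; border pixels are left at 0. Strong responses
    mark lines and edges in the image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = img[r][c + 1] - img[r][c - 1]
            gy = img[r + 1][c] - img[r - 1][c]
            out[r][c] = abs(gx) + abs(gy)
    return out
```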

III.4 DETECTION/SEGMENTATION

At some point in the processing, a decision is made about which image points or regions of the image are relevant for further processing. Examples are selection of a specific set of interest points, and segmentation of one or more image regions that contain a specific object of interest.
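The segmentation decision can be sketched as a simple intensity threshold (an illustrative choice; the threshold value and names are assumptions):

```python
def segment(img, thresh):
    """Binary segmentation: keep pixels brighter than thresh as the
    foreground region of interest that is passed on to further
    processing. Returns the set of (row, col) foreground points."""
    return {(r, c)
            for r, row in enumerate(img)
            for c, v in enumerate(row)
            if v > thresh}
```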

III.5 HIGH-LEVEL PROCESSING

At this step the input is typically a small set of data, for example a set of points or an image region that is assumed to contain a specific object. The remaining processing deals with, for example: verification that the data satisfy model-based and application-specific assumptions; estimation of application-specific parameters, such as object pose or object size; and classification of a detected object into different categories.
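A hedged sketch of this high-level step, estimating simple parameters (centroid as a crude pose, bounding-box size) for a segmented region and classifying it by area; the 50-pixel class boundary and the labels are illustrative assumptions:

```python
def describe_region(points):
    """Estimate application-specific parameters for a segmented region:
    centroid (row, col), bounding-box size (height, width), and a
    size-based class label."""
    rows = [r for r, _ in points]
    cols = [c for _, c in points]
    centroid = (sum(rows) / len(rows), sum(cols) / len(cols))
    size = (max(rows) - min(rows) + 1, max(cols) - min(cols) + 1)
    label = "large object" if len(points) >= 50 else "small object"
    return centroid, size, label
```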

III.6. HARDWARE CONTROLLING

This is a microcontroller-based device control system in which connected appliances can be operated using digital logic 1 or 0. In the proposed system, an emergency alarm or the lighting system can be activated.
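The logic-1/logic-0 control can be modeled as below. This is a simulation only: the class name is invented for illustration, and on real hardware the level assignment would be a write to a GPIO register or pin:

```python
class AlertOutput:
    """Minimal model of a microcontroller digital output driving an
    appliance: logic 1 activates the alarm or lamp, logic 0 releases
    it. The pin state is simulated here as an attribute."""

    def __init__(self):
        self.level = 0  # logic 0: appliance off

    def activate(self):
        self.level = 1  # logic 1: buzzer/lamp on

    def release(self):
        self.level = 0  # back to logic 0
```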

IV. REQUIREMENT ANALYSIS

CMUcam4 v1.02 Firmware

The CMUcam4 is a fully programmable embedded computer vision sensor. The main processor is the Parallax P8X32A (Propeller chip) connected to an OmniVision OV9665 CMOS camera sensor module.

IV.1 INTRODUCTION OF PARALLAX P8X32A

The design of the Propeller chip frees application developers from common complexities of embedded systems programming. For example, eight processors ("cogs") perform simultaneous processes independently or cooperatively, sharing common resources through a central hub. The Propeller application designer has full control over how and when each cog is employed; there is no compiler-driven or operating-system-driven splitting of tasks among multiple cogs. This method empowers the developer to deliver absolutely deterministic timing, power consumption, and response in the embedded application.

Asynchronous events are easier to handle than with devices that use interrupts. The Propeller has no need for interrupts: simply assign some cogs to individual, high-bandwidth tasks and keep other cogs free and unencumbered. The result is a more responsive application that is easier to maintain. A shared system clock allows each cog to maintain the same time reference, allowing true synchronous execution.

IV.2 PROGRAMMING ADVANTAGES

The object-based high-level Spin language is easy to learn, with special commands that allow developers to quickly exploit the Propeller chip's unique and powerful features. Propeller Assembly instructions provide conditional execution and optional flag and result writing for each individual instruction. This makes critical, multi-decision blocks of code more consistently timed; event handlers are less prone to jitter and developers spend less time padding, or squeezing, cycles.

IV.3 APPLICATIONS

The Propeller chip is particularly useful in projects that can be vastly simplified with simultaneous processing, including: industrial control systems; sensor integration, signal processing, and data acquisition; handheld portable human-interface terminals; motor and actuator control; user interfaces requiring NTSC, PAL, or VGA output with PS/2 keyboard and mouse input; low-cost video game systems; industrial, educational, or personal-use robotics; and wireless video transmission (NTSC or PAL).

IV.4 PROGRAMMING PLATFORM SUPPORT

The Propeller Demo Board provides a convenient means to test-drive the Propeller chip's varied capabilities through a host of device interfaces on one compact board. Main features are: P8X32A-Q44 Propeller chip; 24LC256-I/ST EEPROM for program storage; replaceable 5.000 MHz crystal; 3.3 V and 5 V regulators with on/off switch; USB-to-serial interface for programming and communication; VGA and TV output; stereo output with 16 Ω headphone amplifier; electret microphone input; two PS/2 mouse and keyboard I/O connectors; 8 LEDs (sharing VGA pins); reset pushbutton; large ground post for scope hookup; I/O pins P0-P7 free and brought out to a header; and a breadboard for custom circuits.

IV.5 PROPELLER TOOL SOFTWARE

The Propeller Tool Software is the primary development environment for Propeller programming in Spin and assembly language. It includes many features to facilitate organized development of object-based applications: multi-file editing, code and document comments, color-coded blocks, keyword highlighting, and multiple window and monitor support aid rapid code development. Optional view modes allow you to quickly drill down to the information you need by hiding comment lines or method bodies, or by showing only the object's compiled documentation. Example objects, such as keyboard, mouse, and graphics drivers, come standard with the free Propeller Tool software.

A. PROPELLENT LIBRARY AND EXECUTABLE

The Parallax Propellent software is a Windows-based tool for compiling and downloading to the Parallax Propeller chip without using the Propeller Tool development software. The Propellent Executable provides the ability to do things like compile Spin source, save it as a binary or EEPROM image, identify a connected Propeller chip, and download to the Propeller chip, all via simple command-line switches or drag-and-drop operations.

B. PROPELLER GCC

The Propeller GCC Compiler tool-chain is an open-source, multi-OS, and multi-lingual compiler that targets the Parallax Propeller's unique multicore architecture. Parallax has collaborated with industry experts to develop all aspects of the tool chain, including the creation of a new development environment that simplifies writing code, compilation, and downloading to a Propeller board. Using the Large Memory Model (LMM) and Extended Memory Model (XMM) gives the developer the ability to write C or C++ programs that run faster than Spin or exceed Spin's 32 KB program size limit, respectively. Example objects, including C objects, are available through the Propeller Object Exchange.

V. RELATED WORK

We present the VLSI design and experimental measurements of a single-chip CMOS image sensor with moving-object detection and localization capability [6]. Motion events are first detected using a frame-differencing scheme; they are then processed by an on-the-fly clustering processor to localize the moving objects in the scene [7]. Unlike existing systems that rely on external FPGAs or CPLDs to perform object localization, our system does not require any external computation or storage [8]. The proposed algorithm is integrated on chip, featuring a compact silicon implementation and low power consumption. The proposed design is an ideal candidate for a wireless sensor network node, for applications such as assisted-living monitors, security cameras, and even robotic vision [9]. Future improvements include adoption of a dynamic-resolution pixel array and event-based object tracking [10].
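The on-chip clustering processor of [7] is not specified in this text, so the following is only a sketch of one possible on-the-fly (single-pass) scheme: changed-pixel coordinates are greedily merged into bounding boxes as they arrive. All names and the gap parameter are assumptions:

```python
def cluster_motion(points, gap=2):
    """Single-pass greedy clustering of changed-pixel coordinates into
    bounding boxes, one box per localized moving object.

    points: iterable of (row, col) changed pixels, in scan order.
    gap: maximum distance from an existing box at which a pixel is
         merged into that cluster instead of starting a new one.
    Returns boxes as (min_row, min_col, max_row, max_col)."""
    boxes = []
    for r, c in points:
        for i, (r0, c0, r1, c1) in enumerate(boxes):
            if r0 - gap <= r <= r1 + gap and c0 - gap <= c <= c1 + gap:
                # Grow the existing cluster's bounding box.
                boxes[i] = (min(r0, r), min(c0, c), max(r1, r), max(c1, c))
                break
        else:
            boxes.append((r, c, r, c))  # start a new cluster
    return boxes
```

Being greedy and single-pass, this can split one object into two boxes in unlucky scan orders; hardware designs typically add a merge step or process pixels in raster order, as assumed here.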

Fig. CMUcam4 kit interface on computer

Fig. Motion detection by CMUcam4

VI. CONCLUSION

It is possible to build a low-power, camera-based motion detection system with the help of the CMUcam4 kit. With enough time and testing, an advanced motion detection algorithm could be designed that minimizes power consumption while detecting motion reliably. More advanced cameras with higher resolutions and better processors will further this effort to create a truly smart sensor that knows when a room is occupied and when it is not.

Source: Essay UK - http://www.essay.uk.com/free-essays/information-technology/active-pixel-image-sensors.php


