Digital Image Processing Explained

What is Digital Image Processing (DIP)

Digital image processing is the manipulation of digital image data, with the help of computer hardware and software, to produce outputs (such as digital maps) in which specific information has been extracted and highlighted.

Contributed by: Venkat

Origin of Digital Image

The first application of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York.

Cable picture transmission, introduced in 1921, reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours.

Objectives of DIP

  • Improvement of pictorial information for human interpretation
  • Processing of image data for storage, transmission, and representation for autonomous machine perception

What is an Image?

An image is a spatial representation of a two-dimensional or three-dimensional scene.

Digitally, an image is a two-dimensional array, or matrix, of picture elements arranged in rows and columns. The terms picture element, image element, and pixel all refer to these elements; "pixel" is the most widely used.

Image As a Matrix

A digital grayscale image is represented in the computer by a matrix of pixels. Each pixel is represented by one matrix element: an integer from the set {0, 1, ..., 255}. The values range from 0 (black pixels) to 255 (white pixels), with intermediate values giving shades of gray.

In a binary, or Boolean, image, which consists of only two colours, black and white, the matrix represents the colour black as 0 and the colour white as 1.
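
The matrix view of a grayscale image, and the step from grayscale to binary, can be sketched in a few lines of NumPy. The 3×3 matrix and the threshold of 127 below are illustrative choices, not part of the original text:

```python
import numpy as np

# A tiny 3x3 grayscale "image": each element is an integer in {0, ..., 255}.
gray = np.array([[  0,  60, 120],
                 [180, 200, 255],
                 [ 30,  90, 240]], dtype=np.uint8)

# Thresholding turns the grayscale matrix into a binary (black/white) image:
# pixels above the threshold become 1 (white), the rest 0 (black).
threshold = 127
binary = (gray > threshold).astype(np.uint8)

print(binary)
# [[0 0 0]
#  [1 1 1]
#  [0 0 1]]
```

Any value in [0, 255] can serve as the threshold; 127 simply splits the range in the middle.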

Types of Images

  • Binary Image – The binary image contains only two pixel values, 0 and 1. Here, 0 refers to black and 1 refers to white. This is also known as a monochrome image.

  • Black and White Image – The image contains only black and white pixels, with no intermediate shades of gray.
  • 8-Bit Color Format – It has 256 different shades and is known as a grayscale image. In this format, 0 stands for black, 255 stands for white, and 127 stands for mid-gray.
  • 16-Bit Color Format – It has 65,536 different colors and is known as a high color format. The distribution of color is not the same as in a grayscale image.

Note: A 16-bit pixel is further divided into three channels, red, green, and blue, which together form the familiar RGB format.
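
The note above can be made concrete. One common 16-bit layout, often called RGB565, allocates 5 bits to red, 6 to green, and 5 to blue; the exact split varies between formats, and the function names below are illustrative:

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B channels into one 16-bit value (5-6-5 layout).

    Only the top 5 bits of red, 6 of green, and 5 of blue are kept,
    which is why 16-bit color cannot represent all 8-bit shades.
    """
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(v):
    """Recover approximate 8-bit channels from a 16-bit 5-6-5 value."""
    r = (v >> 11) << 3
    g = ((v >> 5) & 0x3F) << 2
    b = (v & 0x1F) << 3
    return r, g, b

white = pack_rgb565(255, 255, 255)   # 0xFFFF
red   = pack_rgb565(255, 0, 0)       # 0xF800
```

Note that unpacking red gives (248, 0, 0), not (255, 0, 0): the low 3 bits of each 8-bit channel are lost, which is why green, the channel the eye is most sensitive to, gets the extra bit.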

Phases Of Image Processing

Below are the fundamental steps in Digital Image Processing:

  1. Image Acquisition – Image acquisition could be as simple as being given an image that is already in digital form; it typically also involves preprocessing such as scaling.
  2. Image Enhancement – Enhancement techniques bring out detail that is obscured and highlight features of interest in an image, for example by changing brightness and contrast.
  3. Image Restoration – Image restoration also deals with improving the appearance of an image. Restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation; enhancement, by contrast, is subjective.
  4. Colour Image Processing – Colour image processing has been gaining importance because of the significant increase in the use of digital images over the Internet. It includes colour modelling and colour processing in a digital domain.
  5. Wavelets and Multiresolution Processing – Wavelets are the foundation for representing images at various degrees of resolution. Images are subdivided into smaller regions for data compression and for pyramidal representation.
  6. Compression – Compression techniques reduce the storage required to save an image or the bandwidth needed to transmit it; this is particularly important for images used over the Internet.
  7. Morphological Processing – Morphological processing extracts image components that are useful in the representation and description of shape.
  8. Segmentation – Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward a successful solution of imaging problems that require objects to be identified individually.
  9. Representation and Description – Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data that constitutes either the boundary of a region or all the points in the region itself. Description deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.
  10. Object Recognition – Recognition is the process that assigns a label, such as "apple", to an object based on its descriptors.
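
The enhancement step above can be sketched as a linear mapping of pixel values. This is a minimal NumPy sketch, not a full enhancement algorithm; the parameter names alpha (contrast) and beta (brightness) are illustrative:

```python
import numpy as np

def adjust_brightness_contrast(img, alpha=1.0, beta=0):
    """Linear enhancement: out = alpha * img + beta, clipped to [0, 255].

    alpha > 1 stretches contrast; beta shifts brightness up or down.
    """
    out = alpha * img.astype(np.float64) + beta
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
brighter  = adjust_brightness_contrast(img, alpha=1.0, beta=40)  # adds 40 everywhere
contrasty = adjust_brightness_contrast(img, alpha=1.5, beta=0)   # stretches values
```

Clipping matters: 1.5 × 200 = 300 exceeds the 8-bit range, so it saturates to 255 rather than wrapping around.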
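
The compression step can be illustrated with run-length encoding, one of the simplest lossless schemes and a natural fit for binary images with long runs of identical pixels. A minimal sketch; the function names are illustrative, not a standard API:

```python
def rle_encode(pixels):
    """Run-length encode a flat pixel sequence as (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, c) for v, c in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    out = []
    for v, c in runs:
        out.extend([v] * c)
    return out

row = [0, 0, 0, 255, 255, 0]
encoded = rle_encode(row)             # [(0, 3), (255, 2), (0, 1)]
assert rle_decode(encoded) == row     # lossless: decoding restores the input
```

Real image codecs (e.g. JPEG, PNG) are far more sophisticated, but the storage-saving idea is the same: exploit redundancy in the pixel data.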

Applications of DIP

  • Medical Field
  • Remote Sensing
  • Machine/Robot vision
  • Video processing
  • Pattern Recognition

Example: take a slice from an MRI scan of a canine heart and find the boundaries between tissues.

  • Start with an image in which grey levels represent tissue activity.
  • Use a suitable filter to highlight the edges.
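
A "suitable filter" for highlighting edges is commonly a gradient filter such as the Sobel operator. Below is a minimal NumPy sketch on a tiny synthetic image that stands in for an MRI slice; a real pipeline would use an optimized convolution from a library rather than explicit loops:

```python
import numpy as np

def sobel_edges(img):
    """Approximate edge strength with 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)   # horizontal gradient
    ky = kx.T                                        # vertical gradient
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    f = img.astype(np.float64)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = f[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)
            gy = np.sum(patch * ky)
            out[i, j] = np.hypot(gx, gy)             # gradient magnitude
    return out

# A vertical boundary between dark "background" and bright "tissue":
img = np.array([[0, 0, 0, 255, 255, 255]] * 4, dtype=np.uint8)
edges = sobel_edges(img)
# response is zero in the flat regions and large at the dark-to-bright boundary
```

The filter responds only where grey levels change, which is exactly what "finding the boundaries between tissues" requires.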

Need of Image Processing in Medical Field

  • Interfacing analog outputs of sensors, such as an endoscope, to digitizers and into image processing systems
  • Changing density range of Black and White images
  • Color correction and manipulation of colors within a color image
  • Contour detection and area calculations of cells in a biometric image
  • Restoration and smoothing of images
  • Reconstruction of 3D images from series of 2D images
  • Zooming of images
  • Removal of artifacts from the image
  • Making it easier for doctors to see the interior of the human body
