Knowing where an object is in an image is called localization in computer vision. Using contour detection, we can detect the borders of objects, and therefore, localize them easily. Importantly, contour detection could be the very first step for many interesting applications such as image foreground extraction, simple image segmentation, detection and recognition. The official OpenCV documentation says: “The contours are a useful tool for shape analysis and object detection and recognition.” Let us discuss contour detection using OpenCV.
In this post, we are going to learn about contours and contour detection using OpenCV. We will cover not only the theory but also complete hands-on coding in both Python and C++, so that you get first-hand experience of contour detection with OpenCV.
Application of Contours in Computer Vision
You can build some really cool applications using contour detection and OpenCV. Here are a few interesting ones.
- Motion detection in surveillance video has a number of applications, ranging from indoor and outdoor security environments, traffic control, and behavior detection during sports activities, to video compression (Moving Object Detection and Segmentation using Frame differencing and Summing Technique).

- Any unattended object in a public place is generally considered suspicious. Contour-based detection offers an effective and safe way to spot such objects (Unattended Object Detection through Contour Formation using Background Subtraction).

- A practical application of contours: image foreground extraction (similar to image segmentation), followed by replacing the background with a different, more colorful one.

Now that we have an idea of what this article is about, let us see what we are going to cover in this tutorial. The rest of the content is organized as follows:
Table of Contents
- What are contours?
- Steps for finding and drawing contours using OpenCV.
- Finding and drawing contours using OpenCV.
- Contour hierarchies.
- Applications of contours in computer vision.
- Summary.
What are Contours?
When we join all the points along the boundary of an object, we get a contour. Typically, a contour corresponds to boundary pixels that share a similar color and intensity. Wherever the intensity or color changes sharply, a new contour area almost always starts from there.
OpenCV makes it really easy to find and draw contours in images. It provides two simple functions for this: findContours() and drawContours(). As promised earlier, we will get hands-on coding experience with both functions and discuss, in detail, the arguments/options they accept.
Along with the OpenCV functions, we will also look at two different contour approximation methods: CHAIN_APPROX_SIMPLE and CHAIN_APPROX_NONE. We will cover these in detail in the rest of the post.
For contour detection, we will be using a few images to show different use cases and scenarios. The following figure shows just one simple example of contour detection.
Now that we learned what contours are, let us discuss the steps involved for detecting contours.
Steps for Detecting and Drawing Contours in OpenCV
Detecting and drawing contours using OpenCV is a fairly simple task. The steps involved are:
- Read the Image and convert it to Grayscale Format
Read the image and convert it to grayscale. Converting the image to a single-channel grayscale image is important because it prepares the image for thresholding, which in turn is necessary for the contour detection algorithm to work properly.
- Apply Binary Thresholding
Apply binary thresholding to the grayscale image. While finding contours, it is always advisable to first apply either binary thresholding or Canny edge detection to the grayscale image. In this post, we will be applying binary thresholding.
For proper contour detection, we need to convert the image to a single-channel format (like grayscale) and then apply binary thresholding. Binary thresholding makes the image purely black and white: the objects of interest and their borders become completely white, all with the same intensity of 255. This is exactly what the contour detection algorithm needs; it detects the borders of objects from these white, equal-intensity pixels. The black pixels, with a value of 0, are treated as background and ignored.
At this point, one question may arise. What if we use a single channel like R (red), G (green), or B (blue) instead of a grayscale image, and without applying any thresholding? In such a case, the contour detection algorithm does not work well. As discussed, the algorithm looks for borders and for pixels of similar intensity to detect the contours, and a binary image provides this information far better than a raw R, G, or B channel. In a later part of the post, we show the resulting images when using only a single R, G, or B channel instead of a grayscale, thresholded image.
- Find the Contours
The next step is to use the findContours() function to detect the contours in the image.
- Draw Contours on the Original RGB Image
And finally, we will use the drawContours() function to overlay the contours on the original RGB image.
The above steps will make much more sense and become even clearer once we start to code.
Finding and Drawing Contours using OpenCV
Let us start the coding part of the post. We will start with importing the OpenCV library and reading the input image.
Python:
import cv2
# read the image
image = cv2.imread('input/image_1.jpg')
We assume that the image is inside the input folder of the current project directory. The next step is to convert the image into a grayscale image (single channel format).
Note: All the C++ code is contained within the main() function.
C++:
#include<opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main() {
// read the image
Mat image = imread("input/image_1.jpg");
Python:
# convert the image to grayscale format
img_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
C++:
// convert the image to grayscale format
Mat img_gray;
cvtColor(image, img_gray, COLOR_BGR2GRAY);
In the above code block, we use OpenCV's cvtColor() function to convert the original RGB image to grayscale. As discussed earlier, this prepares the image for binary thresholding and leads to much better results.
Python:
# apply binary thresholding
ret, thresh = cv2.threshold(img_gray, 150, 255, cv2.THRESH_BINARY)
# visualize the binary image
cv2.imshow('Binary image', thresh)
cv2.waitKey(0)
cv2.imwrite('image_thres1.jpg', thresh)
cv2.destroyAllWindows()
C++:
// apply binary thresholding
Mat thresh;
threshold(img_gray, thresh, 150, 255, THRESH_BINARY);
imshow("Binary mage", thresh);
waitKey(0);
imwrite("image_thres1.jpg", thresh);
destroyAllWindows();
We are using the threshold() function to apply binary thresholding to the image. Any pixel with a value greater than 150 will be set to 255, that is, completely white. All other pixels in the resulting image will be 0, that is, black.
Once thresholded, we visualize the binary image using the imshow() function. This helps us see what the resulting image looks like and what kind of image we are feeding into the contour-finding step. The following image shows the binary version of the original RGB image.
In the above image, we can clearly see that the pen and border of the tablet are almost white, and the phone’s borders are also white. While finding the contours, the algorithm will consider these as the objects, and find the contour points around the borders of these white objects.
The background is completely black, including the back of the phone. The contour-finding algorithm ignores these regions. This is exactly what we need for contour detection to work properly in OpenCV: the white pixels along the borders are treated as pixels of similar intensity, and the contour points are joined around them.
Drawing Contours using CHAIN_APPROX_NONE
Now, let us find and draw the contours using the CHAIN_APPROX_NONE method. The code is given below. It is followed by a detailed explanation of arguments and options.
Python:
# detect the contours on the binary image using cv2.CHAIN_APPROX_NONE
contours, hierarchy = cv2.findContours(image=thresh, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_NONE)
# draw contours on the original image
image_copy = image.copy()
cv2.drawContours(image=image_copy, contours=contours, contourIdx=-1, color=(0, 255, 0), thickness=2, lineType=cv2.LINE_AA)
# see the results
cv2.imshow('None approximation', image_copy)
cv2.waitKey(0)
cv2.imwrite('contours_none_image1.jpg', image_copy)
cv2.destroyAllWindows()
C++:
// detect the contours on the binary image using cv2.CHAIN_APPROX_NONE
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(thresh, contours, hierarchy, RETR_TREE, CHAIN_APPROX_NONE);
// draw contours on the original image
Mat image_copy = image.clone();
drawContours(image_copy, contours, -1, Scalar(0, 255, 0), 2);
imshow("None approximation", image_copy);
waitKey(0);
imwrite("contours_none_image1.jpg", image_copy);
destroyAllWindows();
First, let us start with the findContours() function. It takes three essential arguments:
- image: The input image. In our case, it is the binary image we obtained in the previous step.
- mode: The contour retrieval mode. We provide RETR_TREE, which means the algorithm retrieves all possible contours from the binary image. You can refer to the official documentation for the other contour retrieval modes; we cover them in detail in a later part of this post.
- method: The contour approximation method. In our case, it is CHAIN_APPROX_NONE, which keeps every contour point along each boundary it traces. This method is slightly slower than CHAIN_APPROX_SIMPLE, which we will see in the next section.
Note that we make a copy of the original image because we do not want to modify it; working on copies also makes it easy to visualize and compare the results of different methods on the same image. We use the drawContours() function to overlay the contours on the RGB image. Let us go over the arguments it accepts:
- image: The input RGB image on which we want to draw the contours.
- contours: The list of contours obtained from the findContours() function.
- contourIdx: Each detected contour is stored as a list of pixel coordinates, and this argument is the index of the contour we want to draw. Providing a negative value draws all the contours (see the short sketch below for drawing a single contour).
- color: The color in which to draw the contours. Here we draw them in green.
- thickness: The thickness of the contour lines.
Finally, we visualize the contours in a new window and save the image to disk as well.
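As a quick illustration of the contourIdx argument, here is a small sketch of our own (not part of the original tutorial code) that reuses the image and contours variables from above and draws only the largest contour, selected by area with cv2.contourArea().
Python:
# hypothetical example: draw only the largest contour (by area) using contourIdx
largest_idx = max(range(len(contours)), key=lambda i: cv2.contourArea(contours[i]))
single_contour = image.copy()
cv2.drawContours(image=single_contour, contours=contours, contourIdx=largest_idx, color=(0, 0, 255), thickness=2, lineType=cv2.LINE_AA)
cv2.imshow('Largest contour only', single_contour)
cv2.waitKey(0)
cv2.destroyAllWindows()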
To get an idea of the result, we can compare the original image with the image on which the detected contours are overlaid. The following figure shows both side by side.

We can see all the contour points along the boundaries of the pen, and we can also see contours drawn along the borders of the phone. Obviously, the result is not perfect, as it depends on the binary image we feed to the algorithm. Some of the contour lines right beside the phone boundaries are only partially drawn. Wherever the pixel intensity changes, the algorithm starts a new contour line; that is why we also see a few contour points inside the camera lens of the phone.
You can also play around with the threshold value and check what kind of results you get. Even a marginally different threshold value produces a different binary image, and the resulting contour points will differ as well.
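If you want to experiment programmatically, here is a small sketch of ours (not part of the original code) that reuses img_gray from above and sweeps a few arbitrary threshold values, printing how many contours each resulting binary image produces.
Python:
# hypothetical experiment: sweep a few threshold values and compare the contour counts
for t in [100, 125, 150, 175, 200]:
    ret, thresh_t = cv2.threshold(img_gray, t, 255, cv2.THRESH_BINARY)
    contours_t, _ = cv2.findContours(thresh_t, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    print(f'threshold={t}: {len(contours_t)} contours')
    cv2.imwrite(f'thresh_{t}.jpg', thresh_t)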
Using Single Channel: Red, Green, or Blue
Just to get an idea, the following are some results when detecting contours on the red, green, and blue channels separately, as discussed in the contour detection steps above. The Python and C++ code below uses the same image as before.
Python:
import cv2
# read the image
image = cv2.imread('input/image_1.jpg')
# B, G, R channel splitting
blue, green, red = cv2.split(image)
# detect contours using blue channel and without thresholding
contours1, hierarchy1 = cv2.findContours(image=blue, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_NONE)
# draw contours on the original image
image_contour_blue = image.copy()
cv2.drawContours(image=image_contour_blue, contours=contours1, contourIdx=-1, color=(0, 255, 0), thickness=2, lineType=cv2.LINE_AA)
# see the results
cv2.imshow('Contour detection using blue channels only', image_contour_blue)
cv2.waitKey(0)
cv2.imwrite('blue_channel.jpg', image_contour_blue)
cv2.destroyAllWindows()
# detect contours using green channel and without thresholding
contours2, hierarchy2 = cv2.findContours(image=green, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_NONE)
# draw contours on the original image
image_contour_green = image.copy()
cv2.drawContours(image=image_contour_green, contours=contours2, contourIdx=-1, color=(0, 255, 0), thickness=2, lineType=cv2.LINE_AA)
# see the results
cv2.imshow('Contour detection using green channels only', image_contour_green)
cv2.waitKey(0)
cv2.imwrite('green_channel.jpg', image_contour_green)
cv2.destroyAllWindows()
# detect contours using red channel and without thresholding
contours3, hierarchy3 = cv2.findContours(image=red, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_NONE)
# draw contours on the original image
image_contour_red = image.copy()
cv2.drawContours(image=image_contour_red, contours=contours3, contourIdx=-1, color=(0, 255, 0), thickness=2, lineType=cv2.LINE_AA)
# see the results
cv2.imshow('Contour detection using red channels only', image_contour_red)
cv2.waitKey(0)
cv2.imwrite('red_channel.jpg', image_contour_red)
cv2.destroyAllWindows()
C++:
#include<opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main() {
// read the image
Mat image = imread("input/image_1.jpg");
// B, G, R channel splitting
Mat channels[3];
split(image, channels);
// detect contours using blue channel and without thresholding
vector<vector<Point>> contours1;
vector<Vec4i> hierarchy1;
findContours(channels[0], contours1, hierarchy1, RETR_TREE, CHAIN_APPROX_NONE);
// draw contours on the original image
Mat image_contour_blue = image.clone();
drawContours(image_contour_blue, contours1, -1, Scalar(0, 255, 0), 2);
imshow("Contour detection using blue channels only", image_contour_blue);
waitKey(0);
imwrite("blue_channel.jpg", image_contour_blue);
destroyAllWindows();
// detect contours using green channel and without thresholding
vector<vector<Point>> contours2;
vector<Vec4i> hierarchy2;
findContours(channels[1], contours2, hierarchy2, RETR_TREE, CHAIN_APPROX_NONE);
// draw contours on the original image
Mat image_contour_green = image.clone();
drawContours(image_contour_green, contours2, -1, Scalar(0, 255, 0), 2);
imshow("Contour detection using green channels only", image_contour_green);
waitKey(0);
imwrite("green_channel.jpg", image_contour_green);
destroyAllWindows();
// detect contours using red channel and without thresholding
vector<vector<Point>> contours3;
vector<Vec4i> hierarchy3;
findContours(channels[2], contours3, hierarchy3, RETR_TREE, CHAIN_APPROX_NONE);
// draw contours on the original image
Mat image_contour_red = image.clone();
drawContours(image_contour_red, contours3, -1, Scalar(0, 255, 0), 2);
imshow("Contour detection using red channels only", image_contour_red);
waitKey(0);
imwrite("red_channel.jpg", image_contour_red);
destroyAllWindows();
}
The following figure shows the contour detection results for all the three separate color channels.

In the above image, we can see that the contour detection algorithm is unable to find the contours properly. This is because the object borders are not well defined in a single raw channel, and the intensity differences between pixels are weak. This is why we prefer a grayscale, binary-thresholded image for detecting contours.
Drawing Contours using CHAIN_APPROX_SIMPLE
In this part, we will see how the CHAIN_APPROX_SIMPLE algorithm works and differs from the CHAIN_APPROX_NONE algorithm.
The following is the code for using the CHAIN_APPROX_SIMPLE algorithm.
Python:
"""
Now let's try with `cv2.CHAIN_APPROX_SIMPLE`
"""
# detect the contours on the binary image using cv2.CHAIN_APPROX_SIMPLE
contours1, hierarchy1 = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# draw contours on the original image for `CHAIN_APPROX_SIMPLE`
image_copy1 = image.copy()
cv2.drawContours(image_copy1, contours1, -1, (0, 255, 0), 2, cv2.LINE_AA)
# see the results
cv2.imshow('Simple approximation', image_copy1)
cv2.waitKey(0)
cv2.imwrite('contours_simple_image1.jpg', image_copy1)
cv2.destroyAllWindows()
C++:
// Now let us try with CHAIN_APPROX_SIMPLE
// detect the contours on the binary image using CHAIN_APPROX_SIMPLE
vector<vector<Point>> contours1;
vector<Vec4i> hierarchy1;
findContours(thresh, contours1, hierarchy1, RETR_TREE, CHAIN_APPROX_SIMPLE);
// draw contours on the original image
Mat image_copy1 = image.clone();
drawContours(image_copy1, contours1, -1, Scalar(0, 255, 0), 2);
imshow("Simple approximation", image_copy1);
waitKey(0);
imwrite("contours_simple_image1.jpg", image_copy1);
destroyAllWindows();
The only difference here is in the findContours() function, where we pass CHAIN_APPROX_SIMPLE as the method instead of CHAIN_APPROX_NONE.
CHAIN_APPROX_SIMPLE keeps only the end points whenever the contour runs along a vertical, horizontal, or diagonal straight line. Any points along such straight segments are discarded, and we are left with only the end points. For example, if we find the contour of a rectangle, all the contour points except the four corner points will be dismissed. This method is slightly faster than CHAIN_APPROX_NONE, because the algorithm does not store all the points, uses less memory, and therefore takes less time to find the contours.
The following image shows the results.
If you observe closely, there are almost no differences between the outputs of CHAIN_APPROX_NONE and CHAIN_APPROX_SIMPLE.
Now, why is that? If we print the contours list, we clearly get fewer coordinates for each contour area. Then how come all the contours are still fully drawn on the image?
The answer lies in the way the drawContours() function works. Although findContours() with CHAIN_APPROX_SIMPLE keeps only the vertices of straight segments, drawContours() joins all the available points. It therefore automatically fills in the line between any two consecutive contour points, even though the intermediate pixels are not in the contours list.
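A quick way to check this numerically is to count the stored points. The following small sketch (our own, assuming the thresh binary image from the first example is still available) prints the total number of contour points returned by each method; CHAIN_APPROX_SIMPLE should report far fewer.
Python:
# compare how many points each approximation method stores
contours_none, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
contours_simple, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
print('CHAIN_APPROX_NONE points:', sum(len(c) for c in contours_none))
print('CHAIN_APPROX_SIMPLE points:', sum(len(c) for c in contours_simple))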
So, how do we confirm visually that the CHAIN_APPROX_SIMPLE algorithm is actually working? The most straightforward way is to loop over the contour points manually and draw a circle at each detected contour coordinate using OpenCV. That way, we can easily confirm our assumption. We will also use a different image that makes the behavior of the algorithm easier to see. We will use the following image.
The above image is of a book against a black background, which is just perfect for our experiment.
The following code block uses the above image and applies the CHAIN_APPROX_SIMPLE algorithm. Almost all the code is the same as before, except for a few new variable names.
Python:
# to actually visualize the effect of `CHAIN_APPROX_SIMPLE`, we need a proper image
image1 = cv2.imread('input/image_2.jpg')
img_gray1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
ret, thresh1 = cv2.threshold(img_gray1, 150, 255, cv2.THRESH_BINARY)
contours2, hierarchy2 = cv2.findContours(thresh1, cv2.RETR_TREE,
cv2.CHAIN_APPROX_SIMPLE)
image_copy2 = image1.copy()
cv2.drawContours(image_copy2, contours2, -1, (0, 255, 0), 2, cv2.LINE_AA)
cv2.imshow('SIMPLE Approximation contours', image_copy2)
cv2.waitKey(0)
image_copy3 = image1.copy()
for i, contour in enumerate(contours2): # loop over one contour area
    for j, contour_point in enumerate(contour): # loop over the points
        # draw a circle on the current contour coordinate
        cv2.circle(image_copy3, (contour_point[0][0], contour_point[0][1]), 2, (0, 255, 0), 2, cv2.LINE_AA)
# see the results
cv2.imshow('CHAIN_APPROX_SIMPLE Point only', image_copy3)
cv2.waitKey(0)
cv2.imwrite('contour_point_simple.jpg', image_copy3)
cv2.destroyAllWindows()
C++:
// using a proper image for visualizing CHAIN_APPROX_SIMPLE
Mat image1 = imread("input/image_2.jpg");
Mat img_gray1;
cvtColor(image1, img_gray1, COLOR_BGR2GRAY);
Mat thresh1;
threshold(img_gray1, thresh1, 150, 255, THRESH_BINARY);
vector<vector<Point>> contours2;
vector<Vec4i> hierarchy2;
findContours(thresh1, contours2, hierarchy2, RETR_TREE, CHAIN_APPROX_SIMPLE);
Mat image_copy2 = image1.clone();
drawContours(image_copy2, contours2, -1, Scalar(0, 255, 0), 2);
imshow("None approximation", image_copy2);
waitKey(0);
imwrite("contours_none_image1.jpg", image_copy2);
destroyAllWindows();
Mat image_copy3 = image1.clone();
for(int i=0; i<contours2.size(); i=i+1){
for (int j=0; j<contours2[i].size(); j=j+1){
circle(image_copy3, (contours2[i][0], contours2[i][1]), 2, Scalar(0, 255, 0), 2);
}
}
imshow("CHAIN_APPROX_SIMPLE Point only", image_copy3);
waitKey(0);
imwrite("contour_point_simple.jpg", image_copy3);
destroyAllWindows();
Almost everything is the same as in the previous code block, except for the two for loops. The outer loop iterates over each contour area in the contours list, and the inner loop iterates over each coordinate in that area. We then draw a green circle at each coordinate using OpenCV's circle() function. Finally, we visualize and save the results to disk.
This time, nothing other than the circles we explicitly draw will appear on the image.
After executing the code, we get the following result.

The above comparison makes it much easier to visualize how the CHAIN_APPROX_SIMPLE algorithm works. The vertical and horizontal edges of the book carry only four dots at the corners; the points along the straight lines in between are completely discarded. We can also see that the letters and the bird are not fully traced with contour points; only a few coordinates carry the circles we have drawn.
The above figure pretty much confirms the behavior of the CHAIN_APPROX_SIMPLE algorithm when detecting contours using OpenCV.
Contour Hierarchies
In this section, we will learn about contour hierarchies, which can be thought of as parent-child relationships between contours. We will study them together with the different contour retrieval modes, and see how each retrieval mode affects contour detection and what hierarchy information it provides.
Parent-Child Relationship
By now, we know that contour detection lets us detect objects in an image, especially when there is a large change in intensity between the object and the background. An image can contain many individual objects scattered around (similar to the output on the first example image used in this post), or it can contain objects or shapes nested inside one another. In the latter case, there is a relationship between the outer shapes and the inner shapes, and most of the time we can safely say that the outer shape is a parent of the inner shape, or that the inner shape is a child of the outer shape.
Let us take a look at a simple example to fully understand the concept.
Since we are talking about parent-child relationships between contours and shapes, we can also annotate the above figure as follows.
Each of the numbers in the above image has a significance. The individual numbers 1, 2, 3, and 4 mark separate objects according to the contour hierarchy and parent-child relationships. At the same time, 3 and 3a are related: 3a is a child of 3. It is also worth noting that 1, 2, and 4 are all parent shapes without any children.
This also means that 1 and 2 are completely independent contours, so the numbering between them is arbitrary: what we labeled as 1 could just as well be 2, and vice versa. Contours 3 and 3a, however, are parent and child in that specific order. Contour 4 is again a completely separate shape with no relationship to any other contour.
Contour Relationship Representation
You must have noticed that findContours() returned two outputs: the contours list and the hierarchy. We have not used the hierarchy so far, nor discussed how it is represented.
The hierarchy is represented as an array. For each contour, it stores four values:
[Next, Previous, First_Child, Parent]
Let us discuss what these values mean:
- Next: The next contour in the image at the same hierarchical level. For contour 1, the next contour at the same level is 2, so its Next value is 2. Contour 3 has no contour at the same hierarchical level after it, so its Next value is -1.
- Previous: The previous contour at the same hierarchical level. This means that contour 1 will always have a Previous value of -1.
- First_Child: The first child contour of the contour we are currently considering. Contours 1 and 2 have no children at all, so their First_Child values are -1. Contour 3 has a child, so its First_Child value is the index position of 3a.
- Parent: The index position of the parent contour of the current contour. Contours 1 and 2 obviously have no Parent, so the value is -1. For contour 3a, the Parent is contour 3, and for contour 4, the parent is contour 3a.
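To make these four fields concrete, here is a small sketch of our own (not part of the original tutorial) that decodes the hierarchy array returned by findContours(); it assumes the usual output shape of (1, number_of_contours, 4) and reuses the thresh binary image from the first example.
Python:
# hypothetical sketch: decode the [Next, Previous, First_Child, Parent] fields
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for i, (nxt, prev, first_child, parent) in enumerate(hierarchy[0]):
    if parent == -1:
        print(f'contour {i}: top-level, first child = {first_child}')
    else:
        print(f'contour {i}: child of contour {parent}')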
The above theory makes sense. But how do we actually visualize the hierarchy arrays discussed above? The best way is to take a simple image, such as one containing basic lines and shapes, detect the contours and hierarchies using the different retrieval modes, and print the hierarchy values.
Different Contour Retrieval Techniques
There are four contour retrieval techniques in OpenCV, namely RETR_LIST, RETR_EXTERNAL, RETR_CCOMP, and RETR_TREE.
Let us go over each of them along with the code and use the above image to retrieve the contours.
The following few lines of code reads the image from disk, converts it to grayscale, and applies the binary thresholding.
Python:
"""
Contour detection and drawing using different extraction modes to complement
the understanding of hierarchies
"""
image2 = cv2.imread('input/custom_colors.jpg')
img_gray2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
ret, thresh2 = cv2.threshold(img_gray2, 150, 255, cv2.THRESH_BINARY)
C++:
/*
Contour detection and drawing using different extraction modes to complement the understanding of hierarchies
*/
Mat image2 = imread("input/custom_colors.jpg");
Mat img_gray2;
cvtColor(image2, img_gray2, COLOR_BGR2GRAY);
Mat thresh2;
threshold(img_gray2, thresh2, 150, 255, THRESH_BINARY);
RETR_LIST
The RETR_LIST contour retrieval method does not create any parent-child relationships between the extracted contours. So, for all the detected contour areas, the First_Child and Parent index values are always -1.
All the contours still get their corresponding Previous and Next values, as explained in the previous section.
The following code block shows an example of this retrieval mode along with the output that we get.
Python:
contours3, hierarchy3 = cv2.findContours(thresh2, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
image_copy4 = image2.copy()
cv2.drawContours(image_copy4, contours3, -1, (0, 255, 0), 2, cv2.LINE_AA)
# see the results
cv2.imshow('LIST', image_copy4)
print(f"LIST: {hierarchy3}")
cv2.waitKey(0)
cv2.imwrite('contours_retr_list.jpg', image_copy4)
cv2.destroyAllWindows()
C++:
vector<vector<Point>> contours3;
vector<Vec4i> hierarchy3;
findContours(thresh2, contours3, hierarchy3, RETR_LIST, CHAIN_APPROX_NONE);
Mat image_copy4 = image2.clone();
drawContours(image_copy4, contours3, -1, Scalar(0, 255, 0), 2);
imshow("LIST", image_copy4);
waitKey(0);
imwrite("contours_retr_list.jpg", image_copy4);
destroyAllWindows();
The output:
LIST: [[[ 1 -1 -1 -1]
[ 2 0 -1 -1]
[ 3 1 -1 -1]
[ 4 2 -1 -1]
[-1 3 -1 -1]]]
We can clearly see that the third and fourth values (First_Child and Parent) of every detected contour are -1, just as we expected.
RETR_EXTERNAL
The RETR_EXTERNAL contour retrieval method is a really interesting one. It detects only the outermost (parent) contours and ignores all child contours. So, inner contours such as 3a and 4 will not have any points drawn on them.
The best thing is that the effect of this retrieval mode is visible right away in the output image.
Python:
contours4, hierarchy4 = cv2.findContours(thresh2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
image_copy5 = image2.copy()
cv2.drawContours(image_copy5, contours4, -1, (0, 255, 0), 2, cv2.LINE_AA)
# see the results
cv2.imshow('EXTERNAL', image_copy5)
print(f"EXTERNAL: {hierarchy4}")
cv2.waitKey(0)
cv2.imwrite('contours_retr_external.jpg', image_copy5)
cv2.destroyAllWindows()
C++:
vector<vector<Point>> contours4;
vector<Vec4i> hierarchy4;
findContours(thresh2, contours4, hierarchy4, RETR_EXTERNAL, CHAIN_APPROX_NONE);
Mat image_copy5 = image2.clone();
drawContours(image_copy5, contours4, -1, Scalar(0, 255, 0), 2);
imshow("EXTERNAL", image_copy5);
waitKey(0);
imwrite("contours_retr_external.jpg", image_copy4);
destroyAllWindows();
The following are the outputs. The resultant image is also given below for visualization.
EXTERNAL: [[[ 1 -1 -1 -1]
[ 2 0 -1 -1]
[-1 1 -1 -1]]]
The above output image shows only the points drawn on contours 1, 2, and 3. Contours 3a and 4 are omitted as they are child contours.
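A small usage note of our own: because RETR_EXTERNAL keeps only the outermost contours, the length of the returned list gives a quick count of the distinct objects in the image.
Python:
# with RETR_EXTERNAL, each separate object contributes exactly one outer contour
contours_ext, _ = cv2.findContours(thresh2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print('number of outer objects:', len(contours_ext))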
RETR_CCOMP
Unlike RETR_EXTERNAL, RETR_CCOMP retrieves all the contours in an image. Along with that, it applies a two-level hierarchy to all the shapes or objects in the image: all the outer contours get hierarchy level 1, and all the inner contours get hierarchy level 2.
But what if there is a contour inside a contour that already has hierarchy level 2, as with contour 4 inside contour 3a? In that case, contour 4 gets hierarchy level 1 again, and if there were any contours inside contour 4, they would get hierarchy level 2.
The following image has been numbered to visualize the above explanation and make it easier to understand.
The above image shows the hierarchy levels as HL-1 and HL-2 for levels 1 and 2, respectively. Now, let us take a look at the code and the output hierarchy array.
Python:
contours5, hierarchy5 = cv2.findContours(thresh2, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
image_copy6 = image2.copy()
cv2.drawContours(image_copy6, contours5, -1, (0, 255, 0), 2, cv2.LINE_AA)
# see the results
cv2.imshow('CCOMP', image_copy6)
print(f"CCOMP: {hierarchy5}")
cv2.waitKey(0)
cv2.imwrite('contours_retr_ccomp.jpg', image_copy6)
cv2.destroyAllWindows()
C++:
vector<vector<Point>> contours5;
vector<Vec4i> hierarchy5;
findContours(thresh2, contours5, hierarchy5, RETR_CCOMP, CHAIN_APPROX_NONE);
Mat image_copy6 = image2.clone();
drawContours(image_copy6, contours5, -1, Scalar(0, 255, 0), 2);
imshow("EXTERNAL", image_copy6);
// cout << "EXTERNAL:" << hierarchy5;
waitKey(0);
imwrite("contours_retr_ccomp.jpg", image_copy6);
destroyAllWindows();
The following is the output hierarchy array.
CCOMP: [[[ 1 -1 -1 -1]
[ 3 0 2 -1]
[-1 -1 -1 1]
[ 4 1 -1 -1]
[-1 3 -1 -1]]]
This time we can see that the Next, Previous, First_Child, and Parent relationships are all filled in according to the retrieval method, and all the contours are detected as expected. Note that the Previous value of the first contour area is -1, and contours that do not have any Parent also get the value -1 in that field.
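To make the two-level idea more tangible, the following is a small sketch of our own (not part of the original code) that uses the Parent field from the RETR_CCOMP hierarchy to draw the outer boundaries and the holes in different colors.
Python:
# hypothetical sketch: green for outer contours (no parent), red for holes (with a parent)
contours_cc, hierarchy_cc = cv2.findContours(thresh2, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
image_levels = image2.copy()
for i, h in enumerate(hierarchy_cc[0]):
    color = (0, 255, 0) if h[3] == -1 else (0, 0, 255)
    cv2.drawContours(image_levels, contours_cc, i, color, 2, cv2.LINE_AA)
cv2.imshow('CCOMP hierarchy levels', image_levels)
cv2.waitKey(0)
cv2.destroyAllWindows()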
RETR_TREE
Just like RETR_CCOMP, RETR_TREE also retrieves all the contours. In addition, it builds a complete hierarchy, with levels not restricted to 1 and 2. Each contour gets its own hierarchy level depending on how deeply it is nested and on the parent-child relationships it belongs to.
Let us take a look at the following figure.
From the above figure, it is clear that contours 1, 2, and 3 are at the same level, that is, level 0. Contour 3a is at hierarchy level 1, as it is a child of contour 3. Contour 4 is a new contour area at hierarchy level 2.
The following is the code to retrieve contours using RETR_TREE mode.
Python:
contours6, hierarchy6 = cv2.findContours(thresh2, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
image_copy7 = image2.copy()
cv2.drawContours(image_copy7, contours6, -1, (0, 255, 0), 2, cv2.LINE_AA)
# see the results
cv2.imshow('TREE', image_copy7)
print(f"TREE: {hierarchy6}")
cv2.waitKey(0)
cv2.imwrite('contours_retr_tree.jpg', image_copy7)
cv2.destroyAllWindows()
C++:
vector<vector<Point>> contours6;
vector<Vec4i> hierarchy6;
findContours(thresh2, contours6, hierarchy6, RETR_TREE, CHAIN_APPROX_NONE);
Mat image_copy7 = image2.clone();
drawContours(image_copy7, contours6, -1, Scalar(0, 255, 0), 2);
imshow("EXTERNAL", image_copy7);
// cout << "EXTERNAL:" << hierarchy6;
waitKey(0);
imwrite("contours_retr_tree.jpg", image_copy7);
destroyAllWindows();
The output hierarchy array that we get is as follows.
TREE: [[[ 3 -1 1 -1]
[-1 -1 2 0]
[-1 -1 -1 1]
[ 4 0 -1 -1]
[-1 3 -1 -1]]]
Finally, let’s look at the complete image with all the contours drawn when using RETR_TREE mode.
All the contours are drawn as expected, and all the contour areas are clearly visible. We can also infer that contours 3 and 3a are two separate contours as they have different contour boundaries and areas. And at the same time, contour 3a is a child of contour 3.
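If you would like to see these nesting levels numerically, here is a small sketch of ours (not part of the original tutorial) that walks up the Parent links in the RETR_TREE hierarchy, reusing contours6 and hierarchy6 from the Python code above, and prints the depth of each contour.
Python:
# hypothetical sketch: compute how deeply each contour is nested
for i in range(len(contours6)):
    depth = 0
    parent = hierarchy6[0][i][3]
    while parent != -1:
        depth += 1
        parent = hierarchy6[0][parent][3]
    print(f'contour {i}: nesting depth {depth}')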
Now, you can try all the above concepts using the code provided and on different images as well. Try with images containing different shapes, and experiment with different threshold values as well. Notice how the results and contours would differ in each case.
A Run Time Comparison of Different Contour Retrieval Methods
Until now, we have seen how each of the contour retrieval methods work. One thing to notice here is that all the retrieval methods have different run times as they all have different contour extraction levels and hierarchies. The following table shows the comparison of run time for each of the methods discussed so far.
| Contour Retrieval Method | Time Taken (in seconds) |
| --- | --- |
| RETR_LIST | 0.000382 |
| RETR_EXTERNAL | 0.000554 |
| RETR_CCOMP | 0.001845 |
| RETR_TREE | 0.005594 |
If we observe closely, we can infer some pretty interesting details from the above table. RETR_LIST and RETR_EXTERNAL take the least amount of time to execute: RETR_LIST does not define any hierarchy, and RETR_EXTERNAL retrieves only the outermost contours, so these timings make sense.
RETR_CCOMP takes the next longest time; it retrieves all the contours and defines a two-level hierarchy. Finally, RETR_TREE takes the most time, as it retrieves all the contours and builds the full hierarchy of parent-child relationships.
The time differences above may not seem significant, but it is worth keeping in mind that the methods do behave differently in terms of which contours they extract and which hierarchy levels they define.
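If you would like to reproduce a similar comparison on your own machine (your absolute numbers will differ), a minimal timing sketch using Python's time module could look like the following; the setup and variable names here are our own.
Python:
import time
# roughly time each retrieval mode on the same thresholded image
modes = {'RETR_LIST': cv2.RETR_LIST, 'RETR_EXTERNAL': cv2.RETR_EXTERNAL,
         'RETR_CCOMP': cv2.RETR_CCOMP, 'RETR_TREE': cv2.RETR_TREE}
for name, mode in modes.items():
    start = time.perf_counter()
    cv2.findContours(thresh2, mode, cv2.CHAIN_APPROX_NONE)
    print(f'{name}: {time.perf_counter() - start:.6f} seconds')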
Limitations
Until now we have seen some interesting examples and encouraging results. However, there are cases where the algorithm might fail to deliver meaningful and good results. Let us take a look at a few such cases.
- We have seen that the contour detection algorithm works really well when the image has a dark background and a clearly distinguishable object. In contrast, the following image has a bright object (a puppy), but the background is cluttered with other objects. Observe how the contours are incomplete, and how multiple or incorrect contours are detected because of the background clutter.

- Contour detection can also fail when the background of the image contains a lot of lines (or even a few unwanted lines) crossing the image.
Taking Your Learning Further
If you have learned something interesting in this article and would like to expand your knowledge, you may like the Computer Vision 1 course offered by OpenCV. It is a very hands-on course, perfect for getting started and up to speed with OpenCV and computer vision. The best part is that you can take it in either Python or C++, whichever you choose. You can visit the course page here to know more about it.
Summary
In this post, we learned about contours and contour detection using OpenCV. We covered not only the theory, but also worked through complete hands-on code in both Python and C++ to gain first-hand experience of contour detection with OpenCV.
We learned two built-in functions of OpenCV for finding and drawing contours in images.
findContours()
drawContours()
We also learned about two different contour approximation methods:
CHAIN_APPROX_SIMPLE
CHAIN_APPROX_NONE
Key takeaways:
- The contour detection algorithm works really well when the image has a dark background and a clearly distinguishable object in the foreground
- The algorithm struggles when the input image has a bright object (such as the white puppy used in the limitations section above) and the background has a lot of clutter
- The method can also fail when the background of the image has a lot of lines (or even a few unwanted lines) crossing the image
We would encourage you to try all the above methods using the code provided, on different images of your choice. Try images containing different shapes, and experiment with different threshold values. Do let us know in the comments how the detected contours differ in each case.