In this tutorial, we will learn about object tracking using OpenCV's tracking API, which was introduced in OpenCV 3.0. We will learn how and when to use the 8 different trackers available in OpenCV 4.2 — BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN, MOSSE, and CSRT. We will also learn the general theory behind modern tracking algorithms.
This problem has been perfectly solved by my friend Boris Babenko as shown in this flawless real-time face tracker below! Jokes aside, the animation demonstrates what we want from an ideal object tracker — speed, accuracy, and robustness to occlusion.
Demo of Object tracking using OpenCV
If you do not have the time to read the entire post, just watch this video and learn the usage in this section. But if you really want to learn about object tracking, read on.
What is Object Tracking?
Simply put, locating an object in successive frames of a video is called tracking.
The definition sounds straightforward, but in computer vision and machine learning, tracking is a very broad term that encompasses conceptually similar but technically different ideas. For example, all the following different but related ideas are generally studied under object tracking:
- Dense Optical flow: These algorithms help estimate the motion vector of every pixel in a video frame.
- Sparse optical flow: These algorithms, like the Kanade-Lucas-Tomasi (KLT) feature tracker, track the location of a few feature points in an image.
- Kalman Filtering: A very popular signal processing algorithm used to predict the location of a moving object based on prior motion information. One of the early applications of this algorithm was missile guidance! Also as mentioned here, “the on-board computer that guided the descent of the Apollo 11 lunar module to the moon had a Kalman filter”.
- Meanshift and Camshift: These are algorithms for locating the maxima of a density function. They are also used for tracking.
- Single object trackers: In this class of trackers, the first frame is marked using a rectangle to indicate the location of the object we want to track. The object is then tracked in subsequent frames using the tracking algorithm. In most real-life applications, these trackers are used in conjunction with an object detector.
- Multiple object track finding algorithms: In cases when we have a fast object detector, it makes sense to detect multiple objects in each frame and then run a track finding algorithm that identifies which rectangle in one frame corresponds to a rectangle in the next frame.
Multiple Object Tracking has come a long way. It uses object detection and novel motion prediction algorithms to get accurate tracking information. For example, DeepSort uses the YOLO network to get blazing-fast inference speed. It is based on SORT.
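To make the sparse optical flow idea concrete, here is a minimal single-point Lucas-Kanade step written directly in NumPy. This is an illustrative toy (no image pyramids, fixed window size), not OpenCV's pyramidal KLT implementation:

```python
import numpy as np

def lucas_kanade_step(prev, curr, pt, win=7):
    """Estimate the displacement of one feature point between two grayscale
    frames using the basic Lucas-Kanade least-squares step."""
    x, y = int(pt[0]), int(pt[1])
    r = win // 2
    # Spatial gradients (central differences) and temporal difference
    Ix = (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1)) / 2.0
    Iy = (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0)) / 2.0
    It = curr - prev
    # Stack the brightness-constancy equations from the window around pt
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # Least-squares solution of A d = b gives the displacement (dx, dy)
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d
```

In practice you would use cv2.calcOpticalFlowPyrLK, which adds image pyramids and iterative refinement so larger motions can be handled.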
Tracking vs Detection
If you have ever played with OpenCV face detection, you know that it works in real-time and you can easily detect the face in every frame. So, why do you need tracking in the first place? Let’s explore the different reasons you may want to track objects in a video and not just do repeated detection.
- Tracking is faster than detection: Tracking algorithms are usually faster than detection algorithms, and the reason is simple. When you are tracking an object that was detected in the previous frame, you already know a lot about its appearance, as well as its location, direction, and speed of motion in previous frames. In the next frame, you can use all this information to predict the location of the object and perform a small search around the expected location to accurately locate it. A good tracking algorithm uses all the information it has about the object up to that point, while a detection algorithm always starts from scratch. Therefore, an efficient system usually runs object detection on every nth frame and employs the tracking algorithm in the n-1 frames in between. Why not simply detect the object in the first frame and track it forever? It is true that tracking benefits from the extra information it has, but you can also lose track of an object when it goes behind an obstacle for an extended period of time or moves so fast that the tracking algorithm cannot catch up. Tracking algorithms also tend to accumulate errors, so the bounding box slowly drifts away from the object it is tracking. To fix these problems, a detection algorithm is run every so often. Detection algorithms are trained on a large number of examples of the object, so they have more knowledge about the general class of the object. Tracking algorithms, on the other hand, know more about the specific instance of the class they are tracking.
- Tracking can help when detection fails: If you are running a face detector on a video and the person’s face gets occluded by an object, the face detector will most likely fail. A good tracking algorithm, on the other hand, will handle some level of occlusion. In the video below, you can see Dr. Boris Babenko, the author of the MIL tracker, demonstrate how the MIL tracker works under occlusion.
- Tracking preserves identity: The output of object detection is an array of rectangles that contain the object. However, there is no identity attached to the object. For example, in the video below, a detector that detects red dots will output rectangles corresponding to all the dots it has detected in a frame. In the next frame, it will output another array of rectangles. In the first frame, a particular dot might be represented by the rectangle at location 10 in the array, and in the second frame, it could be at location 17. While using detection on a frame we have no idea which rectangle corresponds to which object. On the other hand, tracking provides a way to literally connect the dots!
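The detect-every-nth-frame design mentioned above can be sketched as a small driver loop. The detect and make_tracker callables here are hypothetical stand-ins for a real detector and one of the OpenCV trackers described later:

```python
def run_hybrid(frames, detect, make_tracker, n=10):
    """Run full detection every n-th frame and a cheaper tracker in between.

    detect(frame) -> box is an expensive detector with class-level knowledge;
    make_tracker(frame, box) -> tracker wraps a fast instance-level tracker
    whose update(frame) returns (ok, box).
    """
    boxes = []
    tracker = None
    for i, frame in enumerate(frames):
        if i % n == 0 or tracker is None:
            box = detect(frame)                 # periodic re-detection
            tracker = make_tracker(frame, box)  # re-initialize the tracker
        else:
            ok, box = tracker.update(frame)     # cheap tracking step
            if not ok:                          # drift/occlusion: fall back
                box = detect(frame)
                tracker = make_tracker(frame, box)
        boxes.append(box)
    return boxes
```

The fallback branch is what corrects tracker drift: whenever the tracker reports failure, detection takes over and re-seeds it.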
Recently, re-identification has become the focus in multiple object tracking. FairMOT uses joint detection and re-ID tasks to get highly efficient re-identification and tracking results. Its detection pipeline is an anchor-less approach based on CenterNet. FairMOT is not as fast as the traditional OpenCV tracking algorithms, but it lays the groundwork for future Deep Learning based trackers.
Object tracking using OpenCV 4 – the Tracking API
OpenCV 4 comes with a tracking API that contains implementations of many single object tracking algorithms. There are 8 different trackers available in OpenCV 4.2 — BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN, MOSSE, and CSRT.
Note: OpenCV 3.2 has implementations of these 6 trackers — BOOSTING, MIL, KCF, TLD, MEDIANFLOW, and GOTURN. OpenCV 3.1 has implementations of these 5 trackers — BOOSTING, MIL, KCF, TLD, and MEDIANFLOW. OpenCV 3.0 has implementations of the following 4 trackers — BOOSTING, MIL, TLD, and MEDIANFLOW.
Update: In OpenCV 3.3, the tracking API has changed. The code checks for the version and then uses the corresponding API.
Before we provide a brief description of the algorithms, let us see the setup and usage. In the commented code below we first set up the tracker by choosing a tracker type — BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN, MOSSE, or CSRT. We then open a video and grab a frame. We define a bounding box containing the object for the first frame and initialize the tracker with the first frame and the bounding box. Finally, we read frames from the video and just update the tracker in a loop to obtain a new bounding box for the current frame. Results are subsequently displayed.
Object tracking using OpenCV – C++ Code
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>
#include <opencv2/core/ocl.hpp>

using namespace cv;
using namespace std;

// Convert to string
#define SSTR( x ) static_cast< std::ostringstream & >( \
( std::ostringstream() << std::dec << x ) ).str()

int main(int argc, char **argv)
{
    // List of tracker types in OpenCV 3.4.1
    string trackerTypes[8] = {"BOOSTING", "MIL", "KCF", "TLD", "MEDIANFLOW", "GOTURN", "MOSSE", "CSRT"};
    // vector <string> trackerTypes(types, std::end(types));

    // Create a tracker
    string trackerType = trackerTypes[2];

    Ptr<Tracker> tracker;

#if (CV_MINOR_VERSION < 3)
    {
        tracker = Tracker::create(trackerType);
    }
#else
    {
        if (trackerType == "BOOSTING")
            tracker = TrackerBoosting::create();
        if (trackerType == "MIL")
            tracker = TrackerMIL::create();
        if (trackerType == "KCF")
            tracker = TrackerKCF::create();
        if (trackerType == "TLD")
            tracker = TrackerTLD::create();
        if (trackerType == "MEDIANFLOW")
            tracker = TrackerMedianFlow::create();
        if (trackerType == "GOTURN")
            tracker = TrackerGOTURN::create();
        if (trackerType == "MOSSE")
            tracker = TrackerMOSSE::create();
        if (trackerType == "CSRT")
            tracker = TrackerCSRT::create();
    }
#endif

    // Read video
    VideoCapture video("videos/chaplin.mp4");

    // Exit if video is not opened
    if (!video.isOpened())
    {
        cout << "Could not read video file" << endl;
        return 1;
    }

    // Read first frame
    Mat frame;
    bool ok = video.read(frame);
    if (!ok)
    {
        cout << "Cannot read video file" << endl;
        return 1;
    }

    // Define initial bounding box
    Rect2d bbox(287, 23, 86, 320);

    // Uncomment the line below to select a different bounding box
    // bbox = selectROI(frame, false);

    // Display bounding box
    rectangle(frame, bbox, Scalar(255, 0, 0), 2, 1);
    imshow("Tracking", frame);

    // Initialize tracker with first frame and bounding box
    tracker->init(frame, bbox);

    while (video.read(frame))
    {
        // Start timer
        double timer = (double)getTickCount();

        // Update the tracking result
        bool ok = tracker->update(frame, bbox);

        // Calculate Frames per second (FPS)
        float fps = getTickFrequency() / ((double)getTickCount() - timer);

        if (ok)
        {
            // Tracking success : Draw the tracked object
            rectangle(frame, bbox, Scalar(255, 0, 0), 2, 1);
        }
        else
        {
            // Tracking failure detected
            putText(frame, "Tracking failure detected", Point(100, 80), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(0, 0, 255), 2);
        }

        // Display tracker type on frame
        putText(frame, trackerType + " Tracker", Point(100, 20), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(50, 170, 50), 2);

        // Display FPS on frame
        putText(frame, "FPS : " + SSTR(int(fps)), Point(100, 50), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(50, 170, 50), 2);

        // Display frame
        imshow("Tracking", frame);

        // Exit if ESC pressed
        int k = waitKey(1);
        if (k == 27)
        {
            break;
        }
    }

    return 0;
}
Object tracking using OpenCV – Python Code
import cv2
import sys

(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')

if __name__ == '__main__':

    # Set up tracker.
    # Instead of MIL, you can also use
    tracker_types = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
    tracker_type = tracker_types[2]

    if int(minor_ver) < 3:
        tracker = cv2.Tracker_create(tracker_type)
    else:
        if tracker_type == 'BOOSTING':
            tracker = cv2.TrackerBoosting_create()
        if tracker_type == 'MIL':
            tracker = cv2.TrackerMIL_create()
        if tracker_type == 'KCF':
            tracker = cv2.TrackerKCF_create()
        if tracker_type == 'TLD':
            tracker = cv2.TrackerTLD_create()
        if tracker_type == 'MEDIANFLOW':
            tracker = cv2.TrackerMedianFlow_create()
        if tracker_type == 'GOTURN':
            tracker = cv2.TrackerGOTURN_create()
        if tracker_type == 'MOSSE':
            tracker = cv2.TrackerMOSSE_create()
        if tracker_type == 'CSRT':
            tracker = cv2.TrackerCSRT_create()

    # Read video
    video = cv2.VideoCapture("videos/chaplin.mp4")

    # Exit if video not opened.
    if not video.isOpened():
        print("Could not open video")
        sys.exit()

    # Read first frame.
    ok, frame = video.read()
    if not ok:
        print("Cannot read video file")
        sys.exit()

    # Define an initial bounding box
    bbox = (287, 23, 86, 320)

    # Uncomment the line below to select a different bounding box
    # bbox = cv2.selectROI(frame, False)

    # Initialize tracker with first frame and bounding box
    ok = tracker.init(frame, bbox)

    while True:
        # Read a new frame
        ok, frame = video.read()
        if not ok:
            break

        # Start timer
        timer = cv2.getTickCount()

        # Update tracker
        ok, bbox = tracker.update(frame)

        # Calculate Frames per second (FPS)
        fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)

        # Draw bounding box
        if ok:
            # Tracking success
            p1 = (int(bbox[0]), int(bbox[1]))
            p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
            cv2.rectangle(frame, p1, p2, (255, 0, 0), 2, 1)
        else:
            # Tracking failure
            cv2.putText(frame, "Tracking failure detected", (100, 80), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)

        # Display tracker type on frame
        cv2.putText(frame, tracker_type + " Tracker", (100, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50, 170, 50), 2)

        # Display FPS on frame
        cv2.putText(frame, "FPS : " + str(int(fps)), (100, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50, 170, 50), 2)

        # Display result
        cv2.imshow("Tracking", frame)

        # Exit if ESC pressed
        k = cv2.waitKey(1) & 0xff
        if k == 27:
            break
Object tracking using OpenCV – the Algorithms
In this section, we will dig a bit into different tracking algorithms. The goal is not to have a deep theoretical understanding of every tracker, but to understand them from a practical standpoint.
Let me begin by first explaining some general principles behind tracking. In tracking, our goal is to find an object in the current frame given we have tracked the object successfully in all ( or nearly all ) previous frames.
Since we have tracked the object up until the current frame, we know how it has been moving. In other words, we know the parameters of the motion model. The motion model is just a fancy way of saying that you know the location and the velocity ( speed + direction of motion ) of the object in previous frames. If you knew nothing else about the object, you could predict the new location based on the current motion model, and you would be pretty close to where the new location of the object is.
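As a minimal sketch, a constant-velocity motion model simply extrapolates from the last two tracked positions (a toy example, not part of any OpenCV tracker's API):

```python
def predict_next(positions):
    """Constant-velocity motion model: predict the next location from the
    last two tracked positions (velocity = last position - previous one)."""
    (x1, y1), (x2, y2) = positions[-2], positions[-1]
    return (2 * x2 - x1, 2 * y2 - y1)
```

For example, an object seen at (0, 0) and then (2, 1) is predicted to appear next at (4, 2). Kalman filtering generalizes this idea with uncertainty estimates.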
But we have more information than just the motion of the object. We know how the object looks in each of the previous frames. In other words, we can build an appearance model that encodes what the object looks like. This appearance model can be used to search in a small neighborhood of the location predicted by the motion model to more accurately predict the location of the object.
The motion model predicts the approximate location of the object. The appearance model fine tunes this estimate to provide a more accurate estimate based on appearance.
If the object was very simple and did not change its appearance much, we could use a simple template as an appearance model and look for that template. However, real life is not that simple. The appearance of an object can change dramatically. To tackle this problem, in many modern trackers, this appearance model is a classifier that is trained in an online manner. Don’t panic! Let me explain in simpler terms.
The job of the classifier is to classify a rectangular region of an image as either an object or background. The classifier takes in an image patch as input and returns a score between 0 and 1 to indicate the probability that the image patch contains the object. The score is 0 when it is absolutely sure the image patch is the background and 1 when it is absolutely sure the patch is the object.
In machine learning, we use the word “online” to refer to algorithms that are trained on the fly at run time. An offline classifier may need thousands of examples to train, but an online classifier is typically trained using very few examples at run time.
A classifier is trained by feeding it positive ( object ) and negative ( background ) examples. If you want to build a classifier for detecting cats, you train it with thousands of images containing cats and thousands of images that do not contain cats. This way the classifier learns to differentiate what is a cat and what is not. While building an online classifier, we do not have the luxury of having thousands of examples of the positive and negative classes.
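To make the idea of online training concrete, here is a toy online classifier: a logistic regression model updated one example at a time. Real trackers use far richer features and learners (boosted classifiers, correlation filters); this only illustrates the online update loop:

```python
import numpy as np

class OnlinePatchClassifier:
    """Toy online appearance model: logistic regression updated one example
    at a time, standing in for the online classifiers discussed above."""

    def __init__(self, n_features, lr=0.5):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        # Probability that feature vector x comes from an object patch
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def update(self, x, label):
        # One stochastic-gradient step on the log loss
        # (label: 1 = object patch, 0 = background patch)
        err = self.score(x) - label
        self.w -= self.lr * err * x
        self.b -= self.lr * err
```

Each new frame contributes a handful of positive and negative patches via update, so the model adapts to the specific instance being tracked.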
Let’s look at how different tracking algorithms approach this problem of online training.
BOOSTING Tracker
This tracker is based on an online version of AdaBoost — the algorithm that the HAAR cascade based face detector uses internally. This classifier needs to be trained at runtime with positive and negative examples of the object. The initial bounding box supplied by the user ( or by another object detection algorithm ) is taken as a positive example for the object, and many image patches outside the bounding box are treated as the background.
Given a new frame, the classifier is run on every pixel in the neighborhood of the previous location and the score of the classifier is recorded. The new location of the object is the one where the score is maximum. So now we have one more positive example for the classifier. As more frames come in, the classifier is updated with this additional data.
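The search step described above can be sketched as a brute-force scan over a neighborhood of the previous location, where score_fn stands in for the online classifier:

```python
def best_location(score_fn, center, radius=8):
    """Scan a square neighborhood of the previous location and return the
    position where the classifier score is highest."""
    cx, cy = center
    best, best_s = center, float("-inf")
    for x in range(cx - radius, cx + radius + 1):
        for y in range(cy - radius, cy + radius + 1):
            s = score_fn(x, y)
            if s > best_s:
                best, best_s = (x, y), s
    return best
```

The winning position becomes both the new tracking result and a fresh positive example for the classifier.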
Pros: None. This algorithm is a decade old and works ok, but I could not find a good reason to use it especially when other advanced trackers (MIL, KCF) based on similar principles are available.
Cons: Tracking performance is mediocre. It does not reliably know when tracking has failed.
MIL Tracker
This tracker is similar in idea to the BOOSTING tracker described above. The big difference is that instead of considering only the current location of the object as a positive example, it looks in a small neighborhood around the current location to generate several potential positive examples. You may be thinking that it is a bad idea because in most of these “positive” examples the object is not centered.
This is where Multiple Instance Learning ( MIL ) comes to the rescue. In MIL, you do not specify positive and negative examples, but positive and negative “bags”. The collection of images in the positive bag are not all positive examples. Instead, only one image in the positive bag needs to be a positive example!
In our example, a positive bag contains the patch centered on the current location of the object and also patches in a small neighborhood around it. Even if the current location of the tracked object is not accurate, when samples from the neighborhood of the current location are put in the positive bag, there is a good chance that this bag contains at least one image in which the object is nicely centered. MIL project page has more information for people who like to dig deeper into the inner workings of the MIL tracker.
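A positive bag can be sketched as the set of patches sampled on a grid around the current box. The sampling grid here (radius, step) is an arbitrary illustrative choice:

```python
import numpy as np

def make_positive_bag(frame, box, radius=4, step=2):
    """Collect patches around the current box into a MIL-style positive bag.
    Only one patch in the bag needs to be a true positive."""
    x, y, w, h = box
    bag = []
    for dx in range(-radius, radius + 1, step):
        for dy in range(-radius, radius + 1, step):
            patch = frame[y + dy:y + dy + h, x + dx:x + dx + w]
            if patch.shape == (h, w):   # skip patches falling off the frame
                bag.append(patch)
    return bag
```

Even if (x, y) is slightly off the object, one of these shifted patches is likely to be well centered, which is all MIL requires of a positive bag.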
Pros: The performance is pretty good. It does not drift as much as the BOOSTING tracker and it does a reasonable job under partial occlusion. If you are using OpenCV 3.0, this might be the best tracker available to you. But if you are using a higher version, consider KCF.
Cons: Tracking failure is not reported reliably. Does not recover from full occlusion.
KCF Tracker
KCF stands for Kernelized Correlation Filters. This tracker builds on the ideas presented in the previous two trackers. This tracker utilizes the fact that the multiple positive samples used in the MIL tracker have large overlapping regions. This overlapping data leads to some nice mathematical properties that are exploited by this tracker to make tracking faster and more accurate at the same time.
Pros: Accuracy and speed are both better than MIL and it reports tracking failure better than BOOSTING and MIL. If you are using OpenCV 3.1 and above, I recommend using this for most applications.
Cons: Does not recover from full occlusion.
TLD Tracker
TLD stands for Tracking, Learning, and Detection. As the name suggests, this tracker decomposes the long-term tracking task into three components — (short term) tracking, learning, and detection. From the author’s paper, “The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary.
The learning estimates detector’s errors and updates it to avoid these errors in the future.” The output of this tracker tends to jump around a bit. For example, if you are tracking a pedestrian and there are other pedestrians in the scene, this tracker can sometimes temporarily track a different pedestrian than the one you intended to track. On the positive side, this tracker appears to work over a larger range of scale change, motion, and occlusion. If you have a video sequence where the object is hidden behind another object, this tracker may be a good choice.
Pros: Works the best under occlusion over multiple frames. Also, tracks best over scale changes.
Cons: Lots of false positives making it almost unusable.
MEDIANFLOW Tracker
Internally, this tracker tracks the object in both forward and backward directions in time and measures the discrepancies between these two trajectories. Minimizing this forward-backward error enables the tracker to reliably detect tracking failures and select reliable trajectories in video sequences.
In my tests, I found this tracker works best when the motion is predictable and small. Unlike other trackers that keep going even when the tracking has clearly failed, this tracker knows when the tracking has failed.
Pros: Excellent tracking failure reporting. Works very well when the motion is predictable and there is no occlusion.
Cons: Fails under large motion.
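The forward-backward error described above can be sketched as follows, where fwd and bwd are hypothetical callables that track a point one frame forward and one frame backward, respectively:

```python
import numpy as np

def filter_points(points, fwd, bwd):
    """Median-Flow style filtering: track each point forward then backward,
    and keep only the points whose forward-backward error (distance between
    the start point and where the backward track lands) is at or below the
    median error."""
    errs = [np.linalg.norm(np.asarray(p) - np.asarray(bwd(fwd(p))))
            for p in points]
    med = np.median(errs)
    return [p for p, e in zip(points, errs) if e <= med]
```

Points whose round trip lands far from where they started are treated as unreliable; when most points fail, the tracker can declare tracking failure instead of drifting.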
GOTURN tracker
Out of all the tracking algorithms in the tracker class, this is the only one based on a Convolutional Neural Network (CNN). From the OpenCV documentation, we know it is “robust to viewpoint changes, lighting changes, and deformations”. But it does not handle occlusion very well.
Note: GOTURN, being a CNN based tracker, uses a Caffe model for tracking. The Caffe model and the prototxt file must be present in the directory in which the code is run. These files can also be downloaded from the opencv_extra repository, concatenated, and extracted before use.
Update: GOTURN object tracking algorithm has been ported to OpenCV.
MOSSE tracker
Minimum Output Sum of Squared Error (MOSSE) uses adaptive correlation filters for object tracking, producing stable correlation filters when initialized using a single frame. The MOSSE tracker is robust to variations in lighting, scale, pose, and non-rigid deformations. It also detects occlusion based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears. The MOSSE tracker also operates at a very high frame rate (450 fps and even more). To add to the positives, it is also very easy to implement, is as accurate as other complex trackers, and much faster. But, on a performance scale, it lags behind the deep learning based trackers.
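The peak-to-sidelobe ratio that MOSSE uses for failure detection can be computed from a correlation response map as below; the 11x11 exclusion window around the peak is an illustrative choice:

```python
import numpy as np

def peak_to_sidelobe_ratio(resp, exclude=5):
    """Peak-to-sidelobe ratio of a correlation response map: how far the
    peak stands above the mean of the surrounding (sidelobe) region, in
    units of the sidelobe standard deviation."""
    peak = resp.max()
    py, px = np.unravel_index(resp.argmax(), resp.shape)
    # Mask out a small window around the peak; the rest is the sidelobe
    mask = np.ones_like(resp, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    side = resp[mask]
    return (peak - side.mean()) / (side.std() + 1e-12)
```

A sharp, isolated peak gives a large ratio (confident track); a flat or multi-modal response gives a small one, which MOSSE takes as a sign of occlusion or failure.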
CSRT tracker
In the Discriminative Correlation Filter with Channel and Spatial Reliability (DCF-CSR), a spatial reliability map adjusts the filter support to the part of the selected region that is suitable for tracking. This allows the tracked region to be enlarged and better localized, improving tracking of non-rectangular regions or objects. It uses only two standard features (HoG and Colornames). It also operates at a comparatively lower frame rate (around 25 fps) but gives higher accuracy for object tracking.
Subscribe & Download Code
If you liked this article and would like to download code (C++ and Python) and example images used in this post, please click here. Alternately, sign up to receive a free Computer Vision Resource Guide. In our newsletter, we share OpenCV tutorials and examples written in C++/Python, and Computer Vision and Machine Learning algorithms and news. Video Credits: All videos used in this post are in the public domain — Charlie Chaplin, Race Car, and Street Scene. Dr. Boris Babenko generously gave permission to use his animation in this post.
References
- Bolme, David S.; Beveridge, J. Ross; Draper, Bruce A.; Lui, Yui Man. Visual Object Tracking using Adaptive Correlation Filters. In CVPR, 2010.
Thank you for your amazing articles, Satya.
Thanks for the kind words. Glad, you liked it.
Hi Satya,
Where can I find sample codes for multiple object tracking in Python?
I can only find sample code in C++ at official website
http://docs.opencv.org/3.1.0/d5/d07/tutorial_multitracker.html
By the way, I have two suggestions to the question about Multiple camera tracking.
1) Centroid, the centre point of the ROI. The trace of the centre of the ROI should be steady. Identify an object across cameras by calculating the shortest distance of the ROI between the first and second camera. If possible, calibrate the value by having a man walk in a straight line across all cameras.
2) Histogram of a normalised ROI. The percentage distribution of colour can help to identify an object across cameras.
3) Use both the centroid and the histogram to identify an object across multiple cameras.
By the way, I am developing FPGA for AI.
If you have any suggestions or comments, please feel free to let me know.
Thanks.
Thank you so much for this article! It was very informative. Just for my learning, could you point me to the chaplin.mp4 video where you had defined the bounding rectangle in the code?
Thanks, Steve.
The bounding box defined is for the very first frame of the video. The other frames are tracked.
I changed the initial bounding rectangle in my code as (160, 25, 90, 320).
If I play the video with any video player (in Windows 10) then Chaplin’s coordinates are (287, 23, 86, 320).
But when I run the OpenCV program (C++ ver.) the first frame is somewhat different and the coordinates (160, 25, 90, 320) are more suitable.
That is so odd. Thanks for letting me know.
Great article and tutorial. I tested the code and FPS is tracked properly. However, the box doesn’t show up. I guess it is my environment’s fault.
If you are using your own video, the initial box location needs to be changed. You can do so by uncommenting the line
bbox = selectROI(frame, false);
This will allow you to select a bounding box.
For some reason when I try to compile the code I can’t find opencv2/tracking.hpp.
I have OpenCV 3.1.0 installed. Any thoughts?
Yikes! I forgot to mention that you need to compile OpenCV with opencv_contrib.
Ok, thanks 🙂
Hi Satya, first of all thank you for sharing this incredible tutorial. However, I cannot figure out how to set up opencv_contrib. Do you know of, or have by chance, any source explaining opencv_contrib for Python (Anaconda on Windows)?
i have the same problem…
when i run the code i face with this error:
AttributeError: ‘module’ object has no attribute ‘Tracker_create’
i have no idea of cmake or … 🙁
I compiled with opencv_contrib and the C++ sample of tracking works, but I get AttributeError: ‘module’ object has no attribute ‘Tracker_create’ when I try to run the python sample. Any Ideas?
Amazing article Satya and congratulations on becoming one of the 30 AI influencers.
Thanks!
Hi, nice article! But I am confused that you did not introduce the tracking algorithm called the Lucas-Kanade method in OpenCV.
These are tracking algorithms implemented in the tracker class in OpenCV 3. As mentioned in the introduction, there are a whole bunch of other algorithms that are implemented elsewhere in OpenCV. I will cover those in a future post.
Great stuff!! Thanks a lot for sharing this! Do you know if it is possible to pass parameters to the trackers in python? Specifically, I am hoping that I can pass a parameter allowing the bounding box to change size? In the code that you provide above, the tracker.update() always returns a bounding box of constant size, regardless of the object changing size (in my case I am tracing a person moving away from the camera).
I have not tried it in python but in C++ it is possible. The box sizes do change in case of the TLD and MEDIANFLOW trackers.
Did you manage to update the returned bounding box size? I’m using KCF and it always returns the same size regardless of the object’s distance to the camera.
Hello Satya Sir,
Thank you for your wonderful post. I wish you could have published it 2-3 months back. I could have got more marks in my research project (Which was on object tracking)..hahaha..So many things are clear now. Thank you once again.
Regards,
Siddhant Mehta
:), glad you liked it.
Hello Satya,
I am building an application using multiple cameras tracking the same objects.
They have different points of view of the same object but they are capturing the same scene synchronously.
Is there anyway to realise that we are tracking the same object in different cameras without having any previous camera-scene calibration?
I was thinking on grabbing some internal ‘classifiers’ output from the trackers and comparing them in the different camera outputs to get the matches (in the case of multiple objects being tracked is when that distinction makes sense).
Would you recommend any algorithm/paper/reference about the approach I should follow?
Thank you in advance for this marvellous blog and congratulation on becoming one of the 30 AI influencers! 😀
Thanks a bunch!
In this case tracking without calibration is extremely difficult.
Let’s say you are doing pedestrian tracking. In the most extreme case, one camera is looking at the front of the object and the other is looking at the back. So color information may be unreliable.
But if you are doing something simpler — e.g. your objects are all different color ( or you can design the objects you want to track ) then it becomes a much simpler problem.
The internal classifier of the tracker is built on only views of the object it has seen. So, it may have limited success.
Satya, it’s a nice tutorial.
There is this problem that on opencv-3.1.0 the KCF tracker has no bounding box at all with this implementation. I saw that maybe it has to do with this bug.
https://github.com/opencv/opencv_contrib/issues/640
Any thoughts?
Thanks, Andrei. I have updated the post with this information.
Hello Satya,it’s a nice tutorial.
There is a problem that KCF can be used for multiple target tracking.
My goal is to use the AdaBoost algorithm for detection, which generates multiple targets in a vector, but the code defines the test bounding box as Rect2d bbox(287, 23, 86, 320);
Do you have a solution?
It was nice post. thanks!
Thanks!
Hi,Satya!
My goal is to use the AdaBoost algorithm for detection, which generates multiple targets in a vector, but the code defines the test bounding box as Rect2d bbox(287, 23, 86, 320);
How do I convert a Rect to a Rect2d?
Thanks.
Hi Satya, Thanks for the great tutorial. I have one problem though. I am using Python 3 with the latest OpenCV on a mac. I get the following error. I am not sure whether you have answered this before, but this is the first time I am trying this on a mac. Sorry if it is a duplicate.
cv2.imshow(“Tracking”, frame)
cv2.error: /Users/travis/build/skvark/opencv-python/opencv/modules/highgui/src/window.cpp:583: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage
I am not sure, but it looks like OpenCV may not be installed properly. How did you install OpenCV ?
Satya,
I’ve recently been exploring tracking in my research and you have really nailed a lot of the points, well done. A very well thought out article thank you for this.
Kyle
Thanks!
Satya, thanks for the great post.
I’m working on object detection + tracking. The detection is color based; I am able to detect my object of interest, get an enclosing rectangle from its contour, and pass that as an ROI to the tracker. I need to set the bounding box ‘bbox’ every n frames, in case tracking fails. It seems that calling
tracker.init(frame, current_bbox) does not update the tracker. Any advice?
Many thanks
Thanks, Sami.
I just checked OpenCV’s source code for the tracker and init does not do anything if it is already initialized. Similarly, update does not use the passed bounding box. There is no other function to reset initialization. So the only way to make it work at this time is copy the tracking module and make changes. From what I can tell it is not that hard. You can just force an init even if it has been initialized before. Of course, you can request a feature and see if they accept it.
Sorry about that.
I found a simple fix was to comment out the first if condition from Tracker::init in opencv_contrib-master/modules/tracking/src/tracker.cpp.
Hi Satya, thanks for this great tutorial. In such a short blog-post you covered a wide range of topics – motion, detection, tracking – hats off to you. But the best part is that under 5 minutes of video where you have evaluated various openCV tracking options under different use-cases. That was very compact & precise. One can learn so much from that video in so less time. Looking forward to see more tutorials from your on computer vision & tracking from you in future.
Thanks for the kind words, Kunal.
Dear Satya :
detectMultiScale(InputArray image, vector<Rect>& objects, double scaleFactor = 1.1, int minNeighbors = 3, int flags = 0, Size minSize = Size(), Size maxSize = Size()) returns the detections as a vector of Rect. However, the tracking function MultiTracker::add(const String& trackerType, const Mat& image, const Rect2d& boundingBox) takes the target as a Rect2d. Do you know how to deal with it?
thank you very much
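On the question above: in C++, cv::Rect should convert to cv::Rect2d implicitly (or via an explicit Rect2d(r)); in Python it is just a matter of casting the four values to float. A small sketch (the cascade/tracker names in the usage comment are illustrative, not from this post):

```python
def rect_to_rect2d(rect):
    """Convert an integer (x, y, w, h) detection rectangle, as returned by
    detectMultiScale, into the float bbox a tracker's add()/init() expects."""
    x, y, w, h = rect
    return (float(x), float(y), float(w), float(h))

# Hypothetical usage, for every face found by a cascade detector:
# for rect in face_cascade.detectMultiScale(gray):
#     multi_tracker.add(cv2.TrackerKCF_create(), frame, rect_to_rect2d(rect))
```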
Would you please tell me where the program crashed in OpenCV with GOTURN?
Same thing here.
” tracker = cv2.Tracker_create(“MIL”)
AttributeError: module ‘cv2’ has no attribute ‘Tracker_create'”
Any help?
You need OpenCV 3.2 compiled with opencv_contrib.
I did it!
My settings for cmake were:
cmake -D CMAKE_BUILD_TYPE=RELEASE
-D CMAKE_INSTALL_PREFIX=/usr/local
-D OPENCV_EXTRA_MODULES_PATH= /Users/MatheusTorquato/opencv/opencv_contrib/modules
-D PYTHON3_LIBRARY=/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/config-3.6m-darwin/libpython3.6.dylib
-D PYTHON3_INCLUDE_DIR=/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/include/python3.6m/
-D PYTHON3_EXECUTABLE=$VIRTUAL_ENV/bin/python
-D BUILD_opencv_python2=OFF
-D BUILD_opencv_python3=ON
-D INSTALL_PYTHON_EXAMPLES=ON
-D INSTALL_C_EXAMPLES=OFF
-D BUILD_EXAMPLES=ON
-D BUILD_opencv_cvv=OFF
-D BUILD_opencv_adas=OFF ..
Everything looks fine.
– Configuring: https://www.dropbox.com/s/7plzmetg2hng70p/configuring_output.txt?dl=0
– Making: https://www.dropbox.com/s/fxpdvfvj5q181qr/making_output.txt?dl=0
– Installing: https://www.dropbox.com/s/k3slaghx4tx4f84/Installing_output.txt?dl=0
Python Version – 3.6.0
OpenCV Version – ‘3.2.0-dev’
Operating System – macOS Sierra
I’ve installed everything using this tutorial: http://www.pyimagesearch.com/2016/12/05/macos-install-opencv-3-and-python-3-5/#comment-420398
Can you please share a link with the installation steps? I have tried but am still getting the same error.
I have OpenCV 3.2 compiled with opencv_contrib, but I’m still getting the same error.
It shows this error. Were you able to run the code with GOTURN?
=== ERROR ===
OpenCV Error: Assertion failed (input.dims() == 4 && (input.type() == CV_32F || input.type() == CV_64F)) in allocate, file /tmp/opencv3-20170207-60337-1ativmf/opencv-3.2.0/opencv_contrib/modules/dnn/src/layers/convolution_layer.cpp, line 90
OpenCV Error: Assertion failed (The following error occured while making allocate() for layer “conv11”: input.dims() == 4 && (input.type() == CV_32F || input.type() == CV_64F)) in allocate, file /tmp/opencv3-20170207-60337-1ativmf/opencv-3.2.0/opencv_contrib/modules/dnn/src/layers/convolution_layer.cpp, line 90
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /tmp/opencv3-20170207-60337-1ativmf/opencv-3.2.0/opencv_contrib/modules/dnn/src/layers/convolution_layer.cpp:90: error: (-215) The following error occured while making allocate() for layer “conv11”: input.dims() == 4 && (input.type() == CV_32F || input.type() == CV_64F) in function allocate
Hi,
I am new to openCV and python. If I try to run the code sample from Github I get the following error:
File “tracker.py”, line 11, in
tracker = cv2.Tracker_create(tracker_type)
AttributeError: ‘module’ object has no attribute ‘Tracker_create’
What am I doing wrong ?
You need OpenCV 3.2 compiled with opencv_contrib. You are probably missing the opencv_contrib module.
Hello,
I get the same error, and I compiled OpenCV 3.2.0 with opencv_contrib 3.2.0.
What do you think is wrong? :/
Could it be the Python version? I’m using 3.4.
If the code where you’re creating your tracker looks like this:
tracker = cv2.Tracker_create(“MIL”)
Do this instead:
tracker = cv2.TrackerMIL_create()
The method for creating a tracker seems to have changed in the more recent versions of OpenCV.
THANK YOU A LOT!
I thought I was going crazy!
Who the hell just randomly changes method names without any kind of backward compatibility? Problems like this with OpenCV drive me nuts.
I tried that too, but I am still getting the same error.
Is there any alternative way to create the tracker?
I got this same error. I am using OpenCV 2.4.13. Do any of these trackers work with my version of OpenCV?
They were released in OpenCV 3, so unfortunately they won’t work with your version.
What do you suggest if I want to implement tracking with body detection in OpenCV 2.4.13? I want to track bodies once detected, so I do not have to run detection on every frame.
You can either extract the code from OpenCV 3 and make it work on OpenCV 2.4, or use MeanShift and CamShift.
That way of creating a tracker is deprecated. Searching through Google, I found the most up-to-date way: Ptr<Tracker> tracker = TrackerMIL::create(); (it worked for me)
Thank you very much for your awesome article.
I tested your code and it works very well.
Would you please tell me if any of these algorithms can track multiple objects?
Thanks in advance.
Thanks. You can try the multi tracker in OpenCV
http://docs.opencv.org/trunk/d5/d07/tutorial_multitracker.html
Oh! 😀 Thanks a lot 😀
Hi sir,
Can you send me the source code for tracking multiple people in a video?
Thanks
Your code worked fine.
But the multi tracker gave me an error:
tracker = cv2.MultiTracker_create()
AttributeError: ‘module’ object has no attribute ‘MultiTracker_create’
Can you please help?
Hi Satya, thank you for your great tutorial! I’m working on a detection and tracking system. Did you implement any logic to compare and update tracks every n frames with detections from, for example, a Haar detector? Isn’t there a track manager in OpenCV or anything comparable?
[email protected]:~/OpenCV_Examples_Jag/test $ pkg-config –cflags –libs opencv
-I/usr/local/include/opencv -I/usr/local/include -L/usr/local/lib -lopencv_stitching -lopencv_superres -lopencv_videostab -lopencv_aruco -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_dpm -lopencv_freetype -lopencv_fuzzy -lopencv_line_descriptor -lopencv_optflow -lopencv_reg -lopencv_saliency -lopencv_stereo -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_rgbd -lopencv_surface_matching -lopencv_tracking -lopencv_datasets -lopencv_text -lopencv_face -lopencv_plot -lopencv_dnn -lopencv_xfeatures2d -lopencv_shape -lopencv_video -lopencv_ximgproc -lopencv_calib3d -lopencv_features2d -lopencv_flann -lopencv_xobjdetect -lopencv_objdetect -lopencv_ml -lopencv_xphoto -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs -lopencv_photo -lopencv_imgproc -lopencv_core
[email protected]:~/OpenCV_Examples_Jag/test $ g++ -ggdb `pkg-config –cflags –libs opencv` objtrk.cpp -o objtrk
[email protected]:~/OpenCV_Examples_Jag/test $ ls
objtrk objtrk.cpp
[email protected]:~/OpenCV_Examples_Jag/test $ ./objtrk
Could not read video file
Can you tell me what this problem means? I have installed all the necessary packages and libraries. I am a newbie to OpenCV.
Thanks anyways!!
Thanks!
I already subscribed but am unable to download the C++ source code. Please help.
Hi, how can we track multiple objects?
Hi Satya and thanks for the post,
Are any of those algorithms suitable for multi-target tracking, like pedestrians? Or are they more single-object trackers? Thanks
Hi
Do you know how I can measure the movement of an object?
Hi, I’m new to OpenCV. Do these algorithms simply track any moving object? Is it possible to track specific types of objects?
What is the best way to track Mario in this video?
I tried many algorithms like CAMShift, and tried the object tracking algorithms in OpenCV 3 (BOOSTING, MIL, TLD, MEDIANFLOW), but all of them failed with Super Mario.
you can see the video from here (https://drive.google.com/open?id=0B95Sp237mrsTT3drTlNPdElJOXlOb1gtQjBwWkNiZzBpTXlr )
Hello Satya, Good day!
Can is ask something?
What algorithm can I use to track a set of points and measure their velocity as an output?
This is for my thesis.
Thank you! I am hoping for immediate response Sir.
“Can is ask something?”
Did you notice that in asking this question you have already asked something and therefore made this question pointless by its very existence?
😀
What might you have said that would fix this inexplicable blend of contradiction and paradox?
Let us see:
1. Can I ask you something else?
Less troublesome than “Can is ask something?” but then you would have to wait for permission for asking your real question, which you haven’t done in your comment. So I assume this won’t work.
2. Can you answer my question?
This might work. You don’t ask permission for asking your question because you are going to ask anyway. But this is a yes/no question and look carefully at each of the answers. Saying no would result in a paradox. We don’t want to mess with a paradox. So the question essentially forces you to answer ‘Yes.’ (Or ignore it in perpetuity and thus escaping the paradox while still choosing not to answer.)
3. Can you answer my next question?/Can you answer my following question.
Ah! We might have hit the jackpot. This seems to work. While still not obligating the other person to answer your main question, it is in line with the general practice of saying “Can you fetch me a glass of water?” which somehow we all interpret as requests/commands.
Why not just *not ask* such a question instead? You really want your main question answered, and you are not going to wait for permission to ask it, so just ask it right away. If you really do want to prepend something maybe some form of a request should do.
“Please answer my following question.”
Disclaimer: No offense intended to anyone. The author of this post just spent about 10-15 mins pondering over the repercussions of this reply’s title and wanted to inflict the experience of moving through a multitude of such curious rabbit holes on the rest of the world.
http://lmgtfy.com/?q=rethorical+question
lol Wondermagnet, ..
I’m new. I copied and pasted the code and get an error at line 13:
Ptr<Tracker> tracker = Tracker::create( "MIL" );
Visual Studio 2017 WIN64 https://uploads.disquscdn.com/images/dd4fd983019bc6494d8eaec72e154dcd7f796211efdc2d75b577437e94c6b10b.jpg
OpenCV 3.2 complied with opencv_contrib
Please help.
Facing the same issue.
I had a problem like that too, using Python though. I think the issue here is a syntax change between OpenCV 3.0.0 and 3.3.0 (or whichever version after 3.0.0 introduced it).
In Python, instead of doing:
tracker = cv2.Tracker_create(“MIL”)
I wrote:
tracker = cv2.TrackerMIL_create()
Which worked as expected. Could be a similar problem in your case.
I think it’s either “TrackerMIL::create();” or “cv::TrackerMIL::create();” that you should be writing.
So what about the other trackers? There are six: ‘BOOSTING’, ‘MIL’,’KCF’, ‘TLD’, ‘MEDIANFLOW’, ‘GOTURN’
In the updated version of OpenCV, I believe the tracker functions become:
cv2.TrackerMIL_create()
cv2.TrackerKCF_create()
cv2.TrackerTLD_create()
cv2.TrackerGOTURN_create()
What about ‘BOOSTING’ and ‘MEDIANFLOW’?
Could you share the code somewhere? I’m having a rough time fixing the issues with the versions 🙂
Use Python 3.6 and the PyCharm IDE; that will solve your version error. On Ubuntu 16.04, updating python3 only gets you to Python 3.5.x, so you have to install Python 3.6 explicitly (Python 2.7.x is the default), and handling pip for every version is very tedious. PyCharm will create a venv; select Python 3.6 as the interpreter and add packages according to your needs. It’s much easier than juggling pip, pip3, and pip3.6 separately.
Hi, can you help me with a program please? I’m getting an error with KCF and the video isn’t opening.
I’m new to Python and OpenCV. I am using OpenCV 3.3, but I’m having an issue with the above create calls.
cv2.TrackerMIL_create() or cv2.TrackerKCF_create() etc. return a message like
AttributeError: module ‘cv2’ has no attribute ‘TrackerKCF_create’
What might I be doing wrong?
I was getting the same error and didn’t know how to solve it. I tried everything available on the internet, then started using Python 3.6 and the PyCharm IDE. It creates a venv; select Python 3.6 as the interpreter and add packages according to your needs. Much easier than using pip, pip3, and pip3.6 separately.
Had the same error. Just do a
pip install opencv-contrib-python
or
pip2.7 install opencv-contrib-python
Helped me 🙂
Due to a change in the tracking API, the C++ code does not work anymore. You have to use TrackerMIL::create() instead of Tracker::create(“MIL”).
Hi Satya, thanks for the tutorial! I’m really new to object tracking. I was wondering whether any of these algorithms can show the orientation of the tracked object, like the homography method?
No. They just track a bounding box around the object.
I am trying to track juggling balls. I tried each of these six tracking algorithms and found that none of them performed well at all; but regular old CamShift did work quite well!
Here is a video showing the performance of my tracking algorithm: https://youtu.be/TCct–xtKp0
Link to code (tracking_camshift.py in the Python Tutorials folder): https://drive.google.com/open?id=0B7QqDexrxSfwcmROb1ByNkhqOEU
Do you have any recommendations for tracking a juggling ball in highly optimized video?
I am on the same page with you. No expected result.
Thanks a lot!
it really helped a lot
You are welcome!
Awesome tutorial!! I’m teaching myself OpenCV and I want to play with implementing the object tracker on Android. The official OpenCV 3.2 documentation says there’s support for the full library; however, when I look at the JNI folder in the Android SDK, the tracking module is missing. I found the opencv2/objdetect module, but it doesn’t implement all the trackers you’re talking about. They are all working C++, so it should be easy to port them to Java through JNI; should I do that myself? It’d be great if you could point me to any reference on using the KCF tracker on Android. Thanks!!
Due to a change in the tracking API, the C++ code does not work anymore. You have to use TrackerMIL::create() instead of Tracker::create(“MIL”).
Thanks, yes things have changed in OpenCV 3.3. Will fix this after testing.
Special thanks for this article!
When the size of the object changes in real time, how do I track it?
Thanks!
One of the tracking methods in that list is MedianFlow. It allows for size change.
Thanks for this nice overview of OpenCV tracking models.
I’m particularly interested in tracking multiple small objects accurately, including when they touch. At the moment I’m doing this manually which works reasonably well but is far from perfect.
In your section ‘Tracking preserves identity’, there is a video showing various dots being tracked. What tracker & model was used for this? I’d greatly appreciate any detail you can share!
That video is just a demo. No tracker was used. However, opencv does have a multi tracker API. Check it out here
http://docs.opencv.org/trunk/d5/d07/tutorial_multitracker.html
So since this post OpenCV 3.3 has come out. However, I am still having problems with the GOTURN tracker.
I get an error: “OpenCV Error: Unspecified error (FAILED: fs.is_open(). Can’t open “goturn.prototxt”) in cv::dnn::ReadProtoFromTextFile, file C:\projects\opencv-python\opencv\modules\dnn\src\caffe\caffe_io.cpp, line 1113
C:\projects\opencv-python\opencv\modules\dnn\src\caffe\caffe_io.cpp:1113: error: (-2) FAILED: fs.is_open(). Can’t open “goturn.prototxt” in function cv::dnn::ReadProtoFromTextFile”
Has anyone got this working?
Even I am getting the same error. How did you resolve it?
I am using the KCF tracker to track an object and combine something with the motion. The problem is that I cannot move the object fast, because it goes out of the bounding box and I lose the track.
Is there any way to make this smoother?
Thanks
The OpenCV implementation is not a good one. Try this: https://github.com/joaofaro/KCFcpp. I find it more reliable.
Thanks a bunch!
Hi. I’m working on a project in which I have to count the number of objects passing over a conveyor belt. I was wondering if you consider any of the methods mentioned above suitable for keeping track of the objects as they appear, move forward, and finally disappear, taking into account that I’m running the software on a Raspberry Pi 3.
Thanks for this great post!
I installed OpenCV 3.3.0 recently and tested the Tracking API demo, the bug in GOTURN tracker is still there, it reports a ‘network loading error’.
So basically the output bounding boxes are the trackers’ output, and KCF, BOOSTING, and MIL don’t support scale and size changes? My ROI size doesn’t change even when the size of the tracked object does.
Also, I was wondering why there are no trackers based on Kalman filters or particle filters. Would developing similar trackers be a good idea or a waste of time?
Which trackers are able to handle moving cameras? And which ones are more suitable for people tracking?
Any recommendations?
Thanks!
I’m on OpenCV 3.3.0 and I have this error:
line 20, in
tracker = cv2.TrackerMIL_create()
AttributeError: module ‘cv2.cv2’ has no attribute ‘TrackerMIL_create’
Does anyone know what’s wrong?
delete cv2
Hello, I’m using OpenCV 3.3.1. Sadly the two most useful trackers, MIL and KCF, are quite buggy. MIL will always return true on update(); KCF only accepts a Rect2d without decimals (or will crash) and also crashes when using grayscale Mats as input on update().
Also KCF is quite fragile. So I’m still searching for an easy tracking method to implement something similar to the Youtube video here: https://www.youtube.com/watch?v=AhGHMBCCFN4
The example above is relatively easy because of the solid background and the fact that the balls do not change size or appearance. You can use camshift or even meanshift.
https://docs.opencv.org/trunk/db/df8/tutorial_py_meanshift.html
Hi everyone, I’m also a beginner in OpenCV. My problem is: how can I stop the bounding box from going outside the frame (when the moving object goes outside the camera view, the bounding box follows it out)? I want to keep the tracking box inside the frame. How can I do that? Thank you.
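One way to keep the box on screen is to clamp the tracker’s output before drawing it. A sketch, assuming an (x, y, w, h) bbox and a known frame size:

```python
def clamp_bbox(bbox, frame_w, frame_h):
    """Clamp an (x, y, w, h) box so it stays fully inside the frame."""
    x, y, w, h = bbox
    w = min(w, frame_w)               # box can't be wider/taller than the frame
    h = min(h, frame_h)
    x = max(0, min(x, frame_w - w))   # push the top-left corner back inside
    y = max(0, min(y, frame_h - h))
    return (x, y, w, h)
```

Note this only constrains what you draw; the tracker itself still estimates the (possibly off-screen) position internally.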
Hi,
Did you try the MOSSE tracking algorithm recently implemented in OpenCV? What do you think of this algorithm?
I just copied and pasted the code and am receiving the error below:
Reloaded modules: cv2
Could not open video
An exception has occurred, use %tb to see the full traceback.
SystemExit
Any remedy please ?
For my application running on iPhone, I am using OpenCV via opencv2.framework in Xcode, with Objective-C and C++ as languages. My development worked well until, based on this post, I wanted to introduce cv::Tracker, which is not included in opencv2.framework.
1. I rebuilt and installed from the source opencv-3.4.0 with opencv_contrib-master using CMake and terminal command make and install.
2. In Xcode I have set
HEADER_SEARCH_PATHS = /usr/local/include
LIBRARY_SEARCH_PATHS = /usr/local/lib
When compiling I get a long list of errors of the style:
Undefined symbols for architecture x86_64:
“cv::error(int, cv::String const&, char const*, char const*, int)”, referenced from:
cv::Mat::Mat(int, int, int, void*, unsigned long) in OpenCVRenderer.o
“cv::Mat::operator=(cv::Scalar_ const&)”, referenced from:
cv::Mat::Mat(int, int, int, cv::Scalar_ const&) in OpenCVRenderer.o
cv::Mat::Mat(cv::Size_, int, cv::Scalar_ const&) in OpenCVRenderer.o
“cv::Mat::deallocate()”, referenced from:
cv::Mat::release() in OpenCVRenderer.o
“cv::polylines(cv::_InputOutputArray const&, cv::_InputArray const&, bool, cv::Scalar_ const&, int, int, int)”, referenced from:
.
.
.
Any help?
Thank you.
Hello Mr Satya Mallick, I’m currently using OpenCV 3.4 and I want to use its tracking API. Do you have a tutorial on how to use tracking.hpp and where to download it? Maybe from opencv_contrib on GitHub?
The problem I’m facing is that when I compile my program I get an LNK2019 unresolved external symbol, so I may have a problem in the linker input, but I can’t resolve it.
Thank you for the very helpful post!
You may want to add that starting from **OpenCV 3.4.1**, you also have the CSRT tracking, and you can create it in Python using `cv2.TrackerCSRT_create()`
Sorry for this dumb question, but how are you compiling and running the C++ code? I am getting an error for a shared library:
“error while loading shared libraries: libopencv_tracking.so.3.4: cannot open shared object file: No such file or directory”
Has anyone encountered this type of error?
Thanks in advance!
Trying to compile tracker.cpp from the OpenCV examples and getting an error in feature.hpp saying ‘CV_OVERRIDE does not name a type’.
i’m using python 3.6 and these errors are showing up https://uploads.disquscdn.com/images/6455cc216cc81771a06eaf295c942ca0564ab4c35073e83e708437ba13ba5022.jpg after installing opencv-contrib-python.
i’m using python 3.6 and these errors are showing up after installing opencv-contrib-python. please help!
https://uploads.disquscdn.com/images/678332c161257376bbb29992feccd2fcc7847e220ec14c0c6a844b9075ae0ce4.jpg
File “C:/Users/ayush/untitled4.py”, line 4
(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split(‘.’)
SyntaxError: invalid syntax. Help me!!
Thank you for the informative introduction on this topic.
A tiny typo here: “KFC stands for…”. It’s KCF, not KFC. :-p
Great article. Which version of OpenCV allows me to try all (or the most) tracking algorithms??
They are all in the current version of the contrib 3.4.1.
Furthermore, there are two more: MOSSE and CSRT.
GOTURN is still not working…
https://docs.opencv.org/3.4.1/d0/d0a/classcv_1_1Tracker.html
Look here, OpenCV:
https://github.com/opencv/opencv/tree/3.4.1
and here, contrib:
https://github.com/opencv/opencv_contrib/tree/3.4.1
Installation can be tricky… 🙁
Would be nice if information about these two could be updated on the page. (?)
Thanks for the excellent suggestion. Will definitely update this page soon.
Great 🙂
Could you please tell me what data your pros and cons are based on? Or is this information based on your personal experience?
What are your sources?
It would be nice to know, because I may want to refer to your page.
Would you help me with the steps for configuring opencv contrib?
Fantastic article! Could you kindly tell me how I could build an algorithm out of this that can track buildings? Can it be done in an unsupervised manner? I was looking at selectROI(). I am currently attempting to detect features based on a static image (a building) and then detect those features over a video stream. I’m not too sure if I should go for a Haar cascade classifier instead. Any pointers?
I ran the whole code from the command line after downloading it. But after running, only a window labeled ‘ROI select’ opens. After marking the ROI, nothing happens. Kindly help!
When I pressed the escape key, the prerecorded chaplin.mp4 video started, but tracking wasn’t working even after I selected the ROI. Why is this happening?
I have solved the problem myself with the help of Google. The necessary codecs were missing, and there were also some indentation problems in the code. After both issues were fixed, tracking worked normally.
I’m facing a few errors, can you help ?
https://uploads.disquscdn.com/images/4fee74a55ca5646e1272568bcb1f796a9e3bb9f6344491f16718220e7421a165.jpg
Thanks a lot for this excellent article!! I’ve implemented KCF and it worked like a charm. However, the bounding box is not localizing the object; it has a fixed width and height starting from position x, y.
How can I make the bounding box localize the object when it is moving away from the camera? Does the algorithm only take a neighbourhood of the fixed bounding box, rather than shrinking or expanding the box around the particular object?
Is it possible to modify the algorithm in that way?
Any suggestions please!
Again,
Thanks a lot!!
Great post. MIL and KCF seem to be the better ones, but OpenCV 3.4.1 does have two new additional trackers as well. Any example code for tracking multiple objects (e.g. people)? I assume a separate tracker for each would be required – can this be dynamic, assuming you won’t know how many people are going to enter the scene? I saw some videos on YouTube, but it looks like the trackers do get mixed up when people cross paths (minor occlusion) sometimes. Also, I understand dlib also has a correlation tracker? Have you had a chance to check that out yet? Cheers.
Hi Sata,
thanks a lot for your several posts, really a great help.
Still I’m facing a problem with cv2.TrackerKCF_create() when I run your script I have this error occuring each time:
” module object has no attribute ‘Tracker_create()’
Can you tell me how to fix it ? I am using Jetson TX2 with OpenCV3.4.0, Python2.7.12 and Python3.5.2.
Thanks in advance
Vincent
You need to install the contrib libraries of OpenCV: https://github.com/opencv/opencv_contrib
Hi Satya,
I am curious to know if the object trackers are supported on Android platform? Would it be feasible to build the trackers for Android target?
HI,
Thanks for your nice and detailed post. I appreciate not only this post but the whole site you are managing with great passion.
However, I think the C++ code on this page has a minor issue. The tracker is not initialized in the code. I guess there should be tracker->init(frame, bbox); before the while loop.
Thanks,
Meer
Hi Meer Sadeq Billah!
Thanks for your comment. It’s great to hear that the blog has been useful for you 🙂
I have fixed the initialisation issue. Thanks for bringing it to our notice.
Thanks
Vishwesh
Fantastic Article Thanks for the help!!
I would like to add something to it.
while True:
    ok, frame = video.read()
    cv2.imshow("Select ROI", frame)
    # optical_flow = fg.apply(frame)
    # box = cv2.selectROI(frame, False)
    key = cv2.waitKey(1) & 0xff
    if key == ord("i"):
        bbox = cv2.selectROI(frame, False)
        p1 = (int(bbox[0]), int(bbox[1]))
        p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
        pt = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
        cv2.rectangle(frame, p1, p2, (255, 255, 255), 2, 1)
        cv2.imshow('Frame ROI', frame)
        # cv2.destroyAllWindows()
        break
This piece of code lets us select the ROI when we are ready (on pressing ‘i’) rather than immediately on the first frame.
Thanks, Saurabh.
Sir,
I got this error:
“csrt”: cv2.TrackerCSRT_create,
AttributeError: module ‘cv2.cv2’ has no attribute ‘TrackerCSRT_create’
Does anyone know how KCF detects tracking failures? Also, although MIL and OLB are both pretty bad at it, do they have any failure detection mechanisms?
Hey Satya, nice post. I was wondering how I can detect the object automatically, so that I don’t have to provide the ROI manually. I think that will help a lot in solving real-world problems. I know that I must use a detection algorithm, but what kind of detection will be best for this car-tracking kind of thing? I have tested TensorFlow / Caffe models but they are heavy and eat up more CPU. What can be done in this regard?
Unfortunately, you have to use a DL model for good accuracy. Try YOLOv3. It gives about 5 FPS on a CPU. Here is the link:
https://learnopencv.com/deep-learning-based-object-detection-using-yolov3-with-opencv-python-c/
Thanks for the quick reply. But a mere 5 FPS would be really too slow for production. I see that in the tracker examples you mention a higher FPS; I have no idea how you achieved that. Or is it just because of the predefined ROI?
Thanks again..
Tracking is different from detection. You can do tracking at very high FPS, as mentioned in the post. You can run detection (which is expensive) every second or so and track the intermediate frames, thus effectively getting real-time performance.
Thanks for the differentiation and the suggestion. I would like to know about CPU usage in real time: with both detection and tracking implemented, what would a typical CPU usage percentage be?
Thanks..
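The detect-rarely, track-often pattern described above can be sketched as a small loop. The `detect` and `make_tracker` callables here are placeholders for your detector (e.g. a YOLO wrapper) and your cv2 tracker factory; neither name comes from the post.

```python
def detect_and_track(frames, detect, make_tracker, detect_every=30):
    """Run an expensive detector only every `detect_every` frames and a
    cheap tracker in between.

    detect(frame)   -> bbox (x, y, w, h) or None       (expensive, rare)
    make_tracker()  -> object with init()/update()     (cheap, per frame)
    Returns the per-frame bounding boxes (None where nothing was found).
    """
    tracker, out = None, []
    for i, frame in enumerate(frames):
        if i % detect_every == 0:
            bbox = detect(frame)          # expensive call, runs rarely
            tracker = make_tracker()      # re-seed the tracker from it
            if bbox is not None:
                tracker.init(frame, bbox)
            out.append(bbox)
        else:
            ok, bbox = tracker.update(frame)   # cheap call, every frame
            out.append(bbox if ok else None)
    return out
```

At 30 FPS video with `detect_every=30`, the detector runs once per second while the tracker carries the other 29 frames, which is where the effective real-time performance comes from.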
It was nice reading this good post. I tried it and it runs, but I can’t get any bounding box or tracking; the car just keeps going when I press Enter. What key can I press to track?
Thanks
Hi, Satya! I receive an error while trying your code: File “Object_tracking_webcam.py”, line 100, in
tracker = cv2.Tracker_create(tracker_type)
AttributeError: module ‘cv2.cv2’ has no attribute ‘Tracker_create’. My cv2 version is 3.4.2.
Do you know if there is a bug in OpenCV 3.4.1 with the MIL and OLB trackers? I am using YOLOv2 to detect and reset the tracker, but every once in a while it fails during initialization. I get an error similar to this:
https://stackoverflow.com/questions/50462466/could-not-initialized-tracker-in-opencv
Thanks.
Nevermind. Fixed.
I’m using OpenCV 3.3.0 and I cannot initialize the tracker. I get an error stating ‘NameError: name ‘tracker’ is not defined.’ It is complaining about ok = tracker.init(frame, bbox). Any ideas?
You need to install the OpenCV contrib libraries; look at https://www.youtube.com/watch?v=MMDABTypnZg. The code is in part 2.