In this post, we will learn how to Read, Write and Display a video using OpenCV. Code in C++ and Python is shared for study and practice.
Before we do that, allow me a digression into a bit of history of video capture.
On June 15, 1878, in Palo Alto, California, a remarkable experiment was conducted to determine whether a galloping horse ever has all four feet off the ground at the same time. This historic experiment by photographer Eadweard Muybridge was the first time a motion sequence was captured in real time. It was financed by Leland Stanford of Stanford University fame.
Eadweard placed multiple cameras, 27 inches apart, along the side of the race track. Each camera’s shutter was connected to a thread that ran across the track. As the horse ran down the track, it broke one thread after another, triggering the camera shutters in series and exposing each film for one-thousandth of a second!
This remarkable story almost did not happen. Just a few years before this achievement, Muybridge had shot and killed his wife’s lover. The jury acquitted him on grounds of “justifiable homicide!” But we have digressed a bit too far.
So, first up, what is a video? A video is a sequence of fast-moving images. The obvious question that follows is: how fast are the pictures moving? The measure of how fast the images are transitioning is given by a metric called frames per second (FPS).
When someone says that a video has an FPS of 40, it means that 40 images are displayed every second. In other words, a new frame is displayed every 25 milliseconds. The other important attributes are the width and height of the frame.
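The arithmetic above can be sketched as a tiny helper (a hypothetical function for illustration, not part of OpenCV):

```python
def frame_interval_ms(fps):
    """Time each frame stays on screen, in milliseconds."""
    return 1000.0 / fps

print(frame_interval_ms(40))  # a 40 FPS video shows a new frame every 25 ms
```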
Reading a Video
In OpenCV, a video can be read either by using the feed from a camera connected to a computer or by reading a video file. The first step towards reading a video file is to create a VideoCapture object. Its argument can be either the device index or the name of the video file to be read.
In most cases, only one camera is connected to the system, so we simply pass 0 and OpenCV uses the only camera attached to the computer. When more than one camera is connected, we can select the second camera by passing 1, the third camera by passing 2, and so on.
Python
# Create a VideoCapture object and read from input file
# If the input is taken from the camera, pass 0 instead of the video file name.
cap = cv2.VideoCapture('chaplin.mp4')
C++
// Create a VideoCapture object and open the input file
// If the input is taken from the camera, pass 0 instead of the video file name
VideoCapture cap("chaplin.mp4");
After the VideoCapture object is created, we can capture the video frame by frame.
Displaying a Video
After reading a video file, we can display the video frame by frame. A frame of a video is simply an image and we display each frame the same way we display images, i.e., we use the function imshow().
As in the case of an image, we use waitKey() after imshow() to pause each frame of the video. For an image we pass 0 to waitKey(), which pauses indefinitely; for a video, each frame should be shown for only a finite interval, so we pass a number greater than 0. This number is the time in milliseconds we want each frame to be displayed.
While reading the frames from a webcam, using waitKey(1) is appropriate because the display frame rate will be limited by the frame rate of the webcam even if we specify a delay of 1 ms in waitKey.
While reading frames from a video that you are processing, it may still be appropriate to set the time delay to 1 ms so that the thread is freed up to do the processing we want to do.
In rare cases, when the playback needs to be at a certain framerate, we may want the delay to be higher than 1 ms.
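For that rare case, the delay can be derived from the file's reported frame rate. A minimal sketch (the helper name is hypothetical; cv2.CAP_PROP_FPS is the real OpenCV property, though some containers report 0, hence the fallback):

```python
def playback_delay_ms(fps, default_ms=25):
    """Delay (ms) to pass to waitKey so playback roughly matches the source FPS."""
    if fps and fps > 0:
        return max(int(round(1000.0 / fps)), 1)
    return default_ms  # some containers report an FPS of 0

# Usage sketch:
#   fps = cap.get(cv2.CAP_PROP_FPS)
#   key = cv2.waitKey(playback_delay_ms(fps))
```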
The Python and C++ implementations of reading and displaying a video file follow.
Python
import cv2
import numpy as np
# Create a VideoCapture object and read from input file
# If the input is the camera, pass 0 instead of the video file name
cap = cv2.VideoCapture('chaplin.mp4')
# Check if camera opened successfully
if cap.isOpened() == False:
    print("Error opening video stream or file")

# Read until video is completed
while cap.isOpened():

    # Capture frame-by-frame
    ret, frame = cap.read()
    if ret == True:

        # Display the resulting frame
        cv2.imshow('Frame', frame)

        # Press Q on keyboard to exit
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break

    # Break the loop
    else:
        break
# When everything done, release the video capture object
cap.release()
# Closes all the frames
cv2.destroyAllWindows()
C++
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main(){

    // Create a VideoCapture object and open the input file
    // If the input is the web camera, pass 0 instead of the video file name
    VideoCapture cap("chaplin.mp4");

    // Check if camera opened successfully
    if(!cap.isOpened()){
        cout << "Error opening video stream or file" << endl;
        return -1;
    }

    while(1){

        Mat frame;
        // Capture frame-by-frame
        cap >> frame;

        // If the frame is empty, break immediately
        if (frame.empty())
            break;

        // Display the resulting frame
        imshow("Frame", frame);

        // Press ESC on keyboard to exit
        char c = (char)waitKey(25);
        if(c == 27)
            break;
    }

    // When everything done, release the video capture object
    cap.release();

    // Closes all the frames
    destroyAllWindows();

    return 0;
}
Writing a Video
After we are done with capturing and processing the video frame by frame, the next step we would want to do is to save the video.
For images, it is straightforward: we just use cv2.imwrite(). But for videos, we need to toil a bit harder and create a VideoWriter object. First, we specify the output file name with its format (e.g., output.avi). Then, we specify the FourCC code and the number of frames per second (FPS). Lastly, the frame size is passed.
Python
# Define the codec and create VideoWriter object.The output is stored in 'outpy.avi' file.
# Define the fps to be equal to 10. Also frame size is passed.
out = cv2.VideoWriter('outpy.avi',cv2.VideoWriter_fourcc('M','J','P','G'), 10, (frame_width,frame_height))
C++
// Define the codec and create VideoWriter object. The output is stored in the 'outcpp.avi' file.
// Define the fps to be equal to 10. Also, the frame size is passed.
VideoWriter video("outcpp.avi", VideoWriter::fourcc('M','J','P','G'), 10, Size(frame_width, frame_height));
FourCC is a 4-byte code used to specify the video codec. The list of available codes can be found at fourcc.org. There are many FourCC codes available, but in this post, we will work only with MJPG.
Note: Only a few of the FourCC codes will work on your system, depending on the availability of the codecs on your system. Sometimes, even when the specific codec is available, OpenCV may not be able to use it. MJPG is a safe choice.
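Under the hood, the FourCC is just the four characters packed into a 32-bit integer, least significant byte first. A pure-Python sketch of the packing (the helper name is hypothetical; it produces the same integer as cv2.VideoWriter_fourcc):

```python
def pack_fourcc(code):
    """Pack a 4-character codec code into its 32-bit FourCC integer."""
    assert len(code) == 4
    value = 0
    for i, ch in enumerate(code):
        value |= ord(ch) << (8 * i)  # byte 0 is the first character
    return value

print(pack_fourcc('MJPG'))  # same value as cv2.VideoWriter_fourcc(*'MJPG')
```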
The Python and C++ implementations of capturing a live stream from a camera and writing it to a file follow.
Python
import cv2
import numpy as np
# Create a VideoCapture object
cap = cv2.VideoCapture(0)
# Check if camera opened successfully
if cap.isOpened() == False:
    print("Unable to read camera feed")

# Default resolutions of the frame are obtained. The default resolutions are system dependent.
# We convert the resolutions from float to integer.
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))

# Define the codec and create VideoWriter object. The output is stored in the 'outpy.avi' file.
out = cv2.VideoWriter('outpy.avi', cv2.VideoWriter_fourcc('M','J','P','G'), 10, (frame_width, frame_height))

while True:
    ret, frame = cap.read()

    if ret == True:

        # Write the frame into the file 'outpy.avi'
        out.write(frame)

        # Display the resulting frame
        cv2.imshow('frame', frame)

        # Press Q on keyboard to stop recording
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Break the loop
    else:
        break
# When everything done, release the video capture and video write objects
cap.release()
out.release()
# Closes all the frames
cv2.destroyAllWindows()
C++
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main(){

    // Create a VideoCapture object and use camera to capture the video
    VideoCapture cap(0);

    // Check if camera opened successfully
    if(!cap.isOpened()){
        cout << "Error opening video stream" << endl;
        return -1;
    }

    // Default resolutions of the frame are obtained. The default resolutions are system dependent.
    int frame_width = cap.get(cv::CAP_PROP_FRAME_WIDTH);
    int frame_height = cap.get(cv::CAP_PROP_FRAME_HEIGHT);

    // Define the codec and create VideoWriter object. The output is stored in 'outcpp.avi' file.
    VideoWriter video("outcpp.avi", cv::VideoWriter::fourcc('M','J','P','G'), 10, Size(frame_width, frame_height));

    while(1){

        Mat frame;
        // Capture frame-by-frame
        cap >> frame;

        // If the frame is empty, break immediately
        if (frame.empty())
            break;

        // Write the frame into the file 'outcpp.avi'
        video.write(frame);

        // Display the resulting frame
        imshow("Frame", frame);

        // Press ESC on keyboard to exit
        char c = (char)waitKey(1);
        if(c == 27)
            break;
    }

    // When everything done, release the video capture and write objects
    cap.release();
    video.release();

    // Closes all the frames
    destroyAllWindows();

    return 0;
}
Summary
In this post, we have learned the basics of how to read, write and display a video using OpenCV. These basic steps are the foundation for many interesting Computer Vision and Machine Learning applications, such as video classification and human activity recognition, and they help robots with vision navigate autonomously, grasp different objects, or avoid collisions while moving.
Key takeaways:
- A video can be read either by using the feed from a camera connected to a computer or by reading a video file.
- Displaying a video is done frame by frame. A frame of a video is simply an image and we display each frame the same way we display images.
- To write a video we need to create a VideoWriter object.
- First, specify the output file name with its format (eg: output.avi).
- Then, we should specify the FourCC code and the number of frames per second (FPS).
- Lastly, the frame size should be passed.
Pitfall: If the video file you are reading is in the same folder as your code, simply specify the correct file name. Else, you would have to specify the complete path to the video file.
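A quick existence check before opening makes this failure mode obvious. A sketch (the helper name is hypothetical; 'chaplin.mp4' is just the example file used in this post):

```python
import os

def open_video_checked(path):
    """Fail with a clear message when the path itself is wrong."""
    if not os.path.isfile(path):
        raise FileNotFoundError("No such video file: " + os.path.abspath(path))
    import cv2  # imported here so the path check itself needs no OpenCV
    return cv2.VideoCapture(path)

# cap = open_video_checked('chaplin.mp4')
```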
The tutorial is clean and clear.
But in the last piece of C++ code, line 14, you forgot to break the line after the comment.
Thanks for the post.
Thanks a bunch. We have fixed the problem.
Avinab,
Satya,
Great post as well as sense for history.
Is there any way to compile the Python + numpy + opencv into a standalone file over mac?
Tried several: http://python-guide-pt-br.readthedocs.io/en/latest/shipping/freezing/
But they could compile only plain Python without numpy or opencv.
Thanks very much
OpenCV has a C++ API and that is the right one to use if you want to create a standalone application.
Hello,
Thank you very much for this code. It works perfectly with OpenCV 3.0.0 but when I’m using version 3.2.0, I get this error :
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Then it prints your message from line 10: Error opening video stream or file.
How can it happen ?
Thank you in advance,
Mathilde
Is it possible you compiled OpenCV 3.2 without ffmpeg support ?
I just recompiled it, to be sure, and it doesn’t change anything… Do you have any other idea of what might be the problem ?
Why is it waitKey(25) in the first example and waitKey(1) in the second?
You can use waitKey(1) everywhere. It is the pause / delay. Because your webcam most likely works at 30 FPS, any number less than that is fine.
I tried to capture video from a wireless A/V camera through a TV card using the Python code above. It asks me to select the TV card or the webcam; when the TV card was selected, it didn’t capture video, just a black frame appeared, and when I tried to exit, my PC automatically restarted reporting that it had encountered an issue. The same TV card works when I use other software (PotPlayer).
how to solve this issue???
thank you in advance
Sorry, no idea.
I have a question: if I want to do something after the end of the video (for example, I want to create a character who can move on the screen), where should I write the other code?
You can simply open the video, go to the last frame, create new frames with the character and append those frames to the end of the video.
You haven’t understood my question.
I want to run other code when the video finishes.
Where should I put it?
In the code above ( for reading and displaying video file ), you can put in on line 37 (C++ ) or line 28 ( Python ).
Hi I am trying to make a video from JPEG images. Is that the same procedure I will do (except for the video_capture object creation)?
Yes, you can read in one image at a time using imread and then add that as a frame to the video.
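A sketch of that loop (the file names frame0.jpg, frame1.jpg, … are assumptions; note that a plain alphabetical sort would put frame10.jpg before frame2.jpg, hence the numeric-sort helper):

```python
import re

def numeric_sort(names):
    """Sort frame file names by the number embedded in them."""
    return sorted(names, key=lambda n: int(re.search(r'\d+', n).group()))

# frames = numeric_sort(glob.glob('frame*.jpg'))
# first = cv2.imread(frames[0])
# h, w = first.shape[:2]
# out = cv2.VideoWriter('from_images.avi',
#                       cv2.VideoWriter_fourcc('M','J','P','G'), 10, (w, h))
# for name in frames:
#     out.write(cv2.imread(name))
# out.release()
```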
Very great tutorial well explained !
Thanks, Charles.
How can I put text on a live video? Here is my code:
import numpy as np
import cv2

capp = cv2.VideoCapture('Picture Maker')
cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.putText(frame, "try try", (50, 50), cv2.FONT_HERSHEY_COMPLEX_SMALL, .7, (0, 0, 255))
    cv2.imshow('frame', gray)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break

# When everything done, release the capture
cap.release()
Why do you have
capp = cv2.VideoCapture('Picture Maker')
in the top?
How can I automatically extract the time of the video when I pause it?
cap.get(CV_CAP_PROP_POS_MSEC)
Should give you the current position in milliseconds.
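The value comes back in milliseconds; a small hypothetical helper to format it as mm:ss (in Python the constant is cv2.CAP_PROP_POS_MSEC):

```python
def msec_to_timestamp(ms):
    """Format a millisecond position as mm:ss."""
    total_seconds = int(ms // 1000)
    return "%02d:%02d" % (total_seconds // 60, total_seconds % 60)

# position = cap.get(cv2.CAP_PROP_POS_MSEC)
# print(msec_to_timestamp(position))
```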
Thanks a lot! But I still have a question. I just read a tutorial book about OpenCV. The waitKey part in the book is:
if( cv::waitKey(33) >= 0) break;
It doesn’t work. I replace it with your code:
char c=(char)waitKey(25);
if(c==27) break;
and it works. Why? Thanks a lot 🙂
Have a look at https://stackoverflow.com/questions/29411301/opencv-waitkey-method-return-type
a) better English than mine
b) avoids implicit conversion
c) has a safe (typo-preventing) way to test equality
Hey, I have a question that is bugging me. I tried to define functions such as start and end so that I could control recording automatically. For example:
vc.start  # starts recording
...       # some commands
vc.end    # stops recording
In short: I would like to know how to start and stop recording without pressing ‘q’ (I would like to automate the program).
PS. Great tutorial !
Hi Jan,
The question is when would you like to trigger a start? For example, you can use motion detection to trigger video capture.
Satya
Hello Sir, I have a question about skipping frames. How can I get only specific frames from a video? For example, I have a 1280x720 video file at 25 FPS and I want to grab a frame every 10 frames until the end of the video. Take a look at the code below:
import cv2

vidcap = cv2.VideoCapture('video.mp4')
success, image = vidcap.read()
count = 0
success = True
while success:
    success, image = vidcap.read()
    cv2.imwrite("frame%d.jpg" % count, image)
    count += 1
Use the time or a local counter to skip frames within the while loop:
if condition:
    cv2.imwrite("frame%d.jpg" % count, image)
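The modulo test below keeps every Nth frame (the step of 10 matches the question; the helper is a pure-Python sketch of the counter logic):

```python
def frames_to_keep(total_frames, step):
    """Indices of the frames kept when sampling every `step` frames."""
    return [i for i in range(total_frames) if i % step == 0]

# count = 0
# while success:
#     success, image = vidcap.read()
#     if success and count % 10 == 0:
#         cv2.imwrite("frame%d.jpg" % count, image)
#     count += 1
print(frames_to_keep(25, 10))
```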
I tried to read a video (.mp4 file) from my PC but the output is always ‘Error opening file’. The file is in C:\Python27 itself. What should I do?
Sir, I am getting the same error.
Do you have any information?
I am working on smoke detection; if you have any ideas, please share them with me.
It is tough to guess. Here are a few things that could be wrong
1. It is not finding the file in path.
2. Your system does not have the necessary codecs.
Satya
Hi Satya Mallick,
Have you tried OpenCV’s VideoWriter with frames captured from a camera in Qt? I couldn’t succeed. Any idea?
Hi Sevgi,
Yes, I have tried it before. Here is the link that may be helpful.
https://learnopencv.com/configuring-qt-for-opencv-on-osx/
Satya
Can this record raw video? Thanks so much.
Do you mean video without any compression? If the webcam compresses the video on the device then it is not possible. However, if the webcam sends raw frames, you can simply collect the frames and write to disk in a non-lossy format like PNG.
Hi Satya
Sorry, one more question – where exactly will I install this program to capture images from my camera?
Thanks so much.
Very good tutorial! Thank you very much
Thanks, Mattia.
Hi, thanks for the tutorial.
One question: when reading the video from a file, I cannot reproduce the audio; the same happens when reading from a camera. How can I fix this?
Regards,
OpenCV does not deal with audio.
Another question: is it possible to connect my phone camera to this algorithm instead of a real camera?
I have never tried it myself, but you can try this link
https://www.makeuseof.com/tag/use-smartphone-webcam-computer/
to turn your camera into a USB or IP camera and it might just work.
Hi, how do we record video from one Kinect and two portable cameras simultaneously and store it as training data (a video database) using Python?
If you have multiple USB cameras, cap0 = cv2.VideoCapture(0) and cap1 = cv2.VideoCapture(1) should give you access to the two cameras.
Thanks for the tutorial.
I’m using Python 2.7 and OpenCV 3.4.
When I try to read a video file, it always shows the “Error opening video stream or file” message.
The video file is in the same path as the source code. I tried the absolute and the relative path,
and I tried to copy the .dll file “opencv_ffmpeg341_64.dll” to the Python directory, but nothing seems to work.
Any help please??
I have the same problem.
Please check your CWD; I think that’s something you are missing by assuming the default path while internally it has changed to something else.
What value will be stored in the variable ret on line 20?
Hi Satya,
Thanks for this tutorial! I tried the above code and it works well on my local machine. But when I try to implement this on Heroku PaaS with Flask micro services using Python, I face challenges in spite of changing the IP to 0.0.0.0. Do you have any reference implementation of OpenCV with Flask and Heroku using Python?
Thanks!
Sorry, I don’t, but I can’t see how it would be related to OpenCV. It is some integration bug.
But what about the project file? Is there anything that needs to be added for C++?
Hi Everyone,
“Live Video Streaming” is working fine with the code below.
But I also want audio when playing the video. I understand that OpenCV does not handle audio, so kindly suggest another solution.
Thanks,
Pravin Yadav
import numpy as np
import cv2, time

cap = cv2.VideoCapture(0)

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
out = cv2.VideoWriter('output.avi', fourcc, 5.0, (640, 480))

while(cap.isOpened()):
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.flip(frame, 180)

        # write the flipped frame
        out.write(frame)

        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything if job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
The compiler can’t find the definitions of the functions VideoCapture::read() and imshow().
Where are they defined?
You need to install OpenCV.
cool cool cool my frend
First, I want to thank you for your amazing tutorial. I have a small question about the following two lines:
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))
Here why we use 3 and 4 as a parameter ?
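They are the integer values of OpenCV’s videoio property enum: 3 is CAP_PROP_FRAME_WIDTH and 4 is CAP_PROP_FRAME_HEIGHT. Named constants are clearer; the mapping below mirrors the relevant enum values for reference:

```python
# Integer values behind some commonly used cv2.CAP_PROP_* constants
CAP_PROPS = {
    "CAP_PROP_POS_MSEC": 0,
    "CAP_PROP_FRAME_WIDTH": 3,
    "CAP_PROP_FRAME_HEIGHT": 4,
    "CAP_PROP_FPS": 5,
    "CAP_PROP_FRAME_COUNT": 7,
}

# Prefer the named form in real code:
# frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
# frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
```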
Hi Satya! I have a small problem. I installed opencv and opencv_contrib with CMake. I linked all the necessary lib files, but the compiler still doesn’t see two externals: cv::CascadeClassifier::CascadeClassifier() and imshow().
I also tried Release mode, but it didn’t help.
Here’s the list of lib files I linked:
opencv_face400d.lib
opencv_calib3d400d.lib
opencv_core400d.lib
opencv_highgui400d.lib
opencv_img_hash400d.lib
opencv_imgproc400d.lib
opencv_objdetect400d.lib
opencv_line_descriptor400d.lib
opencv_photo400d.lib
opencv_shape400d.lib
opencv_video400d.lib
opencv_videoio400d.lib
opencv_videostab400d.lib
opencv_features2d400d.lib
I also included these headers:
#include "opencv2/opencv.hpp"
#include "opencv2/face.hpp"
#include "drawLandmarks.hpp"
I am using the x64 build.
How do I save a video after background subtraction?
For example, the code below displays the video, but the saved video file doesn’t open. I tried all formats.
import cv2

capture = cv2.VideoCapture('video1.avi')
size = (int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
video = cv2.VideoWriter('newFile2.avi', cv2.VideoWriter_fourcc('M','J','P','G'), 10, size)
fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = capture.read()
    if ret == True:
        fgmask = fgbg.apply(frame)
        cv2.imshow('frame', fgmask)
        video.write(fgmask)
        #if(cv2.waitKey(30)==27):
        if cv2.waitKey(1) | 0xFF == ord('q'):
            break
    else:
        break

print("releasing")
capture.release()
video.release()
cv2.waitKey(1)
cv2.destroyAllWindows()
print("released")
cv2.waitKey(1)
very well written tutorial. Thanks!
Hello Dear Satya,
I have installed OpenCV for my Ubuntu successfully. However, there is something wrong with streaming.
In python I get:
VIDEOIO ERROR: V4L: can’t open camera by index 0
Unable to read camera feed
In Cpp I get:
GStreamer-CRITICAL **: 10:04:58.776: gst_element_get_state: assertion ‘GST_IS_ELEMENT (element)’ failed
VIDEOIO ERROR: V4L: can’t open camera by index 0
Error opening video stream
When I run, sudo apt-get install gtstreamer, I get:
E: Unable to locate package gtstreamer
I would appreciate it if you share your ideas with me 🙂