In this tutorial we will learn how to swap out a face in one image with a completely different face using OpenCV and DLib in C++ and Python.
Ladies and gentlemen, let me present Ted Trump, Donald Clinton and Hillary Cruz. Do you like any of them? Me neither! I know, I know, I know, the above images are pretty disturbing, but so are the original choices of presidential candidates. It is a race among clowns, so let’s have some fun at their expense.
This post builds on previous posts on Facial Landmark Detection, Delaunay Triangulation, Face Morphing and Seamless Cloning.
Why is Face-Swap difficult?
The human brain treats human faces as a special category and has specialized machinery to process them. We are very good at analyzing faces and can easily detect a fake one. It is easy to computationally replace a face in one image with a different face if you want to do it for giggles, but extremely difficult if you want to do it completely automatically at a quality that will fool people consistently. After all, we are trying to fool some of the most advanced cognitive machinery in the human brain.
Consider the images of the top three presidential candidates in Figure 2.
The three images are pretty different. Yes, Donald Trump is very ugly, but that is not what I mean.
First, the facial geometry of regular human beings like Secretary Hillary Clinton and Senator Ted Cruz varies quite a bit. Add Donald Trump to the mix, and you now have to deal with outliers that lie on the intersection of homo sapiens and some unknown primate with funny hair.
Second, the lighting on the face combined with the tone of the skin can make the images look very different. E.g. Secretary Hillary Clinton’s image looks yellow, while Senator Ted Cruz’s image looks red, and Donald Trump continues to look ugly.
Third, the pose of the face ( or camera angle if you will ) can vary significantly.
And finally, the texture of the skin can vary from smooth to almost leathery ( i.e. Clinton to Trump ).
The technique proposed in this post will address the first two problems but not the last two.
FaceSwap : Step by Step using OpenCV
- Face Alignment : To replace one face with another, we first need to place one face approximately on top of the other so that it covers the face below. An example is shown in Figure 3.
- Facial Landmark Detection The geometry of the two faces is very different, so we need to warp the source face a bit so that it covers the target face, but we also want to make sure we do not warp it beyond recognition. To achieve this we first detect facial landmarks on both images using dlib. However, unlike in Face Morphing, we do not and should not use all the points for face alignment. We simply need the points on the outer boundary of the face, as shown in the image.
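For illustration, the boundary landmarks can also be picked out by index before computing the hull. The sketch below assumes dlib's standard 68-point layout, where indices 0–16 trace the jawline and 17–26 the eyebrows; the index ranges are my assumption, not code from the post.

```python
import numpy as np

# Indices into dlib's 68-point model (an assumption about the layout:
# 0-16 trace the jawline, 17-26 the eyebrows).
JAW_AND_BROWS = list(range(0, 17)) + list(range(17, 27))

def boundary_points(landmarks):
    """Select only the outer-boundary landmarks used for alignment."""
    return np.asarray(landmarks)[JAW_AND_BROWS]

# Dummy 68-point array standing in for dlib's detector output.
pts = np.arange(68 * 2).reshape(68, 2)
print(boundary_points(pts).shape)  # (27, 2)
```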
- Find Convex Hull In Computer Vision and Math jargon, the boundary of a collection of points or a shape is called a “hull”. A boundary that does not have any concavities is called a “convex hull”. In Figure 3, the image on the left shows facial landmarks detected using dlib in red, and the convex hull of the points is shown in blue. The convex hull of a set of points can be calculated using OpenCV’s convexHull function.
Python
# points is numpy array of points obtained using dlib.
hullIndex = cv2.convexHull(points, returnPoints = False)
# hullIndex is a vector of indices of points
# that form the convex hull.
C++
vector<int> hullIndex;
// points is of type vector<Point2f> obtained using dlib.
convexHull(points, hullIndex, false, false);
// hullIndex is a vector of indices of points
// that form the convex hull.
- Delaunay Triangulation The next step in alignment is to do a Delaunay triangulation of the points on the convex hull. The triangulation is shown in the middle image in Figure 3. This allows us to divide the face into smaller parts. My previous post that explains Delaunay triangulation in detail can be found here.
- Affine warp triangles The final step of face alignment is to consider corresponding triangles between the source face and the target face, and affine warp each source triangle onto the target face. More details can be found in my post about Face Morphing. However, as you can see in the right image of Figure 3, aligning the face and simply slapping one face on top of the other hardly looks natural. The seams are visible because of lighting and skin tone differences between the two images. The next step shows how to seamlessly combine the two images.
2. Seamless Cloning : Good technical ideas are like good magic. Good magicians use a combination of physics, psychology and good old sleight of hand to achieve the incredible. Image warping alone looks pretty bad. Combine it with Seamless Cloning and the results are magical! I had written a post explaining the details here.
It is a feature in OpenCV 3 that allows you to seamlessly clone parts of the source image ( identified by a mask ) onto a destination image.
Python
output = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
C++
seamlessClone(src, dst, mask, center, output, NORMAL_CLONE);
The src image in the above usage is the one shown in Figure 3 (right). The dst image is the image onto which we want to blend the source image (i.e. the image of Donald Trump). The mask is calculated by filling the convex hull with white using fillConvexPoly, and the center is the center of the bounding box that contains the mask.
Subscribe & Download Code
If you liked this article and would like to download the code (C++ and Python) and example images used in this post, please click here. Alternately, sign up to receive a free Computer Vision Resource Guide. In our newsletter, we share OpenCV tutorials and examples written in C++/Python, and Computer Vision and Machine Learning algorithms and news.
Image Credits
The images of Secretary Hillary Clinton and Senator Ted Cruz are in the Public Domain, while the image of Donald Trump is licensed under Creative Commons Attribution-Share Alike 2.0 Generic license.
Hello, I am working on a similar task now with OpenCV and Dlib in C++. Now I’m facing a problem with the seamlessClone function. I can’t properly align faces (can’t figure out how to choose the right center argument).
To clarify: I have one image with 2 faces; I detected them, found the face landmarks using dlib, then made 1-bit masks, then transformed the masks and faces to the right positions so they are perfectly placed but still not color corrected.
Now I want to correct the colors. I use the centroid of the mask obtained from cv::moments(). But this isn’t the right position for all cases and I can’t figure out which position I should use for general purposes. Can you guide me a little bit? I uploaded my results with color correction and without.
Hi Ondrej,
Sorry for the late reply. I was on vacation.
Yes, the center is tricky. It is actually a bad API design choice.
The right position for the centroid is the center of the bounding box that contains the masked region.
In the code I have shared, you will notice I find a bounding rectangle around the mask and then use its center.
Rect r = boundingRect(mask);
Point center = (r.tl() + r.br()) / 2;
I had to dig in the opencv source code to figure this out.
Thanks for reply Satya.
Yes, I noticed that you use the center of the bounding rectangle, but in my case it is not a solution because I have 2 faces in a photo. And it isn’t a general solution even for you: it will only work properly when the faces in both photos are aligned equally.
I also tried to use the centers of the bounding rectangles of the detected faces. But again, not a general solution, because sometimes the face detector makes a funny rectangle around the face and again everything is moved 🙁
For now there is just one solution if I want to make it work properly – code my own color correction method – utopia 😀 Maybe it’s not impossible; it could be just some color histogram equalization, but I don’t think I have enough appropriate knowledge for this now.
Anyway I appreciate your willingness.
Can you share the images and points and let me see if I can come up with a solution? Will try this on Sunday.
Hello, I wanted to attach a zip archive but it is not permitted, so I uploaded them one by one. I didn’t know what exactly you want, so I uploaded everything which could be relevant for you. Points are computed from both warped_masks as their centroids using moments:
Moments m = moments((warped_mask >= 50), true);
Point center(m.m10 / m.m00, m.m01 / m.m00);
hello, I have a problem when I try to run the code from this page in Visual Studio; the program always gives me output like this:
`Project2.exe (Win32): Loaded C:\Windows\System32\ucrtbased.dll. Cannot find or open the PDB file.`
and
`Project2.exe (Win32): Loaded C:\OpenCV310\build\x64\vc14\bin\opencv_world310d.dll. Cannot find or open the PDB file.`
I really don’t know what to do. If you have any solution, please help me 🙁
Hi Faik,
Sorry, I don’t use Windows, and I am not familiar with this error message.
Satya
Okay, thank you :)
Have you properly set your project with OpenCV?
Of course I have properly set up my project with OpenCV 3.1.0, but I don’t know why this happens to me. If you have any solution I’ll be so thankful :)
I’m not really sure. I only started using OpenCV this semester. I also had some errors with PDB files not found, but I can’t remember how I solved them. I built OpenCV myself to get the extra modules, so I don’t use world310d.dll.
But as I read on Stack Overflow, I remembered that PDB files are just debug information files, and what you get in your output is not an error but a warning and doesn’t influence your program; it is probably working even if you get this warning in the output. Check some questions on SO to get an answer on it. http://stackoverflow.com/questions/12954821/cannot-find-or-open-the-pdb-file-in-visual-studio-c-2010 http://stackoverflow.com/questions/21918816/vs-2013-opencv-error-cannot-find-or-open-the-pdb-file
There was one image file missing. Can you please take an update and try again ? Thanks.
please tell me how you run this code in visual studio
hello, I have a problem using it with Visual Studio 2015 and OpenCV 3.1. When I start debugging it shows the warning System.Runtime.InteropServices.SEHException: External component has thrown an exception. Can anyone help me?
Hi Muhammad,
Sorry, I do not usually use Visual Studio and am not familiar with this problem.
There was one image file missing. I have added that file to the repo. Can you please take an update and try again ? Thanks.
Hi, I tried running the faceswap program on Mac, but I received this error:
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat, file /Users/pixelfilmstudiosp-1/Downloads/opencv-3.1.0/modules/core/src/matrix.cpp, line 508
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/pixelfilmstudiosp-1/Downloads/opencv-3.1.0/modules/core/src/matrix.cpp:508: error: (-215) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function Mat
Is the new version of OpenCV not compatible with this code or something? Any help would be greatly appreciated.
I have tested this on 2.4.x and 3.x. Can you upload the images you are using and the points?
I was using the same images in the FaceSwap directory of the learnopencv-master. Perhaps I’m compiling incorrectly? I used this as my guide for compiling on Mac: http://blogs.wcode.org/2014/10/howto-install-build-and-use-opencv-macosx-10-10/
I made a CMakeLists.txt in the same FaceSwap directory:
cmake_minimum_required(VERSION 2.8)
project( FaceSwap )
find_package( OpenCV )
include_directories( ${OpenCV_INCLUDE_DIRS} )
add_executable( FaceSwap faceSwap.cpp )
target_link_libraries( FaceSwap ${OpenCV_LIBS} )
While it does compile, I get an exception when I run the binary.
If it compiles, you should be getting the right results. I noticed you have changed the files from ted_cruz.jpg to donald_trump.jpg. The only thing I can think of is an incorrect file name. Can you please check if you have the correct file name?
I edited it for hillary_clinton.jpg and it worked fine. The FaceSwap directory in your GitHub has the ted_cruz.jpg.txt, but it doesn’t have the actual ted_cruz.jpg. Thanks for your help!
You are right. I just added the file. Thanks for bringing that to my attention. Maybe people with Windows are running into this issue as well.
Hi, I tried running it on an Android device. Everything works fine and seamlessClone is doing a great job, but it’s taking a lot of time, approx. 15 sec. If I apply seamlessClone on just the ROI by cutting out the faces, the time comes down to 3–5 sec, which is still too much if we are doing live face swapping. The resolution of the image is 640×960. Can we reduce this time, or can we use some other function/technique that gives us good results? Thanks
Good to know that it works on Android. Unfortunately SeamlessCloning is expensive because it requires integrating the gradient field. For a faster method you may have to match histograms ( https://en.wikipedia.org/wiki/Histogram_matching ) instead of trying seamless cloning.
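A rough sketch of that histogram-matching idea follows; this is my own stand-in implementation, not code from the post. It remaps source intensities so their empirical CDF lines up with the target's CDF.

```python
import numpy as np

# Per-channel histogram matching: a much cheaper (and cruder) color
# correction than seamless cloning.
def match_histograms(source, target):
    """Remap uint8 `source` so its histogram matches `target`'s."""
    s_values, bin_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    t_values, t_counts = np.unique(target.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts).astype(np.float64) / source.size
    t_cdf = np.cumsum(t_counts).astype(np.float64) / target.size
    # For each source CDF value, pick the target intensity with the
    # closest CDF value.
    matched = np.interp(s_cdf, t_cdf, t_values)
    return matched[bin_idx].reshape(source.shape).astype(np.uint8)

# Demo: a dark patch matched against a bright patch becomes bright.
dark = np.full((8, 8), 60, dtype=np.uint8)
bright = np.full((8, 8), 190, dtype=np.uint8)
print(match_histograms(dark, bright)[0, 0])  # 190
```

For a color face you would run this per channel on the warped face pixels inside the mask; it avoids solving the Poisson blend but also ignores spatial gradients, so the seams can remain visible.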
hello, kind sir. can you give some guidance and/or example for android implementation? 😀
hello sir, would you mind also giving me guidance or an example for the Android implementation? I am doing something like this for my final year project. It would be helpful to me. Thank you very much.
Hi Arslan, do you have any solutions to reduce the time? I am also doing an Android project and I am facing the same problem as you.
Thank you. I have made an iOS faceswap app with your help.
Can I have the iOS src code?
Can I have the face swap iOS src code?
That’s very nice! Thanks for letting me know.
Can you please share the iOS code with me? It’s urgent to implement, thanks.
Can you please share your code with me? It will be helpful for me.
Can I develop the same scenario for Android using OpenCV?
I am not able to download the samples.
When I try to run this code in Visual Studio 2015, I get this error:
Exception thrown at 0x00007FF9B2BB7788 in Project.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x00000054BBAFD430.
Can you help me resolve this?
Sorry, I have not worked in the windows environment for a very long time and don’t have an idea.
I am getting the same issue, what should I do?
Hi Sir,
Thanks for your detailed articles on learning OpenCV. I have successfully compiled the face swap example on Ubuntu using the C++ code. The result is the same as you described in your article.
1. The problem is that the mask does not change its expression before the overlay. If the user’s mouth is open, or one or both eyes are closed, there is no change in the mask after applying it.
Actually, I want to develop an app like MSQRD for Android, in which the mask changes its expression like the actual image. Can you help me do so using the face swap code?
2. When I apply the mask on a face, the actual eyes are replaced with the mask’s eyes. I want to display the actual eyes of the face. How is that possible?
3. How can I apply half a mask on the actual face?
Thanks and regards
All those are completely possible. It is tough to explain in a comment, but let me try.
1. In OpenCV, given a bunch of points on a polygon, you can create a mask by filling the polygon with white.
http://docs.opencv.org/3.0-beta/modules/imgproc/doc/drawing_functions.html#fillpoly
2. So now, you can create masks for the mouth and eye regions and subtract them from the original mask. Use this modified mask in the application and you should be able to achieve what you want.
Could you please show an example using Python of how to remove the mouth and eyes from the original mask?
Is it possible to morph two handwritten alphabets with it?
Yes. If you know the point correspondences, then you can easily modify the code to morph one into the other.
Satya, hello from Russia. Thanks for your work.
Could you help me?
I want to put an object on top of the face, without carving the face into the mask.
All attempts at corrections in the code from your example turned out to be failures.
I need it to create superhero masks on faces. https://uploads.disquscdn.com/images/29fdb1bfccaed5b4f8950ebbfd2199601dfe9c5c05c720a00343f56860ff6f24.jpg
Good job, blending 2 faces was awesome.
Is there any way to get the c++ source code?
Thank you.
If you register for the newsletter, you will receive a link to the code.
I tried signing up but the email never came. I checked spam folders too. Is there something I’m missing?
An email requesting confirmation was sent to you on 05/16/17. You need to click on the confirmation link.
In case you did not receive the confirmation email, you unfortunately will have to register using a different email address. In gmail, you can put a dot in your username anywhere and it is still the same email address. E.g. [email protected] and [email protected] both go to the same gmail address. So you can actually use your current email address if it is gmail and just put a dot somewhere in the username.
Hope that helps.
Hello, Satya.
Thanks for this tutorial. It was crucial for my own implementation of a face swap application. I’m a student from Brazil and I like to use Computer Vision as a hobby.
Thanks!
Thanks, Diego. Hope you are having fun with CV
Okay, now that I’ve gotten the code for this, I have a couple of questions:
1. I’m trying to generate more examples by changing the images used, but I’m having trouble getting any implementation of the facial landmark detector (such as the example dlib provides) to output a nice text file with all the landmarks. Ultimately, the pipeline for the project I’m working on would probably build the landmark detection in, so that storing the points in an intermediate file wouldn’t be necessary, but is there any trick to getting the examples to give me output that’s usable by this example, so that I could get a better sense for how the system works?
2. For the C++ examples, there isn’t a lot of instruction on the build process. As someone new to OpenCV, I’m running into a lot of dependency-wrangling issues, and would like to know what linker flags, etc. are needed. Do you have a makefile for the examples, or can you list the linker flags a normal compiler needs?
Thanks, these tutorials are very well-written and I appreciate how responsive you are to questions.
Hi every one,
I am trying to detect the face landmarks and crop the landmark area. I have successfully detected the landmarks using dlib; my next step is to crop the detected area. I have read lots of blogs and the official OpenCV documentation, but I am unable to understand what I need to do to crop the area.
Please help me out with this problem.
Code: face landmark points:
List<Point> landmarks = ret.getFaceLandmarks();
for (Point point : landmarks) {
    int pointX = (int) (point.x * resizeRatio);
    int pointY = (int) (point.y * resizeRatio);
    canvas.drawCircle(pointX, pointY, 2, mFaceLandmardkPaint);
}
Note: I am using Android. Please help me out.
Thanks
Hello, Satya.
Thanks for this tutorial. It’s awesome. But I have a big problem: the cv2.seamlessClone() function is incredibly slow. It takes about 4 seconds. https://uploads.disquscdn.com/images/0623c255f7dcd09f7439477ed45902afdc8478c49058914e453572e97ef34af3.png
Is there any way to speed up the seamlessClone function?
Thanks very much!
Why do you need to convert the jpg files into txt files? And what do you do to convert them?
How do you convert image to image.txt?
Hi. Thank you for your great tutorials!
How can I generate a new file.txt for my own images? I’ve tried this with dlib and it doesn’t work. Thank you!
filename1 = "mypic.jpg"
predictor_path = "shape_predictor_68_face_landmarks.dat"
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)
txt1 = open(filename1 + '.txt', "w+")
pic1 = io.imread(filename1)
dets = detector(pic1)
for k, d in enumerate(dets):
    shape = predictor(pic1, d)
    vec = np.empty([68, 2], dtype = int)
    for b in range(68):
        vec[b][0] = shape.part(b).x
        vec[b][1] = shape.part(b).y
    print(vec)
    txt1.write(vec)
txt1.close()
Hi. First, thank you for these great tutorials.
I’ve tried the example and it works fine.
With dlib I’ve generated automatically the points and my donald_trump.txt is different from yours without resizing the pic. I’ve also used the same donald pic with another pic https://photoshoptrainingchannel.com/wp-content/uploads/2012/09/young-old-aging-photoshop-after.jpg and this is the result!
https://uploads.disquscdn.com/images/5a38c8c2a8e1fe2b5d610528910d4cbb4b02ef1a09c859fb5094c10210d62ea0.jpg
It doesn’t look good. I also see that seamlessClone doesn’t work very well.
These are the changes I’ve made:
# Read images
filename1 = sys.argv[1]
filename2 = sys.argv[2]
img1 = cv2.imread(filename1)
img2 = cv2.imread(filename2)
predictor_path = "shape_predictor_68_face_landmarks.dat"
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)
txt1 = open(filename1 + '.txt', "w+")
txt2 = open(filename2 + '.txt', "w+")
dets = detector(img1)
for k, d in enumerate(dets):
    shape = predictor(img1, d)
    vec = np.empty([68, 2], dtype = int)
    for b in range(68):
        vec[b][0] = shape.part(b).x
        vec[b][1] = shape.part(b).y
    txt1.write("\n".join([' '.join(map(str, item)) for item in vec]))
txt1.close()
dets2 = detector(img2)
for k2, d2 in enumerate(dets2):
    shape2 = predictor(img2, d2)
    vec2 = np.empty([68, 2], dtype = int)
    for b2 in range(68):
        vec2[b2][0] = shape2.part(b2).x
        vec2[b2][1] = shape2.part(b2).y
    txt2.write("\n".join([' '.join(map(str, item2)) for item2 in vec2]))
txt2.close()
img1Warped = np.copy(img2)
# Read array of corresponding points
points1 = readPoints(filename1 + '.txt')
points2 = readPoints(filename2 + '.txt')
and saving the pic to my server.
# Clone seamlessly.
output = cv2.seamlessClone(np.uint8(img1Warped), img2, mask, center, cv2.NORMAL_CLONE)
swapedFace = cv2.cvtColor(output, cv2.COLOR_RGB2BGR)
io.imsave("swaped.jpg", swapedFace)
Am I getting the wrong points or something?
Thank you!
Hello Sir, you have made a tutorial on face swapping.
It is good for swapping two faces, but I want to get only the mask of the face from one image. How can I get a face mask from only one facial image?
We have been stuck here for many days.
Please provide your valuable support, and code for that.
Many Thanks
Dear Satya,
My goal is to fit the target image (missing the face mask) to the extracted face mask of the source image; in other words, I do not wish the extracted source face mask to adjust its dimensions to the extracted target face mask.
Example: after morphing a face (average dimensions of the 2 contributors), I want to maintain its structure after the swapping and adjust the outer face region (destination image without its face mask) to fit the source face mask.
I am looking forward to any advice, hint, direction and i thank you in advance.
Thanks for another great tutorial. I am guessing the text file for each face corresponds to its landmarks? In other words, I am trying to produce the txt file for the image I would like to work on: “points1 = readPoints(filename1 + ‘.txt’)”.
I looked into your other post related to facial landmark detection https://learnopencv.com/facial-landmark-detection/ – and got https://github.com/davisking/dlib/blob/master/python_examples/face_landmark_detection.py to work – however, I don’t know how to get those coordinates (the text file) out. I appreciate your input. (In summary, how do I generate the txt file?)
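For what it’s worth, here is a minimal sketch of writing and reading landmarks in the one-point-per-line “x y” text format that readPoints expects. The dlib calls shown in the comment are the standard ones, and the file name is made up.

```python
# Save and load landmarks in the plain "x y" per-line text format used
# by readPoints in the face-swap code.
def save_points(path, points):
    with open(path, "w") as f:
        for x, y in points:
            f.write("{} {}\n".format(int(x), int(y)))

def load_points(path):
    with open(path) as f:
        return [tuple(int(v) for v in line.split()) for line in f if line.strip()]

# The dlib side (not executed here) would look roughly like:
#   detector = dlib.get_frontal_face_detector()
#   predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
#   shape = predictor(img, detector(img)[0])
#   points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]

save_points("demo_points.txt", [(10, 20), (30, 40)])
print(load_points("demo_points.txt"))  # [(10, 20), (30, 40)]
```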