cvui: A GUI lib built on top of OpenCV drawing primitives

Fernando Bevilacqua
June 21, 2017

Often the development of a computer vision project involves tweaking parameters of a technique to achieve the desired outcome. These parameters could be the thresholds of an edge detection algorithm or the brightness of an image, for instance. If you don’t use any graphical user interface (GUI) for tweaking these parameters, you need to stop your application, adjust your code, run the application again, evaluate, and repeat until it is good. That is tedious and time-consuming.

There are plenty of great GUI libs, e.g. Qt and imgui, that can be used together with OpenCV to allow you to tweak parameters during runtime. For using Qt with OpenCV on a Mac, check out this post. There might be cases, however, where you don’t have (or don’t want) the dependencies of such libs, e.g. you have not compiled OpenCV with Qt support, or you can’t use OpenGL. In such situations, all you need is a quick and hassle-free way of creating a GUI to tweak your algorithms.

That is the purpose of cvui. It is a C++, header-only and cross-platform (Windows, Linux and OSX) UI lib built on top of OpenCV drawing primitives. It has no dependencies other than OpenCV itself (which you are probably already using).

It follows the rule:

One line of code should produce one UI component on the screen.

As a result, the lib has a friendly and C-like API with no classes/objects and several components, e.g. trackbar, button, text, among others:

A few of the UI components available in cvui.

How to use cvui in your application

To use cvui, you just include cvui.h in your project, give it an image (i.e. a cv::Mat) on which to render components, and you are done!
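
One caveat: newer releases of cvui (2.x and later) also expect the macro CVUI_IMPLEMENTATION to be defined in exactly one of your source files before the header is included; the code in this post predates that requirement and compiles without it. A minimal sketch of that pattern, assuming a single-file project:

// In exactly one .cpp file (only needed with cvui 2.x and later):
#define CVUI_IMPLEMENTATION
#include "cvui.h"

Any other file that uses cvui simply includes cvui.h without the macro.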

Basic “hello world” application

Let’s take a look at the capabilities of cvui by creating a simple hello-world application with some UI interactions. The application contains a button and a visual indicator showing how many times that button was clicked. Here is the code:

#include <opencv2/opencv.hpp>
#include "cvui.h"

#define WINDOW_NAME "CVUI Hello World!"

int main(void)
{
	cv::Mat frame = cv::Mat(200, 500, CV_8UC3);
	int count = 0;

	// Init an OpenCV window and tell cvui to use it.
	cv::namedWindow(WINDOW_NAME);
	cvui::init(WINDOW_NAME);

	while (true) {
		// Fill the frame with a nice color
		frame = cv::Scalar(49, 52, 49);

		// Show a button at position (110, 80)
		if (cvui::button(frame, 110, 80, "Hello, world!")) {
			// The button was clicked, so let's increment our counter.
			count++;
		}

		// Show how many times the button has been clicked.
		// Text at position (250, 90), sized 0.4, in red.
		cvui::printf(frame, 250, 90, 0.4, 0xff0000, "Button click count: %d", count);

		// Update cvui internal stuff
		cvui::update();

		// Show everything on the screen
		cv::imshow(WINDOW_NAME, frame);

		// Check if ESC key was pressed
		if (cv::waitKey(20) == 27) {
			break;
		}
	}
	return 0;
}

The result of the code above is the following:

Basic cvui application featuring a button and a text.

To ensure cvui works properly with your project:

  1. Call the initialization function cvui::init() before rendering any components.
  2. Call cvui::update() once per frame, after all components have been rendered and before cv::imshow().

Regarding the components used in the code above, the cvui::button() function returns true every time the button is clicked, so you can conveniently use it in if statements. The cvui::printf() function works like the standard C printf() function, so you can easily render text and numbers on the screen using format specifiers such as %d and %s. You can also choose the color of the text using hexadecimal values in the 0xRRGGBB format, e.g. 0xFF0000 (red), 0x00FF00 (green) and 0x0000FF (blue).
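
As a quick illustration of the printf-style formatting and the 0xRRGGBB colors, here is a minimal sketch (the window name, positions and printed values below are arbitrary choices) that uses only the cvui::printf() call shown above:

#include <opencv2/opencv.hpp>
#include "cvui.h"

#define WINDOW_NAME "CVUI Colors"

int main(void)
{
	cv::Mat frame = cv::Mat(150, 400, CV_8UC3);
	int frames_rendered = 0;

	cv::namedWindow(WINDOW_NAME);
	cvui::init(WINDOW_NAME);

	while (true) {
		frame = cv::Scalar(49, 52, 49);

		// %d, %s and %f behave exactly as in the standard C printf().
		cvui::printf(frame, 10, 20, 0.4, 0xff0000, "Frames rendered: %d", frames_rendered++);
		cvui::printf(frame, 10, 40, 0.4, 0x00ff00, "Status: %s", "running");
		cvui::printf(frame, 10, 60, 0.4, 0x0000ff, "Font scale in use: %.1f", 0.4);

		cvui::update();
		cv::imshow(WINDOW_NAME, frame);

		if (cv::waitKey(20) == 27) {
			break;
		}
	}
	return 0;
}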

A more advanced application

Now let’s build something a bit more sophisticated, but as easily as before. The application applies the Canny Edge algorithm to an image, allowing the user to enable/disable the technique and adjust its threshold values.

Step 1: Foundation

We start by creating an application with no UI elements. The use of the Canny Edge algorithm is defined by a boolean variable (use_canny), while the algorithm thresholds are defined by two integers (low_threshold and high_threshold). Using that approach, we must recompile the code every time we want to enable/disable the technique or adjust its thresholds.

The code for that application is the following:

#include <opencv2/opencv.hpp>

#define WINDOW_NAME "CVUI Canny Edge"

int main(int argc, const char *argv[])
{
	cv::Mat lena = cv::imread("lena.jpg");
	cv::Mat frame = lena.clone();
	int low_threshold = 50, high_threshold = 150;
	bool use_canny = false;

	cv::namedWindow(WINDOW_NAME);

	while (true) {
		// Should we apply Canny edge?
		if (use_canny) {
			// Yes, we should apply it.
			cv::cvtColor(lena, frame, cv::COLOR_BGR2GRAY);
			cv::Canny(frame, frame, low_threshold, high_threshold, 3);
		} else {
			// No, so just copy the original image to the displaying frame.
			lena.copyTo(frame);
		}

		// Show everything on the screen
		cv::imshow(WINDOW_NAME, frame);

		// Check if ESC was pressed
		if (cv::waitKey(30) == 27) {
			break;
		}
	}
	return 0;
}

The result is an application that either shows the original image (use_canny is false) or shows the detected edges (use_canny is true):

The activation of the Canny edge algorithm requires changes to the code and a new compilation.

Step 2: Dynamically enable/disable the edge detection

Let’s improve the workflow by using cvui and adding a checkbox to control the value of use_canny. Using that approach, the user can enable/disable the use of Canny Edge while the application is still running. We add the required cvui code and use the cvui::checkbox function:

#include <opencv2/opencv.hpp>
#include "cvui.h"

#define WINDOW_NAME "CVUI Canny Edge"

int main(void)
{
	cv::Mat lena = cv::imread("lena.jpg");
	cv::Mat frame = lena.clone();
	int low_threshold = 50, high_threshold = 150;
	bool use_canny = false;

	// Init an OpenCV window and tell cvui to use it.
	cv::namedWindow(WINDOW_NAME);
	cvui::init(WINDOW_NAME);

	while (true) {
		// Should we apply Canny edge?
		if (use_canny) {
			// Yes, we should apply it.
			cv::cvtColor(lena, frame, cv::COLOR_BGR2GRAY);
			cv::Canny(frame, frame, low_threshold, high_threshold, 3);
		} else {
			// No, so just copy the original image to the displaying frame.
			lena.copyTo(frame);
		}
		
		// Checkbox to enable/disable the use of Canny edge
		cvui::checkbox(frame, 15, 80, "Use Canny Edge", &use_canny);

		// Update cvui internal stuff
		cvui::update();

		// Show everything on the screen
		cv::imshow(WINDOW_NAME, frame);

		// Check if ESC was pressed
		if (cv::waitKey(30) == 27) {
			break;
		}
	}
	return 0;
}

This small modification alone is already a time saver for testing the application without recompiling everything:

Basic UI to allow the use (or not) of Canny Edge.

Depending on the image being used, e.g. an image with a white background, it might be difficult to see the rendered checkbox and its label. We can prevent that problem by creating a window with cvui::window() to house the checkbox.

cvui renders each component at the moment the component function is called, so we must call cvui::window() before cvui::checkbox(); otherwise the window would be drawn on top of the checkbox, hiding it:

#include <opencv2/opencv.hpp>
#include "cvui.h"

#define WINDOW_NAME "CVUI Canny Edge"

int main(void)
{
	cv::Mat lena = cv::imread("lena.jpg");
	cv::Mat frame = lena.clone();
	int low_threshold = 50, high_threshold = 150;
	bool use_canny = false;

	// Init an OpenCV window and tell cvui to use it.
	cv::namedWindow(WINDOW_NAME);
	cvui::init(WINDOW_NAME);

	while (true) {
		// Should we apply Canny edge?
		if (use_canny) {
			// Yes, we should apply it.
			cv::cvtColor(lena, frame, cv::COLOR_BGR2GRAY);
			cv::Canny(frame, frame, low_threshold, high_threshold, 3);
		} else {
			// No, so just copy the original image to the displaying frame.
			lena.copyTo(frame);
		}

		// Render the settings window to house the UI
		cvui::window(frame, 10, 50, 180, 180, "Settings");
		
		// Checkbox to enable/disable the use of Canny edge
		cvui::checkbox(frame, 15, 80, "Use Canny Edge", &use_canny);

		// Update cvui internal stuff
		cvui::update();

		// Show everything on the screen
		cv::imshow(WINDOW_NAME, frame);

		// Check if ESC was pressed
		if (cv::waitKey(30) == 27) {
			break;
		}
	}
	return 0;
}

The result is a more pleasant UI:

A more pleasant UI with the use of cvui’s window component.

Step 3: Tweak threshold values

It is time to allow the user to select the values of low_threshold and high_threshold at runtime as well. Since those parameters vary within an interval, we can use cvui::trackbar() to create a trackbar for each one:

#include <opencv2/opencv.hpp>
#include "cvui.h"

#define WINDOW_NAME "CVUI Canny Edge"

int main(void)
{
	cv::Mat lena = cv::imread("lena.jpg");
	cv::Mat frame = lena.clone();
	int low_threshold = 50, high_threshold = 150;
	bool use_canny = false;

	// Init an OpenCV window and tell cvui to use it.
	cv::namedWindow(WINDOW_NAME);
	cvui::init(WINDOW_NAME);

	while (true) {
		// Should we apply Canny edge?
		if (use_canny) {
			// Yes, we should apply it.
			cv::cvtColor(lena, frame, cv::COLOR_BGR2GRAY);
			cv::Canny(frame, frame, low_threshold, high_threshold, 3);
		} else {
			// No, so just copy the original image to the displaying frame.
			lena.copyTo(frame);
		}

		// Render the settings window to house the UI
		cvui::window(frame, 10, 50, 180, 180, "Settings");
		
		// Checkbox to enable/disable the use of Canny edge
		cvui::checkbox(frame, 15, 80, "Use Canny Edge", &use_canny);

		// Two trackbars to control the low and high threshold values
		// for the Canny edge algorithm.
		cvui::trackbar(frame, 15, 110, 165, &low_threshold, 5, 150);
		cvui::trackbar(frame, 15, 180, 165, &high_threshold, 80, 300);

		// Update cvui internal stuff
		cvui::update();

		// Show everything on the screen
		cv::imshow(WINDOW_NAME, frame);

		// Check if ESC was pressed
		if (cv::waitKey(30) == 27) {
			break;
		}
	}
	return 0;
}

The cvui::trackbar() function accepts parameters that specify the minimum and maximum values allowed for the trackbar. In the example above, the allowed ranges are [5, 150] for low_threshold and [80, 300] for high_threshold.

The result is a fully interactive application that allows users to quickly and easily explore the tweaking of Canny Edge parameters, as well as enable/disable its use:

Final result of using cvui to create a UI to adjust Canny Edge thresholds.

Below is the complete code for this application, without the comments. It shows that you don’t need many lines of code to produce a minimal (and useful) UI for your application:

#include <opencv2/opencv.hpp>
#include "cvui.h"

#define WINDOW_NAME "CVUI Canny Edge"

int main(void)
{
	cv::Mat lena = cv::imread("lena.jpg");
	cv::Mat frame = lena.clone();
	int low_threshold = 50, high_threshold = 150;
	bool use_canny = false;

	cv::namedWindow(WINDOW_NAME);
	cvui::init(WINDOW_NAME);

	while (true) {
		if (use_canny) {
			cv::cvtColor(lena, frame, cv::COLOR_BGR2GRAY);
			cv::Canny(frame, frame, low_threshold, high_threshold, 3);
		} else {
			lena.copyTo(frame);
		}

		cvui::window(frame, 10, 50, 180, 180, "Settings");
		cvui::checkbox(frame, 15, 80, "Use Canny Edge", &use_canny);
		cvui::trackbar(frame, 15, 110, 165, &low_threshold, 5, 150);
		cvui::trackbar(frame, 15, 180, 165, &high_threshold, 80, 300);

		cvui::update();
		cv::imshow(WINDOW_NAME, frame);

		if (cv::waitKey(30) == 27) {
			break;
		}
	}
	return 0;
}
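
The same pattern extends to any parameter that lives in an interval. As a final sketch (not tied to the Canny example; the file name, positions and range below are arbitrary choices), a single trackbar can drive the image brightness mentioned at the beginning of this post:

#include <opencv2/opencv.hpp>
#include "cvui.h"

#define WINDOW_NAME "CVUI Brightness"

int main(void)
{
	cv::Mat lena = cv::imread("lena.jpg");
	cv::Mat frame = lena.clone();
	int brightness = 0;

	cv::namedWindow(WINDOW_NAME);
	cvui::init(WINDOW_NAME);

	while (true) {
		// Add (or subtract) up to 100 intensity levels to every pixel.
		lena.convertTo(frame, -1, 1.0, (double) brightness);

		cvui::window(frame, 10, 50, 180, 90, "Settings");
		cvui::trackbar(frame, 15, 80, 165, &brightness, -100, 100);

		cvui::update();
		cv::imshow(WINDOW_NAME, frame);

		if (cv::waitKey(30) == 27) {
			break;
		}
	}
	return 0;
}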

Conclusion

The cvui lib was created out of necessity. It was not designed to be a full-blown solution for the development of complex graphical applications. It is simple and limited in many ways. However, it is practical, easy to use, and can save you several hours of frustration and tedious work.

If you like cvui, don’t forget to check out its repository on GitHub, its documentation and all the example applications (buildable with CMake).


Tags: C++, edge detector, trackbar

Filed Under: Application, GUI, UI

Comments

  1. Shan says

    June 21, 2017 at 11:56 am

    Looks elegant

    Reply
  2. Karsten Burger says

    June 22, 2017 at 3:15 am

    If you want to use “make” instead of cmake, here is an example Makefile (assuming cvui.h and lena.tiff are in the current directory):

    all: cvuitest

    CXXFLAGS += -std=c++11 -I. -g
    LDFLAGS += -lopencv_imgproc -lopencv_highgui -lopencv_core

    cvuitest: cvuitest.cpp
    	g++ $(CXXFLAGS) -o cvuitest cvuitest.cpp $(LDFLAGS)

    Reply
  3. Shashika Chathuranga says

    June 24, 2017 at 3:36 am

    Dear Satya,

    Should we build cvui as this link https://dovyski.github.io/cvui//build/ says, or can we directly use the cvui.h file? I’m confused about how to build this library. Also, I want to know whether there are any changes I should make to the MakeList file?

    Reply
    • dovyski says

      June 24, 2017 at 12:04 pm

      Hi! cvui is a header-only lib, which in practice means you just put cvui.h along with your code and it will work. You don’t have to build it, only compile your code normally. The link you mentioned contains instructions to build the example programs that come along with cvui. You don’t need them for your application.

      Reply
  4. J Johan Romuald says

    July 11, 2017 at 4:59 am

    Dear, Satya,

    Thank you for sharing. I tried it and did not have any problem compiling and running it.

    Reply
    • Satya Mallick says

      July 11, 2017 at 5:04 am

      Thanks for letting me know.

      Reply
  5. Madhusudan Govindraju says

    May 23, 2018 at 8:22 am

    It does not run properly for me: I get two windows with the same WINDOW_NAME and the buttons don’t work properly. I am just trying the example. https://uploads.disquscdn.com/images/53f6366ab8ee933d0132047e1a5832c14c9af8f3225cc5cc561de404cfdc0e30.png

    Reply
    • Madhusudan Govindraju says

      May 23, 2018 at 8:25 am

      Never mind, that was in debug mode in VS2017; I changed it to release mode and it is working perfectly. Any idea why it behaves that way in debug mode? Are we rendering a “cvui” over the OpenCV window to get the functionality?

      Reply
      • dovyski says

        October 8, 2018 at 5:09 am

        Are you still facing that problem? If so, could you please open an issue on cvui’s Github repo (https://github.com/Dovyski/cvui/issues)?

        Reply
  6. TaSheen Mayberry says

    October 5, 2018 at 8:56 am

    Impressed, cvui works great so far. Except I need it to not only feature controls but also host video frames with a thread. Right now they appear side by side when sized correctly, but it becomes annoying if they overlap.

    Do you have any advice or examples to feature video in the window?

    Reply
    • dovyski says

      October 8, 2018 at 5:07 am

      cvui works by rendering all its components into a frame, e.g. a cv::Mat. What you could do is read each frame of your video and then display it in the cvui frame using cvui::image(). I think that could work for your needs.

      Reply
      • TaSheen Mayberry says

        October 8, 2018 at 3:23 pm

        I’ve got the cv::Mat stored as a “rawImage” frame.

        Then
        drawKeypoints(rawImage, keypoints, im_with_keypoints, Scalar(gui_v->valueBlue, gui_v->valueGreen, gui_v->valueRed), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);

        Then
        imshow("keypoints", im_with_keypoints);

        How would I adjust this to accommodate cvui::image?

        Reply
        • dovyski says

          October 9, 2018 at 12:16 am

          Assuming you have a cv::Mat, e.g. screen_mat, that represents your screen (the one being used by cvui), you can render im_with_keypoints to it like:

          // Render im_with_keypoints to position (10, 10) in screen_mat
          image(screen_mat, 10, 10, im_with_keypoints);

          Reply
          • TaSheen Mayberry says

            October 9, 2018 at 6:34 pm

            Thank you, exactly; but I am only having difficulty with screen_mat. Where does that come from? I’m using e.g. “frame_x” for screen_mat, but don’t know how to make it.

          • dovyski says

            October 10, 2018 at 1:12 am

            Take a look at this guide: https://dovyski.github.io/cvui/usage/ , in particular section 3 (Render cvui components). In that guide, the “screen mat” is called frame instead of screen_mat.

          • TaSheen Mayberry says

            October 10, 2018 at 1:00 pm

            cv::Mat frame = cv::Mat(cv::Size(640, 480), CV_8UC3);
            image(frame, 10, 10, im_with_keypoints);
            getting exceptions every time

