Learn OpenCV

OpenCV, PyTorch, Keras, Tensorflow examples and tutorials


Live CV : A Computer Vision Coding Application

Dinu SV
July 17, 2016
Application


In this article I will present Live CV, a computer vision coding application, and describe a few of its implementation details along the way.

The Motivation

Live CV started as an idea I had while configuring a computer vision algorithm. I imagined a tool where people could code and integrate algorithms while seeing the results update live as soon as the code changes. I thought this would simplify the way people learn, configure, and interact with computer vision algorithms.

This is a guest article by Dinu SV. You can see other cool projects he has done at http://dinusv.com

Choices and Challenges

I started implementing this idea around the time Qt 5 was released together with the QML language, which had a few advantages for me:

  • It was fast to interpret and compile
  • It allowed me to link QML elements to computer vision algorithms already implemented in C++
  • It provided access to a cross platform GUI library

Since the language and library had only just been released, I had trouble finding documentation for the more specific functionality I needed, and had to dig through a lot of Qt's code just to figure out how to display an OpenCV matrix in the GUI. The code editor and QML interpreter were easier to integrate, so after that I could start planning the first set of components and grouping them into modules. Most components are treated as filters, linked to one another through their inputs and outputs, which allows each transformation to be displayed on the canvas. The application's own functionality, such as file management and error handling, was developed as I progressively added each element.

There were a few more difficulties along the way, for example managing multi-threaded components like VideoCapture during code updates, where I didn't want the video to restart every time the code changed. Matrix allocations were another issue, since they had to be efficient for Live CV to support a large number of components. A lot of time went into areas I didn't expect, but after getting everything to work properly, I finally released the first version of Live CV.

This version runs on both Linux and Windows. Most of the algorithms come from the OpenCV library, so I tried to match OpenCV's naming scheme as closely as possible, making Live CV easy to use for people with an OpenCV background. QML is a JSON-like declarative language; image transformations are handled as filter components, and components that contain displayable information are shown directly on Live CV's canvas.

For example, applying a blur to an image is done with the following two components:

import lcvcore 1.0
import lcvimgproc 1.0

Grid{
    ImRead{ id : read; file : 'piano.jpg' }
    Blur{ input : read.output; anchor: Qt.point(3, 3); ksize: "5x5" }
}

The Blur component is linked to the ImRead component by assigning its input property the ImRead's output, accessed through the id.

[Figure: Live CV piano blur sample]

Besides processing algorithms, there are components for debugging and viewing the results of the declared filters: histograms, pan & zoom viewers, configuration boxes, and many others.

Files are saved as QML files, which run on any platform through Live CV. This guarantees that each file runs the same way and produces the same results regardless of the platform, so you can share your code and its results with friends without each of you going through a long configuration process.

Components for Object Recognition

The last module I released for Live CV provides examples of object recognition through feature detection. The basic steps involved are:

  • Feature detection
  • Descriptor extraction
  • Descriptor matching
  • Filtering matches
  • Identifying matched objects
  • Computing a homography for found objects

The first two steps are used to extract descriptors from multiple training images and from a query image. The trained descriptors are then used to find matching counterparts in the query image.

In Live CV, each of these steps corresponds to a component in the following declarations:

ImRead{id: trainRead; ...}
FastFeatureDetector{id: trainFeatureDetect; ...}
BriefDescriptorExtractor{id: trainDescriptorExtract; ...}

ImRead{id: queryRead; ...}
FastFeatureDetector{id: queryFeatureDetect; ...}
BriefDescriptorExtractor{id: queryDescriptorExtract; ...}

FlannBasedMatcher{id: matcher; ...}
DescriptorMatchFilter{id: matchFilter; ...}
MatchesToLocalKeypoint{id: objectIdentifier; ...}
KeypointHomography{id: homography; ...}

The above sample uses a single training image for demonstration purposes. The two feature detectors receive an image as input and send the detected keypoints to the BriefDescriptorExtractors. The first extractor is used to train the matcher, while the second provides descriptors to match against the trained data. Unwanted matches are then filtered out by criteria defined in matchFilter, and the remaining matches are grouped into the objects registered in the MatchesToLocalKeypoint component. Finally, a homography is computed for each found object, and the homography component draws the object's contours in the query image.

[Figure: livecv_clock_homography]

Changes to each component are made in code, so we can see how different features are detected simply by switching detectors. We can view multiple detectors side by side and choose the best option:

ImRead{ id: trainImageLoader; visible : false; file : "clock.jpg" }

FastFeatureDetector{ input: trainImageLoader.output; }
BriskFeatureDetector{ input: trainImageLoader.output; }
OrbFeatureDetector{ input: trainImageLoader.output; }
StarFeatureDetector{ input: trainImageLoader.output; }

The image below shows outputs for different detectors:

[Figure: livecv_clock_features]

Contribution and Further Development

I want to keep adding functionality to Live CV, from integrating more OpenCV algorithms as components to new tools for interacting with and configuring them. I also want to add a new code editor that automates adding and configuring components.

I'm looking for people to collaborate with on this project, from developers to people who can help promote it. I think it has a lot of potential as a learning tool for anyone getting started in computer vision, and also as a cross-platform tool for running computer vision applications.

My GitHub profile is http://github.com/dinusv, and there is also a forum on Live CV's website where I'm happy to help anyone interested.

Tags: application coding homography live cv OpenCV qml qt

Filed Under: Application
