【size_t】 Vs. 【size_type】

  • size_type is a container concept: each container defines its own size_type (e.g. vector<int>::size_type), so it cannot be used without a container, and its concrete type is not fixed across containers
  • typedef unsigned int size_t;

size_t is in practice just an unsigned int (for variables that never hold negative values), e.g. the type of an array index; it can also be used to “receive” the return value of the sizeof operator.
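A minimal sketch contrasting the two (the container type and values are just for illustration):

#include <cstddef>   // size_t
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(10, 0);
    // size_type is defined by the container itself
    std::vector<int>::size_type n = v.size();
    // size_t can hold array indices and "receives" the result of sizeof
    size_t bytes = sizeof(int) * n;
    std::cout << n << " elements, " << bytes << " bytes" << std::endl;
    return 0;
}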


【putText】

putText( image, "Testing text rendering", org, rng.uniform(0,8),
         rng.uniform(0,100)*0.05+0.1, randomColor(rng), rng.uniform(1, 10), lineType);

Draws the text “Testing text rendering” in image
The bottom-left corner of the text will be located at the Point org
The font type is a random integer value rng.uniform(0,8) in the range [0,8)
The scale of the font is given by the expression rng.uniform(0,100)*0.05 + 0.1 (meaning its range is [0.1, 5.1))

The text color is random (denoted by randomColor(rng))
The text thickness ranges between 1 and 10, as specified by rng.uniform(1,10)

More basic drawing examples can be found in the OpenCV samples.
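A self-contained sketch of the putText call above (OpenCV 2.x; the canvas size, org, and lineType values are assumptions, and randomColor from the sample is replaced by an inline random Scalar):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main() {
    Mat image = Mat::zeros(400, 600, CV_8UC3);  // black canvas
    RNG rng(0xFFFFFFFF);                        // random number generator
    Point org(50, 200);                         // bottom-left corner of the text
    int lineType = 8;
    Scalar color(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
    putText(image, "Testing text rendering", org, rng.uniform(0, 8),
            rng.uniform(0, 100) * 0.05 + 0.1, color, rng.uniform(1, 10), lineType);
    imshow("putText demo", image);
    waitKey(0);
    return 0;
}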

【keypoint】in OpenCV

Data structure for salient point detectors in OpenCV:

class KeyPoint
{
public:
    // default constructor
    KeyPoint();
    // two complete constructors
    KeyPoint(Point2f _pt, float _size, float _angle=-1,
            float _response=0, int _octave=0, int _class_id=-1);
    KeyPoint(float x, float y, float _size, float _angle=-1,
             float _response=0, int _octave=0, int _class_id=-1);
    // coordinate of the point
    Point2f pt;
    // feature size
    float size;
    // feature orientation in degrees (has negative value if the orientation is not defined/not computed)
    float angle;
    // feature strength (can be used to select only the most prominent key points)
    float response;
    // scale-space octave in which the feature has been found; may correlate with the size
    int octave;
    // object id (can be used by feature classifiers or object detectors)
    int class_id;
};

// reading/writing a vector of keypoints to a file storage
void write(FileStorage& fs, const string& name, const vector<KeyPoint>& keypoints);
void read(const FileNode& node, vector<KeyPoint>& keypoints);
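For example, a minimal round trip through a FileStorage (the file and node names are assumptions):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
using namespace cv;

int main() {
    std::vector<KeyPoint> kps;
    kps.push_back(KeyPoint(10.f, 20.f, 4.f));     // x, y, size
    FileStorage fs("keypoints.yml", FileStorage::WRITE);
    write(fs, "keypoints", kps);                  // serialize the vector
    fs.release();
    FileStorage fs2("keypoints.yml", FileStorage::READ);
    std::vector<KeyPoint> loaded;
    read(fs2["keypoints"], loaded);               // read it back
    return 0;
}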

Each of these interest points (which will later also receive a descriptor vector) carries:

  • a Point2f pt (the x and y coordinates, stored as float),
  • the strength (float response)
  • the orientation (float angle)
  • the scale-space octave (int octave)
  • the feature size (float size)

if (showDetail)
    sbResult.AppendFormat("Point {0} (coordinates: {1}, size: {2}, angle: {3}°, response: {4}, octave: {5}),",
        idx, keypoint.Point, keypoint.Size, keypoint.Angle, keypoint.Response, keypoint.Octave);

Each SURF-processed image has X features (descriptors), each of 64 dimensions.

Match EACH feature of the image you want to compare against a FLANN tree of all images, finding the features with the lowest Euclidean distance. Then take all the features you found and identify the images they belong to (meaning there is at least one feature match in that image). Then do an individual SURF comparison against every image with matching SURF features, and select the image with the best match. To get better matches, you can also apply the Lowe ratio test.

Compare the interest points from one image to a list of images in a database (a list of interest points): build a FLANN index while at the same time keeping track of where each interest point comes from.

  • construct a tree for every image and then use these trees for comparison, OR
  • construct one big tree with the descriptors of all the images, which you then match a single image against
  • construct ONE FLANN index for all the images using 4 randomized kd-trees, THEN match against this tree
  • match one image against a FLANN index of all the other images’ SURF descriptors, tracking which descriptors were matched; then take all the images with one or more matching descriptors and do an individual matching, the best of which is the found match (see the sketch after the quote below)
  • use the Lowe ratio test: dists.Data[i, 0] < 0.6 * dists.Data[i, 1]
  • FLANN (Fast Library for Approximate Nearest Neighbors)

“In keypoint matching step, the nearest neighbor is defined as the keypoint with minimum Euclidean distance for the invariant descriptor vector”.
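A hedged sketch of the single-index strategy above (OpenCV 2.x; the function and variable names are mine): DescriptorMatcher::add registers every image's descriptors, and DMatch::imgIdx then tells you which database image each match came from.

#include <algorithm>
#include <vector>
#include <opencv2/features2d/features2d.hpp>
using namespace cv;

// returns the index of the database image with the most matched descriptors
int bestImage(const Mat& queryDescriptors, const std::vector<Mat>& dbDescriptors) {
    FlannBasedMatcher matcher;
    matcher.add(dbDescriptors);   // one descriptor Mat per database image
    matcher.train();              // build ONE FLANN index over all of them
    std::vector<DMatch> matches;
    matcher.match(queryDescriptors, matches);
    std::vector<int> votes(dbDescriptors.size(), 0);
    for (size_t i = 0; i < matches.size(); i++)
        votes[matches[i].imgIdx]++;   // vote for the image this match belongs to
    return std::max_element(votes.begin(), votes.end()) - votes.begin();
}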

It seems the best method for single-image SURF comparison is, for one image1 with X interest points, to search image2 for similar interest points by comparing descriptors. That is, accumulate the squared differences over the 64 dimensions of a descriptor pair:

double dist = 0;
for (int i = 0; i < 64; i++) {
    double d = descriptor1[i] - descriptor2[i];
    dist += d * d;   // (squared) Euclidean distance between the two descriptors
}

and then, for each interest point, select the counterpart with the lowest distance and sum the distances up at the end.

【Feature detector】SURF feature point matching

via http://morf.lv/modules.php?name=tutorials&lasit=2#.UCmL0J2PXh4

What feature-finding algorithms do is locate keypoints in the two images and calculate their descriptors. The feature-finding process is usually composed of two steps:

  1. First, find the interest points in the image which might contain meaningful structures; this is usually done by comparing the Difference of Gaussians (DoG) at each location in the image under different scales. A major orientation is also calculated when a point is accepted as a feature point.
  2. Second, construct the scale-invariant descriptor on each interest point found in the previous step. To achieve rotation invariance, we align a rectangle to the major orientation. The size of the rectangle is proportional to the scale at which the interest point was detected. The rectangle is then divided into a 4-by-4 grid. Different kinds of information, such as the gradient or the absolute value of the gradient, are then extracted from each of these sub-squares and composed into the interest point descriptor.

The descriptors are what we compare to each other to determine whether the object is present in the scene or not.

  • First you have to include “features2d” library into your project:

#include "opencv2\features2d\features2d.hpp"
#include "opencv2\nonfree\features2d.hpp" //This is where actual SURF and SIFT algorithm is located
#include <opencv2/legacy/legacy.hpp> //This is where BruteForceMatcher is located

  • Extract keypoints and calculate their descriptors. To do that, declare a vector of keypoints and matrices of descriptors.

vector<KeyPoint> keypointsO; // keypoints for the object
vector<KeyPoint> keypointsS; // keypoints for the scene
//Descriptor matrices
Mat descriptors_object, descriptors_scene;

  • Declare a SURF object which will actually extract the keypoints
  • Calculate the descriptors and save them in memory. When declaring the SURF object you have to provide the minimum Hessian value: the smaller it is, the more keypoints your program will be able to find, at the cost of performance.

SurfFeatureDetector surf(1500); // 1500 is low enough most of the time, but it may vary from application to application
surf.detect(sceneMat,keypointsS);
surf.detect(objectMat,keypointsO);

  • Calculate the descriptors:

SurfDescriptorExtractor extractor;
extractor.compute( sceneMat, keypointsS, descriptors_scene );
extractor.compute( objectMat, keypointsO, descriptors_object );

  • Do the actual comparison (object detection): choose the matcher, e.g. FlannBasedMatcher (the fastest) or the brute-force matcher

// Declaring the matcher: choose ONE of the two below
FlannBasedMatcher matcher;
// BFMatcher for SURF can be set to either NORM_L1 or NORM_L2.
// But if you are using binary feature extractors like ORB, use NORM_HAMMING instead of NORM_L*.
BFMatcher matcher(NORM_L1);

  • Do nearest-neighbor matching, which is built into the OpenCV library:

vector< vector<DMatch> > matches;
matcher.knnMatch( descriptors_object, descriptors_scene, matches, 2 ); // find the 2 nearest neighbors

  • After matching, discard invalid results. Basically, we filter out the good matches by means of the Nearest Neighbor Distance Ratio (NNDR):

const float nndrRatio = 0.7f; // typical NNDR threshold (this value is an assumption; tune it per application)
vector< DMatch > good_matches;
good_matches.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
    if (matches[i].size() < 2)
        continue;
    const DMatch &m1 = matches[i][0];
    const DMatch &m2 = matches[i][1];
    if (m1.distance <= nndrRatio * m2.distance)
        good_matches.push_back(m1);
}

  • Assumption: when you have 7 or more good_matches you can assume the object has been found, and do whatever you want to do, e.g. draw a boundary around the detected object.
    OK, now let’s extract the coordinates of the good matches from the object and the scene, so we can find the homography, which we are going to use to find the boundary of the object in the scene.

std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( unsigned int i = 0; i < good_matches.size(); i++ )
{   //-- Get the keypoints from the good matches
    obj.push_back( keypointsO[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypointsS[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( objectP.cols, 0 );
obj_corners[2] = cvPoint( objectP.cols, objectP.rows ); obj_corners[3] = cvPoint( 0, objectP.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( outImg, scene_corners[0], scene_corners[1], color, 2 ); // top line
line( outImg, scene_corners[1], scene_corners[2], color, 2 );
line( outImg, scene_corners[2], scene_corners[3], color, 2 );
line( outImg, scene_corners[3], scene_corners[0], color, 2 );


Image matching based on feature points: 2D features (Feature2D)

The classic, commonly used methods for automatic feature point extraction are Harris features, SIFT features, and SURF features.

1. Description of SURF features

This operation is encapsulated in the class SurfFeatureDetector; its detect function detects the SURF keypoints and stores them in a vector<KeyPoint> container.

2. Computing the feature vectors with the SurfDescriptorExtractor class

This converts the vector<KeyPoint> variable from step 1 into matrix form, saved in a Mat.

3. Brute-force matching of the two images’ feature vectors, using the match function of the BruteForceMatcher class (the results are not great)
/**
* @file SURF_descriptor
* @brief SURF detector + descriptor + BruteForce Matcher + drawing matches with OpenCV functions
* @author A. Huaman
*/
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/nonfree/features2d.hpp" // SurfFeatureDetector, SurfDescriptorExtractor
#include "opencv2/legacy/legacy.hpp"      // BruteForceMatcher
#include "opencv2/highgui/highgui.hpp"
using namespace cv;
void readme();
/**
* @function main
* @brief Main function
*/
int main( int argc, char** argv )
{
if( argc != 3 )
{ readme(); return -1; }
Mat img_1 = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors with a brute force matcher
BruteForceMatcher< L2<float> > matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
//-- Draw matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );
//-- Show detected matches
imshow("Matches", img_matches );
waitKey(0);
return 0;
}
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }

4. FLANN feature matching: use the FlannBasedMatcher class for feature matching, and keep only the good matches
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{ double dist = matches[i].distance;
  if( dist < min_dist ) min_dist = dist;
  if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
//-- PS.- radiusMatch can also be used here.
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{ if( matches[i].distance < 2*min_dist )
{ good_matches.push_back( matches[i]); }
}
//-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Show detected matches
imshow( "Good Matches", img_matches );

5. Using the homography mapping to locate a known object

Building on the FLANN feature matching, use the findHomography function to compute the corresponding transform from the matched keypoints, then use the perspectiveTransform function to map the point set.
//-- Localize the object from img_1 in img_2
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( size_t i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
Point2f obj_corners[4] = { cvPoint(0,0), cvPoint( img_1.cols, 0 ), cvPoint( img_1.cols, img_1.rows ), cvPoint( 0, img_1.rows ) };
Point scene_corners[4];
//-- Map these corners in the scene ( image_2)
for( int i = 0; i < 4; i++ )
{
double x = obj_corners[i].x;
double y = obj_corners[i].y;
double Z = 1./( H.at<double>(2,0)*x + H.at<double>(2,1)*y + H.at<double>(2,2) );
double X = ( H.at<double>(0,0)*x + H.at<double>(0,1)*y + H.at<double>(0,2) )*Z;
double Y = ( H.at<double>(1,0)*x + H.at<double>(1,1)*y + H.at<double>(1,2) )*Z;
scene_corners[i] = cvPoint( cvRound(X) + img_1.cols, cvRound(Y) );
}
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0], scene_corners[1], Scalar(0, 255, 0), 2 );
line( img_matches, scene_corners[1], scene_corners[2], Scalar( 0, 255, 0), 2 );
line( img_matches, scene_corners[2], scene_corners[3], Scalar( 0, 255, 0), 2 );
line( img_matches, scene_corners[3], scene_corners[0], Scalar( 0, 255, 0), 2 );
//-- Show detected matches
imshow( "Good Matches & Object detection", img_matches );

Harris feature detection

In computer vision we often need to find the matching points between two frames: once we know how two images relate, we can extract information from both of them. The defining property of a feature is that it is uniquely identifiable; the usual types of image features are edges, corners (interest points), and blobs (interest regions). A corner is a local image feature and is very widely used. Harris corner detection is a corner-extraction algorithm that works directly on the grayscale image. It is very stable and especially accurate for L-shaped corners, but because it uses Gaussian filtering it is relatively slow, corner information can be lost or displaced, and the extracted corners tend to cluster. Concretely, it is implemented by the function cornerHarris.
Besides Harris corner detection, the Shi-Tomasi method can also be used, through the function goodFeaturesToTrack, which also works well. You can also build your own corner detector out of the cornerMinEigenVal and minMaxLoc functions; the final feature point selection criterion should be adapted to your own situation. If you need higher accuracy for the feature points or corners, the cornerSubPix function can refine corners to sub-pixel precision. A minimal sketch follows the link below.
via http://blog.csdn.net/yang_xian521/article/details/6901762
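A minimal cornerHarris sketch (OpenCV 2.x; the block size, aperture, k, and threshold values are assumptions to tune per application):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main(int argc, char** argv) {
    Mat src = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    Mat response;
    cornerHarris(src, response, 2, 3, 0.04);   // blockSize=2, apertureSize=3, k=0.04
    Mat norm;
    normalize(response, norm, 0, 255, NORM_MINMAX, CV_32FC1);
    for (int y = 0; y < norm.rows; y++)
        for (int x = 0; x < norm.cols; x++)
            if (norm.at<float>(y, x) > 150)    // assumed corner threshold
                circle(src, Point(x, y), 4, Scalar(255), 1);
    imshow("Harris corners", src);
    waitKey(0);
    return 0;
}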

SurfFeatureDetector

The classes SurfFeatureDetector and FastFeatureDetector inherit from FeatureDetector and can be exchanged. But I couldn’t find a matching class for SurfDescriptorExtractor; I expected to find something like FastDescriptorExtractor, but no such class is available. What seems strange is that if I only change the detector to FastFeatureDetector, the example seems to work correctly.

Solution: I’m using 2.4.2; currently it is located at “OPENCV\include\opencv2\nonfree\features2d.hpp”, so in the code all you need to do is:

#include <opencv2/nonfree/features2d.hpp>

SiftDescriptorExtractor siftExtractor;
//Later on in the file, after a frame has been grabbed, keypoints found, etc.
Mat siftDescriptors;
siftExtractor.compute(frame,roiKP,siftDescriptors);

SURF uses a Hessian matrix-based measure for the detection of interest points and a distribution of Haar wavelet responses within the interest point neighborhood as descriptor. An image is analyzed at several scales, so interest points can be extracted from both global and local image details. The dominant orientation of each of the interest points is determined to support rotation-invariant matching.

  1. Retrieval is performed with the aid of an indexing scheme and a matching strategy (e.g. a KD-tree with the Best Bin First (BBF) algorithm is used to index and match the similarity of the features of the images)
  2. First-order and second-order colour moments are calculated for the SURF key points to provide the maximum distinctiveness for the key points
  • SURF
  1. The key points are detected by using a Fast-Hessian matrix. The determinant of the Hessian matrix is used to determine the location and scale of the descriptor.
  2. The descriptor describes a distribution of Haar-wavelet responses within the interest point neighborhood.
    • An orientation is assigned based on the information of a circular region around each detected interest point; the responses are weighted with a Gaussian with σ = 2.5s centered at the interest point.
    • The dominant orientation is estimated by summing the horizontal and vertical wavelet responses within a rotating wedge covering an angle of π/3 in the wavelet response space.
    • The resulting maximum is then chosen to describe the orientation of the interest point descriptor.
  3. The region is split up regularly into smaller square sub-regions, and a few simple features at regularly spaced sample points are computed for each sub-region. The horizontal and vertical wavelet responses are summed up over each sub-region to form a first set of entries to the feature vector. The responses of the Haar wavelets are weighted with a Gaussian centered at the interest point in order to increase robustness against geometric deformations, and the wavelet responses in the horizontal direction dx and the vertical direction dy are summed up over each sub-region. Most of the information is concentrated in the low-order moments:
    • the first moment (mean)
    • the second moment (variance)
  4. Indexing and matching
    • A KD-tree algorithm is used to match the features of the query image with those of the database images
    • The BBF algorithm uses a priority search order to traverse the KD-tree, so that bins in feature space are searched in order of their closest distance from the query. The k approximate nearest matches can be returned at low cost by cutting off further search after a specific number of the nearest bins have been explored. A voting scheme is used to rank and retrieve the matched images.
    • Match: to evaluate the similarity between the two images, use the ratio
      number of good points / total number of descriptors (see the sketch below)
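As a sketch, the quoted similarity measure is simply (the function name is mine):

// similarity between two images: good matches (after the ratio test)
// divided by the total number of descriptors in the query image
double similarity(size_t goodMatches, size_t totalDescriptors) {
    return totalDescriptors == 0 ? 0.0 : (double)goodMatches / (double)totalDescriptors;
}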

FAST – finding an object (detection) match

http://www.yongblog.com/archives/160.html
http://blog.csdn.net/sangni007/article/details/7547350

Mat src1, src2;
src1 = imread(img_filename1, 1);
src2 = imread(img_filename2, 1);
// vectors of keypoints
vector<KeyPoint> keys1;
vector<KeyPoint> keys2;
// construction of the FAST feature detector objects
FastFeatureDetector fast1(40); // detection threshold = 40
FastFeatureDetector fast2(40);
// feature point detection
fast1.detect(src1, keys1);
fast2.detect(src2, keys2);
cout << "KeyPoint Size:" << keys1.size() << endl;
drawKeypoints(src1, keys1, src1, Scalar::all(-1), DrawMatchesFlags::DRAW_OVER_OUTIMG);
imshow("FAST feature1", src1);
SurfDescriptorExtractor Extractor;   // run with: BruteForceMatcher< L2<float> > matcher
//BriefDescriptorExtractor Extractor; // run with: BruteForceMatcher< Hamming > matcher (choose ONE extractor)
Mat descriptors1, descriptors2;
Extractor.compute(src1, keys1, descriptors1);
Extractor.compute(src2, keys2, descriptors2);
FlannBasedMatcher matcher;
vector<DMatch> matches;
matcher.match( descriptors1, descriptors2, matches );
Mat img_matches;
drawMatches( src1, keys1, src2, keys2, matches, img_matches,
    Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
imshow("draw", img_matches);

fast.detect(image, keyPoints);
drawKeypoints(image, keyPoints, image, Scalar::all(255), DrawMatchesFlags::DRAW_OVER_OUTIMG);
imshow("FAST feature", image);

Android.mk & Application.mk

A makefile defines a set of rules that specify which files must be compiled first, which afterwards, and which must be recompiled, and it can even perform more complex operations, because a makefile, like a shell script, can execute operating-system commands. The benefit a makefile brings is “automated building”: once it is written, a single make command builds the whole project automatically, greatly improving development efficiency. make is a command-line tool that interprets the instructions in a makefile; most IDEs ship such a command, e.g. Delphi’s make, Visual C++’s nmake, and GNU make on Linux.

1. The Android.mk file describes the C/C++ source files to the Android NDK (native development)

  • This file is actually a small fragment of a GNU Make file, and it is parsed one or more times by the build system.
  • Its purpose is to let you organize your source code into modules: a static library or a shared library.

An example: the jni/Android.mk file describes this shared library to the NDK build system. Its content:

---------- cut here ------------------
# 1. Clear old variables
# Every Android.mk file must start by defining LOCAL_PATH, which is used to locate the source
# files; the macro function ‘my-dir’, provided by the build system, returns the current path
# (i.e. the directory containing the Android.mk file).
LOCAL_PATH := $(call my-dir)
# CLEAR_VARS is provided by the build system and points to a special GNU Makefile that clears
# many variables named LOCAL_XXX for you (all build control files run in the same GNU make
# execution environment, where all variables are global, e.g. LOCAL_MODULE, LOCAL_SRC_FILES,
# LOCAL_STATIC_LIBRARIES, etc.; LOCAL_PATH is the exception).
include $(CLEAR_VARS)
# 2. Set new variables
LOCAL_MODULE := hello-jni       # the name must be unique and contain no spaces; the build system generates libhello-jni.so
LOCAL_SRC_FILES := hello-jni.c  # paths of the source files to compile; separate multiple files with ‘\’
LOCAL_C_INCLUDES := $(LOCAL_PATH)/extra_inc $(LOCAL_PATH)/main_inc  # extra header include paths
# 3. Invoke the build function
include $(BUILD_SHARED_LIBRARY)
---------- cut here ------------------

In fact, all that Android.mk does is

  • clear the old variables,
  • set the new variables,
  • invoke a build function (which is really just including a fixed .mk file)

Compiling C/C++ code into an .so file takes more than an Android.mk file: you also need an Application.mk file. 2. The purpose of Application.mk is to describe the modules (i.e. static or shared libraries) that your application needs. The Application.mk file is usually placed at $PROJECT/jni/Application.mk. Example:

APP_STL := gnustl_static      # by default the NDK build system provides C++ headers for the minimal C++ runtime library (/system/lib/libstdc++.so); the NDK’s own C++ implementations can be used or linked in your application instead
APP_CPPFLAGS := -frtti -fexceptions   # needed to compile C++ files with RTTI and exceptions
APP_ABI := armeabi-v7a        # by default the NDK build system generates machine code for the "armeabi" ABI; to support the IA-32 instruction set use APP_ABI := x86; for several ABIs use APP_ABI := armeabi armeabi-v7a x86

3. In the C file, a function name is defined like Java_testNDK_android_HelloJni_stringFromJNI because the JNI standard requires the format Java_packagename_classname_methodname, for example:

extern "C" {
JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial3_Sample3View_FindFeatures(JNIEnv* env, jobject obj, jint width, jint height, jbyteArray yuv, jintArray bgra) {}
}

JNIEXPORT and JNICALL are mandatory keywords. The naming rules are as follows:

  1. Function name: Java_(package path)_(class name)_(function name)(JNIEnv *env, jobject obj, your own parameters…); the “.” in the package path is replaced by “_” (underscore), the class name is the name of the Java class that calls the native library function, and the last part is the actual function name;
  2. Variable names: prefix the C/C++ type with j; for an array, append Array: e.g. jintArray is int[] and jbyteArray is byte[].
  3. The two parameters JNIEnv *env and jobject obj are mandatory and are used to call functions of the JNI environment
  4. jbyteArray yuv and jintArray bgra carry the incoming image data; jint width and jint height are the width and height

Pay attention to the BGRA byte order: ARGB stored in Java as an int array becomes BGRA at the native level. A hedged sketch of the rules above follows.
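A sketch of a JNI function following these rules (the package org.example.demo, class MyActivity, and method processFrame are hypothetical names):

#include <jni.h>

extern "C"
JNIEXPORT void JNICALL
Java_org_example_demo_MyActivity_processFrame(JNIEnv* env, jobject obj,
        jint width, jint height, jbyteArray yuv, jintArray bgra) {
    jbyte* pYuv  = env->GetByteArrayElements(yuv, 0);
    jint*  pBgra = env->GetIntArrayElements(bgra, 0);
    // ... process the frame here; remember that ARGB stored in Java
    // as an int array is BGRA at the native level ...
    env->ReleaseIntArrayElements(bgra, pBgra, 0);
    env->ReleaseByteArrayElements(yuv, pYuv, 0);
}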

Modules: dynamic link libraries

Although C/C++ executes very efficiently, it does not have the conveniences that make scripting languages so pleasant, such as loose type checking, garbage collection, and so on;

yet many scripting languages still need the support of C/C++ modules. A so-called module is nothing more than a wrapper around some C/C++ classes or functions, which lets the original functions interact with the scripting language’s interpreter; so in essence a module is a dynamic link library that the interpreter can load at run time, much like Matlab’s mex (the entry function of every mex is mexFunction).

  • C++ dynamic link libraries (.dll)
  • Linux-based libraries (.so)

SWIG (Simplified Wrapper and Interface Generator) is designed so that the swig command preprocesses a module interface file (usually with an .i suffix) to generate the wrapper, which is then compiled and linked together with the corresponding C/C++ program into a dynamic link library.
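A minimal sketch of such an interface file (the module name example and the function gcd are hypothetical):

/* example.i */
%module example
%{
#include "example.h"     /* declarations of the wrapped C/C++ code */
%}
int gcd(int x, int y);   /* every declaration listed here gets a script-language wrapper */

Running, say, swig -python example.i produces a wrapper source file that is then compiled and linked with the original code into the loadable module.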

Running the OpenCV samples (for the configuration, see my earlier posts)

Although OpenCV has claimed Android support since 2.2, OpenCV only really ran on Android as of version 2.3.1. In that version we can use Android’s Camera as well as OpenCV’s own VideoCapture. Most of the important APIs have been wrapped in Java interfaces and can be called directly from Android, e.g. imread, VideoCapture, Mat.

OCV T0 – Android Camera and OCV T2 – OpenCV Camera:

Personally I find the Android Camera better: it is more flexible to use, and the user can conveniently set all kinds of properties, such as video format, resolution, frame rate, white balance, exposure, and so on. Android’s Camera class allows the user to set a PreviewCallback, inside which you can call OpenCV APIs to process each frame and display the result on screen in real time.


【Java】The native keyword

The native keyword marks a method as a native function, i.e. one implemented in C/C++ and compiled into a DLL that Java then calls.
The bodies of these functions live in the DLL; the JDK source code does not contain them, so you cannot see them, and they differ from platform to platform. This is Java’s low-level mechanism: Java in fact accesses the operating system by calling different native methods on different platforms.

Java’s platform independence comes at the price of giving up some low-level control; when Java does need such control, it needs the help of other languages, and that is what native is for.
