(4) Implementing object tracking with OpenCV on the Android platform

Tags: Android development  camshift  android  object tracking  face recognition  opencv

The project schedule is tight lately, so I'm concentrating on studying OpenCV's object-tracking functionality on the Android platform.

 

I'll get back to polishing the other articles in this series when I have time.

 

I found some sample code online. My plan is to study the sample code and the tutorials side by side: the tutorials cover NDK programming conventions, plus the OpenCV face-recognition and object-tracking material.

 

(1) Analysis and study of the asmlibrary face detection and tracking library

http://www.oschina.net/p/asmlibrary

http://yaohan.sinaapp.com/topic/3/asmlibrary#comments

 

 

I downloaded version 6.0. A few things to watch when importing it into Eclipse: remember to import OpenCV-Library, and add the required include directories under C/C++ Paths and Symbols. You may run into other problems; search around or leave me a comment and we can work through them together!

 

After importing the project into Eclipse, take a look at its structure:

 

The src folder holds two Java files: ASMFit.java and ASMLibraryActivity.java.

The jni folder holds an so folder plus Android.mk, Application.mk, asmfitting.h, asmlibrary.h, DemoFit.cpp, and vjfacedetect.h.

Under res there is a folder named raw, which I hadn't seen before, so I looked it up:

 

 

How assets and res/raw differ:

Files under assets are not compiled; you access their contents by path. Files under raw are compiled automatically, and each one gets an ID in R.java.

 

 

 

What matters most is how to obtain resources from the Assets and Raw folders:

      Assets:     AssetManager assetManager = getAssets();

      Raw:        InputStream inputStream = getResources().openRawResource(R.raw.demo); 
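
To make this concrete, here is a minimal sketch of both access paths; "demo.txt" under assets/ and res/raw/demo are hypothetical placeholders, not files from this project:

import java.io.IOException;
import java.io.InputStream;
import android.app.Activity;
import android.content.res.AssetManager;

// Hypothetical sketch -- "demo.txt" and R.raw.demo are placeholders.
public class ResourceReadExample extends Activity {
    private void readBoth() {
        try {
            AssetManager assetManager = getAssets();
            // assets: opened by path; the file is packaged as-is, uncompiled
            InputStream assetStream = assetManager.open("demo.txt");
            // raw: compiled into the APK and addressed through its R.java ID
            InputStream rawStream = getResources().openRawResource(R.raw.demo);
            byte[] buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = rawStream.read(buffer)) != -1) {
                // consume the bytes, e.g. copy them to app-private storage
            }
            assetStream.close();
            rawStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}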

 

 

 

Let me paste the code in and work through it.

 

 

Java part:

 

(1) ASMFit.java

 

package org.asmlibrary.fit;

import org.opencv.core.Mat;

public class ASMFit {
	
	public ASMFit(){}
	
	/**
	 * This function can only be used after nativeInitFastCascadeDetector()
	 * @param imageGray original gray image 
	 * @param faces all faces' feature points 
	 * @return true if any face was found, false otherwise
	 */
	public boolean fastDetectAll(Mat imageGray, Mat faces){
		return nativeFastDetectAll(imageGray.getNativeObjAddr(), 
				faces.getNativeObjAddr());
	}
	
	/**
	 * This function can only be used after nativeInitCascadeDetector()
	 * @param imageGray original gray image 
	 * @param faces all faces' feature points 
	 * @return true if any face was found, false otherwise
	 */
	public boolean detectAll(Mat imageGray, Mat faces){
		return nativeDetectAll(imageGray.getNativeObjAddr(), 
				faces.getNativeObjAddr());
	}
	
	/**
	 * This function can only be used after nativeInitCascadeDetector()
	 * @param imageGray original gray image 
	 * @param face a single face's feature points 
	 * @return true if a face was found, false otherwise
	 */
	public boolean detectOne(Mat imageGray, Mat face){
		return nativeDetectOne(imageGray.getNativeObjAddr(), 
				face.getNativeObjAddr());
	}
	
	public void fitting(Mat imageGray, Mat shapes){
		nativeFitting(imageGray.getNativeObjAddr(), 
				shapes.getNativeObjAddr());
	}
	
	public boolean videoFitting(Mat imageGray, Mat shape, long frame){
		return nativeVideoFitting(imageGray.getNativeObjAddr(),
				shape.getNativeObjAddr(), frame);
	}
	
	public static native boolean nativeReadModel(String modelName);

	/**
	 * @param cascadeName could be haarcascade_frontalface_alt2.xml 
	 * @return true on success
	 */
	public static native boolean nativeInitCascadeDetector(String cascadeName);
	public static native void nativeDestroyCascadeDetector();
	
	/**
	 * @param cascadeName could be lbpcascade_frontalface.xml 
	 * @return true on success
	 */
	public static native boolean nativeInitFastCascadeDetector(String cascadeName);
	public static native void nativeDestroyFastCascadeDetector();
	
	public static native void nativeInitShape(long faces);
	
	private static native boolean nativeDetectAll(long inputImage, long faces);
	private static native boolean nativeDetectOne(long inputImage, long face);
	private static native boolean nativeFastDetectAll(long inputImage, long faces);
	
	private static native void nativeFitting(long inputImage, long shapes);
	private static native boolean nativeVideoFitting(long inputImage, long shape, long frame);

}
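
Before moving on to the Activity, here is a minimal usage sketch of this wrapper (my own, not part of the demo), assuming the native libraries are already loaded and nativeReadModel()/nativeInitCascadeDetector() returned true:

import org.asmlibrary.fit.ASMFit;
import org.opencv.core.Mat;

// Hypothetical helper, not part of the original demo.
public class AsmFitExample {
    public static Mat detectAndFit(ASMFit asmFit, Mat gray) {
        Mat shapes = new Mat();               // one row per face: (x0, y0, x1, y1, ...)
        if (asmFit.detectAll(gray, shapes)) { // seed initial shapes from the cascade detector
            asmFit.fitting(gray, shapes);     // refine every row with the ASM model
        }
        return shapes;
    }
}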

 

 

 

 

 

(2) ASMLibraryActivity.java

 

package org.asmlibrary.fit;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.core.Scalar;
import org.opencv.core.Point;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.asmlibrary.fit.R;
import org.asmlibrary.fit.ASMFit;

import android.app.Activity;
import android.app.AlertDialog;
import android.content.Context;
import android.os.Bundle;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.WindowManager;
import android.content.res.Configuration;

public class ASMLibraryActivity extends Activity implements CvCameraViewListener2{
    
	private static final String    TAG                 = "ASMLibraryDemo";
    
    private Mat                    	mRgba;
    private Mat                    	mGray;
    private Mat                    	mGray2;
    private File                   	mCascadeFile;
    private File                   	mFastCascadeFile;
    private File                   	mModelFile;
    private ASMFit      		   	mASMFit;
    private long				   	mFrame;
    private boolean					mFlag;
    private boolean					mPortrait = true;
    private boolean					mFastDetect = false;
    private Mat						mShape;
    private static final Scalar 	mColor = new Scalar(255, 0, 0);
    private MenuItem               	mHelpItem;
    private MenuItem               	mDetectItem;
    private MenuItem               	mOrieItem;
    private MenuItem				mCameraitem;
    private CameraBridgeViewBase   	mOpenCvCameraView;
    private int 					mCameraIndex = CameraBridgeViewBase.CAMERA_ID_ANY;
    
    public ASMLibraryActivity() {
        Log.i(TAG, "Instantiated new " + this.getClass());
    }
    
    private BaseLoaderCallback  mLoaderCallback = new BaseLoaderCallback(this) {
    	private File getSourceFile(int id, String name, String folder){
    		File file = null;
    		try {
	    		InputStream is = getResources().openRawResource(id);
	            File cascadeDir = getDir(folder, Context.MODE_PRIVATE);
	            file = new File(cascadeDir, name);
	            FileOutputStream os = new FileOutputStream(file);
	            
	            byte[] buffer = new byte[4096];
	            int bytesRead;
	            while ((bytesRead = is.read(buffer)) != -1) {
	                os.write(buffer, 0, bytesRead);
	            }
	            is.close();
	            os.close();
    		}catch (IOException e) {
                e.printStackTrace();
                Log.e(TAG, "Failed to load file " + name + ". Exception thrown: " + e);
            }
	            
            return file;
    		
    	}
    	
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS:
                {
                    Log.i(TAG, "OpenCV loaded successfully");

                    // Load native library after(!) OpenCV initialization
                    System.loadLibrary("asmlibrary");
                    System.loadLibrary("jni-asmlibrary");
                    
                    mASMFit = new ASMFit();

                    mModelFile = getSourceFile(R.raw.my68_1d, "my68_1d.amf", "model");
                    if(mModelFile != null)
                    	mASMFit.nativeReadModel(mModelFile.getAbsolutePath());
                    
                    
                    mCascadeFile = getSourceFile(R.raw.haarcascade_frontalface_alt2, 
                    		"haarcascade_frontalface_alt2.xml", "cascade");
                    if(mCascadeFile != null)
                    	mASMFit.nativeInitCascadeDetector(mCascadeFile.getAbsolutePath());

                    mFastCascadeFile = getSourceFile(R.raw.lbpcascade_frontalface, 
                    		"lbpcascade_frontalface.xml", "cascade");
                    if(mFastCascadeFile != null)
                    	mASMFit.nativeInitFastCascadeDetector(mFastCascadeFile.getAbsolutePath());
                    
                    // Test image alignment: load an image file from application resources
                	File JPGFile = getSourceFile(R.raw.gump, "gump.jpg", "image");

                	if(JPGFile != null)
                	{
                		Mat image = Highgui.imread(JPGFile.getAbsolutePath(), Highgui.IMREAD_GRAYSCALE);
                		Mat shapes = new Mat();

                		if(mASMFit.detectAll(image, shapes))
                		{
                			/*
                			for(int i = 0; i < shapes.row(0).cols()/2; i++)
                			{
                				Log.d(TAG, "before points:" + 
                						shapes.get(0, 2*i)[0] +"," +shapes.get(0, 2*i+1)[0]);
                			}
                			*/

                			mASMFit.fitting(image, shapes);

                			/*
                			for(int i = 0; i < shapes.row(0).cols()/2; i++)
                			{
                				Log.d(TAG, "after points:" + 
                						shapes.get(0, 2*i)[0] +"," +shapes.get(0, 2*i+1)[0]);
                			}
                			*/
                		}
                	}

                    mOpenCvCameraView.enableView();
                } break;
                default:
                {
                    super.onManagerConnected(status);
                } break;
            }
        }
    };

	/** Called when the activity is first created. */
	@Override
    public void onCreate(Bundle savedInstanceState) {
        Log.i(TAG, "called onCreate");
        super.onCreate(savedInstanceState);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

        setContentView(R.layout.face_detect_surface_view);

        mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.fd_activity_surface_view);
        mOpenCvCameraView.setCvCameraViewListener(this);
        mFrame = 0;
        mFlag = false;
    }
	
	@Override
    public void onPause()
    {
        super.onPause();
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
    }

    @Override
    public void onResume()
    {
        super.onResume();
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_3, this, mLoaderCallback);
        mFrame = 0;
        mFlag = false;
    }
    
    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        Log.i(TAG, "called onCreateOptionsMenu"+mFastDetect);
        mCameraitem = menu.add("Toggle Front/Back");
        mOrieItem = menu.add("Toggle Portrait");
        if(mFastDetect == true)
        	mDetectItem = menu.add("CascadeDetector");
        else
        	mDetectItem = menu.add("FastCascadeDetector");
        mHelpItem = menu.add("About ASMLibrary");
        return true;
    }
    
    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        Log.i(TAG, "called onOptionsItemSelected; selected item: " + item);
        if (item == mHelpItem)
        	new AlertDialog.Builder(this).setTitle("About ASMLibrary")
        		.setMessage("ASMLibrary -- A compact SDK for face alignment/tracking\n" +
        				"Copyright (c) 2008-2011 by Yao Wei, all rights reserved.\n" +
        				"Contact: [email protected]\n")
        				.setPositiveButton("OK", null).show();
        else if(item == mDetectItem)
        {
        	mFastDetect = !mFastDetect;
        }
        else if(item == mOrieItem)
        {
        	mPortrait = !mPortrait;
        }
        else if(item == mCameraitem)
        {
        	if(mCameraIndex == CameraBridgeViewBase.CAMERA_ID_ANY ||
        			mCameraIndex == CameraBridgeViewBase.CAMERA_ID_BACK)
        		mCameraIndex = CameraBridgeViewBase.CAMERA_ID_FRONT;
        	else
        		mCameraIndex = CameraBridgeViewBase.CAMERA_ID_BACK;
        	mOpenCvCameraView.setCameraIndex(mCameraIndex);
        }
        return true;
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        mOpenCvCameraView.disableView();
    }

    public void onCameraViewStarted(int width, int height) {
        mGray = new Mat();
        mGray2 = new Mat();
        mRgba = new Mat();
        mShape = new Mat();
        mFrame = 0;
        mFlag = false;
    }

    public void onCameraViewStopped() {
        mGray.release();
        mGray2.release();
        mRgba.release();
        mShape.release();
    }

    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {

        mRgba = inputFrame.rgba();
        mGray = inputFrame.gray();
        
        // In portrait mode the camera frame arrives rotated, so work on the transposed image
        if(mPortrait) 
        	Core.transpose(mGray, mGray2);
        else
        	mGray2 = mGray;
        
        //WindowManager manager = getWindowManager();
        //int width = manager.getDefaultDisplay().getWidth();
        //int height = manager.getDefaultDisplay().getHeight();
        //Log.d(TAG, "screen size: " + width + "x" + height);

        if(mFrame == 0 || mFlag == false)
		{
        	Mat detShape = new Mat();
			if(mFastDetect)
				mFlag = mASMFit.fastDetectAll(mGray2, detShape);
			else
				mFlag = mASMFit.detectAll(mGray2, detShape);
			if(mFlag)	mShape = detShape.row(0);
		}
			
		if(mFlag) 
		{
			mFlag = mASMFit.videoFitting(mGray2, mShape, mFrame);
		}
		
		if(mFlag)
		{
			if(mPortrait)
			{
				// the shape was fitted on the transposed image, so swap x/y back for drawing
				int nPoints = mShape.row(0).cols()/2;
				for(int i = 0; i < nPoints; i++)
				{ 
					double x = mShape.get(0, 2*i)[0];
					double y = mShape.get(0, 2*i+1)[0];
					Point pt = new Point(y, x);
					
					Core.circle(mRgba, pt, 3, mColor);
				}
			}
			else
			{
				int nPoints = mShape.row(0).cols()/2;
				for(int i = 0; i < nPoints; i++)
				{ 
					Point pt = new Point(mShape.get(0, 2*i)[0], mShape.get(0, 2*i+1)[0]);
					Core.circle(mRgba, pt, 3, mColor);
				}
			}
		}
		
		mFrame++;

        return mRgba;
    }
}

 

 

 

 

 

C++ part (JNI):

(1) Android.mk

 

LOCAL_PATH := $(call my-dir)


# Prebuilt ASMLibrary: one libasmlibrary.so shipped per target ABI
include $(CLEAR_VARS)
LOCAL_MODULE := asmlibrary
LOCAL_SRC_FILES := so/$(TARGET_ARCH_ABI)/libasmlibrary.so
include $(PREBUILT_SHARED_LIBRARY)


# JNI wrapper module that bridges ASMFit.java to the prebuilt library
include $(CLEAR_VARS)

#OPENCV_CAMERA_MODULES:=off
#OPENCV_INSTALL_MODULES:=off
#OPENCV_LIB_TYPE:=SHARED
# Adjust this path to the OpenCV.mk inside your local OpenCV Android SDK
include E:\android-eclipse\OpenCV-2.4.8-android-sdk/sdk/native/jni/OpenCV.mk

LOCAL_SRC_FILES  := DemoFit.cpp
LOCAL_C_INCLUDES += $(LOCAL_PATH)
LOCAL_CFLAGS    += -DOPENCV_OLDER_VISION
LOCAL_LDLIBS     += -llog -ldl  

LOCAL_MODULE     := jni-asmlibrary

LOCAL_SHARED_LIBRARIES := asmlibrary

include $(BUILD_SHARED_LIBRARY)


(2) Application.mk

 

 

# gnustl_static provides a full C++ STL; RTTI and exceptions are enabled for the library
APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
APP_ABI := armeabi-v7a armeabi x86 mips
APP_PLATFORM := android-8


(3) asmfitting.h

 

 

#ifndef _ASM_FITTING_H_
#define _ASM_FITTING_H_

#include "asmlibrary.h"

/** Wrapped Class for face alignment/tracking using active shape model */
class ASMLIB asmfitting
{
public:
	/** Constructor */
	asmfitting();

	/** Destructor */
	~asmfitting();
	
	/**
     Process face alignment on image. (Only for one face box)
	 @param shape the point features that carry the initial shape on input and return the fitted result
	 @param image the image resource
	 @param n_iteration the number of iteration during fitting
	*/
	void Fitting(asm_shape& shape, const IplImage* image, int n_iteration = 30);
	
	/**
     Process face alignment on image. (For multi-face boxes)
	 @param shapes all shape data that carry the initial guesses and return the fitting results
	 @param n_shapes the number of human face
	 @param image the image resource
	 @param n_iteration the number of iteration during fitting
	*/
	void Fitting2(asm_shape* shapes, int n_shapes, const IplImage* image, int n_iteration = 30);
	
	/**
     Process face tracking on video/camera.
	 @param shape the point features that carry the initial shape on input and return the fitted result
	 @param image the image resource
	 @param frame_no one certain frame number of video/camera
	 @param bopticalflow whether to use optical flow between successive frames
	 @param n_iteration the number of iteration during fitting
	 @return false on failure, true otherwise.
	 */
	bool ASMSeqSearch(asm_shape& shape, const IplImage* image, 
		int frame_no = 0, bool bopticalflow = false, int n_iteration = 30);
	
	/**<
     Get the average Viola-Jones box.
	*/
	const asm_shape GetMappingDetShape()const { return m__VJdetavshape;}
	
	/**<
     Get the width of mean face.
	*/
	const double	GetMeanFaceWidth()const{ return m_model.GetMeanShape().GetWidth();	}
	
	/**<
	 Get raw ptr of asm_model.
	*/
	const asm_model* GetModel()const { return &m_model; }
	
	/**
     Read model data from file.
	 @param filename the filename that stores the model
	 @return false on failure, true otherwise
    */
	bool Read(const char* filename);

private:

	/**
     Apply optical flow between two successive frames.
	 @param shape carries the initial shape on input and returns the fitted result
	 @param grayimage the image resource.
	*/
	void OpticalFlowAlign(asm_shape& shape, const IplImage* grayimage);

private:
	asm_model	m_model;	/**<active shape model to be trained */
	int *m_edge_start; /**< Starting index of edges */
	int *m_edge_end;   /**< Ending index of edges */
	int m_nEdge;       /**< Number of edges */
	asm_shape m__VJdetavshape;    /**< average mapping shape relative to VJ detect box*/
	scale_param m_param;			/**< point index of left and right side in the face template*/
	bool m_flag;					/**< Does the image contain face? */
	double m_dReferenceFaceWidth;	/**< reference face width */

private:
	IplImage* __lastframe;  /**< Cached variables for optical flow */
	IplImage* __pyrimg1;	/**< Cached variables for optical flow */
	IplImage* __pyrimg2;	/**< Cached variables for optical flow */
	Point2D32f* __features1;	/**< Cached variables for optical flow */
	Point2D32f* __features2;	/**< Cached variables for optical flow */
	char* __found_feature;	/**< Cached variables for optical flow */
	float* __feature_error;	/**< Cached variables for optical flow */
};

#endif //_ASM_FITTING_H_


(4) asmlibrary.h

#ifndef _ASM_LIBRARY_H_
#define _ASM_LIBRARY_H_

#include <stdio.h>

class asm_shape;
class asm_profile;
class asm_model;
struct profile_Nd_model;
struct profile_lbp_model;
struct CvMat;
struct _IplImage;

typedef unsigned char uchar;
typedef struct _IplImage IplImage;

#ifdef WIN32
#ifdef ASMLIBRARY_EXPORTS
#define ASMLIB __declspec(dllexport)
#else
#define ASMLIB __declspec(dllimport)
#endif
#else
#define ASMLIB
#endif

/**
 * Predefined local texture (profile) types.
 * <ul>
 * <li>PROFILE_1D: use only the pixels along the normal vector in the contour.</li>
 * <li>PROFILE_2D: use the pixels located in a rectangle around the point.</li>
 * <li>PROFILE_LBP: use the pixels processed with LBP-operator.</li>
 * </ul>
 **/
enum ASM_PROFILE_TYPE {PROFILE_1D, PROFILE_2D, PROFILE_LBP};

#ifdef __cplusplus
extern "C"{
#endif

/**
 Initialize shape from the detected box.
 @param shape the returned initial shape
 @param det_shape the detected box returned by \a asm_vjfacedetect::\a Detect()
 @param ref_shape the average mean shape
 @param refwidth the width of average mean shape
*/
ASMLIB void InitShapeFromDetBox(asm_shape &shape, const asm_shape& det_shape, 
								const asm_shape &ref_shape, double refwidth);

#ifdef __cplusplus
}
#endif

/** Class for 2d point. */
typedef struct Point2D32f
{
    float x;
    float y;
}
Point2D32f;

/** Class for 2d shape data. */
class ASMLIB asm_shape
{
public:
    /** Constructor */
	asm_shape();
    
	/** Copy Constructor */
	asm_shape(const asm_shape &v);
    
	/** Destructor */
    ~asm_shape();

	/**
     Access elements by \a CvPoint2D32f \a pt = \a shape[\a i] to get \a i-th point in the shape.
     @param i Index of points
     @return   Point at the certain index
	*/
	const Point2D32f operator [](int i)const{ return m_vPoints[i];	}
	
	/**
     Access elements by \a CvPoint2D32f \a pt = \a shape[\a i] to get \a i-th point in the shape.
     @param i Index of points
     @return   Point at the certain index
	*/
	Point2D32f& operator [](int i){ return m_vPoints[i];	}
	
	/**
     Get the number of points.
     @return   Number of points
	*/
	inline const int NPoints()const{ return	m_nPoints; }

    /**
     Override of operator =
    */
    asm_shape&			operator =(const asm_shape &s);
    
	/**
     Override of operator =.
    */
	asm_shape&			operator =(double value);
    
	/**
     Override of operator +
    */
    const asm_shape		operator +(const asm_shape &s)const;
    
	/**
     Override of operator +=
    */
    asm_shape&			operator +=(const asm_shape &s);
    
	/**
     Override of operator -
    */
    const asm_shape     operator -(const asm_shape &s)const;
    
	/**
     Override of operator -=
    */
    asm_shape&			operator -=(const asm_shape &s);
    
	/**
     Override of operator *
    */
    const asm_shape     operator *(double value)const;
    
	/**
     Override of operator *=
    */
    asm_shape&			operator *=(double value);
    
	/**
     Override of operator *
    */
    double				operator *(const asm_shape &s)const;
    
	/**
     Override of operator /
    */
    const asm_shape     operator /(double value)const;
    
	/**
     Override of operator /=
    */
    asm_shape&			operator /=(double value);

	/**
     Release memory.
    */
    void    Clear();
    
	/**
     Allocate memory.
	 @param length Number of shape points
    */
    void    Resize(int length);
    
	/**
     Read points from file.
	 @param filename the file that stores the shape data
     @return   true on pts format, false on asf format, exit otherwise
    */
    bool	ReadAnnotations(const char* filename);
	
	/**
     Read points from asf format file.
	 @param filename the file that stores the shape data
    */
    void    ReadFromASF(const char*filename);
	
	/**
     Read points from pts format file.
	 @param filename the file that stores the shape data
    */
    void	ReadFromPTS(const char*filename);
	
	/**
     Write shape data into file stream.
	 @param f  stream to write to
    */
	void	Write(FILE* f);
	
	/**
     Read shape data from file stream.
	 @param f  stream to read from
    */
	void	Read(FILE* f);
	
	/**
     Calculate minimum \f$x\f$-direction value of shape.
    */
	const double  MinX()const;
    
	/**
     Calculate minimum \f$y\f$-direction value of shape.
    */
	const double  MinY()const;
    
	/**
     Calculate maximum \f$x\f$-direction value of shape.
    */
	const double  MaxX()const;
    
	/**
     Calculate maximum \f$y\f$-direction value of shape.
    */
	const double  MaxY()const;
	
	/**
     Calculate the left and right index for \f$x\f$-direction in the shape.
	 @param ileft the index of points in \f$x\f$-direction which has the minimum x
	 @param iright the index of points in \f$x\f$-direction which has the maximum x
    */
	void		  GetLeftRight(int& ileft, int& iright)const;
    
	/**
     Calculate width of shape.
	 @param ileft Index of points in \f$x\f$-direction which has the minimum x
	 @param iright Index of points in \f$x\f$-direction which has the maximum x
    */
	const double  GetWidth(int ileft = -1, int iright = -1)const;
	
	/**
     Calculate height of shape.
    */
	const double  GetHeight()const { return MaxY()-MinY();	}
	
    /**
     Calculate center of gravity for shape.
	 @param x Value of center in \f$x\f$-direction
	 @param y Value of center in \f$y\f$-direction
    */
	void    COG(double &x, double &y)const;
    
	/**
     Translate the shape to make its center locate at (0, 0).
	*/
	void    Centralize();
    
	/**
	 Translate the shape.
	 @param x Value of translation factor in \f$x\f$-direction
	 @param y Value of translation factor in \f$y\f$-direction
    */
	void    Translate(double x, double y);
    
	/**
     Scale shape by a uniform factor.
	 @param s Scaling factor
	*/
	void    Scale(double s);
    
	/**
     Rotate shape anti-clockwise.
	 @param theta Angle to be rotated
	*/
	void    Rotate(double theta);
	
	/**
     Scale shape in x and y direction respectively.
	 @param sx Scaling factor in \f$x\f$-direction
	 @param sy Scaling factor in \f$y\f$-direction
	*/
	void    ScaleXY(double sx, double sy);
	
	/**
     Normalize shape (zero_mean_unit_length) so that its center locates at (0, 0) and its \f$L2\f$-norm is 1.
	 @return the \f$L2\f$-norm of original shape
	*/
	double	Normalize();
	
	
	enum{ LU, SVD, Direct };

	/**
	 Calculate the similarity transform between one shape and another reference shape. 
	 Where the similarity transform is: 
	 <BR>
	 \f$T(a,b,tx,ty) = [a \ -b \ Tx; b \ a \ Ty ; 0 \ 0 \ 1]\f$.
	 @param ref_shape the reference shape
	 @param a  will return \f$ s \times cos(theta) \f$ in form of similarity transform
	 @param b  will return \f$ s \times sin(theta) \f$ in form of similarity transform
	 @param tx will return \f$ Tx \f$ in form of similarity transform
	 @param ty will return \f$ Ty \f$ in form of similarity transform
	 @param method  Method of similarity transform
	*/
	void    AlignTransformation(const asm_shape &ref_shape, double &a, double &b, 
								double &tx, double &ty, int method = SVD)const;
    
	/**
	 Align the shape to the reference shape. 
	 @param ref_shape the reference shape
	 @param method  method of similarity transform
	*/
	void    AlignTo(const asm_shape &ref_shape, int method = SVD);
    
	/**
	 Transform Shape using the similarity transform \f$T(a,b,tx,ty)\f$. 
	*/
	void    TransformPose(double a, double b, double tx, double ty);

	/**
	 Calculate the angular bisector between two lines \f$Pi-Pj\f$ and \f$Pj-Pk\f$. 
	 @param i the index of point vertex
	 @param j the index of point vertex
	 @param k the index of point vertex
	 @return Angular bisector vector in form of \f$(cos(x), sin(x))^T\f$
	*/
	Point2D32f CalcBisector(int i, int j, int k)const;

	/**
	 Calculate the Euclidean norm (\f$L2\f$-norm). 
	 @return Euclidean norm
	*/
	double  GetNorm2()const;

	/**
	 Calculate the normal vector at certain vertex around the shape contour. 
	 @param cos_alpha the normal vector in \f$x\f$-direction
	 @param sin_alpha the normal vector in \f$y\f$-direction
	 @param i the index of point vertex
	*/
	void	CalcNormalVector(double &cos_alpha, double &sin_alpha, int i)const;

	/**
	 Convert from OpenCV's \a CvMat to class asm_shape
	 @param mat \a CvMat that converted from
	*/
	void    CopyFrom(const CvMat* mat);
	
	/**
	 Convert from class asm_shape to OpenCV's CvMat.
	 @param mat CvMat that converted to
	*/
	void    CopyTo(CvMat* mat)const;

private:
	void    Transform(double c00, double c01, double c10, double c11);

private:
	Point2D32f* m_vPoints;	/**< point data */
	int m_nPoints;				/**< number of points */
};

/** Left and Right index in \f$x\f$-direction of shape */
typedef struct scale_param
{
	int left;	/**< Index of points in \f$x\f$-direction which has the minimum x */
	int right;	/**< Index of points in \f$x\f$-direction which has the maximum x */
}scale_param;


/** Class for active shape model. */
class ASMLIB asm_model
{
public:
	/**
	 Constructor
	*/
	asm_model();
	
	/**
	 Destructor
	*/
	~asm_model();

	/**
	 Image alignment/fitting with an initial shape.
	 @param shape the point features that carry the initial shape on input and return the fitted result
	 @param grayimage the gray image resource
	 @param max_iter the number of iteration
	 @param param the left and right index for \f$x\f$-direction in the shape (Always set \a NULL )
	 @return false on failure, true otherwise
	*/
	bool Fit(asm_shape& shape, const IplImage *grayimage, 
		int max_iter = 30, const scale_param* param = NULL);	
	
	/**
     Write model data to file stream.
	 @param f  stream to write to
    */
	void WriteModel(FILE* f);
	
	/**
     Read model data from file stream.
	 @param f  stream to read from
    */
	void ReadModel(FILE* f);

	/**
	 Get mean shape of model.
	*/
	const asm_shape& GetMeanShape()const { return m_asm_meanshape;	}
	
	/**
	 Get modes of the shape distribution model (calculated during the shape PCA)
	*/
	const int GetModesOfModel()const { return m_nModes;}
	
	/**
	 Get the width of mean shape [Identical to \a m_asm_meanshape.\a GetWidth()].
	*/
	const double GetReferenceWidthOfFace()const { return m_dReferenceFaceWidth; }

private:

	/**
     Get the optimal offset at one certain point vertex during the process of 
	 best profile matching (works for the 1d/2d profile model).
	 @param image the image resource
	 @param ilev one certain pyramid level
	 @param shape the shape data
	 @param ipoint the index of point vertex
	 @param cos_alpha the normal vector in \f$x\f$-direction
	 @param sin_alpha the normal vector in \f$y\f$-direction
	 @return offset bias from \a Shape[\a iPoint]
	*/
	int FindBestOffsetForNd(const IplImage* image, int ilev,
							const asm_shape& shape, int ipoint,
							double& cos_alpha, double& sin_alpha);

	/**
     Get the optimal offset at one certain point vertex during the process of 
	 best profile matching (works for the lbp profile model).
  	 @param lbp_img the target image processed with LBP
	 @param nrows the height of \a lbp_img
	 @param ncols the width of \a lbp_img
	 @param ilev one certain pyramid level
	 @param shape the shape data
	 @param ipoint the index of point vertex
	 @param xoffset the returned offset in \f$x\f$-direction away from \a Shape[\a iPoint]
	 @param yoffset the returned offset in \f$y\f$-direction away from \a Shape[\a iPoint]
	*/
	void FindBestOffsetForLBP(const int* lbp_img, int nrows, int ncols, int ilev,
				const asm_shape& shape, int ipoint, int& xoffset, int& yoffset);

	/**
     Update shape by matching the image profile to the model profile.
	 @param update_shape the updated shape
	 @param shape  the point feature that will be matched
	 @param ilev one certain pyramid level
	 @param image the image resource
	 @param lbp_img the LBP-operator image
	 @param norm the \f$L2\f$-norm of the difference between \a shape and \a update_shape
	 @return the number of point vertices that were updated
	*/
	int MatchToModel(asm_shape& update_shape, const asm_shape& shape, 
		int ilev, const IplImage* image, const int *lbp_img, double* norm = NULL);

	/**
     Calculate shape parameters (\a a, \a b, \a Tx, \a Ty) and pose parameters \a p.
	 @param p  Shape parameters
	 @param a  \f$ s \times cos(theta) \f$ in form of similarity transform
	 @param b  \f$ s \times sin(theta) \f$ in form of similarity transform
	 @param tx \f$ Tx \f$ in form of similarity transform
	 @param ty \f$ Ty \f$ in form of similarity transform
	 @param shape  the point features data
	 @param iter_no Number of iteration
	*/
	void CalcParams(CvMat* p, double& a, double& b,	double& tx, double& ty, 
		const asm_shape& shape, int iter_no = 2);
	
	/**
     Constrain the shape parameters.
	 @param p  Shape parameters
	*/
	void Clamp(CvMat* p);

	/**
     Generate shape instance according to shape parameters p and pose parameters.
	 @param shape the point feature data
	 @param p  the shape parameters
	 @param a  Return \f$ s \times cos(theta) \f$ in form of similarity transform
	 @param b  Return \f$ s \times sin(theta) \f$ in form of similarity transform
	 @param tx  Return \f$ Tx \f$ in form of similarity transform
	 @param ty  Return \f$ Ty \f$ in form of similarity transform
	*/
	void CalcGlobalShape(asm_shape& shape, const CvMat* p, 
							double a, double b, double tx, double ty);

	/**
     Pyramid fitting at one certain level.
	 @param shape the point feature data
	 @param image the image resource
	 @param ilev one certain pyramid level
	 @param iter_no the number of iteration
	*/
	void PyramidFit(asm_shape& shape, const IplImage* image, int ilev, int iter_no);

private:

	CvMat*  m_M;   /**< mean vector of shape data */
    CvMat*  m_B;   /**< eigenvectors of shape data */
    CvMat*  m_V;   /**< eigenvalues of shape data */

	CvMat* m_SM;   /**< mean of shapes projected space */	
	CvMat* m_SSD;  /**< standard deviation of shapes projected space	*/	


	ASM_PROFILE_TYPE m_type;	/**< the type of sampling profile */
	
	/**< the profile distribution model */
	union
	{
		struct profile_lbp_model* lbp_tdm;			/**< lbp profile model */
		struct profile_Nd_model* classical_tdm;	/**< 1d/2d profile model */
	}; 
	
	
	asm_shape m_asm_meanshape;			/**< mean shape of aligned shapes */
	
	int m_nPoints;					/**< number of shape points */
	int m_nWidth;					/**< width of each landmark's profile */
	int m_nLevels;					/**< pyramid level of multi-resolution */
	int m_nModes;					/**< number of truncated eigenvalues */
	double m_dReferenceFaceWidth;	/**< width of reference face  */
	bool m_bInterpolate;			/**< whether to use image interpolation */
	double m_dMeanCost;		/**< mean fitting cost, used to decide whether fitting succeeded */
	double m_dVarCost;		/**< variance of fitting cost, used to decide whether fitting succeeded */

private:
	CvMat*		m_CBackproject; /**< Cached variables for speed up */
	CvMat*		m_CBs;			/**< Cached variables for speed up */
	double*		m_dist;			/**< Cached variables for speed up */
	asm_profile* m_profile;		/**< Cached variables for speed up */
	asm_shape	m_search_shape;	/**< Cached variables for speed up */
	asm_shape	m_temp_shape;	/**< Cached variables for speed up */
};

/** You can define your own face detector function here
 @param shapes Returned face detection box which stores the Top-Left and Bottom-Right points, so its \a NPoints() = 2 here.
 @param image Image resource.
 @return false on no face exists in image, true otherwise.
*/
typedef	bool (*detect_func)(asm_shape& shape, const IplImage* image);

#endif  // _ASM_LIBRARY_H_

 

 

 

(5) DemoFit.cpp

 

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/contrib/detection_based_tracker.hpp>

#include "asmfitting.h"
#include "vjfacedetect.h"

#include <string>
#include <vector>

#include <android/log.h>
#include <jni.h>

using namespace std;
using namespace cv;

#define LOG_TAG "ASMLIBRARY"
#define LOGD(...) ((void)__android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__))

#define BEGINT()	double t = (double)cvGetTickCount();
#define ENDT(exp)	t = ((double)cvGetTickCount() -  t )/  (cvGetTickFrequency()*1000.);	\
					LOGD(exp " time cost: %.2f millisec\n", t);

asmfitting fit_asm;
DetectionBasedTracker *track = NULL;

#ifdef __cplusplus
extern "C" {
#endif

JNIEXPORT jboolean JNICALL Java_org_asmlibrary_fit_ASMFit_nativeReadModel
(JNIEnv * jenv, jclass, jstring jFileName)
{
    LOGD("nativeReadModel enter");
    const char* filename = jenv->GetStringUTFChars(jFileName, NULL);
    jboolean result = false;

    try
    {
	result = fit_asm.Read(filename);

    }
    catch (...)
    {
        LOGD("nativeReadModel caught unknown exception");
        jclass je = jenv->FindClass("java/lang/Exception");
        jenv->ThrowNew(je, "Unknown exception in JNI code");
    }

    LOGD("nativeReadModel %s exit %d", filename, result);
    return result;
}

JNIEXPORT jboolean JNICALL Java_org_asmlibrary_fit_ASMFit_nativeInitCascadeDetector
(JNIEnv * jenv, jclass, jstring jFileName)
{
    const char* cascade_name = jenv->GetStringUTFChars(jFileName, NULL);
    LOGD("nativeInitCascadeDetector %s enter", cascade_name);

    bool ok = init_detect_cascade(cascade_name);
    jenv->ReleaseStringUTFChars(jFileName, cascade_name);	// release the JNI string copy
    if(!ok)
		return false;

    LOGD("nativeInitCascadeDetector exit");
    return true;
}

JNIEXPORT jboolean JNICALL Java_org_asmlibrary_fit_ASMFit_nativeInitFastCascadeDetector
(JNIEnv * jenv, jclass, jstring jFileName)
{
    const char* cascade_name = jenv->GetStringUTFChars(jFileName, NULL);
    LOGD("nativeInitFastCascadeDetector %s enter", cascade_name);

    DetectionBasedTracker::Parameters DetectorParams;
    DetectorParams.minObjectSize = 45;
    track = new DetectionBasedTracker(cascade_name, DetectorParams);
    jenv->ReleaseStringUTFChars(jFileName, cascade_name);	// release the JNI string copy

    if(track == NULL)	return false;

    DetectorParams = track->getParameters();
    DetectorParams.minObjectSize = 64;
    track->setParameters(DetectorParams);

    track->run();

    LOGD("nativeInitFastCascadeDetector exit");
    return true;
}

JNIEXPORT void JNICALL Java_org_asmlibrary_fit_ASMFit_nativeDestroyCascadeDetector
(JNIEnv * jenv, jclass)
{
    LOGD("nativeDestroyCascadeDetector enter");

    destory_detect_cascade();

    LOGD("nativeDestroyCascadeDetector exit");
}

JNIEXPORT void JNICALL Java_org_asmlibrary_fit_ASMFit_nativeDestroyFastCascadeDetector
(JNIEnv * jenv, jclass)
{
    LOGD("nativeDestroyFastCascadeDetector enter");

    if(track){
    	track->stop();
    	delete track;
    	track = NULL;	// avoid a dangling pointer if detection runs again later
    }

    LOGD("nativeDestroyFastCascadeDetector exit");
}

inline void shape_to_Mat(asm_shape shapes[], int nShape, Mat& mat)
{
	mat = Mat(nShape, shapes[0].NPoints()*2, CV_64FC1); 

	for(int i = 0; i < nShape; i++)
	{
		double *pt = mat.ptr<double>(i);  
		for(int j = 0; j < mat.cols/2; j++)
		{
			pt[2*j] = shapes[i][j].x;
			pt[2*j+1] = shapes[i][j].y;		
		}
	}
}

inline void Mat_to_shape(asm_shape shapes[], int nShape, Mat& mat)
{
	for(int i = 0; i < nShape; i++)
	{
		double *pt = mat.ptr<double>(i);  
		shapes[i].Resize(mat.cols/2);
		for(int j = 0; j < mat.cols/2; j++)
		{
			shapes[i][j].x = pt[2*j];
			shapes[i][j].y = pt[2*j+1];
		}
	}
}
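
These two helpers define the data contract between the C++ and Java sides: each Mat row holds one face as CV_64FC1 with x and y interleaved, i.e. (x0, y0, x1, y1, ...). As a hypothetical sketch (mirroring what onCameraFrame does when drawing), the Java side can decode one row like this:

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.Point;

// Hypothetical decoder for the layout produced by shape_to_Mat(); not part of the demo.
public class ShapeDecoder {
    public static List<Point> decodeRow(Mat shapes, int row) {
        List<Point> points = new ArrayList<Point>();
        int nPoints = shapes.cols() / 2;
        for (int i = 0; i < nPoints; i++) {
            double x = shapes.get(row, 2 * i)[0];     // even columns: x coordinates
            double y = shapes.get(row, 2 * i + 1)[0]; // odd columns: y coordinates
            points.add(new Point(x, y));
        }
        return points;
    }
}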

JNIEXPORT jboolean JNICALL Java_org_asmlibrary_fit_ASMFit_nativeFastDetectAll
(JNIEnv * jenv, jclass, jlong imageGray, jlong faces)
{
	if(!track)	return false;

	BEGINT();

	vector<Rect> RectFaces;
	try{
		Mat image = *(Mat*)imageGray;
		LOGD("image: (%d, %d)", image.cols, image.rows);
		track->process(image);
		track->getObjects(RectFaces);
	}
	catch(cv::Exception& e)
	{
		LOGD("nativeFastDetectAll caught cv::Exception: %s", e.what());
		jclass je = jenv->FindClass("org/opencv/core/CvException");
		if(!je)
			je = jenv->FindClass("java/lang/Exception");
		jenv->ThrowNew(je, e.what());
	}
	catch (...)
	{
		LOGD("nativeFastDetectAll caught unknown exception");
		jclass je = jenv->FindClass("java/lang/Exception");
		jenv->ThrowNew(je, "Unknown exception in JNI code");
	}

	int nFaces = RectFaces.size();
	if(nFaces <= 0){
		ENDT("FastCascadeDetector CANNOT detect any face");
		return false;
	}

	LOGD("FastCascadeDetector found %d faces", nFaces);

	asm_shape* detshapes = new asm_shape[nFaces];
	for(int i = 0; i < nFaces; i++){
		Rect r = RectFaces[i];
		detshapes[i].Resize(2);
		detshapes[i][0].x = r.x;
		detshapes[i][0].y = r.y;
		detshapes[i][1].x = r.x+r.width;
		detshapes[i][1].y = r.y+r.height;
	}

	asm_shape* shapes = new asm_shape[nFaces];
	for(int i = 0; i < nFaces; i++)
	{
		InitShapeFromDetBox(shapes[i], detshapes[i], fit_asm.GetMappingDetShape(), fit_asm.GetMeanFaceWidth());
	}

	shape_to_Mat(shapes, nFaces, *((Mat*)faces));

	delete []detshapes;
	delete []shapes;

	ENDT("FastCascadeDetector detect");

	return true;
}

JNIEXPORT jboolean JNICALL Java_org_asmlibrary_fit_ASMFit_nativeDetectAll
(JNIEnv * jenv, jclass, jlong imageGray, jlong faces)
{
	IplImage image = *(Mat*)imageGray;
	int nFaces;
	asm_shape *detshapes = NULL;

	LOGD("image: (%d, %d)", image.width, image.height);

	BEGINT();

	bool flag = detect_all_faces(&detshapes, nFaces, &image);
	if(flag == false)	{
		ENDT("CascadeDetector CANNOT detect any face");
		return false;
	}

	LOGD("CascadeDetector found %d faces", nFaces);
	asm_shape* shapes = new asm_shape[nFaces];
	for(int i = 0; i < nFaces; i++)
	{
		InitShapeFromDetBox(shapes[i], detshapes[i], fit_asm.GetMappingDetShape(), fit_asm.GetMeanFaceWidth());
	}

	shape_to_Mat(shapes, nFaces, *((Mat*)faces));
	free_shape_memeory(&detshapes);	
	delete []shapes;
	
	ENDT("CascadeDetector detect");

	return true;
}

JNIEXPORT void JNICALL Java_org_asmlibrary_fit_ASMFit_nativeInitShape(JNIEnv * jenv, jclass, jlong faces)
{
	Mat faces1 = *((Mat*)faces);
	int nFaces = faces1.rows;
	asm_shape* detshapes = new asm_shape[nFaces];
	asm_shape* shapes = new asm_shape[nFaces];

	Mat_to_shape(detshapes, nFaces, faces1);

	for(int i = 0; i < nFaces; i++)
	{
		InitShapeFromDetBox(shapes[i], detshapes[i], fit_asm.GetMappingDetShape(), fit_asm.GetMeanFaceWidth());
	}

	shape_to_Mat(shapes, nFaces, *((Mat*)faces));

	delete []detshapes;
	delete []shapes;
}


JNIEXPORT jboolean JNICALL Java_org_asmlibrary_fit_ASMFit_nativeDetectOne
(JNIEnv * jenv, jclass, jlong imageGray, jlong faces)
{
	IplImage image = *(Mat*)imageGray;
	asm_shape shape, detshape;
	
	BEGINT();

	bool flag = detect_one_face(detshape, &image);

	if(flag == false)	{
		ENDT("CascadeDetector CANNOT detect any face");
		return false;
	}

	InitShapeFromDetBox(shape, detshape, fit_asm.GetMappingDetShape(), fit_asm.GetMeanFaceWidth());

	shape_to_Mat(&shape, 1, *((Mat*)faces));
	
	ENDT("CascadeDetector detects central face");

	return true;
}


JNIEXPORT void JNICALL Java_org_asmlibrary_fit_ASMFit_nativeFitting
(JNIEnv * jenv, jclass, jlong imageGray, jlong shapes0)
{
	IplImage image = *(Mat*)imageGray;
	Mat shapes1 = *(Mat*)shapes0;	
	int nFaces = shapes1.rows;	
	asm_shape* shapes = new asm_shape[nFaces];
	
	BEGINT();

	Mat_to_shape(shapes, nFaces, shapes1);

	fit_asm.Fitting2(shapes, nFaces, &image);

	shape_to_Mat(shapes, nFaces, *((Mat*)shapes0));

	ENDT("nativeFitting");

	//for(int i = 0; i < shapes[0].NPoints(); i++)
	//	LOGD("points: (%f, %f)", shapes[0][i].x, shapes[0][i].y);

	delete []shapes;
}

JNIEXPORT jboolean JNICALL Java_org_asmlibrary_fit_ASMFit_nativeVideoFitting
(JNIEnv * jenv, jclass, jlong imageGray, jlong shapes0, jlong frame)
{
	IplImage image = *(Mat*)imageGray;
	Mat shapes1 = *(Mat*)shapes0;	
	bool flag = false;
	if(shapes1.rows == 1)
	{
		asm_shape shape;
	
		LOGD("nativeVideoFitting %d x %d", image.width, image.height);
		BEGINT();

		Mat_to_shape(&shape, 1, shapes1);

		flag = fit_asm.ASMSeqSearch(shape, &image, frame, true);

		shape_to_Mat(&shape, 1, *((Mat*)shapes0));

		ENDT("nativeVideoFitting");
	}

	return flag;
}

#ifdef __cplusplus
}
#endif


(6) vjfacedetect.h

 

 

#ifndef _VJFACE_DETECT_H_
#define _VJFACE_DETECT_H_

#include "asmlibrary.h"


/**
 Load the AdaBoost cascade file used for face detection.
 @param cascade_name file in which the cascade detector is stored
 @return false on failure, true otherwise
*/
bool init_detect_cascade(const char* cascade_name = "haarcascade_frontalface_alt2.xml");


/**
 Release the memory of the AdaBoost cascade face detector
*/
void destory_detect_cascade();

/**
 Detect a single face in the image: the one located closest to the image center
 @param shape returned face detection box, storing the Top-Left and Bottom-Right points, so its \a NPoints() = 2 here
 @param image the image resource
 @return false on no face exists in image, true otherwise
*/
bool detect_one_face(asm_shape& shape, const IplImage* image);


/**
 Detect all human faces in the image.
 @param shapes returned face detection boxes; each stores the Top-Left and Bottom-Right points, so its \a NPoints() = 2 here
 @param n_shapes returns the number of faces found
 @param image the image resource
 @return false on no face exists in image, true otherwise
*/
bool detect_all_faces(asm_shape** shapes, int& n_shapes, const IplImage* image);

/**
 Release the shape resource allocated by detect_all_faces().
 @param shapes pointer to the asm_shape array
*/
void free_shape_memeory(asm_shape** shapes);



#endif  // _VJFACE_DETECT_H_

 

 

To be continued. I'm finding that reading the code this way is still a bit hard to follow.

 

 

 

Copyright notice: This is an original post by the blogger, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/lhq186/article/details/18599141
