The full source code can be found at:
https://github.com/webjb/myrobot
The code is based on the Android Studio sample camera2raw.
The whole idea: use the Camera2 API to capture video, use an ImageReader to obtain each captured frame, hand each frame to NDK C++ code for processing (recognition and line drawing with OpenCV functions), and finally draw the processed frame to a surface through a TextureView.
1) Camera2: control the camera and enable capture
private void createCameraPreviewSession() {
    CaptureRequest.Builder mPreviewRequestBuilder;
    try {
        SurfaceTexture texture = mSurfaceView.getSurfaceTexture();

        // We configure the size of default buffer to be the size of camera preview we want.
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

        // This is the output Surface we need to start preview.
        mSurface = new Surface(texture);

        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        //mPreviewRequestBuilder.addTarget(mSurface);
        mPreviewRequestBuilder.addTarget(mImageReader.get().getSurface());

        BlockingSessionCallback sessionCallback = new BlockingSessionCallback();

        List<Surface> outputSurfaces = new ArrayList<>();
        outputSurfaces.add(mImageReader.get().getSurface());
        //outputSurfaces.add(mSurface);

        mCameraDevice.createCaptureSession(outputSurfaces, sessionCallback, mBackgroundHandler);

        try {
            Log.d(TAG, "waiting on session.");
            mCaptureSession = sessionCallback.waitAndGetSession(SESSION_WAIT_TIMEOUT_MS);
            try {
                mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                        CaptureRequest.CONTROL_AF_MODE_AUTO);
                // mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                //         CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);

                // Comment out the above and uncomment this to disable continuous autofocus and
                // instead set it to a fixed value of 20 diopters. This should make the picture
                // nice and blurry for denoised edge detection.
                // mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                //         CaptureRequest.CONTROL_AF_MODE_OFF);
                // mPreviewRequestBuilder.set(CaptureRequest.LENS_FOCUS_DISTANCE, 20.0f);

                // Finally, we start displaying the camera preview.
                Log.d(TAG, "setting repeating request");
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(),
                        mCaptureCallback, mBackgroundHandler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        } catch (TimeoutRuntimeException e) {
            showToast("Failed to configure capture session.");
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
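The mImageReader targeted above is created elsewhere in the activity. A minimal sketch of that setup, assuming the camera2raw sample's RefCountedAutoCloseable wrapper (hence the mImageReader.get() calls); the maxImages value here is an assumption:

ImageReader reader = ImageReader.newInstance(
        mPreviewSize.getWidth(), mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888, /* maxImages */ 2);
// Deliver frames to the listener shown in the next section, on the background thread.
reader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
mImageReader = new RefCountedAutoCloseable<>(reader);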
2) ImageReader: obtain the captured frames
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image;
        String result;

        try {
            image = reader.acquireLatestImage();
            if (image == null) {
                return;
            }

            int fmt = reader.getImageFormat();
            Log.d(TAG, "bob image fmt:" + fmt);

            if (mTakePicture == 1) {
                result = JNIUtils.detectLane(image, mSurface, mFileName, mTakePicture);
                mTakePicture = 0;
            } else {
                result = JNIUtils.detectLane(image, mSurface, mFileName, mTakePicture);
            }

            Log.d(TAG, "bob Lane Detect result: " + result);
            comm.send_lane(result);
        } catch (IllegalStateException e) {
            Log.e(TAG, "Too many images queued for saving, dropping image for request: ");
            return;
        }
        image.close();
    }
};
The latest frame is acquired with:

image = reader.acquireLatestImage();

Then JNI is called to hand the image over to the NDK C++ code for processing:

result = JNIUtils.detectLane(image, mSurface, mFileName, mTakePicture);

Once the frame has been processed, close the image to release its buffer back to the ImageReader.
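One caveat in the listener above: when an exception is thrown, the early return skips image.close() and the buffer leaks. A more defensive variant (a sketch, not the project's code) closes the image in a finally block:

Image image = null;
try {
    image = reader.acquireLatestImage();
    if (image == null) return;
    String result = JNIUtils.detectLane(image, mSurface, mFileName, mTakePicture);
    comm.send_lane(result);
} catch (IllegalStateException e) {
    Log.e(TAG, "Too many images queued, dropping frame", e);
} finally {
    if (image != null) image.close();  // always return the buffer to the ImageReader
}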
3) JNI interface
public static String detectLane(Image src, Surface dst, String path, int savefile) {
    if (src.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("src must have format YUV_420_888.");
    }
    Plane[] planes = src.getPlanes();
    // Spec guarantees that planes[0] is luma and has pixel stride of 1.
    // It also guarantees that planes[1] and planes[2] have the same row and
    // pixel stride.
    if (planes[1].getPixelStride() != 1 && planes[1].getPixelStride() != 2) {
        throw new IllegalArgumentException(
                "src chroma plane must have a pixel stride of 1 or 2: got "
                        + planes[1].getPixelStride());
    }
    return detectLane(src.getWidth(), src.getHeight(), planes[0].getBuffer(),
            dst, path, savefile);
}
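This wrapper delegates to a private native overload of the same name. For completeness, a sketch of how the native side is declared on the Java side (the library name "myrobot" is an assumption; it must match your CMake/ndk-build target):

public class JNIUtils {
    static {
        // Load the NDK library containing Java_com_neza_myrobot_JNIUtils_detectLane.
        System.loadLibrary("myrobot");
    }

    // Implemented in C++; see the next section.
    private static native String detectLane(int srcWidth, int srcHeight,
            ByteBuffer srcBuffer, Surface dst, String path, int savefile);
}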
4) OpenCV for image processing
JNIEXPORT jstring JNICALL
Java_com_neza_myrobot_JNIUtils_detectLane(JNIEnv *env, jobject obj,
                                          jint srcWidth, jint srcHeight,
                                          jobject srcBuffer, jobject dstSurface,
                                          jstring path, jint saveFile) {
    char outStr[2000];
    const char *str = env->GetStringUTFChars(path, NULL);
    LOGE("bob path:%s saveFile=%d", str, saveFile);

    uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));
    if (srcLumaPtr == nullptr) {
        LOGE("blit NULL pointer ERROR");
        return NULL;
    }

    int dstWidth;
    int dstHeight;

    cv::Mat mYuv(srcHeight + srcHeight / 2, srcWidth, CV_8UC1, srcLumaPtr);

    ANativeWindow *win = ANativeWindow_fromSurface(env, dstSurface);
    ANativeWindow_acquire(win);

    ANativeWindow_Buffer buf;

    dstWidth = srcHeight;
    dstHeight = srcWidth;

    ANativeWindow_setBuffersGeometry(win, dstWidth, dstHeight, 0 /* format unchanged */);

    if (int32_t err = ANativeWindow_lock(win, &buf, NULL)) {
        LOGE("ANativeWindow_lock failed with error code %d\n", err);
        ANativeWindow_release(win);
        return NULL;
    }

    uint8_t *dstLumaPtr = reinterpret_cast<uint8_t *>(buf.bits);
    Mat dstRgba(dstHeight, buf.stride, CV_8UC4, dstLumaPtr);  // TextureView buffer, use stride as width
    Mat srcRgba(srcHeight, srcWidth, CV_8UC4);
    Mat flipRgba(dstHeight, dstWidth, CV_8UC4);

    // convert YUV -> RGBA
    cv::cvtColor(mYuv, srcRgba, CV_YUV2RGBA_NV21);

    // Rotate 90 degrees
    cv::transpose(srcRgba, flipRgba);
    cv::flip(flipRgba, flipRgba, 1);

#if 0
    int ball_x;
    int ball_y;
    int ball_r;

    ball_r = 0;
    BallDetect(flipRgba, ball_x, ball_y, ball_r);
    if (ball_r > 0)
        LOGE("ball x:%d y:%d r:%d", ball_x, ball_y, ball_r);
    else
        LOGE("ball not detected");
#endif

    LaneDetect(flipRgba, str, saveFile, outStr);

    // copy to TextureView surface
    uchar *dbuf;
    uchar *sbuf;
    dbuf = dstRgba.data;
    sbuf = flipRgba.data;
    int i;
    for (i = 0; i < flipRgba.rows; i++) {
        dbuf = dstRgba.data + i * buf.stride * 4;
        memcpy(dbuf, sbuf, flipRgba.cols * 4);
        sbuf += flipRgba.cols * 4;
    }

    // Draw center and bottom guide lines
    cv::line(dstRgba, Point(dstWidth / 2, 0), Point(dstWidth / 2, dstHeight - 1),
             Scalar(255, 255, 255));
    cv::line(dstRgba, Point(0, dstHeight - 1), Point(dstWidth - 1, dstHeight - 1),
             Scalar(255, 255, 255));

    LOGE("bob dstWidth=%d height=%d", dstWidth, dstHeight);

    ANativeWindow_unlockAndPost(win);
    ANativeWindow_release(win);

    env->ReleaseStringUTFChars(path, str);  // release the chars obtained above

    return env->NewStringUTF(outStr);
}
The frame arrives as YUV data; its buffer is obtained with:

uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));

Create a YUV Mat. It has srcHeight * 3/2 rows because YUV 4:2:0 stores a full-resolution luma plane followed by srcHeight/2 rows of interleaved chroma (NV21 layout):

cv::Mat mYuv(srcHeight + srcHeight / 2, srcWidth, CV_8UC1, srcLumaPtr);

Convert YUV to RGBA:

cv::cvtColor(mYuv, srcRgba, CV_YUV2RGBA_NV21);

The frames delivered by ImageReader are rotated 90 degrees relative to the display, so rotate them back:

cv::transpose(srcRgba, flipRgba);
cv::flip(flipRgba, flipRgba, 1);

(Newer OpenCV versions offer cv::rotate(srcRgba, flipRgba, cv::ROTATE_90_CLOCKWISE) as an equivalent one-liner.)
From here you can apply whatever image processing you need with OpenCV's functions; in this project that is LaneDetect(), and a generic stand-in is sketched below.
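LaneDetect() is this project's own routine. Purely as an illustration of the kind of processing you can plug in at this point, here is a classic Canny-plus-Hough sketch on the rotated RGBA frame (the function name and all thresholds are made up for the example):

#include <vector>
#include <opencv2/imgproc.hpp>

// Illustrative stand-in for LaneDetect(): find straight edges and draw them in green.
static void SimpleLineDetect(cv::Mat &rgba) {
    cv::Mat gray, edges;
    cv::cvtColor(rgba, gray, cv::COLOR_RGBA2GRAY);  // process luminance only
    cv::Canny(gray, edges, 50, 150);                // edge map; thresholds are arbitrary
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 50, 10);
    for (const cv::Vec4i &l : lines)                // draw detected segments onto the frame
        cv::line(rgba, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                 cv::Scalar(0, 255, 0, 255), 2);
}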
After processing, copy the buffer into the TextureView surface for display:

// copy to TextureView surface
uchar *dbuf;
uchar *sbuf;
dbuf = dstRgba.data;
sbuf = flipRgba.data;
int i;
for (i = 0; i < flipRgba.rows; i++) {
    dbuf = dstRgba.data + i * buf.stride * 4;
    memcpy(dbuf, sbuf, flipRgba.cols * 4);
    sbuf += flipRgba.cols * 4;
}

Note the stride: the window buffer's rows can be padded beyond the visible width (buf.stride >= dstWidth), so each row is copied separately at an offset of i * buf.stride * 4 bytes instead of using one big memcpy.
Finally, unlock and post the buffer so the frame appears on screen, then release the window:

ANativeWindow_unlockAndPost(win);
ANativeWindow_release(win);
Again, the source code is here: https://github.com/webjb/myrobot
Enjoy!
Comments:

What's very cool here is that you've achieved something I haven't seen anywhere else - understanding the format and orientation of the input YUV from the ImageReader and the output RGBA for the TextureView, building on the Camera2Basic sample that deals with flipping the arrays as the device rotates. Many thanks for publishing - I take my hat off to you! All the best, Mike Pelton
Thank you for your code. It helps me very much. I really appreciate it.
Could you explain the blocking classes (like BlockingSessionCallback)? I tried to understand them, but they confused me very much.
Thank you again~!!!
Thank you! This is awesome.
Hi,
I am trying to build this project with Android Studio 2.3.1 (buildToolsVersion 25.0.2). I hit the following error:
Gradle 'myrobot-master' project refresh failed
Error:Configuration with name 'default' not found.
Can you please let me know where I am going wrong, or a way to get around this problem?
Have you found a solution to your problem? I have the same one.
I have the same problem. Any help?
I copied build.gradle from the "app" folder to the root folder, and this error disappeared.
But now I have a new error: "Plugin with id 'com.android.model.application' not found."
:(
The build.gradle in the root folder should not have any module-specific Gradle commands. You should be able to use this as its full content:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:2.3.2'

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        jcenter()
    }
}
Also see https://developer.android.com/studio/build/ (about a quarter of the way down) for an example top-level build.gradle.
Also: maowen has forked the project at https://github.com/maowen/myrobot and fixed some issues, and I've rolled those into my fork, tag v00.01.00, at https://github.com/willchamberlain/myrobot.git
Great article. I checked the code, and the line:

if (int32_t err = ANativeWindow_lock(win, &buf, NULL))

produces error -22. How did you solve that problem?
I know it has been several years and you are probably not expecting this answer, but for anyone who hits this error and lands on this page, the solution is as follows:
If you are getting this -22 error, you probably added the TextureView's Surface as an output of the CameraCaptureSession in your Java code; delete those lines and the error disappears.
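Concretely, this matches what the article's createCameraPreviewSession() already does: only the ImageReader surface is handed to the session, and the TextureView is drawn from native code via ANativeWindow. A sketch of the relevant lines:

List<Surface> outputSurfaces = new ArrayList<>();
outputSurfaces.add(mImageReader.get().getSurface());
// Do NOT also add the TextureView's Surface (mSurface) here: the native code
// connects to it through ANativeWindow, and a second producer connection is
// what triggers the -22 (EINVAL) lock failure.
mCameraDevice.createCaptureSession(outputSurfaces, sessionCallback, mBackgroundHandler);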
Thank you for sharing your code.
I'm trying to use your code with some small changes, and when I launch it, my app crashes with three errors reported:
- E/BufferQueueProducer: [SurfaceTexture-0-31525-0] connect(P): already connected (cur=4 req=2)
- D/PlateNumberDetection/DetectionBasedTracker: ANativeWindow_lock failed with error code -22
- A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x315e9858 in tid 31735 (CameraBackgroun)
I have tried closing the camera before the JNI call, and then I can capture and show only the first frame, but I'm not sure that is the right way.
Do you have any idea how I can resolve it?
Thank you