This article summarizes the preview APIs of Android Camera 1.
Camera.open(int cameraId);
Here, cameraId identifies a specific camera on the Android device, usually one of the following two values:
- CameraInfo#CAMERA_FACING_BACK : the rear camera
- CameraInfo#CAMERA_FACING_FRONT : the front camera
It is important to note that this API may throw an exception in the following two cases:
- the current application does not have camera permission;
- another application is already using the camera identified by cameraId.
Whether it returns null or throws an exception varies across different models and versions of the Android system, as shown in the table below:
| Device | Android Version | No Camera Permission | Camera Occupied |
| --- | --- | --- | --- |
| Xiaomi Mix3 | 9.0 | Throws RuntimeException | No awareness of occupation |
| Huawei Nova 3i | 8.1 | Throws RuntimeException | No awareness of occupation |
| Huawei Nova | 7.0 | Throws RuntimeException | No awareness of occupation |
| OPPO R9s | 6.0 | API call successful, but no preview | No awareness of occupation |
| VIVO X7+ | 5.1 | API call successful, but no preview | No awareness of occupation |
| Honor 3C | 4.4 | Throws RuntimeException | Throws RuntimeException |
| OPPO R7 | 4.4 | Throws RuntimeException | Throws RuntimeException |
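Given these per-model differences, it is safer to wrap the call defensively so that both failure modes look the same to the caller. A minimal sketch (the helper name is ours, not part of the platform API):

```java
import android.hardware.Camera;

public final class CameraCompat {
    /**
     * Opens the camera defensively: depending on the device, Camera.open
     * may throw a RuntimeException or return null, so both failure modes
     * are normalized to null here. (Hypothetical helper.)
     */
    public static Camera openSafely(int cameraId) {
        try {
            return Camera.open(cameraId); // may return null on some models
        } catch (RuntimeException e) {
            // no camera permission, or the camera is held by another app
            return null;
        }
    }
}
```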
You can use the following two APIs to get and set the current camera parameters:
Get parameters: Camera#getParameters()
Set parameters: Camera#setParameters(Camera.Parameters params)
Three camera parameters mainly matter for preview: preview size, preview format, and auto-focus.
▐ Preview Size
The preview size indicates the height and width of each preview frame. On Android devices, the set preview size must be supported by the camera, otherwise, calling the Camera.setParameters method will throw a RuntimeException.
Different devices support different preview sizes, which can be obtained through the following API:
android.hardware.Camera.Parameters#getSupportedPreviewSizes()
Generally, if there is no strict requirement for the preview size, you can skip setting this value and use the device’s default preview size.
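If a specific size is required, a common approach is to pick the supported size closest to the target before calling setParameters. A sketch, assuming an already opened `camera` and a hypothetical 1280x720 target:

```java
// Pick the supported preview size closest in area to the requested one.
Camera.Parameters params = camera.getParameters();
Camera.Size best = null;
for (Camera.Size size : params.getSupportedPreviewSizes()) {
    if (best == null
            || Math.abs(size.width * size.height - 1280 * 720)
               < Math.abs(best.width * best.height - 1280 * 720)) {
        best = size;
    }
}
params.setPreviewSize(best.width, best.height);
camera.setParameters(params); // only ever passes a supported size
```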
▐ Preview Format
Most Android devices support only two preview formats, NV21 and YV12. Both belong to the YUV family, but their memory layouts differ. Simply put:
- YV12: fully planar; all Y samples first, then all V samples, then all U samples: YYY...VVV...UUU
- NV21: the Y plane first, then interleaved V and U samples (V before U): YYY...VUVU...
The API for the list of supported preview formats for the current device is as follows:
Camera.Parameters#getSupportedPreviewFormats()
Similar to preview size, the set preview format must be supported by the camera, otherwise, calling the Camera.setParameters method will throw a RuntimeException.
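As with the preview size, the safe pattern is to consult the supported list first. A sketch, assuming an already opened `camera`:

```java
import android.graphics.ImageFormat;

// Only set NV21 if the device actually reports support for it.
Camera.Parameters params = camera.getParameters();
if (params.getSupportedPreviewFormats().contains(ImageFormat.NV21)) {
    params.setPreviewFormat(ImageFormat.NV21);
    camera.setParameters(params);
}
```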
▐ Auto-Focus
If auto-focus is not set, the preview interface may appear blurry as the mobile device moves back and forth or side to side.
Generally, there are two approaches to auto-focus:
- use the camera's built-in auto-focus modes;
- use sensors to detect whether the device is still or moving, and trigger the camera's built-in touch-focus API at the moment it comes to rest.
Currently, Android Camera 1 supports only the following four auto-focus modes:
- FOCUS_MODE_AUTO
- FOCUS_MODE_CONTINUOUS_PICTURE: generally suited to photo-taking scenarios
- FOCUS_MODE_CONTINUOUS_VIDEO: generally suited to video-recording scenarios
- FOCUS_MODE_MACRO: generally suited to close-up scenarios
In general, FOCUS_MODE_AUTO is the mode to use: FOCUS_MODE_CONTINUOUS_PICTURE and FOCUS_MODE_CONTINUOUS_VIDEO fail to focus on some models, and FOCUS_MODE_MACRO has shown no significant difference from FOCUS_MODE_AUTO in practice.
Android Camera 1 sets the focus mode through the following API:
Camera.Parameters#setFocusMode(String focusMode)
In addition to the four supported auto-focus modes mentioned above, Android Camera 1 also has other focus modes for different scenarios.
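Combining the two parameter APIs above, a cautious way to apply FOCUS_MODE_AUTO is to check the supported list first. A sketch, assuming an already opened `camera`:

```java
import java.util.List;

// Apply FOCUS_MODE_AUTO only when the device reports support for it.
Camera.Parameters params = camera.getParameters();
List<String> modes = params.getSupportedFocusModes();
if (modes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
    params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
    camera.setParameters(params);
}
```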
After setting the auto-focus mode, the following API is needed to complete the final auto-focus effect:
```java
// First cancel any other in-flight auto-focus operation
mCamera.cancelAutoFocus();
mCamera.autoFocus(this);
```
The parameter of the autoFocus method is an implementation of the Camera.AutoFocusCallback interface; this interface provides a callback method for auto-focus:
```java
public interface AutoFocusCallback {
    void onAutoFocus(boolean success, Camera camera);
}
```
The success parameter in the callback method indicates whether the focus was successful (success means that the current preview frame is clear);
It should be noted that the autoFocus method does not loop focus internally; therefore, if you want to maintain auto-focus, you need to call the autoFocus method again in the callback method; as shown below:
```java
@Override
public void onAutoFocus(boolean success, final Camera camera) {
    // Generally, delay about one second before focusing again.
    // Focusing again immediately makes the preview flicker on some
    // models (especially with the rear camera).
    final Camera.AutoFocusCallback callback = this;
    Handler handler = new Handler(Looper.getMainLooper());
    handler.postDelayed(new Runnable() {
        @Override
        public void run() {
            camera.autoFocus(callback);
        }
    }, 1000);
}
```
Another point to note is that the autoFocus method must be executed after the Camera#startPreview() method; otherwise, the autoFocus method will throw a RuntimeException.
▐ Camera Display Angle
(Figures: the original scene; a preview frame from the rear camera; a preview frame from the front camera.)
From the three images, we can conclude the following:
- for the rear camera, the preview frame is rotated 90 degrees clockwise relative to the original image;
- for the front camera, the preview frame is rotated 90 degrees counterclockwise relative to the original image.
Therefore, when processing preview frames, you must keep the frame orientation consistent with the screen (interface) orientation. Two scenarios apply:
- when the system renders the preview frames (SurfaceView or TextureView), call Camera#setDisplayOrientation(int degrees) to set the camera's display angle;
- when OpenGL renders the preview frames (GLSurfaceView), account for the rotation when sampling the texture that backs each preview frame; the rotation angle comes from the Camera.CameraInfo#orientation attribute (the number of degrees the frame must be rotated clockwise to match the original image).
For the first scenario, the parameter value of the setDisplayOrientation method can be accurately obtained through the following standard method:
```java
public static int getCameraDisplayOrientation(Activity activity, int cameraId) {
    android.hardware.Camera.CameraInfo info = new android.hardware.Camera.CameraInfo();
    android.hardware.Camera.getCameraInfo(cameraId, info);
    int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    // Generally, degrees is 0 when the current Activity is in portrait
    // mode and 90 when it is in landscape mode.
    int degrees;
    switch (rotation) {
        case Surface.ROTATION_0:   degrees = 0;   break;
        case Surface.ROTATION_90:  degrees = 90;  break;
        case Surface.ROTATION_180: degrees = 180; break;
        case Surface.ROTATION_270: degrees = 270; break;
        default:                   degrees = 0;   break;
    }
    // info.orientation is the clockwise rotation needed to align the
    // preview frame with the device's natural orientation.
    int result;
    if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        result = (info.orientation + degrees) % 360;
        // Compensate the mirror: the front camera is horizontally
        // flipped before the clockwise rotation is applied.
        result = (360 - result) % 360;
    } else { // back-facing
        result = (info.orientation + (360 - degrees)) % 360;
    }
    return result;
}
```
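Applying the computed angle is then a single call (a sketch, assuming an open `camera`):

```java
// Align the displayed preview with the current screen orientation.
int degrees = getCameraDisplayOrientation(activity, cameraId);
camera.setDisplayOrientation(degrees);
```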
Camera.startPreview()
There are three points to note about this API:
- Before calling startPreview, ensure that Camera.setPreviewTexture or Camera.setPreviewDisplay has already been called; otherwise there will be no preview image, and the preview callback will not fire either.
- If release() has already been called on the Camera object, startPreview throws the following exception:

```
java.lang.RuntimeException: Camera is being used after Camera.release() was called
    at android.hardware.Camera.startPreview(Native Method)
```

- If another application has opened the camera via Camera.open before startPreview is called, the following exception is thrown:

```
java.lang.RuntimeException: startPreview failed
    at android.hardware.Camera.startPreview(Native Method)
```
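Putting these constraints together, a safe calling order might look like the sketch below (mCamera and the SurfaceHolder parameter are assumptions for illustration):

```java
private void startPreview(int cameraId, SurfaceHolder holder) throws IOException {
    mCamera = Camera.open(cameraId);        // 1. acquire the camera
    Camera.Parameters params = mCamera.getParameters();
    // ... adjust preview size / format / focus mode here ...
    mCamera.setParameters(params);          // 2. configure
    mCamera.setPreviewDisplay(holder);      // 3. attach an output surface first
    mCamera.startPreview();                 // 4. only now start the preview
}
```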
Preview frame data is delivered to the application through the Camera.PreviewCallback interface:

```java
public interface PreviewCallback {
    void onPreviewFrame(byte[] data, Camera camera);
}
```
Android Camera 1 usually has the following ways to set the preview callback interface:
| Method Name | Parameter | Description |
| --- | --- | --- |
| setOneShotPreviewCallback | PreviewCallback | Delivers a single preview frame, after which the callback is automatically cleared |
| setPreviewCallbackAllocation | Allocation | Hidden API; fills preview frames into a RenderScript Allocation instead of a byte array |
| setPreviewCallbackWithBuffer | PreviewCallback | Delivers every frame into buffers pre-registered via addCallbackBuffer, avoiding per-frame allocation |
| setPreviewCallback | PreviewCallback | Delivers every preview frame; the system allocates a new byte array for each frame |
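Of these, setPreviewCallbackWithBuffer is usually preferred for continuous frame processing because it reuses buffers instead of allocating one per frame. A minimal sketch, assuming an already configured `camera`:

```java
import android.graphics.ImageFormat;

// Size one buffer for the configured preview format and register it.
Camera.Size size = camera.getParameters().getPreviewSize();
int format = camera.getParameters().getPreviewFormat();
int bufferSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(format) / 8;
camera.addCallbackBuffer(new byte[bufferSize]);

camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... process the YUV frame in `data` ...
        camera.addCallbackBuffer(data); // hand the buffer back for reuse
    }
});
```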
To display the preview data, there are currently two approaches:
- provide a local window (SurfaceHolder or SurfaceTexture) to Android Camera 1 for display;
- render the preview data yourself in an OpenGL ES environment.

Of the two, the former is easier to implement but cannot render preview frames individually; the latter is more complex to implement but can flexibly process and render each preview frame.
▐ SurfaceHolder & SurfaceTexture
Android Camera 1 uses SurfaceHolder objects and SurfaceTexture objects to send preview frame data to SurfaceView and TextureView for display.
Although the development process of the two is similar, the underlying principles are quite different.
For SurfaceHolder (SurfaceView), the following diagram can briefly illustrate its process:
First, the SurfaceHolder (or SurfaceView) allocates a separate GraphicBuffer for rendering. This buffer is distinct from the GraphicBuffer backing the view tree that contains the SurfaceView, and SurfaceFlinger records and composites it separately.
Then, the camera application will pass the SurfaceHolder to the camera service process, which will create a Surface object from it;
When the camera service process starts previewing, it will draw the preview YUV data onto this Surface object, and finally communicate across processes with the SurfaceFlinger process to render the content on the screen.
With this approach, you cannot call the Surface.lockCanvas method during the preview; doing so throws an IllegalArgumentException, so no custom rendering can be layered onto the preview image.
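A minimal SurfaceHolder-based wiring might look like the sketch below (R.id.preview and mCamera are hypothetical names):

```java
// Hand the SurfaceView's surface to the camera once it exists.
SurfaceView surfaceView = (SurfaceView) findViewById(R.id.preview);
surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            mCamera.setPreviewDisplay(holder); // attach before startPreview
            mCamera.startPreview();
        } catch (IOException e) {
            // the surface is unusable
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        mCamera.stopPreview();
    }
});
```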
For SurfaceTexture (TextureView), the following diagram can briefly illustrate the process:
First, TextureView creates a SurfaceTexture object; this SurfaceTexture object is associated with a hardware-accelerated Layer; this Layer object belongs to the view tree where TextureView is located.
Then, the camera application will pass the SurfaceTexture to the camera service process, which will create a Surface object from it;
When the camera service process starts previewing, it draws the preview YUV data onto this Surface object, then communicates across processes with the camera application to render the content into the view tree that contains the TextureView. You can think of the SurfaceTexture as a texture written across processes, with the camera service process holding the write permission for that texture.
Finally, through the view tree and the SurfaceFlinger process, the content is rendered on the screen.
In this method, you cannot use the Surface.lockCanvas method during the preview period, which will throw an IllegalArgumentException; therefore, you cannot perform additional personalized drawing on the preview screen.
One more caveat: the Camera.setPreviewTexture method does not keep a global (strong) reference to the SurfaceTexture; once the SurfaceTexture loses its last reference and is garbage collected, the preview stops working.
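A TextureView-based sketch, deliberately keeping a field reference to the SurfaceTexture so it cannot be collected (mCamera is an assumption):

```java
private SurfaceTexture mSurfaceTexture; // strong reference, see the caveat above

void bindPreview(TextureView textureView) {
    textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
            mSurfaceTexture = st; // keep it alive for the whole preview
            try {
                mCamera.setPreviewTexture(st);
                mCamera.startPreview();
            } catch (IOException e) {
                // the texture is unusable
            }
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) { }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
            mCamera.stopPreview();
            return true;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture st) { }
    });
}
```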
▐ OpenGL ES Rendering Preview Data
In the OpenGL environment, there are two ways to render the preview data:
- render directly from the preview frame's byte array;
- render via the SurfaceTexture object associated with the preview frames.
The main difference between the two methods is whether the Android camera automatically fills the preview data into the texture.
- Render Directly Using the Preview Frame Byte Array
As described above, the byte array of the preview frame is in YUV format; therefore, the core idea of this method is to convert the YUV data into RGB format in the GPU.
Corresponding conversion shaders are easy to find online, and the general calculation is consistent, as shown in the following fragment shader:
```glsl
precision mediump float;

uniform sampler2D tex_y;
uniform sampler2D tex_uv;
// Texture coordinates
varying vec2 texture_coordinate;

void main() {
    float r, g, b, y, u, v;
    // Extract the YUV color information
    y = 1.1643 * (texture2D(tex_y, texture_coordinate).r - 0.0625);
    u = texture2D(tex_uv, texture_coordinate).a - 0.5;
    v = texture2D(tex_uv, texture_coordinate).r - 0.5;
    // Convert YUV to RGB
    r = y + 1.13983 * v;
    g = y - 0.39465 * u - 0.58060 * v;
    b = y + 2.03211 * u;
    // Color the fragment
    gl_FragColor = vec4(r, g, b, 1.0);
}
```
As the code shows, the YUV byte array is uploaded into two textures, a Y texture and a UV texture. Two textures are needed because the Y and UV planes differ in size: the Y plane matches the frame resolution, while the UV texture holds only a quarter as many texels, so the two cannot share a single texture.
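A sketch of the corresponding texture uploads, assuming an NV21 byte array `nv21`, frame dimensions `width`/`height`, and pre-generated texture ids `yTextureId`/`uvTextureId`:

```java
import java.nio.ByteBuffer;
import android.opengl.GLES20;

// Slice the NV21 array: Y plane first, then the interleaved VU plane.
ByteBuffer yPlane  = ByteBuffer.wrap(nv21, 0, width * height);
ByteBuffer uvPlane = ByteBuffer.wrap(nv21, width * height, width * height / 2);

// One LUMINANCE texel per pixel for the Y plane.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTextureId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
        width, height, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yPlane);

// Each LUMINANCE_ALPHA texel carries one V sample (.r) and one U sample (.a),
// matching how the shader above reads the UV texture.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uvTextureId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA,
        width / 2, height / 2, 0,
        GLES20.GL_LUMINANCE_ALPHA, GLES20.GL_UNSIGNED_BYTE, uvPlane);
```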
Next, the conventional OpenGL ES drawing flow is as follows:
1. clear the screen (framebuffer);
2. bind the specified shader program;
3. create the Y texture and UV texture if they do not exist yet;
4. fill the two textures created in step 3 with the YUV byte array returned by onPreviewFrame (as in the upload sketch above); each Y-texture value represents one pixel, while each UV-texture texel holds two values (one V and one U sample);
5. update the MVP matrix;
6. update the vertex coordinates used to draw to the screen;
7. update the texture coordinates shared by the Y and UV textures; their order must match the vertex order from step 6;
8. draw with glDrawArrays in GLES20.GL_TRIANGLE_STRIP mode;
9. perform any finishing work.
This method also requires attention to rotation: the YUV byte array is rotated relative to the screen (see the Camera Display Angle section above), so the texture coordinates updated in step 7 must account for that rotation angle.
- Render Using the SurfaceTexture Object
When rendering using the SurfaceTexture object, the configuration, opening, and preview-related API calls for the camera are basically the same as the TextureView preview process.
To simplify the creation of the EGL environment, the demo uses GLSurfaceView (TextureView can also be used, but TextureView requires manual creation of the EGL environment) to demonstrate.
Unlike simply previewing camera frames with a TextureView, here we must manually create a texture in the GL environment, create a SurfaceTexture object backed by that texture, and then hand this SurfaceTexture to the Camera; all of this must be done before the preview starts (specifically, before Camera.setPreviewTexture is called).
The specific OpenGL ES drawing flow is as follows (a sketch covering steps 3 and 7 appears below):
1. clear the screen (framebuffer);
2. bind the specified shader program;
3. call SurfaceTexture.updateTexImage to latch the latest frame delivered by the camera service thread into the texture backing this object; you can throttle how often you call it to control the render rate of the preview frames;
4. update the MVP matrix;
5. update the vertex coordinates used to draw to the screen;
6. update the texture (sampling) coordinates of the texture backing the SurfaceTexture; their order must match the vertex order from step 5;
7. pass the matrix returned by SurfaceTexture.getTransformMatrix to the GPU; this matrix converts ordinary texture coordinates into the correct sampling coordinates for the SurfaceTexture object;
8. draw with glDrawArrays in GLES20.GL_TRIANGLE_STRIP mode;
9. perform any finishing work.
Due to step seven, there is no need to consider the rotation angle between the texture and the screen.
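A sketch of the texture setup plus steps 3 and 7, assuming the code runs on a working GL thread (for example inside a GLSurfaceView.Renderer):

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

// Create the external OES texture that the SurfaceTexture renders into.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
try {
    camera.setPreviewTexture(surfaceTexture); // before startPreview
} catch (IOException e) {
    // the texture is unusable
}

// Inside onDrawFrame:
float[] transform = new float[16];
surfaceTexture.updateTexImage();              // step 3: latch the newest frame
surfaceTexture.getTransformMatrix(transform); // step 7: sampling-coordinate matrix
// ... upload `transform` as a uniform and draw the quad ...
```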
Although the industry has certain criticisms of the Android Camera 1 related APIs, if used properly, it can still “meet” general camera preview needs.
Throughout the entire camera preview process, camera operations (such as opening, previewing, etc.) mainly rely on the APIs provided by the system, while the display of the preview screen is still supported by a rich selection of technical solutions provided by the Android system; each technical solution has its advantages and disadvantages, covering a wide range of use cases.
Finally, the demo project accompanying this article can be found via the "Read Original" link.