Android Things - Creating a Camera Preview Session Fails, and no Preview is shown

I am trying to deploy the Android TensorFlow-Lite example, specifically the Detector Activity.

I have successfully deployed it on a tablet. The app works great: it detects objects and draws a bounding rectangle around each one, along with a label and a confidence level.

I then set up my Raspberry Pi 3 Model B board, installed Android Things on it, connected via ADB, and deployed the same program from Android Studio. However, the screen attached to my Raspberry Pi stayed blank.

After checking a Camera Demo for Android Things tutorial, I tried enabling hardware acceleration to support the camera preview. I added:



android:hardwareAccelerated="true"


in the application tag of the Manifest.



I also added the following within the application tag:



<uses-library android:name="com.google.android.things" />


And an intent filter in my activity tag:



<intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.IOT_LAUNCHER" />
    <category android:name="android.intent.category.DEFAULT" />
</intent-filter>


so that the TensorFlow app runs after boot.
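
For reference, here is roughly how those pieces fit together in the manifest (a sketch only; the activity name .DetectorActivity is my assumption from the demo, and unrelated attributes are omitted):

<application android:hardwareAccelerated="true">

    <uses-library android:name="com.google.android.things" />

    <activity android:name=".DetectorActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.IOT_LAUNCHER" />
            <category android:name="android.intent.category.DEFAULT" />
        </intent-filter>
    </activity>
</application>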



I deployed the application again, but the same error persists: I am unable to configure the camera preview session.



Here is the relevant code from the TensorFlow example:



private void createCameraPreviewSession() {
  try {
    final SurfaceTexture texture = textureView.getSurfaceTexture();
    assert texture != null;

    // We configure the size of default buffer to be the size of camera preview we want.
    texture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());

    // This is the output Surface we need to start preview.
    final Surface surface = new Surface(texture);

    // We set up a CaptureRequest.Builder with the output Surface.
    previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    previewRequestBuilder.addTarget(surface);

    LOGGER.e("Opening camera preview: " + previewSize.getWidth() + "x" + previewSize.getHeight());

    // Create the reader for the preview frames.
    previewReader =
        ImageReader.newInstance(
            previewSize.getWidth(), previewSize.getHeight(), ImageFormat.YUV_420_888, 2);

    previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
    previewRequestBuilder.addTarget(previewReader.getSurface());

    // Here, we create a CameraCaptureSession for camera preview.
    cameraDevice.createCaptureSession(
        Arrays.asList(surface, previewReader.getSurface()),
        new CameraCaptureSession.StateCallback() {

          @Override
          public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
            // The camera is already closed.
            if (null == cameraDevice) {
              return;
            }

            // When the session is ready, we start displaying the preview.
            captureSession = cameraCaptureSession;
            try {
              // Auto focus should be continuous for camera preview.
              previewRequestBuilder.set(
                  CaptureRequest.CONTROL_AF_MODE,
                  CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
              // Flash is automatically enabled when necessary.
              previewRequestBuilder.set(
                  CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

              // Finally, we start displaying the camera preview.
              previewRequest = previewRequestBuilder.build();
              captureSession.setRepeatingRequest(
                  previewRequest, captureCallback, backgroundHandler);
            } catch (final CameraAccessException e) {
              LOGGER.e(e, "Exception!");
              LOGGER.e("camera access exception!");
            }
          }

          @Override
          public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
            showToast("Failed");
            LOGGER.e("configure failed!!");
          }
        },
        null);
  } catch (final CameraAccessException e) {
    LOGGER.e("camera access exception!");
    LOGGER.e(e, "Exception!");
  }
}



The failure shows up in the onConfigureFailed override, and the relevant log lines leading up to it are:



11-12 14:02:40.677 1991-2035/org.tensorflow.demo E/CameraCaptureSession: Session 0: Failed to create capture session; configuration failed
11-12 14:02:40.679 1991-2035/org.tensorflow.demo E/tensorflow: CameraConnectionFragment: configure failed!!


However, I could not find any stack trace behind the Session 0 failure.
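
Since the configuration failure comes with no stack trace, one way to narrow it down is to log what the camera actually supports before opening the session. Below is a minimal diagnostic sketch, assuming the demo's LOGGER; the camera2 calls are standard, but I have not verified the output on the Pi:

import android.graphics.ImageFormat;
import android.graphics.SurfaceTexture;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.util.Size;

// Logs the hardware level and the output sizes the camera reports, so the
// preview and ImageReader sizes can be checked against what is supported.
private void dumpCameraCapabilities(final CameraManager manager, final String cameraId)
    throws CameraAccessException {
  final CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);

  // LEGACY-level devices support far fewer simultaneous stream
  // combinations than LIMITED or FULL devices.
  final Integer level = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
  LOGGER.i("Hardware level: " + level);

  final StreamConfigurationMap map =
      characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
  if (map == null) {
    LOGGER.e("No stream configuration map for camera " + cameraId);
    return;
  }
  for (final Size size : map.getOutputSizes(SurfaceTexture.class)) {
    LOGGER.i("Supported SurfaceTexture output: " + size);
  }
  for (final Size size : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    LOGGER.i("Supported YUV_420_888 output: " + size);
  }
}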



Aside from turning on hardware acceleration and adding the tags above to the manifest, I have not tried anything else.



I have done my research and seen other examples, but they only take a photo at the click of a button. I need a working camera preview.



I also have the CameraDemoForAndroidThings example, but I don't know enough Kotlin to work out how it operates.



If anyone has managed to get a Java version of the TensorFlow Detector Activity running on Android Things on a Raspberry Pi, please share how you did it.



UPDATE:



Apparently, the camera can only support one stream configuration at a time. From this I inferred that I had to modify the createCaptureSession() call to use only one surface; my code now looks like this:



cameraDevice.createCaptureSession(
    // Arrays.asList(surface, previewReader.getSurface()),
    Arrays.asList(surface),
    new CameraCaptureSession.StateCallback() {

      @Override
      public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
        // The camera is already closed.
        if (null == cameraDevice) {
          return;
        }

        // When the session is ready, we start displaying the preview.
        captureSession = cameraCaptureSession;
        try {
          // Auto focus should be continuous for camera preview.
          // previewRequestBuilder.set(
          //     CaptureRequest.CONTROL_AF_MODE,
          //     CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
          // Flash is automatically enabled when necessary.
          // previewRequestBuilder.set(
          //     CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

          // Finally, we start displaying the camera preview.
          previewRequest = previewRequestBuilder.build();
          captureSession.setRepeatingRequest(
              previewRequest, captureCallback, backgroundHandler);

          previewRequestBuilder.addTarget(previewReader.getSurface());
        } catch (final CameraAccessException e) {
          LOGGER.e("exception hit while configuring camera!");
          LOGGER.e(e, "Exception!");
        }
      }

      @Override
      public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
        LOGGER.e("Configure failed!");
        showToast("Failed");
      }
    },
    null);


This gives me a live preview. However, the code never proceeds to send the preview image on to the processImage() block.
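
My guess, following the comments below, is that frames stop arriving because the ImageReader surface is no longer an output of the session, and because addTarget() is called after previewRequest was already built (build() snapshots the builder, so a target added afterwards never affects the submitted request). For reference, a minimal sketch of the listener side, assuming the demo's previewReader and backgroundHandler plus android.media.ImageReader/Image; processImage(image) is a hypothetical stand-in for however the frame gets handed to TF-Lite:

previewReader.setOnImageAvailableListener(
    new ImageReader.OnImageAvailableListener() {
      @Override
      public void onImageAvailable(final ImageReader reader) {
        // acquireLatestImage() must be paired with close(); with maxImages = 2,
        // a leaked Image stalls the camera pipeline almost immediately.
        final Image image = reader.acquireLatestImage();
        if (image == null) {
          return;
        }
        try {
          processImage(image); // hypothetical hand-off to the TF-Lite detector
        } finally {
          image.close();
        }
      }
    },
    backgroundHandler);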



Has anyone successfully run the TensorFlow-Lite examples that involve live camera previews on Android Things?

Tags: android, tensorflow, android-camera2, android-things

asked Nov 12 at 14:17 by Razgriz; edited Nov 18 at 12:15

  • See stackoverflow.com/questions/46997776/… and stackoverflow.com/questions/51300315/…. The bottom line is: be careful with session configuration. Make sure you find a self-compatible set of parameters before you add TensorFlow to the brew.
    – Alex Cohn, Nov 12 at 15:30

  • @AlexCohn I have updated my question. I think I have been successful in creating a camera preview, but I am stuck on sending the preview to the part where TensorFlow-Lite processes it.
    – Razgriz, Nov 18 at 12:17

  • If you only have one (preview) surface, you don't receive pixels to send for processing. You need two, but make sure that they work together. See "Setting multiple ImageReader surfaces".
    – Alex Cohn, Nov 18 at 12:52

  • If TensorFlow Lite needs multiple surfaces, and Android Things can only process one preview surface at a time, is there a workaround? Maybe allow TensorFlow to process the single surface? I am not well versed in these kinds of applications.
    – Razgriz, Nov 18 at 13:01

  • Sorry, I did not express myself well. You need previewReader.getSurface() for setOnImageAvailableListener(). But the parameters of this surface should be compatible with the camera and with the preview display. See the tables at developer.android.com/reference/android/hardware/camera2/… for the constraints on such settings for different device 'levels'.
    – Alex Cohn, Nov 18 at 15:05















