
Photo Mode Skeleton #229 (Draft)

ddd999 wants to merge 3 commits into master
Conversation


@ddd999 ddd999 commented May 28, 2024

The start of a Photo mode that allows full-resolution still images to be captured locally on the device running Rpanion. As requested in #167

This is currently very much a work in progress (expect broken things, for now), but I'm adding it as a draft PR in case anyone wants to comment on its early stages or contribute.

The basic idea is to have a radio button on the Video page to select between Streaming (the current state) and Photo mode. Photo mode would:

  • Start capture_still.py, which configures the camera using Picamera2 and then sleeps until it receives a SIGUSR1 (see the sketch after this list)
  • Accept and process MAVLink camera commands. If a DO_DIGICAM_CONTROL shoot command is received, send a SIGUSR1 to capture_still.py
  • Stop capture_still.py by sending a SIGKILL when the user clicks "Stop"
  • (Optional): Allow configuration of camera settings (exposure, focus, zoom, etc.) which would be passed to capture_still.py as arguments (probably ties in to [Feature Request] Custom Camera Parameter Settings #228)
  • (Optional): Allow the user to select where images will be stored
  • (Eventual goal): Accept MAVLink gimbal control commands
  • (Pie in the sky goal): Provide a live preview from the camera, either in the browser window or via an RTP/RTSP stream
  • (Pie in the sky goal): Accept MAVLink commands to change camera settings, start/stop, and switch back and forth between Photo and Streaming modes
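
To make the signal-driven part concrete, here's a minimal sketch of what capture_still.py's core loop could look like (the output directory and filename pattern here are illustrative, not final):

```python
#!/usr/bin/env python3
# Minimal sketch: configure the camera for full-resolution stills,
# then sleep until SIGUSR1 arrives and capture one frame per signal.
import datetime
import signal

from picamera2 import Picamera2

OUTPUT_DIR = "/home/pi/photos"  # illustrative; would come from a CLI argument

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())  # full sensor resolution by default
picam2.start()

def capture(signum, frame):
    # Timestamped filename so repeated triggers don't overwrite each other
    name = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    picam2.capture_file(f"{OUTPUT_DIR}/{name}.jpg")

signal.signal(signal.SIGUSR1, capture)

while True:
    signal.pause()  # block until the next signal; SIGKILL from the parent ends the process
```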

@stephendade
Owner

Very nice!
Is the intent to limit this to "libcamera"-based cameras? The video streamer also supports v4l2 (i.e. USB) and Jetson CSI cameras.

A few other things to think about:

  • EXIF data with the GPS position at the time the photo was taken
  • If running low on disk space (say <500 MB), don't enable capture
  • JPG quality settings
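
For the EXIF point, piexif is one way to stamp the GPS position into a captured JPEG (a sketch only; the piexif dependency and the fix-to-photo plumbing are assumptions, not something in this PR):

```python
# Sketch: write a GPS position into an existing JPEG's EXIF block with piexif.
import piexif

def to_dms(value):
    """Convert decimal degrees to EXIF (degrees, minutes, seconds) rationals."""
    value = abs(value)
    deg = int(value)
    minutes = int((value - deg) * 60)
    seconds = round((value - deg - minutes / 60) * 3600 * 100)
    return ((deg, 1), (minutes, 1), (seconds, 100))

def tag_photo(path, lat, lon):
    gps_ifd = {
        piexif.GPSIFD.GPSLatitudeRef: b"N" if lat >= 0 else b"S",
        piexif.GPSIFD.GPSLatitude: to_dms(lat),
        piexif.GPSIFD.GPSLongitudeRef: b"E" if lon >= 0 else b"W",
        piexif.GPSIFD.GPSLongitude: to_dms(lon),
    }
    exif_bytes = piexif.dump({"GPS": gps_ifd})
    piexif.insert(exif_bytes, path)  # rewrites the file in place

tag_photo("/home/pi/photos/20240528_120000.jpg", -35.363, 149.165)  # illustrative values
```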

@ddd999
Author

ddd999 commented May 29, 2024

Thanks, those are great suggestions to add.

There is no intent to limit it to libcamera-supported devices necessarily; that's just all I have to test with (a Raspberry Pi camera module). However, all the documentation I've seen on V4L2 so far is focused on video streaming. I haven't come across an easy way to use V4L2 for still photography with access to all the camera settings one would want for that. It just doesn't seem very well suited to the task, but maybe I'm missing something?

@stephendade
Owner

v4l2 will likely give limited options for shutter and exposure settings - it really depends on what the end device supports (it can vary widely!). OpenCV is another option too, but using v4l2 will at least keep things a bit synchronised with the video streaming code.

@stephendade
Owner

I've got plenty of different cameras here. Happy to help with testing and coding advice.

@ddd999
Author

ddd999 commented May 29, 2024

I will have to dig into the v4l2 docs some more to see what's possible. At first glance, libcamera seems far more straightforward. I agree that it makes sense to share as much code as possible; I'm just not sure of the best way to do that, or whether v4l2 even supports all the wishlist features discussed above.

@ddd999
Author

ddd999 commented Jun 1, 2024

I've done some reading and I'm trying to wrap my head around the most camera-agnostic way to set this up while still providing the features I'm hoping to include.

As I understand it, camera setup would be mostly the same as what rtsp-server.py does. I would need to open a pipeline with the sink set to 'fakesink', to keep the camera ready to capture stills. The camera handler script would then listen for a user signal, and trigger either a gstreamer "pipeline-snapshot" or a "capture last sample" event when the signal is received.

With the tee and/or valve plugin(s), it should be possible to do this simultaneously with video streaming. However, still frames would be limited to the same resolution as the video in that case. Simultaneous streaming + stills was a pie-in-the-sky goal above, but it still makes sense to plan for it from the beginning.
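
As a rough sketch of that keep-the-camera-warm-and-snapshot idea via the last-sample route (the pipeline string and filenames are illustrative; a real version would encode to JPEG rather than dump a raw frame):

```python
# Sketch: hold a libcamerasrc pipeline open on a fakesink, then pull the
# most recent frame out of the sink when a snapshot is requested.
import signal

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "libcamerasrc ! video/x-raw,width=1920,height=1080 "
    "! tee name=t t. ! queue ! fakesink name=snapsink enable-last-sample=true"
)
pipeline.set_state(Gst.State.PLAYING)

def snapshot(signum, frame):
    sink = pipeline.get_by_name("snapsink")
    sample = sink.get_property("last-sample")  # most recent buffer seen by the sink
    if sample is None:
        return
    buf = sample.get_buffer()
    ok, info = buf.map(Gst.MapFlags.READ)
    if ok:
        with open("/tmp/frame.raw", "wb") as f:  # raw frame; a jpegenc step would go here
            f.write(info.data)
        buf.unmap(info)

signal.signal(signal.SIGUSR1, snapshot)
while True:
    signal.pause()
```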

A couple of problems I foresee:

  1. Limited access to camera settings. Picamera2/libcamera seems to have a lot more support here.
  2. Stills captured during video streaming are limited to video resolution. Working around this would require interrupting the stream.

Any thoughts or advice you have on all of that would be very welcome. Thanks!

@ddd999
Author

ddd999 commented Jun 2, 2024

I've just been playing around a bit with the GStreamer and libcamera command-line tools. So far it seems like libcamera wins for ease of adjusting settings and higher default image quality, but I haven't given up on GStreamer yet, as I do see the benefits.

One thing I haven't figured out is why gstcaps.py doesn't actually return all of the camera's capabilities. I understand the purpose of the if statement at line 138 of gstcaps.py, but if I comment out that if statement, it still doesn't return any of the sizes larger than 1920x1080.

This is what I get with lines 138 and 139 commented out (i.e., still nothing above 1920x1080):

pi@rpanion:~/Rpanion-server/python $ ./gstcaps.py
(gstcaps.py:1233): GStreamer-CRITICAL **: 18:49:15.034: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
[1:22:21.717726513] [1233]  INFO Camera camera_manager.cpp:297 libcamera v0.0.5+83-bde9b04f
[1:22:21.774726730] [1234]  WARN RPI vc4.cpp:383 Mismatch between Unicam and CamHelper for embedded data usage!
[1:22:21.776053288] [1234]  INFO RPI vc4.cpp:437 Registered camera /base/soc/i2c0mux/i2c@1/imx219@10 to Unicam device /dev/media2 and ISP device /dev/media0
[1:22:21.776155059] [1234]  INFO RPI pipeline_base.cpp:1101 Using configuration file '/usr/share/libcamera/pipeline/rpi/vc4/rpi_apps.yaml'

[{"value": "/base/soc/i2c0mux/i2c@1/imx219@10", "label": "CSI Port Camera (imx219)", "caps": [{"value": "1920x1080xx-raw", "label": "1920x1080", "height": 1080, "width": 1920, "format": "video/x-raw", "fpsmax": "30", "fps": []}, {"value": "1640x922xx-raw", "label": "1640x922", "height": 922, "width": 1640, "format": "video/x-raw", "fpsmax": "40", "fps": []}, {"value": "1280x720xx-raw", "label": "1280x720", "height": 720, "width": 1280, "format": "video/x-raw", "fpsmax": "60", "fps": []}, {"value": "640x480xx-raw", "label": "640x480", "height": 480, "width": 640, "format": "video/x-raw", "fpsmax": "90", "fps": []}]}]                 

This is what gst-device-monitor says is available:

pi@rpanion:~/Rpanion-server/python $ gst-device-monitor-1.0 Video
Probing devices...
Device found:
	name  : /base/soc/i2c0mux/i2c@1/imx219@10
	class : Source/Video
	caps  : video/x-raw, format=NV21, width=160, height=120
	        video/x-raw, format=NV21, width=240, height=160
               <-- huge list of caps snipped for brevity-->
	        video/x-raw, format=UYVY, width=3200, height=2048
	        video/x-raw, format=UYVY, width=3200, height=2400
	        video/x-raw, format=UYVY, width=[ 64, 3280, 2 ], height=[ 64, 2464, 2 ]
	gst-launch-1.0 libcamerasrc camera-name="/base/soc/i2c0mux/i2c\@1/imx219\@10" ! ...

Side note: there is a bug in libcamerasrc that causes the images to be scrambled if either frame dimension isn't a multiple of 32 (confirmed with 3280x2464 on my IMX219).

@stephendade
Owner

> One thing I haven't figured out is why gstcaps.py doesn't actually return all of the camera's capabilities. I understand the purpose of the if statement at line 138 of gstcaps.py, but if I comment out that if statement, it still doesn't return any of the sizes larger than 1920x1080.

Check out lines 45-70 of gstcaps.py. I've hardcoded the available resolutions in libCamera. I don't remember why though ... I must have had an issue getting the real resolutions.

@stephendade
Owner

In general, agree that libCamera is far better for still image capture. Definitely use that in the first instance. I think we'd just need a fallback for any sources that use v4l2, as you described above.
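
One possible shape for that fallback is to vary only the source half of the pipeline per camera type (the element names below are real GStreamer elements, but the selection logic is just a sketch):

```python
# Sketch: pick a GStreamer source fragment per camera type, so the
# stills code downstream of the tee stays identical for all cameras.
def source_fragment(camera):
    if camera["type"] == "libcamera":
        return f'libcamerasrc camera-name="{camera["device"]}"'
    if camera["type"] == "v4l2":
        return f'v4l2src device={camera["device"]}'
    if camera["type"] == "jetson-csi":
        return "nvarguscamerasrc"  # Jetson CSI cameras use NVIDIA's Argus source
    raise ValueError(f"unknown camera type: {camera['type']}")

pipeline_str = (
    source_fragment({"type": "v4l2", "device": "/dev/video0"})
    + " ! video/x-raw,width=1920,height=1080 ! tee name=t t. ! queue ! fakesink"
)
```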

@ddd999
Author

ddd999 commented Oct 21, 2024

I've got the very basics of still photo capture with the libcamera source working now (on the bench, at least). So I suppose it's time to tackle the v4l2 option for other cameras.

So far, I have a test script (not in the repo yet) that does the same thing as photomode.py, using gstreamer instead of libcamera. Eventually the two can be combined into the same file.

What would be the best way to integrate that functionality into the existing code? Does it make sense to be part of rtsp-server.py since the code that handles device options is already there?

Or should everything related to local image/video capture be separated into a standalone .py file? I think this is my preference, to keep the code more readable, but then it might duplicate some (or a lot) of what's in rtsp-server.py.

Look forward to hearing your input. Thanks!

@stephendade
Owner

Great! I'll take a look sometime this week.

@stephendade
Owner

Ok, I've had some time to test your PR. Broadly happy with it.

A few things I did find during testing:

  • If the target directory doesn't already exist, photo saving silently fails (see the guard sketched at the end of this comment)
  • Will need user selection of camera / resolution
  • Mission Planner doesn't seem to interpret the COMMAND_ACKs from taking a photo. I do see the COMMAND_ACK coming into Mission Planner, but it's not parsing it properly(?) Maybe something about the fields.
  • Looks like libcamera already populates some of the EXIF data in the captured file, which is nice.
  • photomode.py has a hardcoded capture path
  • The whole page should be renamed from "Video Streaming" to "Video and Photo"

> Or should everything related to local image/video capture be separated into a standalone .py file? I think this is my preference, to keep the code more readable, but then it might duplicate some (or a lot) of what's in rtsp-server.py.

Agree - keep it in a separate .py file
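
For the first point (and the low-disk suggestion from earlier), a small pre-capture guard could fail loudly instead of silently (the path and the 500 MB threshold are illustrative):

```python
# Sketch: fail loudly (and early) instead of silently when the capture
# directory is missing or the disk is nearly full.
import os
import shutil

MIN_FREE_BYTES = 500 * 1024 * 1024  # ~500 MB, per the earlier suggestion

def prepare_capture_dir(path):
    os.makedirs(path, exist_ok=True)  # create the directory rather than failing silently
    free = shutil.disk_usage(path).free
    if free < MIN_FREE_BYTES:
        raise RuntimeError(f"only {free // (1024 * 1024)} MB free; refusing to enable capture")

prepare_capture_dir("/home/pi/photos")
```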

@ddd999
Author

ddd999 commented Nov 11, 2024

Thanks for the feedback! I will incorporate those changes whenever I have time to continue with this.

Enables Rpanion to capture still photos when a button on the frontend is pressed, or when the MAVLink MAV_CMD_DO_DIGICAM_CONTROL message is received.

Adds ability to send CameraInformation, CameraSettings, and CameraTrigger messages.
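
To illustrate the shoot-command path those commits add, here's a pymavlink sketch of the pattern (not the PR's actual implementation; the connection string and PID handling are assumptions):

```python
# Sketch: listen for MAV_CMD_DO_DIGICAM_CONTROL as the camera component
# (ID 100), ACK it, and forward a SIGUSR1 to the capture process.
import os
import signal

from pymavlink import mavutil

conn = mavutil.mavlink_connection(
    "udpin:0.0.0.0:14550", source_system=1, source_component=100
)
capture_pid = 1234  # illustrative; Rpanion would track the real child PID

while True:
    msg = conn.recv_match(type="COMMAND_LONG", blocking=True)
    if msg.command == mavutil.mavlink.MAV_CMD_DO_DIGICAM_CONTROL and msg.param5 == 1:
        conn.mav.command_ack_send(msg.command, mavutil.mavlink.MAV_RESULT_ACCEPTED)
        os.kill(capture_pid, signal.SIGUSR1)  # trigger one still capture
```
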
@ddd999
Author

ddd999 commented Nov 17, 2024

> • Mission Planner doesn't seem to interpret the COMMAND_ACKs from taking a photo. I do see the COMMAND_ACK coming into Mission Planner, but it's not parsing it properly(?) Maybe something about the fields.

Just so I can duplicate and hopefully fix the issue, how were you testing the photo capture when you saw this?

@stephendade
Owner

> Just so I can duplicate and hopefully fix the issue, how were you testing the photo capture when you saw this?

[Screenshot from 2024-11-17 20-28-33]

@ddd999
Author

ddd999 commented Nov 17, 2024

I've done the same test with Mission Planner connected directly to the autopilot via USB, just to take Rpanion out of the equation for the moment.

MP sends the COMMAND_LONG to system 1, component 1 (autopilot). The autopilot replies with a COMMAND_ACK to system 255, component 190 (Mission Planner).

What I'm not clear on is how that command is supposed to get to the camera. I assumed that if the ArduPilot setting CAM1_TYPE was set to 5: MAVLink or 6: MAVLinkCamV2, then the autopilot would forward the command to component 100 (camera). This doesn't happen for me regardless of the CAM1_TYPE setting, or whether the autopilot is armed or disarmed.

It appears that the autopilot acknowledges the command but then doesn't do anything with it. The ArduPilot doc page on MAVLink camera control doesn't offer much help here. Any thoughts?
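
For reference, here's how a GCS-side pymavlink test can address the command straight to the camera component (system 1, component 100) rather than relying on the autopilot to forward it (a sketch; the hostname is illustrative):

```python
# Sketch: send DO_DIGICAM_CONTROL directly to (system 1, component 100),
# bypassing any autopilot forwarding behaviour.
from pymavlink import mavutil

gcs = mavutil.mavlink_connection(
    "udpout:rpanion.local:14550", source_system=255, source_component=190
)
gcs.mav.command_long_send(
    1, 100,  # target system, target component (MAV_COMP_ID_CAMERA)
    mavutil.mavlink.MAV_CMD_DO_DIGICAM_CONTROL,
    0,                    # confirmation
    0, 0, 0, 0, 1, 0, 0,  # param5 = 1 -> trigger a single shot
)
```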

@stephendade
Owner

> What I'm not clear on is how that command is supposed to get to the camera. I assumed that if the ArduPilot setting CAM1_TYPE was set to 5: MAVLink or 6: MAVLinkCamV2, then the autopilot would forward the command to component 100 (camera). This doesn't happen for me regardless of the CAM1_TYPE setting, or whether the autopilot is armed or disarmed.
>
> It appears that the autopilot acknowledges the command but then doesn't do anything with it. The ArduPilot doc page on MAVLink camera control doesn't offer much help here. Any thoughts?

Ahh, I should have specified that you need to select the camera component from the drop down in the upper right (next to the connection button).
