Photo Mode Skeleton #229
base: master
Conversation
Very nice! A few other things to think about:
Thanks, those are great suggestions to add. There is no intent to limit it to libcamera-supported devices necessarily, that's just all I have (Raspberry Pi camera module) to test with. However, all the documentation I've seen on V4L2 so far is focused on video streaming. I haven't come across an easy way to use V4L2 for still photography, and access all the camera settings one would want for that. It just doesn't seem very well suited to the task, but maybe I'm missing something?
v4l2 will likely give limited options for shutter and exposure settings - really depends on what the end device supports (it can vary widely!). OpenCV is another option too, but using v4l2 will at least keep things a bit synchronised with the video streaming code.
I've got plenty of different cameras here. Happy to help with testing and coding advice.
I will have to dig in to the v4l2 docs some more to see what's possible. At first glance, libcamera seems way more straightforward. I agree that it makes sense to share as much code as possible, I'm just not sure of the best way to do that, or if v4l2 even supports all the wishlist features discussed above.
I've done some reading and I'm just trying to wrap my head around the most camera-agnostic way to set this up, but also provide the features that I'm hoping to include. As I understand, it would be mostly the same as what rtsp-server.py does as far as camera setup. I would need to open a pipeline with the sink set to 'fakesink', to keep the camera ready to capture stills. The camera handler script would then listen for a user signal, and trigger either a gstreamer "pipeline-snapshot" or a "capture last sample" event when the signal is received. With the tee and/or valve plugin(s), it should be possible to do this simultaneously with video streaming. However, still frames would be limited to the same resolution as the video in that case. Simultaneous streaming + stills was a pie-in-the-sky goal above, but it still makes sense to plan for it from the beginning. A couple of problems I foresee:
Any thoughts or advice you have on all of that would be very welcome. Thanks!
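To make the tee/valve idea above concrete, here is a minimal sketch of how such a launch description could be assembled. This is an illustration only: the element chain and names (`photosink`, the x264 branch) are assumptions, not code from the repo, and the real caps would come from the existing device-probing code.

```python
def build_pipeline_description(device="/dev/video0", width=1920, height=1080):
    """Sketch: one camera source feeding a tee, so the RTSP stream and a
    snapshot branch run simultaneously. Hypothetical element chain."""
    # Source caps; in practice these would come from gstcaps.py probing.
    source = (f"v4l2src device={device} ! "
              f"video/x-raw,width={width},height={height}")
    # Streaming branch: an encoder chain like rtsp-server.py might use.
    stream = "queue ! videoconvert ! x264enc tune=zerolatency ! rtph264pay name=pay0"
    # Snapshot branch: fakesink keeps frames flowing; pulling the sink's
    # "last-sample" on a user signal yields the most recent frame.
    photo = "queue ! videoconvert ! jpegenc ! fakesink name=photosink"
    return f"{source} ! tee name=t t. ! {stream} t. ! {photo}"
```

As noted in the comment, a branch fed from the same tee is capped at the streaming resolution, so this only covers the simultaneous-capture case, not full-resolution stills.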
I've just been playing around a bit with the gstreamer and libcamera command line tools. So far it seems like libcamera wins for ease of adjusting settings, and higher default image quality, but I haven't given up on gstreamer yet as I do see the benefits. One thing I haven't figured out is why gstcaps.py doesn't actually return all of the camera's capabilities. I understand the purpose of the if statement at line 138 of gstcaps.py, but if I comment out that if statement, it still doesn't return any of the sizes larger than 1920x1080. This is what I get with lines 138 and 139 commented out (i.e., still nothing above 1920x1080):
This is what gst-device-monitor says is available:
Side note, there is a bug in libcamerasrc that causes the images to be scrambled if either frame dimension isn't a multiple of 32 (confirmed with 3280x2464 on my IMX219).
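A workaround for the alignment bug mentioned above could be as simple as snapping any requested resolution down to the nearest 32-pixel boundary before building the pipeline. A hypothetical helper:

```python
def align_resolution(width, height, multiple=32):
    """Round a requested resolution down to the nearest multiple of 32,
    avoiding the libcamerasrc scrambled-frame bug when a dimension is
    not 32-aligned (e.g. 3280 on the IMX219's 3280x2464 mode)."""
    return (width - width % multiple, height - height % multiple)
```

Note this trades a few edge pixels for a clean image (3280 becomes 3264, while 2464 is already aligned and passes through unchanged).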
Check out lines 45-70 of |
In general, agree that libcamera is far better for still image capture. Definitely use that in the first instance. I think we'd just need a fallback for any sources that use v4l2, as you described above.
8c8ad8e to bccc6d6
I've got the very basics of still photo capture with the libcamera source working now (on the bench, at least). So I suppose it's time to tackle the v4l2 option for other cameras. So far, I have a test script (not in the repo yet) that does the same thing as photomode.py, using gstreamer instead of libcamera. Eventually those can be combined into the same file. What would be the best way to integrate that functionality into the existing code? Does it make sense to be part of rtsp-server.py since the code that handles device options is already there? Or should everything related to local image/video capture be separated into a standalone .py file? I think this is my preference, to keep the code more readable, but then it might duplicate some (or a lot) of what's in rtsp-server.py. Look forward to hearing your input. Thanks!
Great! I'll take a look sometime this week. |
Ok, I've had some time to test your PR. Broadly happy with it. A few things I did find during testing:
Agree - keep it in a separate .py file
Thanks for the feedback! I will incorporate those changes whenever I have time to continue with this. |
Enables Rpanion to capture still photos when a button on the frontend is pressed, or when the MAVLink MAV_CMD_DO_DIGICAM_CONTROL message is received. Adds ability to send CameraInformation, CameraSettings, and CameraTrigger messages.
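For reference, the dispatch logic for the MAVLink trigger could look roughly like the sketch below. The numeric values are from the MAVLink common message set (DO_DIGICAM_CONTROL is command 203, the camera component ID is 100, and param5 is the shoot command); the function itself and its name are illustrative, not the PR's actual code.

```python
MAV_CMD_DO_DIGICAM_CONTROL = 203   # MAVLink common message set
MAV_COMP_ID_CAMERA = 100

def should_capture(command, target_component, param5):
    """Return True when an incoming COMMAND_LONG should trigger a photo.

    Sketch: the camera component only acts on DO_DIGICAM_CONTROL that is
    addressed to it (or broadcast, component 0) with the shoot
    parameter (param5) set to 1.
    """
    if command != MAV_CMD_DO_DIGICAM_CONTROL:
        return False
    if target_component not in (MAV_COMP_ID_CAMERA, 0):
        return False
    return param5 == 1
```

In the real handler this check would sit in the message-receive loop, ahead of whatever sends the COMMAND_ACK and CameraTrigger replies.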
Just so I can duplicate and hopefully fix the issue, how were you testing the photo capture when you saw this? |
I've done the same test with Mission Planner connected directly to the autopilot via USB just to take rpanion out of the equation for the moment. MP sends the COMMAND_LONG to system 1, component 1 (autopilot). The autopilot replies with a COMMAND_ACK to system 255, component 190 (Mission Planner). What I'm not clear on is how that command is supposed to get to the camera. I assumed that if the ArduPilot setting CAM1_TYPE was set to 5: MAVLink or 6: MAVLinkCamV2, then the autopilot would forward the command to component 100 (camera). This doesn't happen for me regardless of the CAM1_TYPE setting, or whether the autopilot is armed or disarmed. It appears that the autopilot acknowledges the command but then doesn't do anything with it. The ArduPilot doc page on MAVLink camera control doesn't offer much help here. Any thoughts? |
Ahh, I should have specified that you need to select the camera component from the drop-down in the upper right (next to the connection button).
The start of a Photo mode that allows full-resolution still images to be captured locally on the device running Rpanion. As requested in #167
This is currently very much a work in progress (expect broken things, for now), but I'm adding it as a draft PR in case anyone wants to comment on its early stages or contribute.
The basic idea is to have a radio button on the Video page to select between Streaming (the current state) and Photo mode. Photo mode would:
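As a sketch of the libcamera-based capture path, a single full-resolution still could be taken by shelling out to the libcamera-still command-line tool. The helper below is hypothetical (its name and defaults are assumptions), and the V4L2 fallback discussed in the thread would build a GStreamer pipeline instead:

```python
import subprocess

def build_still_command(output_path, width=None, height=None, timeout_ms=1000):
    """Assemble a libcamera-still invocation for one still capture.

    -n suppresses the preview window, -t sets the delay before capture.
    Width/height are optional; omitting them uses the sensor's default.
    """
    cmd = ["libcamera-still", "-n", "-t", str(timeout_ms), "-o", output_path]
    if width and height:
        cmd += ["--width", str(width), "--height", str(height)]
    return cmd

def capture_still(output_path, **kwargs):
    """Run the capture; only useful on a device with a libcamera camera."""
    return subprocess.run(build_still_command(output_path, **kwargs),
                          check=True)
```

Keeping the command construction separate from the subprocess call makes the interesting part testable without camera hardware.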