Complex number support #646
Comments
Note that DI currently doesn't guarantee anything vis-à-vis complex number support (this should probably be emphasized in the docs, PR welcome). The reason is that many backends don't support them at all, and the remaining ones probably have wildly diverging conventions.
I can understand that different AD backends may support this differently, so what is the current solution for this? I can open a PR to relax these type annotations and add proper tests (which I believe will resolve the ForwardDiff and ReverseDiff cases). We have to start somewhere, right 🤠?
Feel free to open a PR replacing any […]
From my testing, the funny thing is that dense AD using DI just works when dealing with complex numbers (only tested on ForwardDiff); it's only sparse AD that runs into problems.
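For reference, a minimal sketch of the kind of dense call I mean (the function and values here are made up for illustration, and DI makes no guarantees about complex support, so treat this as a description of my quick test rather than an API contract):

```julia
using DifferentiationInterface  # re-exports the ADTypes backend structs
import ForwardDiff

f(x) = exp.(im .* x)  # real input, complex output
x = [0.1, 0.2, 0.3]

# Dense Jacobian: this is the case that "just worked" in my quick test.
J = jacobian(f, AutoForwardDiff(), x)
```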
Sure, but what I mean is that even if it "just works" (because DI forwards everything without constraining types), different backends may have different interpretations of what the complex derivatives mean. And I don't know if it is wise to expose these different interpretations under a common interface, possibly confusing users by letting them believe the operations are semantically the same. Does that make sense?
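To make the ambiguity concrete (my sketch, not a statement about any particular backend): for a non-holomorphic function there is no single complex derivative, only the pair of Wirtinger derivatives, and a backend could reasonably report either one, their conjugates, or the underlying real 2×2 Jacobian. Writing $z = x + iy$,

$$
\frac{\partial f}{\partial z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right),
\qquad
\frac{\partial f}{\partial \bar z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right).
$$

For holomorphic $f$ we have $\partial f / \partial \bar z = 0$ and the conventions coincide; otherwise they genuinely differ.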
Yeah, that does make sense. So for now I'll just relax the type annotations of the compressed matrix for my special use case.
Added a docs warning in #651
This feels like something it would be really great to have in DI (e.g. define a DI convention and convert the corresponding package gradient/etc. calls to conform to it).
But even if we tried to homogenize the outputs/interpretation, there is nothing we can do about inputs. For instance, ForwardDiff's gradient will fail on a function with complex input, even if the function is real-valued.
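A minimal sketch of that failure mode (the function is made up, and the exact error text may vary across ForwardDiff versions):

```julia
import ForwardDiff

g(z) = sum(abs2, z)          # real-valued function of complex input
z = ComplexF64[1 + 2im, 3 - 1im]

try
    ForwardDiff.gradient(g, z)
catch err
    # ForwardDiff's Dual numbers require real values, so complex input
    # fails before the function is ever called.
    @warn "ForwardDiff.gradient failed on complex input" err
end
```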
Sure, just throw a nicer error message and mark it as unsupported in your "is backend X supported for function Y" table.
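A hypothetical sketch of such a guard (the helper name and message are illustrative, not part of DI's API):

```julia
# Hypothetical helper, not DI's code: fail fast with a readable message
# instead of a deep stack trace from inside the backend.
function check_real_input(backend, x)
    if eltype(x) <: Complex
        throw(ArgumentError(
            "backend $(typeof(backend)) does not support complex inputs; " *
            "see the complex-number support table in the docs"))
    end
    return nothing
end
```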
Found this while implementing SciML/BoundaryValueDiffEq.jl#258.
This MWE is just a proof of concept and may not be meaningful in itself, but I think it still shows that current DI lacks support for sparse Jacobians of complex numbers.
Stack trace (collapsed in the original issue).
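The actual MWE is collapsed above; here is a hypothetical reconstruction of the shape of the problem, following DI's documented sparse setup (the function, backend options, and values are my own, not the author's):

```julia
using DifferentiationInterface
import ForwardDiff
using SparseConnectivityTracer: TracerSparsityDetector
using SparseMatrixColorings: GreedyColoringAlgorithm

f(x) = exp.(im .* x)  # real input, complex output
x = [0.1, 0.2, 0.3]

backend = AutoSparse(
    AutoForwardDiff();
    sparsity_detector=TracerSparsityDetector(),
    coloring_algorithm=GreedyColoringAlgorithm(),
)

try
    jacobian(f, backend, x)  # the sparse path that errored in this issue
catch err
    @warn "sparse Jacobian failed on a complex-valued function" err
end
```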
I believe the culprit is the type annotation of the compressed matrix being too restrictive in the SparseMatrixColorings extension; I locally changed it to more generic annotations and the errors are gone.
DifferentiationInterface.jl/DifferentiationInterface/ext/DifferentiationInterfaceSparseMatrixColoringsExt/jacobian.jl, lines 3 to 33 at commit cc8818a
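To illustrate what "too restrictive" means at the dispatch level (a toy example, not DI's actual code):

```julia
# Toy illustration, not DI's code: a method whose compressed-matrix argument
# is constrained to real element types cannot be called with complex values.
decompress_toy(B::AbstractMatrix{<:Real}) = "real-only method"

decompress_toy(rand(2, 2))               # works
# decompress_toy(rand(ComplexF64, 2, 2)) # MethodError: no matching method

# Relaxing the annotation (the kind of local change meant above) restores dispatch:
decompress_toy(B::AbstractMatrix) = "generic method"
decompress_toy(rand(ComplexF64, 2, 2))   # now works
```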