Emerging compliance regulations around AI suggest it may be helpful to explicitly annotate any analysis findings that either originate entirely with AI or are augmented/influenced by LLMs, custom models, etc.
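For illustration only: SARIF `result` objects already support a property bag (`properties`) with tool-defined keys, so one possible shape for such an annotation is sketched below. The key names `aiAssisted` and `aiProvenance` are invented for this example and are not part of the SARIF specification or of this proposal.

```json
{
  "version": "2.1.0",
  "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
  "runs": [
    {
      "tool": {
        "driver": {
          "name": "ExampleAnalyzer",
          "rules": [{ "id": "EX0001" }]
        }
      },
      "results": [
        {
          "ruleId": "EX0001",
          "level": "warning",
          "message": { "text": "Possible SQL injection in query construction." },
          "properties": {
            "aiAssisted": true,
            "aiProvenance": "finding generated with LLM assistance"
          }
        }
      ]
    }
  ]
}
```

Whether this belongs in a property bag, a dedicated result-level field, or a taxonomy is exactly the kind of design question concrete use cases would need to settle.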
Shouldn't the info on the source of any "rating" be sufficient (in the scope of the format)?
Given the current "fashion trends", I expect most analyzers will use (or claim to use) models to extrapolate or interpolate findings from the system analyzed.
Stating the obvious would, IMO, not really help the use case of tracking down false positives.
Until we plan to document the processing of credit or job applications (or anything else where the producer or consumer might face challenges of bias or the like) in SARIF, I have a hard time imagining how the documentation of the analysis of any system would need more than the already existing ways of originator tagging.
So I would like to see one or more such specific use cases before I feel able to discuss this suggested addition.