Replies: 1 comment
-
Thanks for opening this issue! This seems more like a discussion than an issue with the code, so I'm going to convert it to a discussion.
-
This is rather something to think about and to put into the backlog. I looked at the white paper, and the rate of fabricated answers (in the document titled “Contradictions”) is around 3%. That's quite a lot. There are tools that combat this, such as Patronus AI (so far I've only found paid ones, but I'm sure there are free ones as well). Maybe in future iterations it would be worth looking into the LLM's fabrication of answers?
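To make the suggestion concrete, here is a minimal sketch of the kind of groundedness check such tools perform: flag answer sentences that share little vocabulary with the source document. This is not Patronus AI's API, just a stdlib illustration; the function and threshold names are hypothetical, and a real detector would use an NLI model or an LLM judge instead of token overlap.

```python
import re


def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring very short words."""
    return {w for w in re.findall(r"[a-z0-9']+", text.lower()) if len(w) > 2}


def flag_unsupported_sentences(answer: str, source: str, threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose token overlap with the source falls
    below the threshold -- a rough proxy for fabricated content."""
    source_tokens = tokenize(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        tokens = tokenize(sentence)
        if not tokens:
            continue
        overlap = len(tokens & source_tokens) / len(tokens)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    # Toy example: the second answer sentence is unsupported by the source.
    source = "The white paper reports a fabrication rate of roughly 3% on the Contradictions document."
    answer = "The fabrication rate is roughly 3%. The model was also trained on 12 billion contracts."
    for sentence in flag_unsupported_sentences(answer, source):
        print("Possibly fabricated:", sentence)
```

A lexical check like this is cheap enough to run on every response, which is why hosted tools usually layer a stronger semantic check on top of it rather than replace it.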