The new complete tool allows the agent to call it quits after trying many search iterations. This is great, but if it finds no relevant evidence, the GenerateAnswer tool won't be called either, so the session has nothing populated for the answer. In this case we should auto-inject some kind of sentinel that can be displayed as output, rather than just an empty string.
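A minimal sketch of what the proposed sentinel could look like; the names below (`CANNOT_ANSWER_SENTINEL`, `finalize_answer`) are hypothetical illustrations, not existing paperqa API:

```python
# Hypothetical sentinel for sessions where the agent gave up without
# ever calling GenerateAnswer (names are illustrative, not paperqa API).
CANNOT_ANSWER_SENTINEL = (
    "I cannot answer this question: no relevant evidence was found."
)


def finalize_answer(answer: str) -> str:
    """Return the generated answer, or the sentinel if nothing was populated."""
    return answer if answer.strip() else CANNOT_ANSWER_SENTINEL


print(finalize_answer(""))  # prints the sentinel instead of an empty string
```

Displaying a sentinel like this also gives the evaluation function something explicit to grade, instead of an empty string.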
```python
import asyncio

from paperqa.litqa import LitQAEvaluation

qa_prompt2, eval_fn2 = LitQAEvaluation.from_question(
    ideal="circular dichroism",
    distractors=["cryo EM", "x-ray crystallography", "NMR"],
    question=(
        "What method was used to demonstrate that the enzyme PafA is stable after"
        " incubation with 4M urea for 14 days?"
    ),
)
print(asyncio.run(eval_fn2("")))  # correct
```
I am actually working on this right now
The issue is that, without an answer, the LLM begins to pull on its innate knowledge