Replies: 3 comments
-
I'm also curious about this. All the boilerplate needed to create a Guidance agent in Autogen is far less elegant than using Instructor.
-
This isn't something I'm planning right now, but I definitely think it makes sense.
-
See #741; feedback welcome.
-
Hi all! I'm currently working on constraining the output of Autogen agents using Guidance. I'd like to figure out how to use Instructor instead of Guidance.
Based on the sample Autogen/Guidance notebook, Guidance appears to hook directly into the next-token generation process and apply its constraints there, though only after the first-pass LLM response has been generated and evaluated (I'm not 100% sure what's going on at that stage).
In contrast, my understanding of Instructor is that it tries to parse the LLM response as JSON, feeds it into the Pydantic model, and then either succeeds or retries by feeding the original prompt plus any validation errors back to the LLM. In principle, that loop could also be wired into a register_reply override - would that be the suggested method? (I should note that while this seems like an Autogen/Guidance-specific question better asked in other fora, there's a lack of material on integrating almost any output-constraining library into Autogen, and it's also partly a question about the inner workings of Instructor. In any case, my apologies if I'm off-topic.)
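To make the retry loop concrete, here is a minimal stdlib-only sketch of the pattern described above: parse the reply as JSON, validate it, and on failure append the validation errors to the prompt and retry. StubLLM, validate, and constrained_reply are all hypothetical names invented for illustration; a real integration would use an Instructor-patched client with a Pydantic model inside an Autogen reply function rather than this hand-rolled validator.

```python
import json


class StubLLM:
    """Hypothetical stand-in for a chat model: fails once, then complies."""

    def __init__(self):
        self.calls = 0

    def complete(self, prompt):
        self.calls += 1
        if self.calls == 1:
            return '{"name": "Ada"}'            # missing required "age" field
        return '{"name": "Ada", "age": 36}'     # corrected on retry


def validate(payload):
    """Minimal schema check standing in for a Pydantic model."""
    errors = []
    if "name" not in payload:
        errors.append("field 'name' is required")
    if not isinstance(payload.get("age"), int):
        errors.append("field 'age' (int) is required")
    return errors


def constrained_reply(llm, prompt, max_retries=3):
    """Instructor-style loop: parse JSON, validate, retry with errors fed back."""
    for _ in range(max_retries):
        raw = llm.complete(prompt)
        try:
            payload = json.loads(raw)
        except json.JSONDecodeError as exc:
            # Not even valid JSON: tell the model what went wrong and retry.
            prompt += f"\nYour last reply was not valid JSON ({exc}). Try again."
            continue
        errors = validate(payload)
        if not errors:
            return payload
        # Valid JSON but failed validation: feed the errors back into the prompt.
        prompt += "\nValidation errors: " + "; ".join(errors) + ". Try again."
    raise RuntimeError("could not obtain a valid structured reply")
```

If this were hung off register_reply, the reply function would run this loop and return the validated payload (or its JSON serialization) as the agent's message, which is the "retry until the schema is satisfied" behavior Instructor provides out of the box.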