[Feature Request]: Connect to the HuggingFace Hub to achieve a multimodal capability #2577
Comments
Re 1. We have some ongoing work in #1929 and #2414 on adding functions as a function store. This is similar to your idea of adding built-in Hugging Face tools. It's best to discuss this direction with @gagb, @afourney, and @LeoLjl and see if you can combine efforts.
Re 2. Sounds interesting! I think we can start with a notebook example to show how this works, and then we can decide whether to do just a notebook PR or a contrib agent.
Re 3. cc @WaelKarkoub @BeibinLi. We do have text-to-image capability already.
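For context on Re 3, here is a minimal sketch of attaching the existing image-generation capability to an assistant. The module and class names are recalled from autogen's 0.2-era contrib package and may differ by version; the llm_config values are placeholders.

```python
# Hedged sketch: attach autogen's DALL-E-backed image-generation capability.
# Module/class names assumed from autogen 0.2 contrib; configs are placeholders.
import autogen
from autogen.agentchat.contrib.capabilities.generate_images import (
    DalleImageGenerator,
    ImageGeneration,
)

llm_config = {"config_list": [{"model": "gpt-3.5-turbo", "api_key": "sk-..."}]}
dalle_config = {"config_list": [{"model": "dall-e-3", "api_key": "sk-..."}]}

assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)

# The capability intercepts the agent's message flow and generates an image
# when the conversation asks for one.
image_gen = ImageGeneration(image_generator=DalleImageGenerator(llm_config=dalle_config))
image_gen.add_to_agent(assistant)
```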
@whiskyboy we implemented […]. We also implemented an […]. Other multimodality features are currently being worked on; you can track their progress in this roadmap: #1975. Let me know if you have other ideas that we could add to the roadmap.
@ekzhu @WaelKarkoub
@whiskyboy just for awareness, I have a PR that handles text-to-speech and speech-to-text: #2098. I'm still experimenting with the architecture, but it mostly works.
@WaelKarkoub @ekzhu
Is your feature request related to a problem? Please describe.
The Hugging Face Hub provides an elegant Python client that lets users access over 100,000 Hugging Face models and run inference on them for a variety of multimodal tasks, such as image-to-text and text-to-speech. By connecting to this hub, a text-based LLM like gpt-3.5-turbo could also gain multimodal capabilities to handle images, video, audio, and documents in a cost-efficient way.
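As a rough illustration of the client in question, below is a minimal sketch using huggingface_hub's InferenceClient; the model ID is illustrative, and exact return types vary slightly across library versions.

```python
# Minimal sketch of huggingface_hub's InferenceClient for two multimodal tasks.
# Model IDs are illustrative; return types vary slightly across versions.
from huggingface_hub import InferenceClient

client = InferenceClient()  # optionally pass token="hf_..."

# Image-to-text: caption a local image file.
caption = client.image_to_text(
    "cat.png", model="Salesforce/blip-image-captioning-base"
)
print(caption)

# Text-to-speech: returns raw audio bytes that can be written to disk.
audio = client.text_to_speech("Hello from the Hub!")
with open("hello.flac", "wb") as f:
    f.write(audio)
```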
However, it still takes some additional coding to let an autogen agent interact with a huggingface-hub client, such as wrapping the client methods into functions, parsing the different input/output types, and managing model deployments. That's why I'm asking whether autogen could provide an out-of-the-box solution for this connection.
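To make the wrapping step concrete, here is a hedged sketch of registering one such function as an autogen tool by hand; the agent names, llm_config, and captioning model are assumptions.

```python
# Sketch of the manual wiring described above: wrap an InferenceClient method
# as a typed function and register it with an assistant/user-proxy pair.
# Agent names, llm_config, and the captioning model are placeholders.
from typing import Annotated

import autogen
from huggingface_hub import InferenceClient

client = InferenceClient()

def image_to_text(
    image_path: Annotated[str, "Path or URL of the image to caption"],
) -> str:
    """Caption an image with a Hugging Face Hub model."""
    return str(client.image_to_text(image_path, model="Salesforce/blip-image-captioning-base"))

assistant = autogen.AssistantAgent(
    "assistant", llm_config={"config_list": [{"model": "gpt-3.5-turbo"}]}
)
user_proxy = autogen.UserProxyAgent(
    "user_proxy", human_input_mode="NEVER", code_execution_config=False
)

# The assistant may suggest the call; the user proxy executes it.
autogen.register_function(
    image_to_text,
    caller=assistant,
    executor=user_proxy,
    description="Generate a text caption for an image.",
)
```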
Other similar works: JARVIS, Transformers Agent
Describe the solution you'd like
- An out-of-the-box huggingface_agent, like Transformers Agent. This agent would essentially consist of a pairing between an assistant and a user-proxy agent, both registered with the huggingface-hub toolkit. Users could seamlessly use this agent to leverage its multimodal capabilities, without having to manually register the toolkit for execution.
- Alternatively, a multimodal capability that processes incoming messages via the process_last_received_message method (see the sketch after this list). However, it may not be straightforward for some tasks such as text-to-image.
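As referenced in the second bullet, here is a hedged sketch of the capability route; the class, the hook-based wiring, and the naive image-path detection are assumptions for illustration, not existing autogen code.

```python
# Hypothetical capability that rewrites incoming messages via the agent's
# process_last_received_message hook, replacing image paths with captions
# generated through the Hugging Face Hub. Nothing here exists in autogen today.
import re

from autogen import ConversableAgent
from huggingface_hub import InferenceClient

class HuggingFaceCapability:
    """Replace image paths in received messages with Hub-generated captions."""

    def __init__(self) -> None:
        self._client = InferenceClient()

    def add_to_agent(self, agent: ConversableAgent) -> None:
        agent.register_hook("process_last_received_message", self._describe_images)

    def _describe_images(self, message: str) -> str:
        # Naive path detection for illustration; a real version would be sturdier.
        for path in re.findall(r"\S+\.(?:png|jpe?g)", message):
            caption = str(self._client.image_to_text(path))
            message = message.replace(path, f"{path} (image caption: {caption})")
        return message
```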
Additional context
I'd like to hear your suggestions, and I'm happy to contribute in different ways.