Provider responds even if the model is not supported by it #2538
Comments
Hey, just pick a good model and ditch any providers that ignore your choice. It's easy! Add the bad ones to …
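For illustration, a minimal sketch of that approach (assuming the g4f client API and RetryProvider; the provider names here are only examples, not a recommendation):

```python
# Pin the request to hand-picked providers so that providers which ignore
# the model choice never get called. Provider names are illustrative.
from g4f.client import Client
from g4f.Provider import RetryProvider, Liaobots, Blackbox

client = Client(provider=RetryProvider([Liaobots, Blackbox]))

response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```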
What if I want to use several models? Then I would have to check every model I need against every provider, which would take a lot of time.
Please document the issues with the inaccurate models, and I will correct the list.
@hlohaus I didn't quite understand what you mean by documenting the issue, but I checked all providers that work without authorization. The correct behavior for a provider is to return an error message such as: Model is not supported: o1-mini in: "ProviderName". The incorrect behavior is to respond to the user even though the selected model is not in its list of supported models. I also encountered other errors (although they may only affect me). Here is the code I used:
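(A sketch of that kind of sweep for illustration, not the original snippet; it assumes g4f's `__providers__` list and the `working`/`needs_auth` provider attributes:)

```python
# Try "o1-mini" on every working provider that needs no authorization and
# record which ones answer instead of raising a "model not supported" error.
from g4f.client import Client
from g4f.Provider import __providers__

model = "o1-mini"
for provider in __providers__:
    if not getattr(provider, "working", False) or getattr(provider, "needs_auth", False):
        continue
    try:
        client = Client(provider=provider)
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Say OK"}],
        )
        answer = str(response.choices[0].message.content)[:60]
        print(f"{provider.__name__}: responded -> {answer!r}")  # incorrect behavior
    except Exception as e:
        print(f"{provider.__name__}: error -> {e}")  # expected behavior
```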
I hope this helps to fix the problem.
@daniij, the approach is incorrect. Prioritize verification of the provider's model list: the correct procedure is to check whether "o1-mini" exists within the list of models the provider reports.
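(A sketch of that pre-check; get_models is referenced later in this thread, and its exact behavior may differ between providers and g4f versions:)

```python
# Check the provider's reported model list before sending a request.
from g4f.Provider import PollinationsAI

model = "o1-mini"
available = PollinationsAI.get_models()
if model in available:
    print(f"{model} is listed by PollinationsAI; the request can be sent.")
else:
    print(f"{model} is NOT listed by PollinationsAI: {available}")
```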
@hlohaus, I think we misunderstood each other a bit. I just wanted to show that even if a model is not supported by a provider, it still responds, which can be misleading. I believe this is a malfunction, not normal behavior. I used the o1-mini model because it is only available from Liaobots; instead, I could have written any random word, and logically all providers should have rejected it. I think most users do not specify a particular provider and do not check whether the selected model is available, and they use code like this:
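(An illustrative example of that usage pattern, not the original snippet: no provider pinned and no model-list check.)

```python
# Typical "just pick a model" usage: no provider is specified and the model
# list is never checked.
from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="o1",  # the user assumes the answer comes from o1
    messages=[{"role": "user", "content": "Which model are you?"}],
)
print(response.choices[0].message.content)
```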
Users who, for example, choose the o1 model will actually be interacting with llama, which will mislead them. As I understand it, you are suggesting some kind of check to see whether a specific provider has the model. Could you elaborate on the code for this check and how it can be used with RetryProvider? It might also be better to add such a check to the library code itself.
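(One possible shape for such a combination, as a sketch only; it assumes g4f's `__providers__` list, the `working`/`needs_auth` attributes, and `get_models()`. Some providers may not implement `get_models()` or may need network access to fetch their list.)

```python
# Keep only providers whose reported model list actually contains the wanted
# model, then let RetryProvider fall back between them.
from g4f.client import Client
from g4f.Provider import RetryProvider, __providers__

model = "o1-mini"
candidates = []
for provider in __providers__:
    if not getattr(provider, "working", False) or getattr(provider, "needs_auth", False):
        continue
    try:
        if model in provider.get_models():
            candidates.append(provider)
    except Exception:
        continue  # skip providers that cannot report their models

if not candidates:
    raise RuntimeError(f"No provider lists the model {model!r}")

client = Client(provider=RetryProvider(candidates))
response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```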
@daniij, the RetryProvider does not require this. All providers listed in models.py have been verified for compatibility with the selected model.
@hlohaus Sorry for the slow response. Here is an example of code where RetryProvider is used:
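(A stand-in sketch for that example, assuming the g4f client API; importlib.metadata is used here only to print the installed version:)

```python
# Ask for "o1-mini" through RetryProvider with a provider that does not list
# that model, printing the installed g4f version first.
from importlib.metadata import version

from g4f.client import Client
from g4f.Provider import RetryProvider, PollinationsAI

print("version of g4f:", version("g4f"))

client = Client(provider=RetryProvider([PollinationsAI]))
response = client.chat.completions.create(
    model="o1-mini",  # not in PollinationsAI's model list
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```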
Output: version of g4f: 0.4.1.2

As I understand it, you are telling me that if the model is not listed in get_models, then the provider does not support it and should not respond. But in my example the provider does respond. Can you explain why you do not consider it a problem that a provider can respond even when the selected model is not in its list of supported models? Am I doing something wrong? If so, please tell me how to do it correctly. Please respond clearly and in detail, preferably with examples.
Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.
I cannot change the behavior of the providers on the providers' side; I can only ensure that the model lists are kept up to date. I also believe we shouldn't block all unlisted models, as providers often support more models than are explicitly listed.
When selecting the o1 or o1-mini model, the provider PollinationsAI responds to the request, even though this model is not listed among those it supports.
Here is the list of supported models from the Providers and Models page:
And from the code:
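(For reference, a quick way to dump the list the provider reports programmatically; a sketch only, since get_models is referenced elsewhere in this thread and its output may vary between g4f versions:)

```python
# Print the model list that PollinationsAI reports.
from g4f.Provider import PollinationsAI

print(PollinationsAI.get_models())
```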
A similar situation occurs with the provider Pizzagpt, which lists only the gpt-4o-mini model, yet it also responds when other models are selected. I haven't checked all providers, but this might be happening with others too. BlackBox, for example, works correctly: it reports that the model is not supported and does not respond to the request. I have used both the synchronous and the asynchronous client. Using: g4f v0.4.0.6 and Python 3.13.1.
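(A minimal reproduction sketch for this with both clients, assuming g4f's Client and AsyncClient and the Pizzagpt provider class:)

```python
# Send a model that Pizzagpt does not list with both the synchronous and the
# asynchronous client; in both cases an answer comes back instead of a
# "model not supported" error.
import asyncio

from g4f.client import Client, AsyncClient
from g4f.Provider import Pizzagpt

messages = [{"role": "user", "content": "Which model are you?"}]

sync_client = Client(provider=Pizzagpt)
sync_response = sync_client.chat.completions.create(model="o1-mini", messages=messages)
print("sync:", sync_response.choices[0].message.content)

async def main() -> None:
    async_client = AsyncClient(provider=Pizzagpt)
    async_response = await async_client.chat.completions.create(model="o1-mini", messages=messages)
    print("async:", async_response.choices[0].message.content)

asyncio.run(main())
```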
Perhaps I am doing something wrong or misunderstanding something, and I am the only person with this issue? Please fix it.