
Provider responds even if the model is not supported by it #2538

Open
daniij opened this issue Jan 4, 2025 · 12 comments
Assignees
Labels
bug Something isn't working

Comments

daniij commented Jan 4, 2025

When selecting the o1 or o1-mini model, the provider PollinationsAI responds to the request, even though this model is not listed among those it supports.

Here is the list of supported models from the Providers and Models page:

gpt-4o, mistral-large, mistral-nemo, llama-3.1-70b, gpt-4, qwen-2.5-coder-32b, claude-3.5-sonnet, command-r, evil, p1, turbo, unity, midijourney, rtist

And from the code:

model_aliases = {
    "gpt-4o": "openai",
    "mistral-nemo": "mistral",
    "llama-3.1-70b": "llama",
    "gpt-4": "searchgpt",
    "gpt-4": "claude",  # duplicate key: this entry shadows the "searchgpt" mapping above
    "qwen-2.5-coder-32b": "qwen-coder",
    "claude-3.5-sonnet": "sur",
}

A similar situation occurs with the provider Pizzagpt, which lists only the gpt4o-mini model, yet it responds when other models are selected as well. I haven't checked all providers, but this might be happening with others too. By contrast, BlackBox works correctly: it reports that the model is not supported and does not respond to the request. I have used both the synchronous and asynchronous clients. Using: g4f v0.4.0.6 and Python 3.13.1.

Perhaps I am doing something wrong or misunderstanding something, and I am the only one with this issue? Please fix it.

@daniij daniij added the bug Something isn't working label Jan 4, 2025
@daniij daniij changed the title Api responds even if the model is not supported by it Provider responds even if the model is not supported by it Jan 5, 2025

daniij commented Jan 6, 2025

@xtekky @hlohaus


hlohaus commented Jan 6, 2025

Hey,

Just pick a good model, and ditch any providers that ignore your choice. It's easy! Add the bad ones to { "model": "", "ignored": ["BadProvider"] }


daniij commented Jan 7, 2025

What if I want to use several models? Then I would have to check every model I need against every provider, which would take a lot of time.


hlohaus commented Jan 7, 2025

@daniij Please document the issues with the inaccurate models, and I will correct the list.


daniij commented Jan 8, 2025

@hlohaus I didn't quite understand what you mean by documenting the issue, but I checked all providers that work without authorization. The correct behavior is for a provider to raise an error: Model is not supported: o1-mini in: "ProviderName"

The incorrect behavior is to respond to the user even when the selected model is not in the list of supported models. I also encountered other errors (although they may be specific to my setup). Here is the code I used:

import asyncio

from g4f.client import AsyncClient
from g4f.Provider import Airforce

async def main():
    client = AsyncClient(provider=Airforce)

    try:
        response = await client.chat.completions.create(
            model="o1-mini",
            messages=[
                {
                    "role": "user",
                    "content": "What model are you using?"
                }
            ]
        )
        print(response.choices[0].message.content)
    except Exception as e:
        print(f"Error: {e}")

asyncio.run(main())
  • Airforce - correct
  • AmigoChat - correct
  • Blackbox - correct
  • BlackboxCreateAgent - correct
  • ChatGpt - correct (empty response with an unsupported model)
  • ChatGptEs - (with any model) Error: list index out of range
  • ClaudeSon - (with any model) Error: Response 405: HTML content
  • Cloudflare - correct
  • Copilot - (with any model) Error: Invalid response: {'event': 'challenge', 'id': '1'}
  • DarkAI - correct
  • DDG - correct (Error: 404, message='Not Found', url='https://duckduckgo.com/duckchat/v1/chat')
  • DeepInfraChat - correct
  • FreeGpt - incorrect (Response: 我是一个大型语言模型,由 Google 训练。 "I am a large language model, trained by Google.") Only mistral-7b is listed as supported
  • Free2GPT - incorrect (Response: 我是一个大型语言模型,由Google训练。 "I am a large language model, trained by Google.") Only gemini-1.5-pro is listed as supported
  • GizAI - incorrect (Response: I'm a large language model, trained by Google.) Only gemini-1.5-flash is listed as supported
  • Liaobots - (with any model) Error: Response 402: Error
  • Mhystical - (with any model) Error: Response 520: Unknown error (Cloudflare)
  • PerplexityLabs - (with any model) Error: Unknown error
  • Pi - incorrect (Response: I'm built on top of Inflection-2.5, which is a proprietary Large Language Model developed by Inflection AI. I can't provide the technical details, but I can assure you that it's pretty powerful. 😊)
  • Pizzagpt - incorrect (Response: I am based on OpenAI's GPT-3 model, specifically engineered for conversational tasks. If you have any questions or need assistance, feel free to ask!)
  • PollinationsAI - incorrect (Response: I am based on OpenAI's GPT-3 model. If there have been any updates or new versions released after October 2023, I wouldn't have information about them. How can I assist you today?) (Almost all supported models answer like that)
  • Prodia - incorrect (responds with an image; I think if it doesn't support chat models, it should not answer)
  • ReplicateHome - correct
  • RubiksAI - Error: Response 403: Cloudflare detected
  • TeachAnything - incorrect
  • You - correct (Empty response with supported models)

I hope this helps to fix the problem.


hlohaus commented Jan 10, 2025

@daniij, the approach is incorrect. Prioritize verification of the provider model list. The correct procedure is to check if "o1-mini" exists within the g4f.Provider.OpenaiAccount.get_models() list; if it does, assign g4f.Provider.OpenaiAccount to the provider variable.
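The check described above can be sketched as follows. `StubProvider` is a stand-in so the example runs without g4f installed; with the real library you would iterate over actual provider classes such as `g4f.Provider.OpenaiAccount`, whose `get_models()` method this mirrors (the names and model lists here are illustrative, not real data):

```python
# Sketch of the suggested check: only use a provider if the desired
# model appears in its get_models() list. StubProvider stands in for
# a real g4f provider class such as g4f.Provider.OpenaiAccount.
class StubProvider:
    def __init__(self, name, models):
        self.name = name
        self._models = models

    def get_models(self):
        # Real g4f providers expose their supported models this way.
        return self._models

def pick_provider(model, providers):
    """Return the first provider that lists the requested model, else None."""
    for provider in providers:
        if model in provider.get_models():
            return provider
    return None

providers = [
    StubProvider("FreeGpt", ["gemini-1.5-pro"]),
    StubProvider("OpenaiAccount", ["o1-mini", "gpt-4o"]),
]

chosen = pick_provider("o1-mini", providers)
print(chosen.name if chosen else "no provider supports this model")  # → OpenaiAccount
```

With real g4f classes, the returned provider would then be passed as the `provider` argument when constructing the client.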


daniij commented Jan 12, 2025

@hlohaus, I think we misunderstood each other a bit. I just wanted to show that even if a model is not supported by a provider, it still responds, which can be misleading. I believe this is a malfunction, not normal behavior. I used the o1-mini model because it is only available from Liaobots. I could have written any random word instead, and logically all providers should have rejected it. I think most users do not specify a particular provider and do not check whether the selected model is available, and use code like this:

from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    web_search=False
)
print(response.choices[0].message.content)

Users who, for example, choose the o1 model will be interacting with llama, which will mislead them.

As I understand it, you are suggesting some kind of check to see if a specific provider has the model. Could you elaborate on the code for this check and how it can be used with RetryProvider? It might also be better to add such a check to the library code.
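One way such a check could be combined with RetryProvider is to filter the candidate list up front, so only providers that actually list the model are retried at all. This is a sketch with stand-in classes (the `get_models()` method mirrors the g4f provider API; the provider names and model lists are made up), not documented library behavior:

```python
# Sketch: build the RetryProvider candidate list only from providers
# whose get_models() list contains the requested model. FakeProvider
# stands in for real g4f provider classes like Blackbox or FreeGpt.
class FakeProvider:
    def __init__(self, name, models):
        self.name = name
        self._models = models

    def get_models(self):
        return self._models

def supporting_providers(model, candidates):
    """Filter candidates down to those that claim to support the model."""
    return [p for p in candidates if model in p.get_models()]

candidates = [
    FakeProvider("ProviderA", ["gpt-4o-mini", "gpt-4o"]),
    FakeProvider("ProviderB", ["gemini-1.5-pro"]),
]

usable = supporting_providers("gpt-4o-mini", candidates)
print([p.name for p in usable])  # → ['ProviderA']
```

With real g4f provider classes, the filtered list could then be passed as `RetryProvider(usable, shuffle=False)`, so a provider that does not list the model is never asked at all.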


hlohaus commented Jan 12, 2025

@daniij, the RetryProvider does not require this. All providers listed in models.py have been verified for compatibility with the selected model.


daniij commented Jan 16, 2025

@hlohaus Sorry for the slow response. Here is an example of code where RetryProvider is used.

import asyncio

import g4f
from g4f.client import AsyncClient
from g4f.Provider import Blackbox, FreeGpt, RetryProvider

async def main():
    print("Available models of FreeGpt", g4f.Provider.FreeGpt.get_models())

    retry_client = AsyncClient(provider=RetryProvider([Blackbox, FreeGpt], shuffle=False))

    response = await retry_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": "What model are you using?"
            }
        ],
        web_search=False
    )

    print(response.choices[0].message.content)

asyncio.run(main())

Output:
Available models of FreeGpt ['gemini-1.5-pro']
Using RetryProvider provider and gpt-4o-mini model
Using Blackbox provider
Blackbox: ModelNotSupportedError: Model is not supported: gpt-4o-mini in: Blackbox
Using FreeGpt provider
我是一个大型语言模型,由 Google 训练。 ("I am a large language model, trained by Google.")

version of g4f: 0.4.1.2

As I understand it, you are telling me that if the model is not specified in get_models, then the provider does not support it and should not respond. But in my example, the provider does respond. Can you explain why you do not consider it a problem that the provider can respond even if the selected model is not present in the list of supported models? Am I doing something wrong? If so, please tell me how to do it correctly. Please respond to me clearly and in detail, preferably with examples.


Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.

@github-actions github-actions bot added the stale label Jan 24, 2025

daniij commented Jan 24, 2025

@hlohaus

@github-actions github-actions bot removed the stale label Jan 25, 2025

hlohaus commented Jan 25, 2025

@daniij I cannot change the behavior on the providers' side; I can only ensure that the model lists are kept up to date. I believe we shouldn't block all unlisted models, as providers often support more models than are explicitly listed.
