Is it possible to use other AI tools like Gemini, Claude or DeepSeek? #157

Open

nurlan114 opened this issue Jan 9, 2025 · 5 comments

@nurlan114

Thank you for developing this amazing plugin! It has been incredibly helpful in integrating AI capabilities with Zotero.

I noticed that the plugin currently supports ChatGPT, which is great. However, I was wondering if it would be possible to expand the plugin to support other AI tools, such as Gemini, Claude, or DeepSeek.

@apstrom

apstrom commented Jan 15, 2025

+1 to this request.

I use LocalAI to run my LLMs, which exposes an OpenAI-compatible (ChatGPT-style) API. I would like to point Aria to my LocalAI instance on my LAN, but the preferences pane does not allow me to point the plugin to the LAN address.

LocalAI's API documentation can be found on the LocalAI site. It is intended to be a drop-in replacement for the OpenAI API.

@lifan0127
Owner

Hi @nurlan114 and @apstrom, you can connect to third-party LLMs as long as they provide compatible APIs ("drop-in replacement"). Be aware that their behavior may still differ from OpenAI's.

Please see the Preferences section of the README for how to change the API endpoint: https://github.com/lifan0127/ai-research-assistant?tab=readme-ov-file#preferences
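For illustration, here is a minimal sketch of what "drop-in replacement" means in practice: the official openai JavaScript client (or any OpenAI-compatible client) can be pointed at a different base URL, such as a LocalAI instance on the LAN. The URL, API key, and model name below are placeholders, not Aria's actual configuration.

import OpenAI from "openai";

// Point the standard OpenAI client at an OpenAI-compatible server
// (e.g. a LocalAI instance on the LAN). Address and model are hypothetical.
const client = new OpenAI({
  baseURL: "http://192.168.1.50:8080/v1", // LocalAI endpoint (placeholder)
  apiKey: "not-needed-for-local",         // LocalAI typically ignores the key
});

const response = await client.chat.completions.create({
  model: "llama-3.1-8b-instruct",         // whatever model LocalAI serves
  messages: [{ role: "user", content: "Summarize this abstract..." }],
});

console.log(response.choices[0].message.content);

Whether this works end to end still depends on the third-party server implementing the same endpoints and response shapes the plugin relies on.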

@apstrom

apstrom commented Jan 15, 2025

Thanks @lifan0127. I appreciate your work on this and the timely response. The only issue with the preferences pane is that it does not list the models available in LocalAI (which are not necessarily GPT-4).

A model-selection function would be needed to fetch the model list from LocalAI into the plugin. Here is an example of such a function from AnythingLLM, which integrates LocalAI and ChatGPT:

import { useEffect, useState } from "react";
import System from "@/models/system"; // AnythingLLM's frontend API wrapper

function LocalAIModelSelection({ settings, apiKey = null, basePath = null }) {
  const [customModels, setCustomModels] = useState([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    // Ask the backend for the models exposed by the LocalAI instance
    // whenever the base path or API key changes.
    async function findCustomModels() {
      if (!basePath || !basePath.includes("/v1")) {
        setCustomModels([]);
        setLoading(false);
        return;
      }
      setLoading(true);
      const { models } = await System.customModels(
        "localai",
        typeof apiKey === "boolean" ? null : apiKey,
        basePath
      );
      setCustomModels(models || []);
      setLoading(false);
    }
    findCustomModels();
  }, [basePath, apiKey]);

  // While the list is loading (or empty), render a disabled placeholder select.
  if (loading || customModels.length === 0) {
    return (
      <div className="flex flex-col w-60">
        <label className="text-white text-sm font-semibold block mb-2">
          Embedding Model Name
        </label>
        <select
          name="EmbeddingModelPref"
          disabled={true}
          className="border-none bg-theme-settings-input-bg border-gray-500 text-white text-sm rounded-lg block w-full p-2.5"
        >
          <option disabled={true} selected={true}>
            {basePath?.includes("/v1")
              ? "-- loading available models --"
              : "-- waiting for URL --"}
          </option>
        </select>
      </div>
    );
  }

  // Once models are available, render them as selectable options.
  return (
    <div className="flex flex-col w-60">
      <label className="text-white text-sm font-semibold block mb-2">
        Embedding Model Name
      </label>
      <select
        name="EmbeddingModelPref"
        required={true}
        className="border-none bg-theme-settings-input-bg border-gray-500 text-white text-sm rounded-lg block w-full p-2.5"
      >
        {customModels.length > 0 && (
          <optgroup label="Your loaded models">
            {customModels.map((model) => {
              return (
                <option
                  key={model.id}
                  value={model.id}
                  selected={settings?.EmbeddingModelPref === model.id}
                >
                  {model.id}
                </option>
              );
            })}
          </optgroup>
        )}
      </select>
    </div>
  );
}

If a similar function could be called from the preferences pane, users could select their own custom models served by LocalAI, for example by querying the standard /v1/models endpoint as sketched below. This particular function targets embedding models, which are a separate feature in AnythingLLM.
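For reference, here is a minimal sketch (not part of Aria) of how a model list could be fetched directly from any OpenAI-compatible server, including LocalAI, via the standard GET /v1/models endpoint; the base URL in the usage example is a placeholder:

// List the models an OpenAI-compatible server (e.g. LocalAI) exposes.
// baseUrl is assumed to end with /v1; the address below is hypothetical.
async function listAvailableModels(baseUrl, apiKey = null) {
  const response = await fetch(`${baseUrl}/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!response.ok) {
    throw new Error(`Failed to list models: ${response.status}`);
  }
  const { data } = await response.json(); // OpenAI format: { data: [{ id, ... }] }
  return (data || []).map((model) => model.id);
}

// Example usage:
// const models = await listAvailableModels("http://192.168.1.50:8080/v1");
// -> ["llama-3.1-8b-instruct", "all-minilm-l6-v2", ...]

A dropdown in the preferences pane could then be populated from the returned IDs instead of relying on a free-text model field.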

I've proposed some modifications to the plugin in #158 in an effort to implement the above. I have little experience in this area, so the proposed changes may not map cleanly onto your code.

@lifan0127
Owner

Hi @apstrom, thanks for the detailed explanation. If you only need to specify a different model name, it is probably easier to update the model name in the Zotero config editor. If you need other customizations, you may need to modify the code.

[Screenshot: the model-name preference in the Zotero config editor]

I should also mention that Aria has switched to the OpenAI Assistants APIs in the latest development version, to take advantage of the built-in vector store, message history (memory), and other features. If you plan to use third-party LLMs, please ensure they provide compatible capabilities.
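For context, here is a minimal sketch of the Assistants API flow using the official openai Node SDK. It is only an illustration of the capabilities a third-party backend would need to replicate, not Aria's actual code; the model name and instructions are placeholders.

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Create an assistant with file search, backed by a server-side vector store.
const assistant = await client.beta.assistants.create({
  name: "Research Assistant (demo)",
  instructions: "Answer questions about the attached papers.",
  model: "gpt-4o-mini",
  tools: [{ type: "file_search" }],
});

// Threads hold message history (memory) on the server side.
const thread = await client.beta.threads.create();
await client.beta.threads.messages.create(thread.id, {
  role: "user",
  content: "What is the main finding of the uploaded paper?",
});

// A run executes the assistant against the thread.
const run = await client.beta.threads.runs.create(thread.id, {
  assistant_id: assistant.id,
});

A server that only implements the Chat Completions endpoint would not provide these thread, run, or vector-store primitives, which is why compatibility needs to be checked before pointing Aria at a third-party backend.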

@menelic

menelic commented Jan 26, 2025

It would be great if the ability to use Gemini, DeepSeek, Mistral, or local models could become part of the standard UI, so that non-technical users can make this choice. There may be individual or institutional preferences, e.g. for Gemini, or data-confidentiality rules that require using EU-hosted Mistral or local, self-hosted models.

I understand that the OpenAI API is a de facto standard, but it would be important to keep supporting that standard API, so that other endpoints conforming to the OpenAI format can be plugged in easily.
