
Keep being asked for OpenAI key when using Gemini #789

Open
Exemocaro opened this issue Jan 5, 2025 · 7 comments
Labels
bug Something isn't working

Comments

@Exemocaro

Hi, I'm trying to use the Gemini API with a local embedding model, and I keep getting the error below. Is there something wrong with my code? I searched for similar errors but can't find what's wrong.

My code:

import os
from dotenv import load_dotenv
from paperqa import Settings, ask

load_dotenv()

GEMINI_API_KEY = os.getenv('GEMINI_API_KEY')

model = 'gemini/gemini-1.5-flash'

llm_config = {
    "model_list": [{
        "model_name": model,
        "litellm_params": {
            "model": model,
            "api_key": GEMINI_API_KEY,
        },
    }],
    # requests-per-window limit enforced by paper-qa's rate limiter
    "rate_limit": {
        model: "10 per 1 minute",
    },
}

answer_response = ask(
    "What is Paper-QA?",
    settings=Settings(
        llm=model,
        llm_config=llm_config,
        summary_llm=model,
        summary_llm_config=llm_config,
        embedding="st-multi-qa-MiniLM-L6-cos-v1"
    ),
)

The error:

Traceback (most recent call last):
  File "/home/myuser/myproject/src/test.py", line 24, in <module>
    answer_response = ask(
                      ^^^^
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/paperqa/agents/__init__.py", line 103, in ask
    return get_loop().run_until_complete(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/paperqa/agents/main.py", line 77, in agent_query
    response = await run_agent(docs, query, agent_type, **runner_kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/paperqa/agents/main.py", line 129, in run_agent
    elif tool_selector_or_none := query.settings.make_aviary_tool_selector(agent_type):
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/paperqa/settings.py", line 836, in make_aviary_tool_selector
    acompletion=self.get_agent_llm().router.acompletion,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/paperqa/llms.py", line 674, in router
    self._router = litellm.Router(
                   ^^^^^^^^^^^^^^^
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/litellm/router.py", line 362, in __init__
    self.set_model_list(model_list)
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/litellm/router.py", line 3980, in set_model_list
    self._create_deployment(
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/litellm/router.py", line 3900, in _create_deployment
    deployment = self._add_deployment(deployment=deployment)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/litellm/router.py", line 4061, in _add_deployment
    InitalizeOpenAISDKClient.set_client(
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/litellm/router_utils/client_initalization_utils.py", line 472, in set_client
    _client = openai.AsyncOpenAI(  # type: ignore
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/openai/_client.py", line 337, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

dosubot bot commented Jan 5, 2025

The error you're encountering indicates that the OpenAI API key is required but not set. This is likely because the litellm library, which paperqa uses, is attempting to initialize an OpenAI client, and it expects an OPENAI_API_KEY to be set.

Here are a few steps you can take to resolve this issue:

  1. Check Environment Variables: Ensure that the OPENAI_API_KEY environment variable is set in your environment. You can do this by adding it to your .env file or setting it directly in your environment (see the sketch below).

  2. Modify Code for OpenAI Key: If you intend to use the Gemini API and not OpenAI, ensure that your configuration does not inadvertently require OpenAI. However, if OpenAI is needed for some part of the process, make sure to provide the necessary API key.

  3. Review litellm Configuration: Double-check the litellm configuration to ensure it doesn't default to using OpenAI. If your intention is to use Gemini, ensure that all configurations are correctly pointing to Gemini and not OpenAI.

If you are certain that OpenAI should not be involved, you might need to review the litellm library's documentation or configuration to ensure it is set up correctly for your use case.
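
As a quick sanity check (a minimal sketch, assuming the keys live in a .env file next to the script), you can print which keys dotenv actually loaded before constructing Settings:

import os
from dotenv import load_dotenv

load_dotenv()

# Report presence without printing the secret values themselves
print("GEMINI_API_KEY set:", bool(os.getenv("GEMINI_API_KEY")))
print("OPENAI_API_KEY set:", bool(os.getenv("OPENAI_API_KEY")))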


@dosubot dosubot bot added the bug Something isn't working label Jan 5, 2025
@jamesbraza
Collaborator

Hi @Exemocaro, without reading deeply here, it's unclear whether this is a PaperQA issue (us not instantiating the LiteLLM Router correctly), a LiteLLM Router issue (it incorrectly calling InitalizeOpenAISDKClient.set_client), or a mis-specification of parameters to PaperQA.

Are you able to make a more minimal repro in one script? In other words, strip out as much of PaperQA as possible to root-cause where the issue actually is.

@Exemocaro
Author

@jamesbraza I don't think I can make it much smaller than this. I still get the same error, and if I remove the embedding it opts for the default OpenAI one, so I get the error either way. I just picked this tool up, so sorry if I'm being a complete dunce. If there's Gemini example code available somewhere, I'd be glad to try it out! By the way, this is the same error I get when running the code with a local model, as explained in #787.

import os
from dotenv import load_dotenv
from paperqa import Settings, ask

load_dotenv()

GEMINI_API_KEY = os.getenv('GEMINI_API_KEY')

model = 'gemini/gemini-1.5-flash'

llm_config = {
    "model_list": [{
        "model_name": model,
        "litellm_params": {
            "model": model,
            "api_key": GEMINI_API_KEY,
        },
    }],
}

answer_response = ask(
    "What is Paper-QA?",
    settings=Settings(
        llm=model,
        llm_config=llm_config,
        embedding="st-multi-qa-MiniLM-L6-cos-v1"
    ),
)

@Exemocaro
Author

@jamesbraza Sorry for the ping, I'm not sure if you saw this.

@jamesbraza
Collaborator

Hello @Exemocaro, I see your issue here. Notice the get_agent_llm usage:

  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/paperqa/settings.py", line 836, in make_aviary_tool_selector
    acompletion=self.get_agent_llm().router.acompletion,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^

What happens is we currently have several places where LLMs can be configured:

  • LLM (Settings.llm): for general queries with paper-qa
  • Summary LLM (Settings.summary_llm): to create contextual summaries
  • Agent LLM (Settings.agent.agent_llm): to make tool selections

You will need to pass your llm_config to all of these places in order for Gemini to be used everywhere.
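
For example, here is a minimal sketch of wiring the same config into all three spots (it assumes AgentSettings can be imported from paperqa.settings and exposes agent_llm / agent_llm_config; double-check against your installed version):

import os
from dotenv import load_dotenv
from paperqa import Settings, ask
from paperqa.settings import AgentSettings  # assumed import path

load_dotenv()

model = "gemini/gemini-1.5-flash"
llm_config = {
    "model_list": [{
        "model_name": model,
        "litellm_params": {
            "model": model,
            "api_key": os.getenv("GEMINI_API_KEY"),
        },
    }],
}

answer_response = ask(
    "What is Paper-QA?",
    settings=Settings(
        llm=model,  # general queries
        llm_config=llm_config,
        summary_llm=model,  # contextual summaries
        summary_llm_config=llm_config,
        # tool selection; note the follow-up comment below: Gemini may
        # not work as the agent LLM yet (#796)
        agent=AgentSettings(
            agent_llm=model,
            agent_llm_config=llm_config,
        ),
        embedding="st-multi-qa-MiniLM-L6-cos-v1",
    ),
)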

@jamesbraza
Collaborator

Also @Exemocaro you can't actually use gemini/gemini-1.5-flash for the agent LLM at this point in time, see #796.

@Exemocaro
Author

Exemocaro commented Jan 8, 2025

Thank you! Other Gemini versions such as gemini-1.5-flash and gemini-2.0-flash-exp don't appear to work either, by the way; the error seems to be the same one about the tool arguments. Also, if I leave the rate_limit in, I get the following error:

  File "/home/myuser/myproject/.venv/lib/python3.12/site-packages/paperqa/rate_limiter.py", line 371, in try_acquire
    raise TimeoutError(
TimeoutError: Timeout (2.1299999999999986 secs): rate limit for key: ('client', 'gemini/gemini-2.0-flash-exp')
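
For completeness, a sketch of what a looser limit might look like, assuming the same rate-limit string format as in the original config; the budget actually needed depends on how many calls a single ask() makes:

import os
from dotenv import load_dotenv

load_dotenv()

model = "gemini/gemini-2.0-flash-exp"
llm_config = {
    "model_list": [{
        "model_name": model,
        "litellm_params": {
            "model": model,
            "api_key": os.getenv("GEMINI_API_KEY"),
        },
    }],
    "rate_limit": {
        # hypothetical looser budget than "10 per 1 minute"
        model: "30 per 1 minute",
    },
}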
