refactor(agent/config): Modularize Config and revive Azure support (#6497)

* feat: Refactor config loading and initialization to be modular and decentralized

   - Refactored the `ConfigBuilder` class to support modular loading and initialization of the configuration from environment variables.
   - Implemented recursive loading and initialization of nested config objects.
   - Introduced the `SystemConfiguration` base class to provide common functionality for all system settings.
   - Added the `from_env` attribute to the `UserConfigurable` decorator to provide environment variable mappings.
   - Updated the `Config` class and its related classes to inherit from `SystemConfiguration` and use the `UserConfigurable` decorator.
   - Updated `LoggingConfig` and `TTSConfig` to use the `UserConfigurable` decorator for their fields.
   - Modified the implementation of the `build_config_from_env` method in `ConfigBuilder` to utilize the new modular and recursive loading and initialization logic.
   - Updated applicable test cases to reflect the changes in the config loading and initialization logic.

   This refactor improves the flexibility and maintainability of the configuration loading process by introducing modular and recursive behavior, allowing for easier extension and customization through environment variables.
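The recursive loading described above can be sketched roughly as follows. This is a simplified, hypothetical stand-in using dataclasses — the real implementation builds on pydantic models, and names like `AppConfig` and `WORKSPACE_PATH` are illustrative only:

```python
import os
from dataclasses import dataclass, field, fields
from typing import Any, Optional


def UserConfigurable(default: Any, *, from_env: Optional[str] = None):
    """Mark a field as user-configurable, optionally mapped to an env var
    (simplified stand-in for the real decorator)."""
    return field(default=default, metadata={"from_env": from_env})


@dataclass
class SystemConfiguration:
    """Base class: from_env() initializes the config from environment
    variables, recursing into nested SystemConfiguration sub-configs."""

    @classmethod
    def from_env(cls):
        values = {}
        for f in fields(cls):
            # Nested sub-config: recurse
            if isinstance(f.type, type) and issubclass(f.type, SystemConfiguration):
                values[f.name] = f.type.from_env()
                continue
            # Leaf field: read its mapped environment variable, if any
            env_var = f.metadata.get("from_env")
            if env_var is not None and (env_value := os.getenv(env_var)) is not None:
                values[f.name] = env_value
        return cls(**values)


@dataclass
class LoggingConfig(SystemConfiguration):
    level: str = UserConfigurable("INFO", from_env="LOG_LEVEL")


@dataclass
class AppConfig(SystemConfiguration):
    workspace: str = UserConfigurable("./data", from_env="WORKSPACE_PATH")
    logging: LoggingConfig = field(default_factory=LoggingConfig)
```

With `LOG_LEVEL=DEBUG` set, `AppConfig.from_env()` yields a config whose nested `logging.level` is `"DEBUG"` while unset fields keep their declared defaults — each sub-config loads itself, which is what makes the process decentralized.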

* refactor: Move OpenAI credentials into `OpenAICredentials` sub-config

   - Move OpenAI API key and other OpenAI credentials from the global config to a new sub-config called OpenAICredentials.
   - Update the necessary code to use the new OpenAICredentials sub-config instead of the global config when accessing OpenAI credentials.
   - (Hopefully) unbreak Azure support.
      - Update azure.yaml.template.
   - Enable validation of assignment operations on SystemConfiguration and SystemSettings objects.
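A minimal sketch of what grouping credentials into a sub-config looks like. This is illustrative only: the real `OpenAICredentials` is a pydantic model with `SecretStr` fields, and its `get_api_access_kwargs` also handles Azure deployment mapping, which is omitted here:

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class OpenAICredentials:
    """Groups the OpenAI auth settings that previously lived as flat
    attributes on the global Config (simplified sketch)."""

    api_key: str
    api_base: Optional[str] = None
    api_type: Optional[str] = None      # e.g. "azure" when USE_AZURE is set
    api_version: Optional[str] = None   # required by the Azure API

    @classmethod
    def from_env(cls) -> "OpenAICredentials":
        return cls(
            api_key=os.environ["OPENAI_API_KEY"],
            api_base=os.getenv("OPENAI_API_BASE_URL"),
            api_type=os.getenv("OPENAI_API_TYPE"),
            api_version=os.getenv("OPENAI_API_VERSION"),
        )

    def get_api_access_kwargs(self) -> dict:
        """Return only the kwargs that are actually set, ready to splat
        into an OpenAI client call."""
        kwargs = {
            "api_key": self.api_key,
            "api_base": self.api_base,
            "api_type": self.api_type,
            "api_version": self.api_version,
        }
        return {k: v for k, v in kwargs.items() if v is not None}
```

Callers then take an `OpenAICredentials` instead of the whole `Config`, which narrows their dependencies and keeps Azure-specific settings in one place.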

* feat: Update AutoGPT configuration options and setup instructions

   - Added new configuration options for logging and OpenAI usage to .env.template
   - Removed deprecated configuration options in config/config.py
   - Updated setup instructions in Docker and general setup documentation to include information on using Azure's OpenAI services

* fix: Fix image generation with Dall-E

   - Fix image generation with the Dall-E API: the code now correctly retrieves the API key from the agent's legacy configuration (`openai_credentials`) instead of the removed flat `openai_api_key` attribute.
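With the credentials refactor, the API key is stored in pydantic's `SecretStr`, which masks its value unless explicitly unwrapped — hence the explicit `.get_secret_value()` call. A minimal stand-in (not pydantic's actual implementation) illustrates the behavior:

```python
class SecretStr:
    """Minimal stand-in for pydantic's SecretStr: the value is hidden in
    repr()/str() so it can't leak into logs, and must be unwrapped
    explicitly with get_secret_value()."""

    def __init__(self, value: str):
        self._value = value

    def __repr__(self) -> str:
        return "SecretStr('**********')"

    def __str__(self) -> str:
        return repr(self)

    def get_secret_value(self) -> str:
        return self._value
```

Passing the wrapper itself as `api_key=` would hand the API a masked placeholder rather than the real key, which is why the unwrap is required at the call site.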

* refactor(agent/core): Refactor `autogpt.core.configuration.schema` and update docstrings

   - Refactored the `schema.py` file in the `autogpt.core.configuration` module.
   - Added a docstring to `SystemConfiguration.from_env()`.
   - Updated docstrings for the functions `_get_user_config_values`, `_get_non_default_user_config_values`, `_recursive_init_model`, `_recurse_user_config_fields`, and `_recurse_user_config_values`.
Pwuts authored Dec 5, 2023
1 parent 03eb921 commit 7b05245
Showing 17 changed files with 669 additions and 404 deletions.
69 changes: 31 additions & 38 deletions autogpts/autogpt/.env.template
@@ -1,5 +1,3 @@
# For further descriptions of these settings see docs/configuration/options.md or go to docs.agpt.co

################################################################################
### AutoGPT - GENERAL SETTINGS
################################################################################
@@ -25,14 +23,6 @@ OPENAI_API_KEY=your-openai-api-key
## PROMPT_SETTINGS_FILE - Specifies which Prompt Settings file to use, relative to the AutoGPT root directory. (defaults to prompt_settings.yaml)
# PROMPT_SETTINGS_FILE=prompt_settings.yaml

## OPENAI_API_BASE_URL - Custom url for the OpenAI API, useful for connecting to custom backends. No effect if USE_AZURE is true, leave blank to keep the default url
# the following is an example:
# OPENAI_API_BASE_URL=http://localhost:443/v1

## OPENAI_FUNCTIONS - Enables OpenAI functions: https://platform.openai.com/docs/guides/gpt/function-calling
## WARNING: this feature is only supported by OpenAI's newest models. Until these models become the default on 27 June, add a '-0613' suffix to the model of your choosing.
# OPENAI_FUNCTIONS=False

## AUTHORISE COMMAND KEY - Key to authorise commands
# AUTHORISE_COMMAND_KEY=y

@@ -52,6 +42,17 @@ OPENAI_API_KEY=your-openai-api-key
## TEMPERATURE - Sets temperature in OpenAI (Default: 0)
# TEMPERATURE=0

## OPENAI_API_BASE_URL - Custom url for the OpenAI API, useful for connecting to custom backends. No effect if USE_AZURE is true, leave blank to keep the default url
# the following is an example:
# OPENAI_API_BASE_URL=http://localhost:443/v1

# OPENAI_API_TYPE=
# OPENAI_API_VERSION=

## OPENAI_FUNCTIONS - Enables OpenAI functions: https://platform.openai.com/docs/guides/gpt/function-calling
## Note: this feature is only supported by OpenAI's newer models.
# OPENAI_FUNCTIONS=False

## OPENAI_ORGANIZATION - Your OpenAI Organization key (Default: None)
# OPENAI_ORGANIZATION=

@@ -90,32 +91,6 @@ OPENAI_API_KEY=your-openai-api-key
## SHELL_ALLOWLIST - List of shell commands that ARE allowed to be executed by AutoGPT (Default: None)
# SHELL_ALLOWLIST=

################################################################################
### MEMORY
################################################################################

### General

## MEMORY_BACKEND - Memory backend type
# MEMORY_BACKEND=json_file

## MEMORY_INDEX - Value used in the Memory backend for scoping, naming, or indexing (Default: auto-gpt)
# MEMORY_INDEX=auto-gpt

### Redis

## REDIS_HOST - Redis host (Default: localhost, use "redis" for docker-compose)
# REDIS_HOST=localhost

## REDIS_PORT - Redis port (Default: 6379)
# REDIS_PORT=6379

## REDIS_PASSWORD - Redis password (Default: "")
# REDIS_PASSWORD=

## WIPE_REDIS_ON_START - Wipes data / index on start (Default: True)
# WIPE_REDIS_ON_START=True

################################################################################
### IMAGE GENERATION PROVIDER
################################################################################
@@ -191,13 +166,12 @@ OPENAI_API_KEY=your-openai-api-key
################################################################################

## TEXT_TO_SPEECH_PROVIDER - Which Text to Speech provider to use (Default: gtts)
## Options: gtts, streamelements, elevenlabs, macos
# TEXT_TO_SPEECH_PROVIDER=gtts

### Only if TEXT_TO_SPEECH_PROVIDER=streamelements
## STREAMELEMENTS_VOICE - Voice to use for StreamElements (Default: Brian)
# STREAMELEMENTS_VOICE=Brian

### Only if TEXT_TO_SPEECH_PROVIDER=elevenlabs
## ELEVENLABS_API_KEY - Eleven Labs API key (Default: None)
# ELEVENLABS_API_KEY=

@@ -210,3 +184,22 @@ OPENAI_API_KEY=your-openai-api-key

## CHAT_MESSAGES_ENABLED - Enable chat messages (Default: False)
# CHAT_MESSAGES_ENABLED=False

################################################################################
### LOGGING
################################################################################

## LOG_LEVEL - Set the minimum level to filter log output by. Setting this to DEBUG implies LOG_FORMAT=debug, unless LOG_FORMAT is set explicitly.
## Options: DEBUG, INFO, WARNING, ERROR, CRITICAL
# LOG_LEVEL=INFO

## LOG_FORMAT - The format in which to log messages to the console (and log files).
## Options: simple, debug, structured_google_cloud
# LOG_FORMAT=simple

## LOG_FILE_FORMAT - Normally follows the LOG_FORMAT setting, but can be set separately.
## Note: Log file output is disabled if LOG_FORMAT=structured_google_cloud.
# LOG_FILE_FORMAT=simple

## PLAIN_OUTPUT - Disables animated typing in the console output.
# PLAIN_OUTPUT=False
24 changes: 17 additions & 7 deletions autogpts/autogpt/autogpt/app/configurator.py
@@ -3,7 +3,7 @@

import logging
from pathlib import Path
from typing import Literal, Optional
from typing import TYPE_CHECKING, Literal, Optional

import click
from colorama import Back, Fore, Style
@@ -16,6 +16,9 @@
from autogpt.logs.helpers import print_attribute, request_user_double_check
from autogpt.memory.vector import get_supported_memory_backends

if TYPE_CHECKING:
from autogpt.core.resource.model_providers.openai import OpenAICredentials

logger = logging.getLogger(__name__)


@@ -103,16 +106,24 @@ def apply_overrides_to_config(
config.smart_llm = GPT_3_MODEL
elif (
gpt4only
and check_model(GPT_4_MODEL, model_type="smart_llm", config=config)
and check_model(
GPT_4_MODEL,
model_type="smart_llm",
api_credentials=config.openai_credentials,
)
== GPT_4_MODEL
):
print_attribute("GPT4 Only Mode", "ENABLED")
# --gpt4only should always use gpt-4, despite user's SMART_LLM config
config.fast_llm = GPT_4_MODEL
config.smart_llm = GPT_4_MODEL
else:
config.fast_llm = check_model(config.fast_llm, "fast_llm", config=config)
config.smart_llm = check_model(config.smart_llm, "smart_llm", config=config)
config.fast_llm = check_model(
config.fast_llm, "fast_llm", api_credentials=config.openai_credentials
)
config.smart_llm = check_model(
config.smart_llm, "smart_llm", api_credentials=config.openai_credentials
)

if memory_type:
supported_memory = get_supported_memory_backends()
@@ -187,12 +198,11 @@ def apply_overrides_to_config(
def check_model(
model_name: str,
model_type: Literal["smart_llm", "fast_llm"],
config: Config,
api_credentials: OpenAICredentials,
) -> str:
"""Check if model is available for use. If not, return gpt-3.5-turbo."""
openai_credentials = config.get_openai_credentials(model_name)
api_manager = ApiManager()
models = api_manager.get_models(**openai_credentials)
models = api_manager.get_models(**api_credentials.get_api_access_kwargs(model_name))

if any(model_name in m["id"] for m in models):
return model_name
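The fallback logic of the updated `check_model` amounts to the following sketch. It is simplified: a plain list of model ids stands in for the `ApiManager`/`OpenAICredentials` plumbing, and the warning text is illustrative:

```python
import logging

logger = logging.getLogger(__name__)

GPT_3_MODEL = "gpt-3.5-turbo"


def check_model(
    model_name: str,
    model_type: str,
    available_model_ids: list,
) -> str:
    """Return model_name if the account has access to it; otherwise fall
    back to gpt-3.5-turbo (mirrors the substring match in the diff above)."""
    if any(model_name in model_id for model_id in available_model_ids):
        return model_name

    logger.warning(
        f"You don't have access to {model_name}. "
        f"Setting {model_type} to {GPT_3_MODEL}."
    )
    return GPT_3_MODEL
```

The substring match is deliberate: it lets a configured name like `gpt-4` match dated variants such as `gpt-4-0613` in the account's model list.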
14 changes: 2 additions & 12 deletions autogpts/autogpt/autogpt/app/main.py
@@ -14,7 +14,6 @@

from colorama import Fore, Style
from forge.sdk.db import AgentDB
from pydantic import SecretStr

if TYPE_CHECKING:
from autogpt.agents.agent import Agent
@@ -31,7 +30,6 @@
ConfigBuilder,
assert_config_has_openai_api_key,
)
from autogpt.core.resource.model_providers import ModelProviderCredentials
from autogpt.core.resource.model_providers.openai import OpenAIProvider
from autogpt.core.runner.client_lib.utils import coroutine
from autogpt.logs.config import configure_chat_plugins, configure_logging
@@ -364,19 +362,11 @@ def _configure_openai_provider(config: Config) -> OpenAIProvider:
Returns:
A configured OpenAIProvider object.
"""
if config.openai_api_key is None:
if config.openai_credentials is None:
raise RuntimeError("OpenAI key is not configured")

openai_settings = OpenAIProvider.default_settings.copy(deep=True)
openai_settings.credentials = ModelProviderCredentials(
api_key=SecretStr(config.openai_api_key),
# TODO: support OpenAI Azure credentials
api_base=SecretStr(config.openai_api_base) if config.openai_api_base else None,
api_type=SecretStr(config.openai_api_type) if config.openai_api_type else None,
api_version=SecretStr(config.openai_api_version)
if config.openai_api_version
else None,
)
openai_settings.credentials = config.openai_credentials
return OpenAIProvider(
settings=openai_settings,
logger=logging.getLogger("OpenAIProvider"),
2 changes: 1 addition & 1 deletion autogpts/autogpt/autogpt/commands/image_gen.py
@@ -147,7 +147,7 @@ def generate_image_with_dalle(
n=1,
size=f"{size}x{size}",
response_format="b64_json",
api_key=agent.legacy_config.openai_api_key,
api_key=agent.legacy_config.openai_credentials.api_key.get_secret_value(),
)

logger.info(f"Image Generated for prompt:{prompt}")