
Defining ai_fn at runtime #831

Open
pietz opened this issue Feb 6, 2024 · 7 comments
Labels
enhancement, feature

Comments

@pietz

pietz commented Feb 6, 2024

First check

  • I added a descriptive title to this issue.
  • I used the GitHub search to look for a similar issue and didn't find it.
  • I searched the Marvin documentation for this feature.

Describe the current behavior

I'm working on a project where I basically want what ai_fn does, but I need to be able to define the function at runtime. I might be missing something, but that doesn't seem possible at the moment. I even tried setting the docstring through __doc__, but it just doesn't work.

Describe the proposed behavior

I think it would be nice to have the functionality of the ai_fn decorator available as a regular function, like cast and extract. Following the current naming conventions, write() could be a good name, but it might be worth discussing this a bit more.

Example Use

marvin.write("Hello, my name is Pietz.", str, "Translate the provided text to German")
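
For illustration, a signature along these lines (purely hypothetical, not an existing Marvin API):

from typing import TypeVar

T = TypeVar("T")


def write(data: str, target: type[T], instructions: str | None = None) -> T:
    """Hypothetical runtime counterpart to the ai_fn decorator: builds the
    same kind of prompt, but takes the instructions and return type as
    arguments instead of reading them from a decorated function."""
    ...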

Additional context

No response

@pietz added the enhancement label Feb 6, 2024
@zzstoatzz added the feature label Feb 6, 2024
@zzstoatzz
Collaborator

hi @pietz - is there a case where cast doesn't work for you? e.g.

In [1]: import marvin

In [2]: marvin.cast("Hello, my name is Pietz.", str, "Translate the provided text to German")
Out[2]: 'Hallo, mein Name ist Pietz.'

In [3]: !marvin version
Version:		2.1.4.dev27+g8ee6b7c0
Python version:		3.12.1
OS/Arch:		darwin/arm64

@pietz
Author

pietz commented Feb 7, 2024

Sorry, I didn't mention that. I haven't looked into the prompt template of cast, but the results were pretty bad for purposes outside the core idea of cast. Generating, summarizing, and translating were all kinda bad.

@HamzaFarhan

@zzstoatzz Yes, I think a basic abstraction without a pre-defined prompt would be useful as well.
One that takes data, instructions, and target, and just does what the instructions ask, while also making sure the result is in the target format.
data could also be a list of messages for conversation history.
Basically a wrapper around the chat completions API with an extra target param.
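
A minimal sketch of that idea, assuming the openai and pydantic packages (the name complete and the JSON-envelope prompt are my own invention, not Marvin's API):

import json
from typing import TypeVar

from openai import OpenAI
from pydantic import TypeAdapter

T = TypeVar("T")
client = OpenAI()


def complete(data: str | list[dict[str, str]], instructions: str, target: type[T]) -> T:
    # Accept raw text or a full conversation history.
    messages = data if isinstance(data, list) else [{"role": "user", "content": data}]
    system = (
        f"{instructions}\n\n"
        'Respond with a JSON object of the form {"result": <value>}, '
        f"where <value> matches the target type {target}."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        response_format={"type": "json_object"},
        messages=[{"role": "system", "content": system}, *messages],
    )
    raw = json.loads(response.choices[0].message.content)["result"]
    # Coerce/validate the raw value into the requested target type.
    return TypeAdapter(target).validate_python(raw)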

@pietz
Author

pietz commented Feb 7, 2024

@HamzaFarhan completely agree with this! It would be great to define it at runtime and also offer async.

I like the decorator syntax for the AI function. It's such a nice mental model: we define a Python function, but because we're dealing with LLMs, we only write the docstring and not the body. Not being able to define it at runtime is a bummer though.
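
For reference, that mental model looks roughly like this in current Marvin (a docstring-only body is valid Python, so the "body" really is just the docstring):

import marvin


@marvin.fn
def translate(text: str) -> str:
    """Translates the provided text to German."""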

@HamzaFarhan

Thoughts on this?

import textwrap
from enum import Enum
from inspect import cleandoc

import marvin


class ModelName(str, Enum):
    GPT_3 = "gpt-3.5-turbo-0125"
    GPT_4 = "gpt-4-turbo-preview"


def deindent(text: str) -> str:
    return textwrap.dedent(cleandoc(text))


def message_template(message: dict[str, str]) -> str:
    return deindent(f"## {message['role'].upper()} ##\n\n{message['content']}")


def chat_template(messages: list[dict[str, str]]) -> str:
    chat = [message_template(message) for message in messages]
    return deindent("\n\n".join(chat))


def chat_message(role: str, content: str) -> dict[str, str]:
    return {"role": role, "content": content}


def user_message(content: str) -> dict[str, str]:
    return chat_message(role="user", content=content)


def assistant_message(content: str) -> dict[str, str]:
    return chat_message(role="assistant", content=content)


@marvin.fn(model_kwargs={"model": ModelName.GPT_3, "temperature": 0.5})
def assistant_response(convo: list[dict[str, str]]) -> str:
    """
    Returns the assistant response to the conversation so far.
    """
    # Returning the convo as a formatted string gives much better results.
    return chat_template(convo)


def ask_marvin(
    messages: list[dict[str, str]] | None = None, prompt: str = ""
) -> list[dict[str, str]]:
    messages = messages or []
    if prompt:
        messages.append(user_message(prompt))
    if messages:
        messages.append(assistant_message(assistant_response(messages)))
    return messages


messages = [
    user_message("It's my first day at a new job."),
    user_message("The commute is an hour long."),
]
messages = ask_marvin(messages=messages, prompt="How should I pass the time?")

# [{'role': 'user', 'content': "It's my first day at a new job."},
#  {'role': 'user', 'content': 'The commute is an hour long.'},
#  {'role': 'user', 'content': 'How should I pass the time?'},
#  {'role': 'assistant',
#   'content': "That's exciting! You could listen to podcasts, read a book, or plan your day ahead during the commute."}]

@pietz
Author

pietz commented Feb 8, 2024

Personally, I'm not looking for a chat interface. Maybe I misunderstood you. This is my workaround for now:

from typing import Any

from marvin.ai.text import _generate_typed_llm_response_with_tool
from marvin.ai.prompts.text_prompts import FUNCTION_PROMPT


async def ai_function(
    instruction: str,
    inputs: dict,
    output_type: Any,
):
    # Call the language model to generate the output
    result = await _generate_typed_llm_response_with_tool(
        prompt_template=FUNCTION_PROMPT,
        prompt_kwargs=dict(
            fn_definition=instruction,
            bound_parameters=inputs,
            return_value=str(output_type),  # Assuming return_annotation is a string representation
        ),
        type_=output_type,
    )

    return result

It's "just" the normal marvin function but I can define it at runtime. I'm happy with my workaround. Everything else is just convenience.

@heijligers

Can you share an example of how you use this versus the ai_fn decorator? Thanks.
