
Only 10 iterations for Llama agent #507

Merged
merged 1 commit into from
Oct 10, 2024
Conversation

@kongzii (Contributor) commented Oct 9, 2024

We are still getting the message "{"detail":"E1002 PromptTooLong: Prompt length exceeds maximum input length"}"; these general agents probably need a more clever, general fix.
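A general fix along these lines would likely mean trimming the agent's message history before each model call so the serialized prompt stays under the provider's input limit, rather than capping the iteration count. A minimal sketch (the `trim_history` helper, the character budget, and the message-dict shape are assumptions for illustration, not the project's actual API):

```python
def trim_history(messages: list[dict], max_prompt_chars: int = 16000) -> list[dict]:
    """Drop the oldest non-system messages until the prompt fits the budget.

    Assumes each message is a dict with "role" and "content" keys; the
    character budget is a crude stand-in for a proper token count.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total_chars(msgs: list[dict]) -> int:
        return sum(len(m["content"]) for m in msgs)

    # Always keep the system prompt; sacrifice the oldest turns first.
    while rest and total_chars(system + rest) > max_prompt_chars:
        rest.pop(0)
    return system + rest
```

Calling this before every completion request would keep a long-running agent under the limit regardless of how many iterations it runs.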

coderabbitai bot (Contributor) commented Oct 9, 2024

Walkthrough

The pull request modifies the DeployableMicrochainAgent subclasses in deploy.py. The n_iterations attribute of the DeployableMicrochainModifiableSystemPromptAgent3 class is reduced from 20 to 10, and the build_goal_manager method of the DeployableMicrochainWithGoalManagerAgent0 class is updated to return a new GoalManager instance with specific parameters. These changes refine goal management and adjust operational parameters without adding new classes or methods.

Changes

File Path: prediction_market_agent/agents/microchain_agent/deploy.py
Change Summary:
  • Reduced n_iterations from 20 to 10 in DeployableMicrochainModifiableSystemPromptAgent3.
  • Updated the build_goal_manager method in DeployableMicrochainWithGoalManagerAgent0 to return a new GoalManager instance with specific parameters.
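The two changes in the summary can be sketched roughly as follows. The GoalManager dataclass and the base-class bodies here are stand-ins for the project's real code, and the task_description value is hypothetical; only the parameter names and values come from the review below:

```python
from dataclasses import dataclass


@dataclass
class GoalManager:  # stand-in for the project's GoalManager class
    agent_id: str
    high_level_description: str
    agent_capabilities: str
    retry_limit: int
    goal_history_limit: int


class DeployableMicrochainModifiableSystemPromptAgent3:
    # Reduced from 20 so the accumulated prompt stays under the input limit.
    n_iterations = 10


class DeployableMicrochainWithGoalManagerAgent0:
    task_description = "trade-on-prediction-markets"  # hypothetical value

    def build_goal_manager(self) -> GoalManager:
        return GoalManager(
            agent_id=self.task_description,
            high_level_description="Trader agent in prediction markets.",
            agent_capabilities="...",
            retry_limit=1,
            goal_history_limit=10,
        )
```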

Suggested reviewers

  • evangriffiths


coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
prediction_market_agent/agents/microchain_agent/deploy.py (2)

167-167: Approved: Reduction of iterations addresses the PromptTooLong error.

The change from 20 to 10 iterations is a good initial step to address the PromptTooLong error. This aligns with the PR objective and the comment provides clear context for the change.

Consider implementing a dynamic iteration limit based on the current prompt length to maximize the number of iterations while staying within the token limit. This could be achieved by:

  1. Calculating the token count of the current prompt before each iteration.
  2. Estimating the tokens needed for each iteration.
  3. Adjusting the remaining iterations based on the available token budget.

Example implementation (note that GPT2Tokenizer only approximates a Llama model's tokenization; using the target model's own tokenizer would give more accurate counts):

from transformers import GPT2Tokenizer

class DeployableMicrochainModifiableSystemPromptAgent3(
    DeployableMicrochainModifiableSystemPromptAgentAbstract
):
    # ... existing code ...

    def run_general_agent(self, market_type: MarketType) -> None:
        # ... existing code ...

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        max_tokens = 4096
        estimated_tokens_per_iteration = 200  # Adjust based on your observations

        for i in range(self.n_iterations):
            current_prompt_tokens = len(tokenizer.encode(agent.prompt))
            remaining_tokens = max_tokens - current_prompt_tokens
            remaining_iterations = remaining_tokens // estimated_tokens_per_iteration

            if remaining_iterations <= 0:
                logger.info(f"Stopping after {i} iterations due to token limit.")
                break

            # Run a single iteration
            agent.run(1)

        # ... rest of the existing code ...

This approach would allow for more flexible usage of the available token budget.


Line range hint 183-193: Approved: Improved goal management implementation.

The changes to the build_goal_manager method enhance the goal management functionality of the agent. The method now consistently returns a GoalManager instance with well-defined parameters, which improves the overall robustness of the agent's goal-oriented behavior.

Consider extracting the hardcoded values (retry_limit and goal_history_limit) to class-level constants for better maintainability:

class DeployableMicrochainWithGoalManagerAgent0(DeployableMicrochainAgent):
    # ... existing code ...
    RETRY_LIMIT = 1
    GOAL_HISTORY_LIMIT = 10

    def build_goal_manager(self, agent: Agent) -> GoalManager:
        return GoalManager(
            agent_id=self.task_description,
            high_level_description="You are a trader agent in prediction markets, aiming to maximise your long-term profit.",
            agent_capabilities=f"You have the following capabilities:\n{get_functions_summary_list(agent.engine)}",
            retry_limit=self.RETRY_LIMIT,
            goal_history_limit=self.GOAL_HISTORY_LIMIT,
        )

This change would make it easier to adjust these values in the future if needed.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between commits 7fa4080 and c00fbb7.

📒 Files selected for processing (1)
  • prediction_market_agent/agents/microchain_agent/deploy.py (1 hunks)
🧰 Additional context used
🔇 Additional comments (1)
prediction_market_agent/agents/microchain_agent/deploy.py (1)

Line range hint 1-193: Summary: Changes address the PromptTooLong error and improve agent functionality.

The modifications in this file effectively address the PR objective of handling the PromptTooLong error by reducing the number of iterations for the Llama agent. Additionally, the changes enhance the goal management functionality of the agent, improving its overall robustness and capabilities.

Key improvements:

  1. Reduced iterations for the Llama agent to prevent token limit issues.
  2. Enhanced goal management implementation with consistent return type and well-defined parameters.

Recommendations:

  1. Consider implementing a dynamic iteration limit for more flexible token budget usage.
  2. Extract hardcoded values in the build_goal_manager method to class-level constants for better maintainability.

Overall, these changes are well-implemented and achieve the desired objectives. The PR is ready for approval, pending the suggested minor improvements.

@kongzii kongzii merged commit dae1419 into main Oct 10, 2024
9 checks passed
@kongzii kongzii deleted the peter/lessiters2 branch October 10, 2024 07:24