Only 10 iterations for Llama agent #507
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
prediction_market_agent/agents/microchain_agent/deploy.py (2)
Line 167: Approved: Reduction of iterations addresses the PromptTooLong error.

The change from 20 to 10 iterations is a good initial step to address the PromptTooLong error. This aligns with the PR objective, and the comment provides clear context for the change.
Consider implementing a dynamic iteration limit based on the current prompt length to maximize the number of iterations while staying within the token limit. This could be achieved by:
- Calculating the token count of the current prompt before each iteration.
- Estimating the tokens needed for each iteration.
- Adjusting the remaining iterations based on the available token budget.
Example implementation:

```python
from transformers import GPT2Tokenizer

class DeployableMicrochainModifiableSystemPromptAgent3(
    DeployableMicrochainModifiableSystemPromptAgentAbstract
):
    # ... existing code ...

    def run_general_agent(self, market_type: MarketType) -> None:
        # ... existing code ...
        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        max_tokens = 4096
        estimated_tokens_per_iteration = 200  # Adjust based on your observations

        for i in range(self.n_iterations):
            current_prompt_tokens = len(tokenizer.encode(agent.prompt))
            remaining_tokens = max_tokens - current_prompt_tokens
            remaining_iterations = remaining_tokens // estimated_tokens_per_iteration
            if remaining_iterations <= 0:
                logger.info(f"Stopping after {i} iterations due to token limit.")
                break
            # Run a single iteration
            agent.run(1)
        # ... rest of the existing code ...
```

This approach would allow for more flexible usage of the available token budget.
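The budgeting logic suggested above can also be exercised in isolation. Below is a minimal, dependency-free sketch of the same idea, using a rough 4-characters-per-token heuristic in place of a real tokenizer; all names (`run_with_token_budget`, `estimate_tokens`, the simulated prompt growth) are illustrative, not the project's actual API:

```python
# Illustrative sketch of a token-budgeted iteration loop. A real
# implementation would use the model's tokenizer instead of the rough
# 4-characters-per-token heuristic below.

MAX_TOKENS = 4096
ESTIMATED_TOKENS_PER_ITERATION = 200


def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)


def run_with_token_budget(prompt: str, n_iterations: int) -> int:
    """Run up to n_iterations, stopping early once the token budget is spent.

    Returns the number of iterations actually executed. Each iteration is
    simulated here by appending a fixed-size chunk to the prompt.
    """
    completed = 0
    for _ in range(n_iterations):
        remaining = MAX_TOKENS - estimate_tokens(prompt)
        if remaining < ESTIMATED_TOKENS_PER_ITERATION:
            break  # not enough budget left for another iteration
        prompt += "x" * (ESTIMATED_TOKENS_PER_ITERATION * 4)  # simulated growth
        completed += 1
    return completed
```

With these assumed numbers, a short seed prompt allows roughly 20 iterations before the budget is exhausted, and an oversized prompt stops immediately.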
Line range hint 183-193: Approved: Improved goal management implementation.

The changes to the `build_goal_manager` method enhance the goal management functionality of the agent. The method now consistently returns a GoalManager instance with well-defined parameters, which improves the overall robustness of the agent's goal-oriented behavior.

Consider extracting the hardcoded values (`retry_limit` and `goal_history_limit`) to class-level constants for better maintainability:
```python
class DeployableMicrochainWithGoalManagerAgent0(DeployableMicrochainAgent):
    # ... existing code ...
    RETRY_LIMIT = 1
    GOAL_HISTORY_LIMIT = 10

    def build_goal_manager(self, agent: Agent) -> GoalManager:
        return GoalManager(
            agent_id=self.task_description,
            high_level_description="You are a trader agent in prediction markets, aiming to maximise your long-term profit.",
            agent_capabilities=f"You have the following capabilities:\n{get_functions_summary_list(agent.engine)}",
            retry_limit=self.RETRY_LIMIT,
            goal_history_limit=self.GOAL_HISTORY_LIMIT,
        )
```

This change would make it easier to adjust these values in the future if needed.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- prediction_market_agent/agents/microchain_agent/deploy.py (1 hunks)
🧰 Additional context used
🔇 Additional comments (1)
prediction_market_agent/agents/microchain_agent/deploy.py (1)
Line range hint 1-193: Summary: Changes address the PromptTooLong error and improve agent functionality.

The modifications in this file effectively address the PR objective of handling the PromptTooLong error by reducing the number of iterations for the Llama agent. Additionally, the changes enhance the goal management functionality of the agent, improving its overall robustness and capabilities.
Key improvements:
- Reduced iterations for the Llama agent to prevent token limit issues.
- Enhanced goal management implementation with consistent return type and well-defined parameters.
Recommendations:
- Consider implementing a dynamic iteration limit for more flexible token budget usage.
- Extract hardcoded values in the `build_goal_manager` method to class-level constants for better maintainability.

Overall, these changes are well-implemented and achieve the desired objectives. The PR is ready for approval, pending the suggested minor improvements.
We are still getting `message: "{"detail":"E1002 PromptTooLong: Prompt length exceeds maximum input length"}"`, so these general agents probably need a more clever, general fix.
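One direction such a general fix could take, sketched under the assumption of a fixed input-token limit: rather than capping the number of iterations, trim the oldest conversation messages until the prompt fits the budget. The names (`trim_history`, `MAX_PROMPT_TOKENS`) and the 4-characters-per-token heuristic are illustrative assumptions, not the project's actual API:

```python
# Hedged sketch of a "general fix" for PromptTooLong: drop the oldest
# history messages until system prompt + history fits the token budget.

MAX_PROMPT_TOKENS = 4096


def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token; swap in the real
    # tokenizer of the target model in practice.
    return max(1, len(text) // 4)


def trim_history(system_prompt: str, messages: list[str]) -> list[str]:
    """Drop the oldest messages until the combined prompt fits the budget."""
    budget = MAX_PROMPT_TOKENS - estimate_tokens(system_prompt)
    trimmed = list(messages)
    while trimmed and sum(estimate_tokens(m) for m in trimmed) > budget:
        trimmed.pop(0)  # drop the oldest message first
    return trimmed
```

The design trade-off versus a summarization step is that trimming loses old context entirely but is cheap and deterministic, which may be acceptable for agents whose recent actions matter most.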