Replies: 8 comments 4 replies
-
Hey @yamonkjd This looks incredible. I never quite understood the use cases for the Pipeline. Could you help me understand it? So, the main goal of this is to have dynamic prompts, but using the Python Function Tool instead? How does that differ from having a normal PromptTemplate with many variables and then just adding components that fill those variables with whatever data you need? Really awesome work. Thanks for all the contributions, by the way!
-
The pipeline prompt in LangChain works by comparing the value of a specific prompt with the name of each sub-prompt to combine the matching sections, then sequentially assembles them in the final prompter of the pipe. My source code uses this design as is.

However, the concept I added allows the prompter object's partial values to be dynamic instead of static, as you mentioned. This enables even models that do not support function calls to simply substitute generated outputs that need no inputs (e.g. weather, time, status, system version, etc.).

Also, as mentioned in the title, through RouterChain the agent operates by selecting the prompter itself, in the appropriate format, based on the name and description of the prompt object. Therefore, when combined with a pipeline, there is no need to create multiple instances of the same prompter, much like a class. When the agent operates it doesn't send useless values, allowing a flexible response to token limitations, and because the model receives filtered values, its performance can improve.

To further explain with a usage example: Q. The user asked about the weather at the current location. (See case 1 and case 2; case 1 is the operation logic of the function call.) And, because it uses partial values, if the Python Function Tool cannot provide data, the variable is automatically deleted from the prompter.

Reference link : I have verified that my logic works correctly with this code.
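The "dynamic partial values, with missing data deleted from the prompter" idea above can be illustrated with a minimal, self-contained sketch. This is not the author's actual code; the template, the provider functions, and the line-dropping rule are all assumptions made for illustration.

```python
import datetime

def get_time():
    # Hypothetical provider with output only, no input.
    return datetime.datetime(2024, 1, 1, 12, 0).strftime("%H:%M")

def get_weather():
    # Pretend the tool could not fetch data this time.
    return None

TEMPLATE = "Current time: {time}\nCurrent weather: {weather}\nAnswer the user."

def render(template, partials):
    """Evaluate callable partials at format time; drop lines whose
    provider returned no data, so the model never sees empty slots."""
    values = {}
    for name, provider in partials.items():
        value = provider() if callable(provider) else provider
        if value is None:
            # Delete every line that references the unavailable variable.
            template = "\n".join(
                line for line in template.splitlines()
                if "{%s}" % name not in line
            )
        else:
            values[name] = value
    return template.format(**values)

print(render(TEMPLATE, {"time": get_time, "weather": get_weather}))
# The weather line is removed entirely; only the time line remains.
```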
-
If you connect another hyperprompt, via an extra prompt, to the hyperprompt being fed into the model, and match the variable names in the final prompt with those of the connected prompt, it will work.
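The name-matching rule described above can be sketched as follows. This is an assumed illustration, not the thread's actual code: each connected sub-prompt is rendered first, and its name must equal a variable in the final prompt, which is where its output is substituted.

```python
# Final prompt: its variables are filled by the connected sub-prompts
# whose names match ({persona}, {context}) plus plain user variables.
final_prompt = "System: {persona}\nContext: {context}\nUser: {question}"

sub_prompts = {
    "persona": "You are a {role}.",        # name "persona" matches {persona}
    "context": "Location is {location}.",  # name "context" matches {context}
}

def assemble(final_template, pipeline, **variables):
    # Render each connected prompt first, then substitute by name.
    filled = {name: tpl.format(**variables) for name, tpl in pipeline.items()}
    return final_template.format(**filled, **variables)

print(assemble(
    final_prompt, sub_prompts,
    role="helpful assistant", location="Seoul", question="What time is it?",
))
```

LangChain's own `PipelinePromptTemplate` follows the same convention: each `(name, prompt)` pair in `pipeline_prompts` fills the matching variable of `final_prompt`.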
-
I created it using the basic structure of LangChain. Because the name and description already exist in the Python Function Tool, we decided to reuse that widget rather than creating a separate one. Due to the structure of the LangChain architecture, the Python code is copied from the LangChain framework and operated as is, so no additional development was required, and it also works well in LangFlow.

The Python Function Tool is configured to have only output and no input. (If it is not an OpenAI model, parameters cannot be created and passed by the model, so this is not valid in other models such as LLaMA.) Of course, you can expect the same effect by calling the tool, but data with only output consumes excessive resources that way, so this configuration is reasonable for simple data.
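The "output only, no input" tool shape described above can be sketched like this. The `OutputOnlyTool` class, the tool name, and the version string are hypothetical; the point is that the tool keeps a LangChain-style name/description, while its zero-argument function feeds the prompt directly, so a model that cannot emit function-call parameters still benefits.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OutputOnlyTool:
    """Reuses a Tool-like name/description, but takes no input:
    the model never has to construct parameters for it."""
    name: str
    description: str
    func: Callable[[], str]  # zero-argument by design

# Hypothetical example tool.
version_tool = OutputOnlyTool(
    name="system_version",
    description="Returns the current system version string.",
    func=lambda: "v1.2.3",
)

# The tool's name doubles as the prompt variable it fills.
prompt = "System version: {system_version}\nRespond to the user."
rendered = prompt.format(**{version_tool.name: version_tool.func()})
print(rendered)
```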
-
Currently, we are researching the logic to automatically convert this template into chat prompts, message prompts, and general prompts. If you think this structure is valid and can be included in the roadmap, we will create a new pull request and commit the code for that part. Since we are trying to include many features, if there are areas already in progress with similar goals, we will hold off. We want to control all prompter types with one widget.
-
An example of use is attached (zz.mp4). However, due to a bug in my code, the last Pipe0 did not work properly (zz2.mp4).
-
Hi there, |
-
Hi! 👋 |
-
As shown in the concept above, we have created a custom prompter chain that can configure a pipeline by connecting Python Function Tools and other prompters.
It is a structure that is driven partially by comparing the Name field of the Python Function Tool with the matching variable in the prompter.
Through this, you can check the time in real time within the prompter or insert time-series data directly into it, and since it has its own Name field, it can later be operated as a multi-prompter. (Direct operation confirmed.)
Since this custom component uses the Python Function Tool, there is no need to create a separate new widget. It functions similarly to OpenAI's function calling by directly connecting the tools for agents to use.
By expanding in this way, it is possible to create a LangChain pipeline.
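The RouterChain-style selection mentioned earlier in the thread, where the agent picks a prompter object by its name and description so one widget serves several prompt types, could look roughly like this. Everything here (the prompter registry, the keyword router, the default destination) is an assumed stand-in for the real LLM-driven routing decision, not the author's implementation.

```python
# Hypothetical registry of named prompter objects with descriptions,
# mirroring how RouterChain destinations carry a name and description.
prompters = {
    "weather": {
        "description": "good for questions about the weather",
        "template": "Weather data: {data}\nQ: {question}",
    },
    "time": {
        "description": "good for questions about the current time",
        "template": "Current time: {data}\nQ: {question}",
    },
}

def route(question):
    """Naive keyword router standing in for an LLM-based RouterChain:
    pick the prompter whose name appears in the question."""
    q = question.lower()
    for name in prompters:
        if name in q:
            return name
    return "time"  # hypothetical default destination

name = route("What's the weather at my location?")
prompt = prompters[name]["template"].format(data="sunny, 21C", question="...")
print(prompt)
```

Because each destination is a single named prompter, the same instance is reused for every matching request instead of being duplicated per pipeline.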
When I run "Lint" it works fine, although it has some bugs. I hope prompt editing will become more convenient as this concept becomes a common task.