
Python: history is not extended with tool call related messages automatically #9408

Open
bbence84 opened this issue Oct 23, 2024 · 7 comments

@bbence84

Describe the bug

Not exactly sure if this is a bug, but it is at least a gap. I have moved away from using the Planners and rely on "vanilla" function calling. My tools can still be chained one after another: the LLM comes up with a plan for how to chain them and calls them in sequence pretty well.
My problem is that my tool functions usually return JSON containing, e.g., the IDs of business objects that were created, and a follow-up tool has a parameter that should receive this (technical) ID. The LLM's textual response based on the function return value usually does not contain this ID (rightly so, as it is indeed a technical detail). The problem is that follow-up functions then have no clue about the ID, because the ChatHistory does not contain the tool calls.

Referencing issues #8071 and #8020.
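
To make the scenario concrete, here is a minimal sketch of two chained tools. Plugin, function, and field names are made up; the import follows semantic-kernel 1.x and may differ slightly between versions:

    import json
    from semantic_kernel.functions import kernel_function

    class OrderPlugin:
        @kernel_function(description="Creates an order and returns its technical ID as JSON.")
        def create_order(self, product: str) -> str:
            # In reality this would call a backend service.
            return json.dumps({"order_id": "ORD-4711", "product": product})

        @kernel_function(description="Ships an existing order, identified by its technical ID.")
        def ship_order(self, order_id: str) -> str:
            # The LLM can only fill in order_id if the JSON returned by create_order
            # is still visible to it, i.e. if the tool call and its result are part
            # of the chat history sent on the next turn.
            return json.dumps({"status": "shipped", "order_id": order_id})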

To Reproduce

Following the recommendation, I tested this with a simpler example:
https://github.com/microsoft/semantic-kernel/blob/main/python/samples/concepts/filtering/auto_function_invoke_filters.py
Here, too, the printed ChatHistory does not contain any tool calls or tool call results:

    chat_hist_json = history.serialize()
    print(f"Chat history: {chat_hist_json}")    

Expected behavior

The chat history should automatically contain FunctionCallContent and FunctionResultContent, as that is needed when tool functions are chained and one tool uses the result of a previous tool call.
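
Concretely, today I would have to append something like the following myself after each auto-invoked call; my expectation is that this happens automatically. This is only a sketch: the content classes are as exported from semantic_kernel.contents in 1.x, and names like history, call_1, and the plugin/function identifiers are placeholders:

    from semantic_kernel.contents import (
        ChatMessageContent,
        FunctionCallContent,
        FunctionResultContent,
    )
    from semantic_kernel.contents.utils.author_role import AuthorRole

    # The assistant message carrying the tool call the model made...
    history.add_message(ChatMessageContent(
        role=AuthorRole.ASSISTANT,
        items=[FunctionCallContent(id="call_1", name="OrderPlugin-create_order",
                                   arguments='{"product": "widget"}')],
    ))
    # ...and the tool message carrying its result, so a follow-up call can use the ID.
    history.add_message(ChatMessageContent(
        role=AuthorRole.TOOL,
        items=[FunctionResultContent(id="call_1", name="OrderPlugin-create_order",
                                     result='{"order_id": "ORD-4711"}')],
    ))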

Platform

  • semantic-kernel==1.11.0


@bbence84 bbence84 added the bug Something isn't working label Oct 23, 2024
@markwallace-microsoft markwallace-microsoft added python Pull requests for the Python Semantic Kernel triage labels Oct 23, 2024
@moonbox3 moonbox3 self-assigned this Oct 24, 2024
@moonbox3 moonbox3 removed the triage label Oct 24, 2024
@moonbox3
Contributor

Hi @bbence84, thanks for continuing the conversation here. As I mentioned in our previous thread in #8071, all of the relevant calls are contained in the metadata's chat history. After each kernel invocation, you have access to the underlying content types (whether FunctionCallContent or FunctionResultContent -- even if not using a filter).

The caller's ChatHistory is their own responsibility to manage: the caller populates it with whatever information is needed to maintain context (or not).

Can you please help me understand what the gap is?
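
For example, after a kernel.invoke with auto function calling you can pull them out of the result metadata roughly like this (a sketch; chat_function and arguments are placeholders, and it assumes the inner chat history is exposed under the result's 'messages' metadata key):

    from semantic_kernel.contents import FunctionCallContent, FunctionResultContent

    result = await kernel.invoke(chat_function, arguments=arguments)
    inner_history = result.metadata["messages"]  # chat history built during auto function calling
    for message in inner_history.messages:
        for item in message.items:
            if isinstance(item, (FunctionCallContent, FunctionResultContent)):
                print(type(item).__name__, item)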

@bbence84
Author

bbence84 commented Nov 1, 2024

Sorry for the late response and thanks for the reply.
I think I know what confused me: I was using kernel.invoke_stream, and I could not find these in the messages returned by it. The result of kernel.invoke indeed contains what you mentioned, but I can't seem to get the same from invoke_stream.
Am I overlooking something? :)
Thanks!

@bbence84
Author

bbence84 commented Nov 5, 2024

> Hi @bbence84, thanks for continuing the conversation here. As I mentioned in our previous thread in #8071, all of the relevant calls are contained in the metadata's chat history. After each kernel invocation, you have access to the underlying content types (whether FunctionCallContent or FunctionResultContent -- even if not using a filter).
>
> The caller's ChatHistory is their own responsibility to manage: the caller populates it with whatever information is needed to maintain context (or not).
>
> Can you please help me understand what the gap is?

You are right; I understand now that the caller needs to fill the ChatHistory. However, as I mentioned above, there seems to be a difference between what kernel.invoke and kernel.invoke_stream return. The stream-based invoke does not seem to contain the tool-call-related messages. For the kernel.invoke version I can do the following, though:
    chat_history.add_message(answer.metadata['messages'].messages[-2])  # presumably the assistant message carrying the FunctionCallContent
    chat_history.add_message(answer.metadata['messages'].messages[-1])  # and the tool message carrying the FunctionResultContent

But the equivalent is not available for invoke_stream.
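
For reference, my streaming loop looks roughly like this (adapted from the function-calling concept samples; chat_function and arguments are placeholders), and I don't see where I could pick up the tool-call messages from it:

    result_content = []
    async for chunks in kernel.invoke_stream(chat_function, arguments=arguments):
        # each yielded item is a list of StreamingChatMessageContent chunks
        result_content.extend(chunks)
    answer = "".join(str(chunk) for chunk in result_content)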

Am I missing something? Thanks!

@moonbox3
Contributor

moonbox3 commented Nov 5, 2024

Thanks for the ping on this. Will have a look tomorrow morning and get back to you.

@moonbox3
Contributor

moonbox3 commented Nov 6, 2024

Hi @bbence84, I've been looking into this more. Sorry for the delay.

In the chat_gpt_api_function_calling.py concept sample, I added the following to combine the streaming chunks we receive from the LLM:

if result_content:
    # new: reduce (from functools) combines the streamed chunks so the
    # underlying content types can be inspected
    streaming_chat_message = reduce(lambda first, second: first + second, result_content)
    return "".join([str(content) for content in result_content])

I can see that we get the FunctionCallContent and then the result as StreamingTextContent:

[Screenshot: the combined streamed message shows a FunctionCallContent followed by StreamingTextContent chunks]

We are yielding those two content types out because they are what we receive when streaming from the model. The gap here is that we aren't yielding out the FunctionResultContent. We will add this work to our backlog so we can expose the streaming FunctionResultContent. This would then allow you to add the proper content types to the chat history that you manage.

@bbence84
Author

Thanks a lot! :) I was beginning to doubt whether I understood how this is supposed to work, but thanks for confirming that it does not work the same way with streaming.

@bbence84
Author

@moonbox3 sorry for pinging you, but is yielding the FunctionResultContent already on the backlog for an upcoming sprint?
The problem is that with streaming I am working around this by adding the FunctionResultContent as a regular (hidden) assistant message, so that the result of the function call can later be used by the agent. However, the agent sometimes gets confused on subsequent function calls and outputs the function calls as plain text instead of actually calling them: it starts to assume that the way to call functions is to write them out in the response rather than to return a function_call response.
Without streaming it works, as you said, but I would like to have both streaming and FunctionResultContent available in the result.
Thanks!
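
For completeness, the workaround currently looks roughly like this (a sketch; function_result stands for whatever the tool returned):

    # Workaround: smuggle the tool result into the history as plain assistant text
    # so follow-up calls can see the technical IDs. This is what occasionally
    # confuses the model into emitting tool calls as text.
    history.add_assistant_message(f"(internal tool result, not shown to the user) {function_result}")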
