**Open Interpreter Fails with Ollama - litellm.BadRequestError: Invalid Message**
==================================================================================
**Introduction**
----------------
Open Interpreter is a powerful tool for executing code in a local environment. However, when Ollama is used as the local model provider, the application crashes with a `litellm.BadRequestError: Invalid Message` exception. In this article, we investigate the cause of this issue and provide a solution.
**Environment**
---------------
The environment used to reproduce this issue is:
- OS: macOS 15.3.2
- Python version: 3.10
- Open Interpreter version: 0.4.3
- Ollama version: 0.6.0
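If it is unclear which versions are actually installed, they can be checked from Python. The sketch below assumes the PyPI package names `open-interpreter` and `litellm`, and that the `ollama` CLI is on the PATH:

```python
# Print the versions of the pieces involved in this report.
import platform
import subprocess
from importlib.metadata import version

print("Python:", platform.python_version())
print("open-interpreter:", version("open-interpreter"))
print("litellm:", version("litellm"))
print("ollama:", subprocess.run(["ollama", "--version"],
                                capture_output=True, text=True).stdout.strip())
```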
**Reproduce**
-------------
To reproduce this issue, follow these steps (an equivalent Python-only reproduction is sketched after this list):
1. Install Ollama and Open Interpreter.
2. Pull the model and run it using Ollama.
3. Run `interpreter --local`.
4. Choose Ollama and select the running model.
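The same reproduction can be driven from Python instead of the CLI. This is a sketch that assumes Ollama is serving a pulled model on its default port, and it uses Open Interpreter's documented Python settings for local models:

```python
# Reproduce the crash from Python (assumed local setup: Ollama on the
# default port, "mistral" already pulled with `ollama pull mistral`).
from interpreter import interpreter

interpreter.offline = True                            # keep everything local
interpreter.llm.model = "ollama/mistral"              # any pulled Ollama model
interpreter.llm.api_base = "http://localhost:11434"   # default Ollama endpoint

# With an affected litellm version, this call fails with
# litellm.BadRequestError: Invalid Message instead of answering.
interpreter.chat("ping")
```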
**Expected behavior**
---------------------
The Open Interpreter CLI should open and allow the user to execute code in the local environment. Instead, the application crashes with a `litellm.BadRequestError: Invalid Message` exception.
**Stacktrace**
--------------
The stacktrace for the `mistral` model is:
```python
Traceback (most recent call last):
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/bin/interpreter", line 8, in <module>
sys.exit(main())
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
interpreter = profile(
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
return apply_profile(interpreter, profile, profile_path)
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
exec(profile["start_script"], scope, scope)
File "<string>", line 1, in <module>
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/core.py", line 145, in local_setup
self = local_setup(self)
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
interpreter.computer.ai.chat("ping")
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 86, in run
self.load()
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/utils.py", line 1235, in wrapper
raise e
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/utils.py", line 1113, in wrapper
result = original_function(*args, **kwargs)
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/main.py", line 3101, in completion
raise exception_type(
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/main.py", line 2823, in completion
response = base_llm_http_handler.completion(
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 247, in completion
data = provider_config.transform_request(
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/llms/ollama/completion/transformation.py", line 315, in transform_request
modified_prompt = ollama_pt(model=model, messages=messages)
File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/litellm_core_utils/prompt_templates/factory.py", line 265, in ollama_pt
raise litellm.BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: Invalid Message passed in {'role': 'system', 'content': 'You are a helpful AI assistant. Produce JSON OUTPUT ONLY! Adhere to this format {"name": "function_name", "arguments":{"argument_name": "argument_value"}} The following functions are available to you:\n{\'type\': \'function\', \'function\': {\'name\': \'execute\', \'description\': "Executes code on the user\'s machine **in the users local environment** and returns the output", \'parameters\': {\'type\': \'object\', \'properties\': {\'language\': {\'type\': \'string\', \'description\': \'The programming language (required parameter to the `execute` function)\', \'enum\': [\'ruby\', \'python\', \'shell\', \'javascript\', \'html\', \'applescript\', \'r\', \'powershell\', \'react\', \'java\']}, \'code\': {\'type\': \'string\', \'description\': \'The code to execute (required)\'}}, \'required\': [\'language\', \'code\']}}\n'}
```

The stacktrace for the `llama3.2` model is similar, but with a different error message:

```
litellm.exceptions.BadRequestError: litellm.BadRequestError: Invalid Message passed in {'role': 'system', 'content': "You are a helpful AI assistant.\nTo execute code on the user's machine, write a markdown code block. Specify the language after the ```. You will receive the output. Use any programming language."}
```
**Cause of the issue**
----------------------
The cause of this issue is that the `ollama_pt` function in the `litellm` library does not correctly handle the `messages` list passed to it. `messages` is a list of `{"role": ..., "content": ...}` dictionaries, and, as the error output shows, the message that trips it up is the `system` message: when `ollama_pt` builds the Ollama prompt it never consumes the `system` role, so it reaches the check that raises `litellm.BadRequestError` with "Invalid Message passed in" followed by that message.
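The same error can be triggered directly through `litellm`, without Open Interpreter in the loop, which narrows the problem down to `ollama_pt`. This is a sketch that assumes a local Ollama server on its default port with the `mistral` model already pulled, and an affected `litellm` version:

```python
# Minimal reproduction at the litellm level: an affected version raises
# litellm.BadRequestError as soon as a "system" message is included.
import litellm

response = litellm.completion(
    model="ollama/mistral",                 # provider prefix + pulled model
    api_base="http://localhost:11434",      # default Ollama endpoint
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "ping"},
    ],
)
print(response.choices[0].message.content)  # only reached once the bug is fixed
```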
**Solution**
------------
To fix this issue, the `ollama_pt` function needs to consume the `system` message and fold it into the prompt it builds for Ollama, instead of raising `litellm.BadRequestError` when it reaches a role it does not handle. Here is a simplified sketch of that idea; it is not the actual `litellm` implementation, which also deals with tool calls, images, and assistant prefixes:
```python
def ollama_pt(model, messages):
    # messages is a list of {"role": ..., "content": ...} dicts.
    # Simplified sketch: fold system/user/assistant content into one prompt
    # string and only reject roles that really cannot be represented.
    prompt = ""
    for message in messages:
        role, content = message.get("role"), message.get("content", "")
        if role not in ("system", "user", "assistant"):
            raise litellm.BadRequestError(
                message=f"Invalid Message passed in {message}",
                model=model,
                llm_provider="ollama",
            )
        prompt += f"### {role.capitalize()}:\n{content}\n\n"
    return prompt
```
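As a quick sanity check of the sketch above (plain Python, no Ollama server required), a `system` message is now folded into the prompt instead of triggering the exception:

```python
# Uses the sketched ollama_pt defined above; in the real fix this logic lives
# inside litellm's prompt factory, not in user code.
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "ping"},
]
print(ollama_pt(model="mistral", messages=messages))
# ### System:
# You are a helpful AI assistant.
#
# ### User:
# ping
```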
**Q&A**
------
**Q: What is the cause of the issue?**
------------------------------------
A: The `ollama_pt` function in the `litellm` library does not correctly handle the `messages` list that Open Interpreter sends. When it builds the Ollama prompt it never consumes the `system` message, so it raises `litellm.BadRequestError: Invalid Message` for it. See the Cause of the issue section above for details.
**Q: How can I reproduce the issue?**
--------------------------------------
A: To reproduce this issue, follow these steps:
1. Install Ollama and Open Interpreter.
2. Pull the model and run it using Ollama.
3. Run `interpreter --local`.
4. Choose Ollama and select the running model.
**Q: What is the expected behavior?**
--------------------------------------
A: The Open Interpreter CLI should open and allow the user to execute code in the local environment. Instead, the application crashes with a `litellm.BadRequestError: Invalid Message` exception.
**Q: What is the stacktrace for the `mistral` model?**
------------------------------------------------
A: The stacktrace for the `mistral` model is shown in full in the Stacktrace section above; it ends with `litellm.BadRequestError: Invalid Message passed in` raised from `ollama_pt` in `litellm/litellm_core_utils/prompt_templates/factory.py`.
**Q: What is the stacktrace for the `llama3.2` model?**
--------------------------------------------------------
A: The stacktrace for the `llama3.2` model is similar, but with a different error message:
```
litellm.exceptions.BadRequestError: litellm.BadRequestError: Invalid Message passed in {'role': 'system', 'content': "You are a helpful AI assistant.\nTo execute code on the user's machine, write a markdown code block. Specify the language after the ```. You will receive the output. Use any programming language."}
```
**Q: How can I fix the issue?**
-------------------------------
A: Modify the `ollama_pt` function so that it consumes `system` messages when building the Ollama prompt instead of raising `litellm.BadRequestError` for them, as sketched in the Solution section above.
**Q: What is the expected behavior after fixing the issue?**
-------------------------------------------------------------
A: After fixing the issue, Open Interpreter should start with the selected Ollama model and let you execute code in the local environment, without raising `litellm.BadRequestError: Invalid Message`.