When using Autogen Studio to configure the model in Ollama, a 502 error occurs when executing the Agent

Introduction

Autogen Studio is a low-code interface for building and running multi-agent workflows, and it can be pointed at locally hosted models served by Ollama. However, users have reported encountering a 502 error when executing the Agent after configuring an Ollama-served model. In this article, we will delve into the details of this issue and explore possible solutions.

What happened?

The model was configured in Autogen Studio with the following JSON:

{
  "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
  "component_type": "model",
  "version": 1,
  "component_version": 1,
  "description": "deepseek-r1:1.5b",
  "label": "deepseek-r1:1.5b",
  "config": {
    "model": "deepseek-r1:1.5b",
    "model_info": {
      "vision": false,
      "function_calling": true,
      "json_output": false,
      "family": "unknown"
    },
    "base_url": "http://127.0.0.1:11434",
    "api_key": "ollama"
  }
}

The status of "Validate Team" is "success," but running "Run Team" with an input such as "Hello" fails with "Error code: 502." The base_url itself is reachable: opening http://127.0.0.1:11434 in a browser returns "Ollama is running." The model also works directly from Python:

from langchain_ollama import ChatOllama  # assumed import; ChatOllama lives in the langchain-ollama package

llm = ChatOllama(model="deepseek-r1:1.5b", base_url="http://127.0.0.1:11434")
llm.invoke("Hi~")
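
Note that ChatOllama talks to Ollama's native API, while Autogen Studio's OpenAIChatCompletionClient uses Ollama's OpenAI-compatible API, which Ollama serves under the /v1 path. A useful cross-check is therefore to call that endpoint directly. The following is a minimal sketch, assuming the openai package is installed and the model has been pulled:

# Minimal sketch: call Ollama's OpenAI-compatible endpoint directly,
# bypassing Autogen Studio. Assumes `pip install openai` and that
# deepseek-r1:1.5b has been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:11434/v1",  # the /v1 suffix is required here
    api_key="ollama",  # any non-empty string; Ollama does not check it
)

response = client.chat.completions.create(
    model="deepseek-r1:1.5b",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)

If this call succeeds while "Run Team" fails, the problem lies in the Studio-side configuration rather than in Ollama itself.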

Error Log

The log in the conda console shows:

Traceback (most recent call last):
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogenstudio\web\managers\connection.py", line 106, in start_stream
    async for message in team_manager.run_stream(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogenstudio\teammanager\teammanager.py", line 117, in run_stream
    async for message in team.run_stream(task=task, cancellation_token=cancellation_token):
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_base_group_chat.py", line 453, in run_stream
    await shutdown_task
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_base_group_chat.py", line 413, in stop_runtime
    await self._runtime.stop_when_idle()
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 745, in stop_when_idle
    await self._run_context.stop_when_idle()
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 120, in stop_when_idle
    await self._run_task
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 109, in _run
    await self._runtime._process_next()  # type: ignore
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 580, in _process_next
    raise e from None
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 527, in _process_publish
    await asyncio.gather(*responses)
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 522, in _on_message
    raise e
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 509, in _on_message
    return await agent.on_message(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_base_agent.py", line 113, in on_message
    return await self.on_message_impl(message, ctx)
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_sequential_routed_agent.py", line 48, in on_message_impl
    return await super().on_message_impl(message, ctx)
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_routed_agent.py", line 485, in on_message_impl
    return await h(self, message, ctx)
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_core\_routed_agent.py", line 268, in wrapper
    return_value = await func(self, message, ctx)  # type: ignore
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_chat_agent_container.py", line 53, in handle_request
    async for msg in self._agent.on_messages_stream(self._message_buffer, ctx.cancellation_token):
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_agentchat\agents\_assistant_agent.py", line 748, in on_messages_stream
    async for inference_output in self._call_llm(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_agentchat\agents\_assistant_agent.py", line 856, in _call_llm
    async for chunk in model_client.create_stream(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_ext\models\openai\_openai_client.py", line 760, in create_stream
    async for chunk in chunks:
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\autogen_ext\models\openai\_openai_client.py", line 904, in _create_stream_chunks
    stream = await stream_future
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\openai\resources\chat\completions\completions.py", line 1927, in create
    return await self._post(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\openai\_base_client.py", line 1767, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\openai\_base_client.py", line 1461, in request
    return await self._request(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\openai\_base_client.py", line 1547, in _request
    return await self._retry_request(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\openai\_base_client.py", line 1594, in _retry_request
    return await self._request(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\openai\_base_client.py", line 1547, in _request
    return await self._retry_request(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\openai\_base_client.py", line 1594, in _retry_request
    return await self._request(
  File "C:\Users\Omar\anaconda3\envs\autogen\lib\site-packages\openai\_base_client.py", line 1562, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 502

Which package was the bug in?

The bug was reported in AutoGen Studio (autogenstudio).

AutoGen library version.

The version of AutoGen Studio is 0.4.1.

Other library version.

No other library versions were provided.

Model used

The model used is deepseek-r1:1.5b.

Model provider

The model provider is Ollama.

Other model provider

No other model providers were specified.


When using Autogen Studio to configure the model in Ollama, a 502 error occurs when executing the Agent: Q&A

Q: What is the 502 error in Autogen Studio?

A: HTTP 502 ("Bad Gateway") indicates that a server acting as a gateway or proxy received an invalid response from an upstream server. The OpenAI Python client surfaces every 5xx status as openai.InternalServerError, which is why the traceback ends with "Error code: 502." In this case the error is raised when the agent tries to stream a chat completion from the configured model endpoint.

Q: What are the possible causes of the 502 error in Autogen Studio?

A: Possible causes of the 502 error in this setup include:

  • Wrong endpoint path: Autogen Studio's OpenAIChatCompletionClient speaks the OpenAI-compatible API, which Ollama serves under /v1. A base_url of http://127.0.0.1:11434 without the /v1 suffix is a commonly reported cause of this failure (see the sketch after this list).
  • Incorrect configuration: the configuration JSON may be incomplete, or the model name may not match a model that has been pulled.
  • Server issues: the Ollama server may be down, still loading the model, or out of memory.
  • Network issues: a proxy environment variable (HTTP_PROXY/HTTPS_PROXY) can route localhost traffic through a gateway that answers with 502.
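
If the missing /v1 suffix is indeed the culprit, the fix is a one-line change to base_url. As a sketch of what the corrected model client looks like when constructed directly in Python (the keyword arguments mirror the JSON config above and assume the autogen-ext 0.4.x API):

# Sketch: the failing model client rebuilt with a corrected base_url
# (autogen-ext 0.4.x assumed; arguments mirror the JSON config above).
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="deepseek-r1:1.5b",
    base_url="http://127.0.0.1:11434/v1",  # /v1 targets the OpenAI-compatible API
    api_key="ollama",
    model_info={
        "vision": False,
        "function_calling": True,
        "json_output": False,
        "family": "unknown",
    },
)

In Autogen Studio itself, the equivalent change is editing the base_url field of the model's JSON configuration to end in /v1.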

Q: How can I troubleshoot the 502 error in Autogen Studio?

A: To troubleshoot the 502 error in Autogen Studio, work through these checks (the first three are scripted in the sketch after this list):

  1. Check the server: confirm that http://127.0.0.1:11434 returns "Ollama is running."
  2. Check the model: confirm the model has been pulled (ollama list) and that its name in the JSON matches exactly.
  3. Check the endpoint: call the OpenAI-compatible endpoint directly, outside Autogen Studio, to separate Studio problems from server problems.
  4. Check the network: make sure no proxy environment variable intercepts requests to 127.0.0.1; exclude it via NO_PROXY if necessary.
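
The first three checks can be scripted. A minimal sketch, assuming the requests package and Ollama's default port (/api/tags and /v1/models are both documented Ollama routes):

# Sketch: run the server, model, and endpoint checks outside Autogen Studio.
# Assumes `pip install requests` and Ollama on its default port.
import requests

BASE = "http://127.0.0.1:11434"

# 1. Server up? The root path answers with "Ollama is running".
print(requests.get(BASE, timeout=5).text)

# 2. Model pulled? Ollama's native API lists local models.
tags = requests.get(f"{BASE}/api/tags", timeout=5).json()
print([m["name"] for m in tags.get("models", [])])

# 3. OpenAI-compatible endpoint reachable? Note the /v1 prefix.
print(requests.get(f"{BASE}/v1/models", timeout=5).json())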

Q: How can I resolve the 502 error in Autogen Studio?

A: To resolve the 502 error in Autogen Studio, try the following in order:

  1. Fix the base_url: append /v1 (http://127.0.0.1:11434/v1) so the client targets Ollama's OpenAI-compatible API, then validate and run the team again.
  2. Update Autogen Studio: upgrade to the latest release (pip install -U autogenstudio), since model-client fixes land regularly.
  3. Restart the server: restart Ollama and re-pull the model if needed (ollama pull deepseek-r1:1.5b).
  4. Clear proxy settings: unset HTTP_PROXY/HTTPS_PROXY for the session, or add 127.0.0.1 to NO_PROXY.

Q: What are the best practices for using Autogen Studio with Ollama?

A: Best practices for using Autogen Studio with Ollama include:

  • Point base_url at the OpenAI-compatible endpoint: the OpenAIChatCompletionClient expects an OpenAI-style API, which Ollama serves under /v1.
  • Declare model_info accurately: for models Autogen does not recognize (family "unknown"), capabilities such as function_calling and vision must be filled in by hand and should match what the model actually supports.
  • Keep an out-of-band check handy: a direct call to the endpoint from Python makes it easy to tell Studio-side problems from server-side problems.
  • Record versions: note the Autogen Studio, autogen-ext, and Ollama versions when filing bug reports.

Q: Where can I find more information about Autogen Studio and Ollama?

A: You can find more information in the official AutoGen documentation (https://microsoft.github.io/autogen/) and the Ollama repository (https://github.com/ollama/ollama), as well as both projects' GitHub issue trackers and community forums.