Description
I'm getting the error shown under Evidence below even though I explicitly specified that my agents should use gpt-4o-mini, which is actually the default, but apparently something is broken.
Steps to Reproduce
1. crewai create crew some_crew
2. Pass a variable that makes the prompt exceed a context length of 8192 tokens (a simplified version of the crew is under Screenshots/Code snippets below).
3. Run the crew; OpenAI returns an error saying the model in use has a context length of 8192 tokens.
Expected behavior
When the llm parameter is specified on an agent, it should actually be used!
Screenshots/Code snippets
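A simplified sketch of the crew that reproduces this; the role, task, and the long_document input are placeholders rather than my real project code:

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Summarize the provided document: {long_document}",
    backstory="An analyst who works through long documents.",
    llm="gpt-4o-mini",  # explicitly requested model, apparently ignored
)

summarize = Task(
    description="Summarize {long_document} in a few paragraphs.",
    expected_output="A short summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[summarize])

# Passing an input large enough to exceed 8192 tokens triggers the 400 error above.
result = crew.kickoff(inputs={"long_document": "..."})
```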
Operating System
Ubuntu 24.04
Python Version
3.12
crewAI Version
0.63.6
crewAI Tools Version
0.63.6
Virtual Environment
Venv
Evidence
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 26827 tokens (26827 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
Possible Solution
I assume it's related to your recent migration to LiteLLM.
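One way to narrow this down (a sketch only; I'm assuming the LLM wrapper class that came with the LiteLLM migration is exported from crewai in this version): if passing an explicit LLM object works while the plain "gpt-4o-mini" string does not, the problem is likely in how the string is resolved through LiteLLM.

```python
from crewai import Agent, LLM  # assumption: LLM is exported since the LiteLLM migration

explicit_llm = LLM(model="gpt-4o-mini")

researcher = Agent(
    role="Researcher",
    goal="Summarize the provided document: {long_document}",
    backstory="An analyst who works through long documents.",
    llm=explicit_llm,  # explicit object instead of the "gpt-4o-mini" string
)
```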
Additional context
When I use the old way of declaring agents, it works fine (see the sketch below).
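For clarity, this is roughly what I mean by the old way: passing a LangChain chat model directly, which earlier crewAI versions accepted. Exact package names are from my environment and may differ:

```python
from crewai import Agent
from langchain_openai import ChatOpenAI  # pre-LiteLLM style: pass a LangChain chat model

researcher = Agent(
    role="Researcher",
    goal="Summarize the provided document: {long_document}",
    backstory="An analyst who works through long documents.",
    llm=ChatOpenAI(model="gpt-4o-mini"),  # this declaration respects the model
)
```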