
[BUG] Configuration of llm variable for agent doesn't work #1356

Closed

racso-dev opened this issue Sep 26, 2024 · 4 comments
Labels
bug Something isn't working

Comments


racso-dev commented Sep 26, 2024

Description

I'm getting the following error even though I explicitly specified that my agents should use gpt-4o-mini, which is actually the default, but apparently something is broken.

openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 26827 tokens (26827 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Steps to Reproduce

  1. crewai create crew some_crew
  2. pass an input variable large enough to exceed a context length of 8,192 tokens (a sketch follows this list)
  3. run the crew; you should get the error from OpenAI telling you that you're using a model with a context length of 8,192 tokens
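
A minimal sketch of steps 2 and 3, assuming the default project produced by crewai create crew some_crew (the some_crew module path and the topic input name are assumptions based on that scaffold, not taken from this report):

# Sketch only: module path and input name are assumptions based on the
# default `crewai create crew` scaffold.
from some_crew.crew import SomeCrew

big_input = "lorem ipsum " * 10_000  # far beyond 8,192 tokens

SomeCrew().crew().kickoff(inputs={"topic": big_input})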

Expected behavior

When the llm parameter is specified on an agent, it should be used.

Screenshots/Code snippets

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from langchain_openai import ChatOpenAI

@CrewBase
class SomeCrew():
	"""Some crew"""
	agents_config = 'config/agents.yaml'
	tasks_config = 'config/tasks.yaml'

	@agent
	def writer(self) -> Agent:
		return Agent(
			config=self.agents_config['writer'],
			verbose=True,
			llm=ChatOpenAI(temperature=0.7, model="gpt-4o-mini"),  # model explicitly pinned here, yet ignored
		)

	@task
	def writing_task(self) -> Task:
		return Task(
			config=self.tasks_config['writing_task'],
		)
		
	@crew
	def crew(self) -> Crew:
		"""Creates the Autoseo crew"""
		return Crew(
			agents=self.agents,
			tasks=self.tasks,
			process=Process.sequential,
			verbose=True,
			memory=True,
			output_log_file='crew.log',
		)

Operating System

Ubuntu 24.04

Python Version

3.12

crewAI Version

0.63.6

crewAI Tools Version

0.63.6

Virtual Environment

Venv

Evidence

openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 26827 tokens (26827 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Possible Solution

I assume it's related to your recent migration to LiteLLM.

Additional context

When I use the old style of declaring agents, it works fine.
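
For reference, a minimal sketch of that older direct style, with the agent, task, and crew instantiated by hand (the role/goal/backstory and task fields below are placeholders, not the real config from this report):

from crewai import Agent, Crew, Process, Task
from langchain_openai import ChatOpenAI

# Placeholder role/goal/backstory; the point is that llm is honored here.
writer = Agent(
    role="Writer",
    goal="Write the requested content",
    backstory="An experienced technical writer",
    llm=ChatOpenAI(temperature=0.7, model="gpt-4o-mini"),
    verbose=True,
)

writing_task = Task(
    description="Write an article about the given topic",
    expected_output="A complete article",
    agent=writer,
)

crew = Crew(
    agents=[writer],
    tasks=[writing_task],
    process=Process.sequential,
    verbose=True,
)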

racso-dev added the bug label Sep 26, 2024
joaomdmoura (Collaborator) commented

Good catch, looking into it!

joaomdmoura (Collaborator) commented Sep 26, 2024

Trying to replicate this. One thing I realized is that gpt-4o-mini has a 128k context window, so the 8,192-token error seems odd.
Will dig deeper.

joaomdmoura (Collaborator) commented

Version 0.64.0 is out and fixes this :D
Let me know if it's still an issue, but I was able to replicate it and fix it.
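
For anyone landing here later: on the LiteLLM-based versions, the model can also be set through crewAI's own LLM wrapper. A minimal sketch, assuming the LLM class is exposed in your version (role/goal/backstory are placeholders):

from crewai import LLM, Agent

# Sketch: crewai.LLM passes a LiteLLM-style model string through to the provider.
llm = LLM(model="gpt-4o-mini", temperature=0.7)

agent = Agent(
    role="Writer",
    goal="Write the requested content",
    backstory="An experienced technical writer",
    llm=llm,
    verbose=True,
)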

racso-dev (Author) commented

It now seems to be working fine indeed, thanks!
