Releases: crewAIInc/crewAI
0.65.2
0.64.0
v0.63.6
- Updating project templates
v0.63.5
- Bringing back support for the o1 family, and any model that doesn't support stop words
- Updating dependencies
- Updating logs
- Updating docs
v0.63.2
- Adding OPENAI_BASE_URL as fallback
- Adding proper LLM import
- Updating docs
v0.63.1
- Small bug fix to support future CrewAI deploy
v0.63.0
- New LLM class to interact with LLMs (leveraging LiteLLM)
- Adding support to custom memory interfaces
- Making GPT-4o-mini the default model
- Updating docs
- Updating dependencies
- Bug fixes
- Remove redundant task creation in kickoff_for_each_async
v0.61.0
- Updating dependencies
- Printing the max RPM message in a different color
- Updating all cassettes for tests
- Always ending on a user message, to better support certain models like Bedrock ones
- Overall small bug fixes
v0.60.0
- Removing LangChain and Rebuilding Executor
- Getting all of our tests back to green
- Adding the ability to skip the system prompt via use_system_prompt on the Agent
- Adding the ability to skip stop words (to support o1 models) via use_stop_words on the Agent
- Sliding context window renamed to respect_context_window, and enabled by default
- Delegation is now disabled by default
- Inner prompts were slightly changed as well
- Overall improvements to reliability and quality of results
- New support for:
- A maximum number of requests per minute
- A maximum number of iterations before giving a final answer
- Properly taking advantage of system prompts
- Token calculation flow
- New logging of the crew and agent execution
v0.55.2
What's Changed
- Adding ability for autocomplete
- Add name and expected_output to TaskOutput
- New crewai install CLI
- New crewai deploy CLI
- Cleaning up of the Pipeline feature
- Updated docs
- Dev experience improvements, like a bandit CI pipeline
- Fix bugs:
- Ability to use planning_llm
- Fix YAML-based projects
- Fix Azure support
- Add support for Python 3.10
- Moving away from Pydantic v1
- Ability to use