Are these two warnings important? #862

Open
AK51 opened this issue Jul 3, 2024 · 10 comments

Comments

@AK51

AK51 commented Jul 3, 2024

Hi,
I am trying to make a discussion group using the hierarchical process.
crewAI can kind of give me the answer, but I see these two (actually three) warnings often. Is it a parameter setting issue? I think the role is wrong... How can I make an Albert Einstein agent?
May I know a proper way to make a discussion group where everyone can give their opinions in his/her field, and then come up with a reasonable output? It seems like they do not really discuss in my code... Thx

Error executing tool. Co-worker mentioned not found, it must to be one of the following options:
I encountered an error while trying to use the tool. This was the error: AgentTools.delegate_work() missing 1 required positional argument: 'context'.
 Tool Delegate work to co-worker accepts these inputs: Delegate work to co-worker(task: str, context: str, coworker: Optional[str] = None, **kwargs) - Delegate a specific task to one of the following co-workers: [You are Albert Einstein, You are Paul Dirac, You are JJ Thomson, You are Max Planck, University professor in Physics, University student major in Physics]
The input to this tool should be the co-worker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.
Action 'Delegate work to co-worker(task="Explain how the speed of light is related to Maxwell's equations", context="The speed of light is given by c = 1/(√(με_0)), where c is the speed of light in vacuum, μ and ε are the permeability and permittivity of space respectively. I need a detailed explanation of how this relationship is established in Maxwell's equations.", coworker="Max Planck")' don't exist, these are the only available Actions:
 Delegate work to co-worker: Delegate work to co-worker(task: str, context: str, coworker: Optional[str] = None, **kwargs) - Delegate a specific task to one of the following co-workers: [Albert Einstein, Paul Dirac, JJ Thomson, Max Planck, University professor in Physics, University student major in Physics]
The input to this tool should be the co-worker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.
Ask question to co-worker: Ask question to co-worker(question: str, context: str, coworker: Optional[str] = None, **kwargs) - Ask a specific question to one of the following co-workers: [Albert Einstein, Paul Dirac, JJ Thomson, Max Planck, University professor in Physics, University student major in Physics]
The input to this tool should be the co-worker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.
@gadgethome

Hi, the co-worker issue should be fixed in the latest release. Can you please upgrade and then retest? Thanks.
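
Something like this should pull the latest release (assuming you installed with the tools extra):

pip install --upgrade 'crewai[tools]'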

@AK51
Author

AK51 commented Jul 3, 2024

Hi, thanks for your reply.
OK, I updated it again:
pip install 'crewai[tools]'

I am working on a physics research AI group. So far, I still see these warnings. Thx

Action 'Delegate work to University professor in Physics' don't exist, these are the only available Actions:
 Delegate work to co-worker: Delegate work to co-worker(task: str, context: str, coworker: Optional[str] = None, **kwargs) - Delegate a specific task to one of the following co-workers: [Albert Einstein, JJ Thomson, Max Planck, University professor in Physics, University student major in Physics]
The input to this tool should be the co-worker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.
Ask question to co-worker: Ask question to co-worker(question: str, context: str, coworker: Optional[str] = None, **kwargs) - Ask a specific question to one of the following co-workers: [Albert Einstein, JJ Thomson, Max Planck, University professor in Physics, University student major in Physics]
The input to this tool should be the co-worker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.
 Thought: I have an understanding of how electricity is related to the speed of light within General Relativity, but to better elucidate this concept, I should ask a question about electric and magnetic fields and their energy content.

Action: Ask question to co-worker
Action Input: {"co-worker": "University professor in Physics", "question": "Could you please explain the relationship between electric and magnetic fields, and their energy content within the context of General Relativity?"} 

I encountered an error while trying to use the tool. This was the error: AgentTools.ask_question() missing 1 required positional argument: 'context'.
 Tool Ask question to co-worker accepts these inputs: Ask question to co-worker(question: str, context: str, coworker: Optional[str] = None, **kwargs) - Ask a specific question to one of the following co-workers: [Albert Einstein, Paul Dirac, Max Planck, University professor in Physics, University student major in Physics]
The input to this tool should be the co-worker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.


Here is part of my code. I use the sequential process now. Is this a correct way to create a discussion group? Thx

crew = Crew(
    agents=[albert_einstein, paul_dirac, jj_thomson, max_planck, professor, andy],
    tasks=[task0,task1,task2,task3,task4],
    verbose=2,
    manager_llm=llm_model,
    share_crew=True,   
    process=Process.sequential,
)
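
For reference, I think the hierarchical variant would look something like this (just my sketch, not verified; as I understand it, manager_llm only takes effect when process=Process.hierarchical):

# Sketch: let a manager LLM coordinate the agents instead of running tasks in order
crew = Crew(
    agents=[albert_einstein, paul_dirac, jj_thomson, max_planck, professor, andy],
    tasks=[task0, task1, task2, task3, task4],
    verbose=2,
    manager_llm=llm_model,          # consulted only by the hierarchical process
    process=Process.hierarchical,
)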

@theCyberTech
Collaborator

theCyberTech commented Jul 4, 2024

Can you please raise this in our Discord support channel? That way we can help you more efficiently.

Be sure to supply all your working code.

@lorenzejay
Collaborator

And can you share the full code of your crew, with agents and tasks defined, please?

@learner-crapy

(quoting @AK51's earlier comment, warnings, and code in full)

I also met this issue with local Ollama, but it works with the OpenAI API. I searched around and some people said it is because the local model does not have a large enough token length (context window). Have you solved that?
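
One thing I am considering trying (just a sketch, I have not confirmed it helps): giving the local model a larger context window when constructing the Ollama wrapper, since the delegation tool prompts are quite long:

from langchain_community.llms import Ollama

# Sketch: request a larger context window from the local model
llm = Ollama(
    model="phi3",
    base_url="http://localhost:11434",
    num_ctx=8192,  # assumption: the pulled model supports a context this large
)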

@lorenzejay
Collaborator

Can you confirm you're on the latest version of crewAI? I saw you ran pip install 'crewai[tools]'; did you also run pip install crewai?

@learner-crapy

pip install crewai

Hi, thanks for your response. I'm sure I installed the newest version; when I run pip show crewai, it shows 0.36.0, which is the current newest version. Here is my code. I use phi3 with Ollama; the picture shows the problem I met.

import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
from langchain_community.llms import Ollama

# Set environment variables for Ollama
os.environ['OPENAI_API_BASE'] = 'http://localhost:11434'
os.environ['OPENAI_MODEL_NAME'] = 'phi3'  # Adjust based on available model
os.environ["OPENAI_API_KEY"] = "NA"  # Assuming no API key is needed for local Ollama
os.environ["SERPER_API_KEY"] = "my key"
# Initialize the Ollama model
llm = Ollama(
    model="phi3",
    base_url="http://localhost:11434"
)

# Initialize the tools
search_tool = SerperDevTool()

# Define your agents with roles, goals, and the Ollama model
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You work at a leading tech think tank.
  Your expertise lies in identifying emerging trends.
  You have a knack for dissecting complex data and presenting actionable insights.""",
  verbose=True,
  allow_delegation=False,
  llm=llm,  # Use the Ollama model here
  tools=[search_tool]
)

writer = Agent(
  role='Tech Content Strategist',
  goal='Craft compelling content on tech advancements',
  backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
  You transform complex concepts into compelling narratives.""",
  verbose=True,
  allow_delegation=True,
  llm=llm  # Use the Ollama model here
)

# Create tasks for your agents
task1 = Task(
  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.""",
  expected_output="Full analysis report in bullet points",
  agent=researcher
)

task2 = Task(
  description="""Using the insights provided, develop an engaging blog
  post that highlights the most significant AI advancements.
  Your post should be informative yet accessible, catering to a tech-savvy audience.
  Make it sound cool, avoid complex words so it doesn't sound like AI.""",
  expected_output="Full blog post of at least 4 paragraphs",
  agent=writer
)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[task1,task2],
  verbose=2, # You can set it to 1 or 2 for different logging levels
  process=Process.sequential
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
(screenshot of the error output attached)

@lorenzejay
Collaborator

Gotcha, so it's not the version. I noticed you're using phi3. If you run with gpt-x or llama3 locally, does it still have the same errors?
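
For example, something like this would swap the local model (a sketch; assuming llama3 is already pulled in Ollama):

import os
from langchain_community.llms import Ollama

# Sketch: point the same setup at a local llama3 instead of phi3
os.environ['OPENAI_MODEL_NAME'] = 'llama3'
llm = Ollama(model="llama3", base_url="http://localhost:11434")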

@learner-crapy

Gotcha, so it's not the version. I noticed you're using phi3. If you run with gpt-x or llama3 locally, does it still have the same errors?

I tried llama3:7b locally and got another issue, shown in the image below.

When I tried OpenAI GPT-4o, it works. It was weird with the local models; different models have different issues.

(screenshot of the issue attached)

@AK51
Author

AK51 commented Jul 8, 2024

Hi,

I use the local model Mistral. Do only local models have these warnings?
For my program, other local models do not really work; they keep looping...
I have tried llama 2 and 3.
Thx
