I am a faculty member at a US university. I got interested in CrewAI because we have an upcoming project in which we evaluate our courses and compare them against industry trends and so on for relevance discussions. I thought it might work for this use case.
I decided to implement a simple grading-only test to see how it goes. I created three “agents”: one to pull a student’s discussion posts from the discussion forum, one to grade them against a rubric and grading instructions, and one to use the grading results to craft a feedback response, plus one more at the end that looks at all the results and provides a summary for the instructor. It worked fine.
The issue is that what I implemented is just a series of LLM calls and not much more. The first call takes the forum export and returns that student’s specific work. The grader takes that plus the rubric and grading instructions and returns an evaluation. The feedback writer takes that plus instructions on tone and so on and returns an email. I could easily do all of this manually with custom GPTs or Gemini Gems. It is nice automation, but I am not seeing the agent angle.
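Roughly, my test looks like the sketch below (simplified; real prompts, the forum-export handling, and the final summary agent are omitted, and the names, file paths, and inputs are made up — the exact arguments may also differ by CrewAI version):

```python
from crewai import Agent, Task, Crew, Process

retriever = Agent(
    role="Forum post retriever",
    goal="Pull one student's discussion posts out of the forum export",
    backstory="You return only the requested student's work.",
)
grader = Agent(
    role="Grader",
    goal="Grade the posts against the rubric and grading instructions",
    backstory="You are a fair, consistent grader.",
)
feedback_writer = Agent(
    role="Feedback writer",
    goal="Turn the grading results into a feedback email",
    backstory="You write supportive, specific feedback in the requested tone.",
)

pull_posts = Task(
    description="From this forum export, extract the posts by {student}:\n{forum_export}",
    expected_output="The student's posts, verbatim",
    agent=retriever,
)
grade_posts = Task(
    description="Grade the posts using this rubric and these instructions:\n{rubric}",
    expected_output="A score per rubric criterion with a short justification",
    agent=grader,
    context=[pull_posts],          # receives the retriever's output
)
write_feedback = Task(
    description="Draft a feedback email to the student. Tone: {tone}",
    expected_output="A short feedback email",
    agent=feedback_writer,
    context=[grade_posts],         # receives the grader's output
)

crew = Crew(
    agents=[retriever, grader, feedback_writer],
    tasks=[pull_posts, grade_posts, write_feedback],
    process=Process.sequential,    # fixed order: one LLM call per task, outputs passed along
)

result = crew.kickoff(inputs={
    "student": "Jane Doe",
    "forum_export": open("forum_export.txt").read(),
    "rubric": open("rubric.txt").read(),
    "tone": "encouraging but specific",
})
print(result)
```

Each task runs once, in the order I listed, with the previous output passed forward. That is exactly the pipeline feeling I am describing.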
For me, agents imply:
A goal or objective.
The ability to plan or decompose that goal into tasks.
Iterative reasoning with feedback loops.
Some notion of state and progress.
A stopping condition that is internally determined rather than externally scripted.
That implies loops, reflection, self-correction, tool-use decisions, and termination logic that emerges from the agent’s own reasoning rather than from being told specifically what to do.

Is the difference I am seeing here because of my implementation? My real project of looking at courses and their relevance wouldn’t be all that different: it would still be a bunch of calls to gather various bits of information, followed by a call to an LLM to evaluate all of it together.
Don't get me wrong: if CrewAI is not really an agent framework but an automated, managed workflow of LLM calls, there is nothing wrong with that. This was helpful to me, and the other project would also benefit from the automation. I just want to understand the terms and what I am doing. If I have left some capabilities unexplored and can tap into more agentic behavior as described above, that's great to learn.
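For concreteness, the sketch below is roughly what I imagine “more agentic” would look like: the agent gets tools and a loop budget and decides for itself what to call and when it is done, and a manager plans the work instead of me scripting the order. I haven’t tried this, so the exact option names may be off (and the model name and file names are placeholders):

```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import FileReadTool   # example tool; the agent decides if/when to call it

grader = Agent(
    role="Grader",
    goal="Gather whatever evidence you need and grade the student against the rubric",
    backstory="You decide which files to read and when you have enough to grade.",
    tools=[FileReadTool()],     # tool-use decisions are the agent's, not scripted
    allow_delegation=True,      # can hand subtasks to other agents
    max_iter=10,                # loop budget; within it the agent decides when it is done
)
feedback_writer = Agent(
    role="Feedback writer",
    goal="Turn grading results into feedback for the student",
    backstory="You write clear, supportive feedback.",
)

grade_and_respond = Task(
    description=(
        "Grade {student}'s discussion posts against rubric.txt and draft feedback. "
        "Plan the steps yourself."
    ),
    expected_output="Rubric scores plus a feedback email",
)

crew = Crew(
    agents=[grader, feedback_writer],
    tasks=[grade_and_respond],
    process=Process.hierarchical,   # a manager LLM decomposes and routes the work
    manager_llm="gpt-4o",           # manager model; may need an LLM object depending on version
)

print(crew.kickoff(inputs={"student": "Jane Doe"}))
```

Is something along those lines the intended way to get the planning, tool-use, and self-determined stopping behavior, or am I misreading what the framework is for?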