
change ai agent human collab problem to amount of changes #356

@samuelstroschein

Description

Context

The current phrasing on AI-to-human collaboration mentions hallucinations as the problem, but that's not the real problem.

From https://www.loom.com/share/5f19135a5758465cae2cac8232d9e7ff?t=520&sid=8ee62b13-625f-4124-a771-1601ed2b064a

Comment

we need positive framing here (and also update the docs!)

compare:

- agents are powerful but hallucinate
- agents make mistakes
- bad bad bad agent

No, agents are amazing. The problem is not that they hallucinate or make mistakes. It's the sheer amount of changes they create. If an agent does 1 change a day, you can manually diff and verify that everything is okay.

BUT agents create 1000s of changes in 1 day. THAT's where Lix helps. See and control what agents do (OMG SUCH A GOOD TAGLINE), irrespective of whether they make mistakes.
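To make the "see and control what agents do" angle concrete in the docs, a minimal sketch like the one below could help. Everything here (the `Change` type, `changeLog`, the `ai-agent` author) is a hypothetical illustration, not the actual Lix API; the point is only that a queryable change log scales to thousands of agent changes where manual diffing does not.

```ts
// Hypothetical sketch, NOT the actual Lix API: a change record type and a
// filter over a day's change log, illustrating "see and control what agents do".

type Change = {
  id: string;
  author: string; // "ai-agent" for agent-made changes, or a human account
  file: string;
};

// One day's change log; a real agent session would produce thousands of entries.
const changeLog: Change[] = [
  { id: "c1", author: "ai-agent", file: "docs/intro.md" },
  { id: "c2", author: "samuel", file: "README.md" },
  { id: "c3", author: "ai-agent", file: "docs/getting-started.md" },
];

// Pull out everything the agent did so a human can review it in one pass,
// instead of manually diffing file by file.
const agentChanges = changeLog.filter((change) => change.author === "ai-agent");

console.log(`${agentChanges.length} agent changes to review today`);
```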

Transcript

08:57

I think the core problem that Lix solves is that agents generate so many, so many changes. And it's hard to keep track of what AI agents do.

09:06

Sure, if they hallucinate, that's just an outcome of them doing so many changes. I mean, look at it this way: assume that an AI agent does one change a day.

09:13

Is it problematic that they're hallucinating? No, it's not, because you have the time to look at that one change. But if they do a thousand changes in a day, that becomes problematic, because sure, now, do you have time to go through a thousand changes?

09:27

Like, how do you even know? That's the main problem, how do you even know what changes the AI agent did?

09:31

If you have one change a day, sure, you can manually compare the document and see what changed. But the main problem is the amount of changes.

09:38

And hallucinations are just an outcome. So, we start this section with "powerful but imperfect": AI agents are powerful, they are the future.

09:49

Part of that future is that AI agents create a shit ton of changes. And staying on top of what AI agents do is extremely hard.

10:00

This is where Lix helps. No hallucinations, not talking about the problems. People will solve the problems with AI agents. They will hallucinate less, and so on and so forth.

10:10

The amount of changes they create is just overwhelming. Show a demo here. So, at the step where you are, you're setting up, you're giving this awesome hook.

Proposal

Change the docs to say that the amount of changes agents generate is the problem, not whether they hallucinate.
