
That's why we give the model the chance to call expand() if it needs more context. We know it's counterintuitive, so we will add benchmarks to the repo soon.

From our observations, performance depends on the task and the model itself; the effect is most visible on long-running tasks.




How does the model know it needs more context?

We provide the model with a tool we call expand(), which lets it access more context whenever it needs to.

We append a note directly to the outputs, so the model knows exactly where the lines were removed from.
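A minimal sketch of that idea, assuming the mechanism is roughly as described: compaction replaces elided lines with an inline marker naming an expand() tool, and calling that tool returns the removed lines. All names (compact, expand, the marker format) are illustrative, not the actual implementation.

```python
# Hypothetical sketch: compacted output keeps an inline marker where lines
# were removed, so the model can call expand(id) to retrieve them.
_elided: dict[int, list[str]] = {}  # marker id -> removed lines

def compact(output: str, keep: int = 3) -> str:
    """Keep the first/last `keep` lines; replace the middle with a marker."""
    lines = output.splitlines()
    if len(lines) <= 2 * keep:
        return output
    marker_id = len(_elided)
    _elided[marker_id] = lines[keep:-keep]
    marker = (f"[... {len(lines) - 2 * keep} lines elided; "
              f"call expand({marker_id}) to retrieve them ...]")
    return "\n".join(lines[:keep] + [marker] + lines[-keep:])

def expand(marker_id: int) -> str:
    """Tool exposed to the model: return the elided lines for a marker."""
    return "\n".join(_elided[marker_id])
```

For example, compacting a ten-line output with keep=3 leaves six lines plus one marker, and expand(0) returns the four removed lines.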


Presumably in much the same way it knows it needs to use tool calls to reach its objective.

I'd argue not: with tool calls, the model always has a description of what each tool can be used for. Here, there's plenty of intermediate but still important information that could be compacted away, and unless there is a logical reason to go looking for it, the model doesn't know what it doesn't know.


