☆ Yσɠƚԋσʂ ☆

  • 15.8K Posts
  • 11.8K Comments
Joined 6 years ago
Cake day: March 30, 2020

  • I think you’ll still need a human in the loop, because only a human can decide whether the code is doing what’s intended. The nature of the job is going to change dramatically though. My prediction is that the focus will shift to writing declarative specifications that act as a contract for the LLM. There are also kinds of features that are very difficult to specify and verify formally; anything dealing with side effects or external systems is a good example. We have good tools for formally proving data consistency using type systems and provers, but real-world applications have to deal with the outside world to do anything useful. So most likely the human will work at a higher level, focusing on what the application does in a semantic sense, while the agents handle the underlying implementation details.


  • I’ve been using this pattern in some large production projects, and it’s been a real life saver for me. Like you said, once the code gets large, it’s just too hard to keep track of everything because it overflows what you can effectively hold in your head. At that point you start guessing when you make decisions, which inevitably leads to weird bugs. The other huge benefit is that it makes it far easier to deal with changing requirements. If you have a graph of the steps you’re doing, it’s trivial to add, remove, or rearrange steps. You can inspect it visually and verify that the new workflow is doing what you want (a rough sketch of the idea follows below).
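
    A minimal sketch of the kind of explicit step graph the comment describes, assuming steps are plain functions over a shared state dict and ordering is declared separately from the step logic. The `Workflow` class and all step names here are hypothetical illustrations, not taken from the original discussion.

    ```python
    # Hypothetical sketch: steps are nodes, "after" edges declare ordering,
    # so the pipeline can be inspected and rearranged without touching the
    # step implementations themselves.
    from collections import defaultdict, deque


    class Workflow:
        def __init__(self):
            self.steps = {}                 # name -> callable(state) -> state
            self.edges = defaultdict(set)   # name -> downstream step names

        def add_step(self, name, fn, after=()):
            self.steps[name] = fn
            for dep in after:
                self.edges[dep].add(name)

        def remove_step(self, name):
            self.steps.pop(name, None)
            self.edges.pop(name, None)
            for downstream in self.edges.values():
                downstream.discard(name)

        def _order(self):
            # Topological sort: every step runs after its declared dependencies.
            indegree = {n: 0 for n in self.steps}
            for dsts in self.edges.values():
                for dst in dsts:
                    if dst in indegree:
                        indegree[dst] += 1
            queue = deque(n for n, d in indegree.items() if d == 0)
            order = []
            while queue:
                n = queue.popleft()
                order.append(n)
                for dst in self.edges[n]:
                    if dst in indegree:
                        indegree[dst] -= 1
                        if indegree[dst] == 0:
                            queue.append(dst)
            return order

        def run(self, state):
            for name in self._order():
                state = self.steps[name](state)
            return state


    # Usage: each step is a small function over shared state.
    wf = Workflow()
    wf.add_step("load", lambda s: {**s, "data": [3, 1, 2]})
    wf.add_step("sort", lambda s: {**s, "data": sorted(s["data"])}, after=["load"])
    wf.add_step("report", lambda s: {**s, "summary": sum(s["data"])}, after=["sort"])
    print(wf.run({}))  # {'data': [1, 2, 3], 'summary': 6}
    ```

    Because the graph is data rather than control flow buried in code, adding, removing, or reordering a step is a one-line change, and the `_order()` output can be printed or rendered to check a new workflow before running it.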