Could you clarify why you’re arguing that an agent needs to be a single model (really, a single instance) to be “autonomous”?
For example, why does the model that makes the kill decision need to be the same one that flies the drone? I imagine that’s the use case the Pentagon is most interested in.

It’s the only book I’ve tried reading where I couldn’t even make it through the third chapter. It is so remarkably bad. Like, I don’t understand how it could possibly be popular, it’s that bad. I had heard his books were very “reddit,” but holy fuck, it was so much worse than I thought. You will be happier not reading it, or presumably any of his other books.