This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/uxl on 2025-06-02 13:57:19+00:00.


…Alongside the actually “advanced” voice mode demo from over a year ago. I would not be surprised if there is a Sora 2 that we don’t know about. o3 and o4-mini are already pretty damn good, but you know there must already be an o4-full and an o4 Pro.

Even if whatever o4-full is capable of is the farthest they’ve gotten with reasoning, all it takes is that + whatever model produces the level of creative depth in Altman’s tweet + Sora 2 + the real advanced voice mode + larger context windows - all integrated into a single UX package that automatically calls whatever makes sense - and “GPT-5” will be a slam dunk. My bet is on OpenAI to do exactly that.

My fingers are crossed for in-platform music generation as well, but that would just be icing. Anyway, I’m reminding everyone of that tweet because, to me, it’s the most glaring evidence that OpenAI still has something much better behind closed doors than many people suspect. That fiction - even if cherry-picked - is miles ahead of any other simulation of human writing I’ve ever read.