labrador 2 days ago

GPT-5 is better as a conversational thinking partner than GPT-4o. Its answers are more concise, focused and informative. The conversation flows. GPT-5 feels more mature than GPT-4o, with less juvenile "glazing."

I can't speak to other uses such as coding, but as a sounding board GPT-5 is better than GPT-4o, which was already pretty good. GPT-5's personality has definitely shifted to a more professional tone which I like.

I do understand why people miss the more sycophantic personality of GPT-4o, but I'm not one of them.

  • saulpw 2 days ago

    That sounds 10% better, not 10x better. That's close enough to 'peaked'.

    • labrador 2 days ago

      Agreed. Sam Altman definitely over-hyped GPT-5. It's not so much more capable that it deserves a major version number bump.

      • torginus 2 days ago

        I still think it's a solid achievement, but weirdly positioned. It's their new poverty-spec model, available to everyone, and likely not too large.

        It's decently good at coding and math, beating the current SOTA, Opus 4.1, by a small margin, while being much cheaper and faster to run, hinting at a much smaller model.

        However, it's no better at trivia or writing emails or essays, which is what regular people who use ChatGPT through the website actually care about, making this launch come off as awkward.

      • 3836293648 2 days ago

        Surely a major version bump says more about the internals than the capabilities

        • labrador 2 days ago

          I see your point from a software engineering perspective, but unfortunately that's not how the public sees it. The common perception is that we are making leaps toward AGI. I never thought AGI was close, so I'm not disappointed, but a lot of people seem to be. On the other hand, I've seen comments like "I guess my fears of a destructive super-intelligence were over-blown."

      • hoppp 2 days ago

        They're gonna release new models like Apple releases iPhones: same stuff, little tweaks and improvements.

      • kjkjadksj 2 days ago

        People seem to make this exact comment on here at every gpt release. I wonder what gpt we ought to actually be on? 1.4.6?

        • coldtea 2 days ago

          3.something.

          They make "this exact comment on here at every gpt release" because every GPT release is touted as revolutionary, and it's an increasingly smaller bump each time.

        • labrador 2 days ago

          In retrospect I would have named it as follows:

          GPT-4 -> GPT-4 Home

          GPT-5 -> GPT-4 Enterprise

          Because my impression after using GPT-5 is that it's designed mainly to satisfy the needs of Microsoft. Microsoft has no interest in making AI therapists or AI companions, probably because of the legal liability. Also, that's outside their core business.

    • dileeparanawake a day ago

      Yep, right now I'm not even sure I'd say it's 10% better (at the things I use it for). It feels overhyped based on the launch and what every influencer covered in their review (maybe obviously). Maybe it will get better as they sort out routing and other kinks. Right now there feels like a big gap between marketing and reality.

    • al_borland 2 days ago

      By definition, if something is still getting 10% better each year it hasn’t yet peaked. Not even close.

      • coldtea 2 days ago

        Getting 10% better over this last year compared to, say, 100% and 50% and 25% better the 4-5 years before?

        I'd say that points to it being very close to peaked.

        Nobody said anything about a steady 10% year-over-year being the case forever...

        • anuramat 21 hours ago

          kinda like how theoretical physics peaked in 1878, as predicted by Philipp von Jolly? why does everyone feel the urge to extrapolate a vibe-based metric from three points? isn't scientific/technological progress inherently unpredictable anyway?

          • coldtea 16 hours ago

            No, more like how theoretical physics peaked in 1960s.

            >why does everyone feel the urge to extrapolate a vibe-based metric based on three points?

            Because marketers of AI extrapolate even worse to hype it, and a counter-correction is needed...

      • BriggyDwiggs42 16 hours ago

        It’s just extrapolating the asymptote man.

hirvi74 2 days ago

I'm noticing significant differences already.

Code seems to work on the first try more often for me too.

Perhaps my favorite change so far is the difference in verbosity. Some of the responses I receive when asking trivial questions are now merely a handful of sentences instead of a dissertation. However, dissertation mode comes back when appropriate, which is also nice.

Edit: Slightly tangential, but I forgot to ask, do any of you all have access to the $200/month plan? If so, how does that model compare to GPT-5?

  • jostylr a day ago

    I have been using the $200 plan for the past day in Codex CLI and so far find that it is easy to work with (nothing crazy, just web app stuff). The context window means I no longer have to worry about running out of room, and it seems to stay on track just fine so far. I have it incrementally coding pieces in manageable chunks to understand and verify. No limits so far, as promised. In contrast, using Claude Code (at the $100 level) I run into limits after just a couple of iterations of asking, and the context window gets problematic quite quickly too. It feels like 5 isn't gobbling up as much irrelevant text as Claude does.

  • dyauspitr 2 days ago

    It’s the same model at the $200 price point.

    • Tiberium a day ago

      The $200 sub (ChatGPT Pro) offers GPT-5 Pro, which is not the same model.

      • anuramat 21 hours ago

        It's still GPT-5, just with the highest reasoning effort value

        • Tiberium 17 hours ago

          It's not, just like o3-pro is not the same as o3 with high reasoning.

al_borland 2 days ago

It feels slower, but if the quality is better, so that one response will do instead of multiple follow-up questions, it's still faster overall. It's also still orders of magnitude faster than doing the research manually.

I’m reminded of that Louis CK joke about people being upset about the WiFi not working on their airplane.

  • BriggyDwiggs42 16 hours ago

    There's definitely an element of that: being so used to a very impressive technology that complaining comes off entitled. There's also the other side, though, which is that LLM-based AI has been positioned and portrayed as a technology that will drive an utter transformation in the near future, and with GPT-5 many people are having the realization forced on them that it was all sorta nonsense, lies, motivated reasoning, exaggeration, etc.

pseudo_meta 2 days ago

API is noticeably slower for me, sometimes up to 10x slower.

Upon some digging, it seems that part of the slowdown is due to the gpt-5 models by default doing some reasoning (reasoning effort "medium"), even for the nano or mini model. Setting the reasoning effort to "minimal" improves the speed a lot.

However, to be able to set the reasoning effort you have to switch to the new Response API, which wasn't a lot of work, but more than just changing a URL.

  • Tiberium a day ago

    > However, to be able to set the reasoning effort you have to switch to the new Response API, which wasn't a lot of work, but more than just changing a URL.

    That's not true - you can switch reasoning effort in the Chat Completions API - https://platform.openai.com/docs/api-reference/chat/create . It's just that in Chat Completions API it's a parameter called "reasoning_effort", while in the Responses API it's a "reasoning" parameter (object) with a parameter "effort" inside.

    • pseudo_meta a day ago

      Oh thx, must have missed that. Guess at least that saves me some time to switch to the newer API in the future.

Tiberium a day ago

GPT-5 in the API, especially at "high" reasoning, is quite a bit better than o3, especially at web design "style". Also, from my own experience and from some people I've talked with, it's great at agentic programming and at finding bugs (and then fixing them, of course).

ChatGPT 5 in the web app is a router to lots of different models [1], and the non-reasoning GPT-5 chat model ("gpt-5-chat-latest" in the API) is quite dumb - no significant difference from 4o/4.1. Even if you choose GPT-5 Thinking, there's a chance that your request will be routed to GPT-5 Mini, not to full GPT-5. The only real way to fix that in ChatGPT is to subscribe to Pro and use GPT-5 Pro, but of course that's very expensive. Otherwise people suggest saying "think hard" in the prompt, which might make the router choose the better model. Even worse, Sam Altman publicly said that on the first day of the GPT-5 release their router didn't work properly. [2]

I'd suggest trying GPT-5 in API or in apps like Cursor/Windsurf if you want to truly test it.

Oh, and from the GPT-5 guide [3] apparently OpenAI considers GPT-5 to be good even at "minimal" reasoning (it's still the full thinking model then, not the chat variant, and will respond much faster):

> The minimal setting performs especially well in coding and instruction following scenarios, adhering closely to given directions

[1] https://cdn.openai.com/pdf/8124a3ce-ab78-4f06-96eb-49ea29ffb... "Table 1: Model progressions"

[2] https://x.com/sama/status/1953893841381273969 "Yesterday, the autoswitcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber"

[3] https://platform.openai.com/docs/guides/latest-model
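
As a sketch of where the reasoning-effort knob lives in each API, per the parameter-naming point made elsewhere in this thread (request payload shapes only, no live call; the model name and prompt are placeholders):

```python
# Illustrative request payloads only -- not a live API call.

# Chat Completions API: a flat "reasoning_effort" string parameter.
chat_completions_payload = {
    "model": "gpt-5-mini",
    "messages": [{"role": "user", "content": "Classify: true or false?"}],
    "reasoning_effort": "minimal",
}

# Responses API: a nested "reasoning" object with an "effort" field.
responses_payload = {
    "model": "gpt-5-mini",
    "input": "Classify: true or false?",
    "reasoning": {"effort": "minimal"},
}

print(chat_completions_payload["reasoning_effort"])  # minimal
print(responses_payload["reasoning"]["effort"])      # minimal
```

Either payload would be passed to the corresponding endpoint; only the parameter shape differs, not the effect.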

dileeparanawake a day ago

Not sure about them peaking but I definitely feel that right now gpt-5 doesn’t feel like a game changer like the marketing says. In some things it’s worse, slower, less useful (in terms of detail not ego flattery!). My use case is mainly code and learning.

I think the make-everything-bigger approach is plateauing and not yielding the infinite returns that were first suggested and demonstrated from GPT-2 through GPT-4.

I think it's now harder because they've got to focus on value per watt. Smaller good models mean less energy and less complexity to go wrong, but they're harder to achieve.

The unlock could be more techniques and focused synthetic data from old models used to train new ones, but apparently GPT-5 already uses synthetic data, and this is one of the reasons it isn't necessarily good at real-world tasks.

For me, if we go the synthetic data route it's important to shoot for quality: good synthetic data distils the useful stuff and discards the noise, so useful patterns are more solid in training, but I imagine it's hard to distinguish signal from noise to produce good synthetic data.

ManlyBread a day ago

I have used GPT-5 a few times and I genuinely can't think of a single improvement. You could stealthily deploy it in place of GPT-4 and I wouldn't be able to tell the difference.

  • iwontberude 15 hours ago

    Indeed, I would just say, "Wow, ChatGPT must be having issues today, it's so slow."

rolodexter2023 a day ago

“Open”AI is growing more opaque and black box every day

darepublic 2 days ago

They took away o3 on plus for this :(

  • Buttons840 2 days ago

    o3 was surprisingly good at research. I once saw it spend 6 full minutes researching something before giving an answer, and I wasn't using the "research" or "deep think" mode or whatever it's called; o3 just decided on its own to do that much research.

binarymax 2 days ago

My primary use case for LLMs is running jobs at scale over an API, not chat. Yes, it's very slow, and it is annoying. Getting a response from GPT-5-mini for <Classify these 50 tokens as true or false> takes 5 seconds, compared to GPT-4o, which takes about a second.

  • beering 2 days ago

    The 5-second delay is probably due to reasoning. Maybe try setting it to minimal? If your use case isn't complex, maybe reasoning is overkill and gpt-4.1 would suffice.

  • jscheel 2 days ago

    Doing quite a bit of that as well, but I’ve held off moving anything to gpt-5 yet. Guessing it’s a capacity issue right now.

  • hoppp 2 days ago

    If it's 5 seconds, maybe you are better off renting a GPU server and running the inference where the data is, without round trips, and you can use gpt-oss.

gooodvibes 2 days ago

Not having the choice to use the old models is a horrible user experience. Taking 4o away so soon was a crime.

I don’t feel like I got anything new, I feel like something got taken away.

  • hirvi74 2 days ago

    4o and perhaps a few of the other older models are coming back. Altman has already said so.

mikert89 2 days ago

Have you used the Pro version? Its incredible

iwontberude 2 days ago

This is intended to be a discussion thread speculating about why ChatGPT 5 is so slow and why it seems to be no better than previous versions.