dgreensp 3 days ago

> Never place rich UI elements within a table, list, or other markdown element.

> Place rich UI elements within tables, lists, or other markdown elements when appropriate.

crazygringo 3 days ago

How does a prompt this long affect resource usage?

Does inference need to process this whole thing from scratch at the start of every chat?

Or is there some way to cache the state of the LLM after processing this prompt, before the first user token is received, so that every request starts from this cached state?

  • mdaniel 3 days ago

    My understanding is that this is what the KV cache does in model serving. I would imagine they'd want to prime any such KV cache with common tokens but retain a per-session cache to avoid leaks. It seems HF agrees with the concept, at least https://huggingface.co/docs/transformers/kv_cache#prefill-a-...
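
    As a rough sketch of that prefill pattern (assuming the Hugging Face transformers API; exact cache classes vary by version, and the model and prompts here are placeholders):

        import copy
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

        tok = AutoTokenizer.from_pretrained("gpt2")          # placeholder model
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        # One-time prefill: run the shared system prompt once, keep its KV cache.
        prompt_ids = tok("You are a helpful assistant.", return_tensors="pt").input_ids
        prefix_cache = DynamicCache()
        with torch.no_grad():
            model(prompt_ids, past_key_values=prefix_cache, use_cache=True)

        # Per request: deep-copy the shared cache (per-session, to avoid leaks)
        # and generate from prompt + user tokens without recomputing the prefix.
        user_ids = tok(" What is a KV cache?", return_tensors="pt").input_ids
        full_ids = torch.cat([prompt_ids, user_ids], dim=-1)
        out = model.generate(full_ids, past_key_values=copy.deepcopy(prefix_cache),
                             max_new_tokens=32)
        print(tok.decode(out[0]))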

    • kingstnap a day ago

      OpenAI has docs about how it works.

      https://platform.openai.com/docs/guides/prompt-caching

      It's fairly simple, actually: each machine stores the KV cache in blocks of 128 tokens.

      Those blocks are stored in a prefix-tree-like structure, probably with some sort of LRU eviction policy.

      If you ask a machine to generate, it does so starting from the longest matching sequence in the cache.

      They route between racks using a hash of the prefix.

      Therefore the system prompt, being frequently used and at the beginning of the context, will always be in the prefix cache.
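
      A toy version of that lookup, hashing per 128-token block with longest-prefix matching (a sketch; names and data structures are my guesses, not OpenAI's actual implementation):

          import hashlib

          BLOCK = 128
          cache = {}  # prefix-block hash -> KV state computed for that prefix

          def block_keys(tokens):
              """One key per full 128-token block; key i covers tokens[:(i+1)*BLOCK]."""
              keys, h = [], hashlib.sha256()
              for i in range(0, len(tokens) // BLOCK * BLOCK, BLOCK):
                  h.update(repr(tokens[i:i + BLOCK]).encode())
                  keys.append(h.copy().hexdigest())
              return keys

          def longest_cached_prefix(tokens):
              """How many leading tokens of KV state can be reused from the cache."""
              covered = 0
              for i, key in enumerate(block_keys(tokens)):
                  if key not in cache:
                      break
                  covered = (i + 1) * BLOCK
              return covered

          # Routing sketch: hash the first block so identical prefixes land on
          # the same rack, keeping the shared system prompt hot in its cache.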

      • crazygringo 12 hours ago

        Fascinating, exactly what I was wondering about. Thank you! It turns out to be very sophisticated, and it also explains why the current date is always at the very end of the system prompt: the long static prefix stays cacheable while only the tail changes.

mdaniel 3 days ago

It's a good thing people were enamored of how inexpensive GPT-5 is, given that the system prompt is (allegedly) 54 kB. I don't know how many tokens that is offhand, but it's a lot of them to burn just on setting the thing up.

  • Tadpole9181 3 days ago

    54,000 bytes, one byte per character, at roughly 4 characters per token: around 13,500 tokens.
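
    The back-of-the-envelope version (the 4-characters-per-token ratio is only a rule of thumb; a real tokenizer count would differ):

        prompt_bytes = 54_000         # one byte per ASCII character
        chars_per_token = 4           # rough heuristic for English text
        print(prompt_bytes / chars_per_token)  # 13500.0, i.e. ~13,500 tokens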

    These are NOT included in the model context size for pricing.

  • btdmaster 3 days ago

    I might be wrong, but can't you checkpoint the model state right after the system prompt and restore from there, trading memory for compute? Or is that too much extra state?

    • mdaniel 3 days ago

      My mental model is that the system prompt isn't one fixed thing, which seems even more apparent with line 6 telling the model what today's date is. I have no insider information, but system prompts could undergo A/B testing just like any other change, to find the optimal one for some population of users.

      Which is to say, you wouldn't want to bake such a thing too deeply into a multi-terabyte bunch of floating points, because it makes operating things harder.

      • reitzensteinm 2 days ago

        OpenAI automatically caches prompt prefixes on the API. Caching an infrequently changing, internally controlled system prompt is trivial by comparison.

TZubiri 3 days ago

These are always so embarrassing.

  • NewsaHackO 3 days ago

    It's because they always include things that seem way too specific to certain issues, like riddles and arithmetic. Also, I am not a WS, but mentions of "proud boys" are the kind of thing that can be used as fodder for claims of LLM bias. I wonder why they even have to use a system prompt; why can't they have a separately fine-tuned model for ChatGPT specifically, so that they don't need one?

    • sellmesoap 16 hours ago

      "Dear computer, I'm writing to you today to tell you to make sure you really check your math sums!" I find it amusing so much emphasis is put on a computer to get math correct.

    • TZubiri 2 days ago

      Also because we have this image of super-scientist mathematicians who fight for a better world, reject $1M salaries, and raise billions in funding.

      And their work is literally "DON'T do this, DO that in these situations."