From a purely technical perspective, UE is an absolute monster. It's not even remotely in the same league as Unity, Godot, etc. when it comes to iteration difficulty and tooling.
I struggle with UE over others for any project that doesn't demand an HDRP equivalent and nanometric mesh resolution. Unity isn't exactly a walk in the park either but the iteration speed tends to be much higher if you aren't a AAA wizard with an entire army at your disposal. I've never once had a UE project on my machine that made me feel I was on a happy path.
Godot and Unity are like cheating by comparison. ~Instant play mode and a trivial debugging experience make a huge difference for solo and small teams. Any experienced .NET developer can become productive on a Unity project in <1 day with reasonable mentorship. The best strategy I had for UE was to just use blueprints, but those are really painful at source-control and code-review time.
I felt the exact same way until I tried Hazelight's AngelScript UE fork. It is amazing; it brings the developer experience and iteration speed up to Unity levels. They even made a VS Code plugin for it. Cannot recommend it enough.
Blueprints can be a great learning tool: if you double-click a node, it will open a VS window with the actual C++ function.
I think graphics is the only thing UE is really good at, so for most developers there is no reason to use it. I don't understand why so many indie developers choose to use it.
I'm working with a friend on a project and desperately trying to sway him away from Unreal. His reason for wanting to use it is that he can build the engine from source and modify it any way he wants (and he intends to attempt just that). He's also very much into pushing the engine's lighting to its limits.
We're a team with < 10 employees. He's paying very handsomely, so even if his Unreal foray is an absolute disaster, I'll have the savings to find something else.
Oh that's definitely something to play with if the $$ is good :D I wouldn't mind.
NEW QUEST: "These New Gaming Requirements Are Unreal"
OBJECTIVE: Any project that demands HDRP and Nanometric Mesh
BONUS: Find the happy path
And blueprints take forever to wire up in my experience compared to just writing the C++ directly.
Never worked in a larger game-dev team before, but I always saw the benefit of Blueprints as being mainly for those who don't know how to code. Set up the right abstractions and you can let the level designers add interactivity, for example, rather than Blueprints mainly existing to speed up the work of C++ devs.
Any time a library in your code goes from being used by a couple people to used by everyone, you have to periodically audit it from then on.
A set of libraries in our code had hit 20% of response time through years of accretion. It took a couple of months to cut that in half, with no architectural or cache changes. Just about the largest, and definitely the most cost-effective, initiative we completed on that team.
Looking at flame charts is only step one. You also need to look at invocation counts, for things that seem to be getting called far more often than they should be. Profiling tools frequently (dare I say consistently) misattribute costs of functions due to pressures on the CPU subsystems. And most of the times I’ve found optimizations that were substantially larger improvements than expected, it’s been from cumulative call count, not run time.
Dare me to say costless leaky abstraction. Then I'll point to the thread next door using Chrome profilers to diagnose Chrome internals using Scratch. Then I'll finish saying that at least Unreal has that authentic '90s feel to it.
This reminded me, I saw tooltips being a large chunk when I profiled my react app. I should go and check that.
Similarly, adding a modal like this
    {isOpen && <Modal isOpen={isOpen} onClose={onClose} />}
instead of
    <Modal isOpen={isOpen} onClose={onClose} />
Seems to make the app smoother the more modals we had. Rendering the UI only when you need it (not downloading the code, that is still part of the bundle) seems to be low-hanging fruit for optimizing performance.
I remember solving this problem before. These are both global components, so you create a single global instance and control them with a global context or function.
You basically have a global part of the component and a local part. The global part is what actually gets rendered when necessary and manages the current state; the local part defines what content will be rendered inside the global part for a particular trigger, and interacts with the global part when a trigger condition happens (e.g. a hover timeout for a tooltip).
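A minimal sketch of that split in React, with hypothetical names (ModalProvider, useModal, and the CSS class are all made up for illustration): the provider renders the single global instance, and any local component just hands it content.

    import React, { createContext, useContext, useState } from "react";

    // Global part: one provider, one rendered host for the whole app.
    const ModalContext = createContext<(content: React.ReactNode | null) => void>(() => {});

    export function ModalProvider({ children }: { children: React.ReactNode }) {
      const [content, setContent] = useState<React.ReactNode | null>(null);
      return (
        <ModalContext.Provider value={setContent}>
          {children}
          {/* The single global instance; only its content ever changes. */}
          {content !== null && <div className="modal-overlay">{content}</div>}
        </ModalContext.Provider>
      );
    }

    // Local part: any component can open/close the shared modal.
    export function useModal() {
      const setContent = useContext(ModalContext);
      return {
        open: (content: React.ReactNode) => setContent(content),
        close: () => setContent(null),
      };
    }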
React devs re-discovering DOM manipulation... SMH.
This is, in general, the problem that gets solved by native interaction with the DOM. You keep the element around, so it doesn't have to be re-created every time; it gets hidden with "display:none" or something. When it needs to display something, just the content gets swapped and the element gets 'unhidden'.
Good luck.
Alternatively, how many modals can be open at any given time? And is it a floating element? May be an option to make it a global single instance thing then, set the content when needed. Allows for in/out transitions, too, as another commenter pointed out. See also "Portals" in React.
That breaks the out transition.
Good. Transitions are meant to serve a purpose, showing what came from where. A modal doesn't need a transition, it should just disappear instantly. Like closing a window. The user is not helped by animating that something disappears when they close it, they already knew that.
So, win-win? I want a modal to get out of the way as fast as possible, any fade/transition animations are keeping me from what I want to look at. :)
You can create a ref that stores whether isOpen has ever been true, and condition on that, letting you lazily initialize the Modal and its contents while preserving out transitions. I’m honestly surprised this isn’t recommended a lot more often!
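A rough sketch of that ref trick (the hook name is made up, and <Modal> here stands in for whatever modal component you already have):

    import React, { useRef } from "react";

    // Stand-in for the Modal component from the snippet upthread.
    declare function Modal(props: { isOpen: boolean; onClose: () => void }): React.ReactElement;

    // Hypothetical helper: returns true once `value` has ever been true.
    function useHasEverBeenTrue(value: boolean): boolean {
      const ref = useRef(false);
      if (value) ref.current = true;
      return ref.current;
    }

    // The Modal isn't mounted at all until the first open, but it stays
    // mounted afterwards, so its close/out transition still gets to play.
    function Example({ isOpen, onClose }: { isOpen: boolean; onClose: () => void }) {
      const hasOpened = useHasEverBeenTrue(isOpen);
      return <>{hasOpened && <Modal isOpen={isOpen} onClose={onClose} />}</>;
    }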
Even when using view transitions?
https://developer.mozilla.org/en-US/docs/Web/CSS/@starting-s...
The "Transitioning elements on DOM addition and removal" example in that article uses a setTimeout() to wait an extra 1000 milliseconds before removing the element from the DOM. If you immediately remove the element from the DOM (like would usually happen if you do {isOpen && <Modal />} in React), it'll vanish immediately and won't have time to play the transition.
Unless you set `isOpen` only when the transition has ended
Isn't isOpen = false what triggers the transition in the first place here?
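For completeness, a sketch of the delayed-unmount variant being discussed here, assuming a known transition duration (the hook is hypothetical, not a library API):

    import { useEffect, useState } from "react";

    // Keep rendering for `ms` after `isOpen` flips to false, so the exit
    // transition has time to play before the node is removed from the DOM.
    function useDelayedUnmount(isOpen: boolean, ms: number): boolean {
      const [shouldRender, setShouldRender] = useState(isOpen);
      useEffect(() => {
        if (isOpen) {
          setShouldRender(true);
          return;
        }
        const id = setTimeout(() => setShouldRender(false), ms);
        return () => clearTimeout(id);
      }, [isOpen, ms]);
      return shouldRender;
    }

    // Usage inside a component (300 ms matching the CSS transition duration):
    //   const shouldRender = useDelayedUnmount(isOpen, 300);
    //   return shouldRender && <Modal isOpen={isOpen} onClose={onClose} />;

Listening for transitionend instead of hard-coding the duration is the sturdier version of the same idea.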
In the Blazor space we use factories/managers to spawn new instances of a modal/tooltip instead of having something idle waiting for activation.
The tradeoff is for more complicated components, first renders can be slower.
Hmm, so what exactly is stored in that gigabyte of tooltips? Even 100,000 tooltips per language should take maybe a few tens of megabytes of space. How many localizations does the editor have?
It is not the text data. It is that every tooltip gets made into a UI element.
"Firstly, despite its name, the function doesn’t just set the text of a tooltip; it spawns a full tooltip widget, including sub-widgets to display and layout the text, as well as some helper objects. This is not ideal from a performance point of view. The other problem? Unreal does this for every tooltip in the entire editor, and there are a lot of tooltips in Unreal. In fact, up to version 5.6, the text for all the tooltips alone took up around 1 GB of storage space."
But I assume the 1 GB of storage for all tooltips includes boilerplate. I doubt it is 1 GB of raw text.
Yes, I meant the size on disk. I presume the serialization format isn't the most efficient possible. But I can't think of any particular boilerplate that you'd want to store in a file that's just supposed to store localization strings.
Don't have access to read the code, but I think ideally there should be only one instance created at startup, right?
At most one instance at startup. Asynchronous creation or lazy creation on first use are two other potential options. Speaking generally, not Unreal-specific.
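Still speaking generally, lazy creation on first use might look something like this (TypeScript just for illustration; nothing here is Unreal API):

    // Store only the cheap data up front; build the expensive widget the
    // first time it is actually needed (e.g. on hover), then reuse it.
    class TooltipWidget {
      constructor(public readonly text: string) {}
    }

    class LazyTooltip {
      private widget: TooltipWidget | null = null;
      constructor(private readonly text: string) {}

      show(): TooltipWidget {
        if (this.widget === null) {
          this.widget = new TooltipWidget(this.text); // deferred, expensive part
        }
        return this.widget;
      }
    }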
This was originally submitted with the title "Speeding up Unreal Editor launch by not spawning 38000 tooltips", a much closer match to the actual title of the post, "Speeding up the Unreal Editor launch by ... not spawning 38000 tooltips".
Why has it been changed? The number of tooltips improves the title and is accurate to the post.
Kinda annoying that the article doesn't really answer the core question, which is how much startup time was saved. It does give a 0.05 ms per tooltip figure, so I guess multiplied by 38000 that gives ~2s saved, which is not too bad.
"Together, these two problems can result in the editor spending an extremely long time just creating unused tooltips. In a debug build of the engine, creating all of these tooltips resulted in 2-5 seconds of startup time. In comparison development builds were faster, taking just under a second."
I once made the mistake of buying some sound effects from Fab; I had to download the entire Unreal Engine and start it to create a project just to import the assets.
It took the whole afternoon.
It's no wonder UE5 games have the reputation of being poorly optimized; you need an insane machine just to run the editor.
State-of-the-art graphics pipeline, but webdev levels of bloat when it comes to software. I'd even argue Electron is a smoother experience than the Unreal Engine Editor.
Insanity
It's just like your computer and IDE, you start it up and never shut it down again.
Wouldn't it taking the whole afternoon be because it's downloading and installing assets, creating caches, indexing, etc?
Like with IDEs, it really doesn't matter much once they're up and running, and the performance of the product ultimately has little to do with the tools used to make it. Poorly optimized games have the reputation of being poorly optimized; that's rarely down to the engine. Maybe it's the complete package, where it's too easy to just plop down assets from the internets without tweaking for performance or having a performance budget per scene.
It took that long because it had to compile shaders for the first time. After that it would open in seconds.
Yet it is the engine dominating the industry and beloved by artists of all kinds.
To get UE games that run well you either need your own engine team to optimise it or you drop all fancy new features.
I was around back in the days when LCDs replaced CRTs and learned the importance of native resolutions. I feel like recent games have been saved too much by frame generation and all sorts of weird resolution hacks... mostly by Nvidia and AMD.
I am kinda sad we have reached the point where native resolution is not the standard for high-mid-tier/low-high-tier GPUs. Surely games should run natively at non-4K resolution on my 700€+ GPU...
Native resolution was never good enough though. That's why antialiasing is a thing, to fake a higher than native resolution
And now antialiasing is so good you can start from lower resolutions and still fake even higher quality
I don't agree with the framing of it as "faking" a higher than native resolution. The native resolution is what it is. The problem lies in how the view is sampled as it is rendered to the screen. What you ideally do when you have higher frequency content than the screen can represent is to oversample, filter and downsample the view, as in SSAA, or you approximate the effect or use it selectively when there is high frequency content, using some more clever methods.
It's really the same problem as in synthesizing audio. 44.1 kHz is adequate for most audio purposes, but if you are generating sounds with content past the nyquist frequency it's going to alias and fold back in undesirable ways, causing distortion in the audible content. So you multisample, filter to remove the high frequency content and downsample in order to antialias (which would be roughly equivalent to SSAA) or you build the audio from band limited impulses or steps.
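A toy version of the oversample/filter/downsample idea on the audio side, with a naive sawtooth and a crude box-filter decimation (illustration only, not a production antialiasing scheme):

    // Render a naive (aliasing) sawtooth at 4x the target rate, then average
    // each group of 4 samples down to the target rate. The averaging acts as
    // a crude low-pass filter, so less out-of-band energy folds back down.
    function sawtoothOversampled(freq: number, sampleRate: number, seconds: number): number[] {
      const factor = 4;
      const hiRate = sampleRate * factor;
      const hi: number[] = [];
      const n = Math.floor(hiRate * seconds);
      for (let i = 0; i < n; i++) {
        const phase = (i * freq / hiRate) % 1;
        hi.push(2 * phase - 1); // raw sawtooth, rich in harmonics past Nyquist
      }
      const out: number[] = [];
      for (let i = 0; i + factor <= hi.length; i += factor) {
        let sum = 0;
        for (let j = 0; j < factor; j++) sum += hi[i + j];
        out.push(sum / factor); // one output sample per 4 oversampled samples
      }
      return out;
    }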
You mean back in the day when 30 fps at 1024x768 was the norm?
New monitors default to 60hz but folks looking to game are convinced by ads that the only reason they lost that last round was not because of the SBMM algorithm, but because the other player undoubtedly had a 240hz 4K monitor rendering the player coming around the corner a tick faster.
Competitive gaming and Twitch are what pushed the current priorities, and the hardware makers were only too happy to oblige.
30 fps was not the norm, at least not with competitive games. Like Counter-Strike in 2000 on a CRT. Yes 1024x768 was common, but at 100 fps. Alternatively you would go to 800x600 to reach 120 fps.
It’s only when LCDs appeared that 60 Hz started being a thing on PCs and 60 fps followed as a consequence, because the display can’t show more anyway.
It’s true that competitive gaming has pushed the priority of performance, but this happened in the 90s already with Quake II. There’s nothing fake about it either. At the time a lot of playing happened at LANs not online. The person with the better PC got better results. Repeatedly reproduced by rotating people around on the available PCs.
There is something to having a monitor that displays at a higher than 60 fps frame rate, especially if the game runs and can process inputs at that higher rate as well. This decreases the response time a player can attain, as there is literally less time between seeing something on screen and the game reacting to the player's input.
For a bit of background: modern games tend to do game processing and rendering in parallel, but that means the frame being processed by the rendering system is the previous frame, and once rendering has been submitted to the "graphics card" it can take one or more additional frames before it's actually visible on the monitor. So you end up with a lag of 3+ frames rather than only a single one like you had in old DOS games and such. So having a faster monitor, and being able to render frames at that faster rate, will give you some benefit.
In addition, this is why using frame generation can actually hurt the gaming experience: instead of waiting 3+ frames to see your input reflected in what is on the screen, you end up with something like 7+ frames, because the fake in-between frames don't actually deal with any input.
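Rough numbers for that pipeline lag, assuming a fixed depth of three frames (the real depth varies per game, driver, and display):

    // Latency contributed by frame pipelining alone: pipelineFrames / refreshHz.
    // Ignores input sampling, monitor processing, and network delay.
    function pipelineLatencyMs(pipelineFrames: number, refreshHz: number): number {
      return (pipelineFrames / refreshHz) * 1000;
    }

    console.log(pipelineLatencyMs(3, 60));  // 50 ms at 60 Hz
    console.log(pipelineLatencyMs(3, 144)); // ~20.8 ms at 144 Hz
    console.log(pipelineLatencyMs(3, 240)); // 12.5 ms at 240 Hz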
I don't play any online competitive games or FPSes, but I can definitely tell that 144 FPS on a synced monitor is nicer than 60 FPS, especially when I play anything that uses mouse look.
For me, it's not quite as big of a jump as, say, when we went from SD to HD TV, but it's still a big enough leap that I don't consider it gimmicky.
Gaming in 4K, on the other hand, I don't really care for. QHD is plenty, but I do find 4K makes for slightly nicer desktop use.
Edit: I'll add that I almost always limit FPS anyway because my GPU turns into a jet engine under high load and I hate fan noise, but that's a different problem.
Games haven't been running at full native resolution for quite some time, maybe even the last decade, as they tend to render to a smaller buffer and then upscale to the desired resolution in order to achieve better frame rates. This doesn't even include frame generation, which trades supposedly higher frame rates for worse response times, so games can feel worse to play.
By Games I mean modern AAA first or third person games. 2D and others will often run at full resolution all the time.
> It's no wonder UE5 games have the reputation of being poorly optimized
Care to exemplify?
I find UE games to be not only the most optimized, but also capable of running everywhere. Take X-COM, which I can play on my 14-year-old Linux laptop with its i915 excuse-for-a-gfx-card, whereas Unity stuff doesn't work here, and on my Windows gaming rig it always makes everything red-hot without even approaching the quality and fidelity of UE games.
To me UE is like SolidWorks, whereas Unity is like FreeCAD... Which I guess is actually very close to what the differences are :-)
Or is this "reputation of being poorly optimized" only specific to UE version 5 (as compared to older versions of UE, perhaps)?
The reputation of being poorly optimized only applies to version 5, UE was rather respected before the wave of terribly performing UE 5 AAA games came out and tanked UE's reputation.
It also has a terrible reputation because a bunch of the visual effects have a hard dependency on temporal anti-aliasing, which is a form of AA which typically results in a blurry-looking picture with ghosting as soon as anything is moving.
Funnily enough a lot of those "poor performing" UE games were actually UE4 still, not UE5.
The reputation is specific to UE5. UE3 used to have such a reputation as well. UE5 introduced new systems that are not compatible with the traditional ones, and these systems, especially if used poorly, tank performance. It's not uncommon for UE5 games to run poorly even on the most expensive Nvidia GPU, with AI upscaling being a requirement.
Thanks for the replies! Will note the UE5 specificity.
This is one scenario where IMGUI approaches have a small win, even if it's by accident - since GUI elements are constructed on demand in immediate mode, invisible/unused elements won't have tooltip setup run, and the tooltip setup code will probably only run for the control that's showing a tooltip.
(Depending on your IMGUI API you might be setting tooltip text in advance as a constant on every visible control, but that's probably a lot fewer than 38000 controls, I'd hope.)
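Conceptually, that on-demand behaviour falls out of the immediate-mode structure; a hypothetical sketch (not any real IMGUI library's API):

    // Widgets are emitted by plain function calls each frame. The tooltip
    // text is only produced and drawn for whichever control is hovered this
    // frame; controls that aren't hovered pay nothing for their tooltips.
    interface UiContext {
      hoveredId: string | null;
      drawButton(id: string, label: string): void;
      drawTooltip(text: string): void;
    }

    function button(ui: UiContext, id: string, label: string, tooltip: () => string): void {
      ui.drawButton(id, label);
      if (ui.hoveredId === id) {
        ui.drawTooltip(tooltip()); // tooltip computed on demand, not up front
      }
    }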
It's interesting that every control previously had its own dedicated tooltip component, instead of having all controls share a single system wide tooltip. I'm curious why they designed it that way.
No idea why you're getting down voted but that was my thought as well.
With immediate mode you don't have to construct any widgets or objects. You just render them via code every frame, which gives you more freedom in how you tackle each UI element. You're not forced into one widget system across the entire application. For example, if you detect your tooltip code is slow, you could memcpy all the strings into a block of memory and then have tooltips use an index into that memory, or have them load on demand from disk, or the cloud, or space, or whatever. The point being that you can optimise the UI piecemeal.
Immediate mode has its own challenges, but I do find it interesting to at least see how the different approaches would tackle the problem.
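For example, the "one block of memory plus an index" idea, translated loosely into TypeScript (a hypothetical sketch, not engine code):

    // All tooltip strings packed into a single pool; each control stores only
    // a small integer. The text is looked up only when a tooltip is shown.
    class TooltipPool {
      private readonly texts: string[] = [];

      add(text: string): number {
        this.texts.push(text);
        return this.texts.length - 1; // the index the control keeps
      }

      get(index: number): string {
        return this.texts[index];
      }
    }

    const pool = new TooltipPool();
    const saveTooltipId = pool.add("Save the current level");
    // ...later, only when the control is actually hovered:
    console.log(pool.get(saveTooltipId));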
Unity uses an IMGUI approach and it makes all the difference in the universe. Overriding an OnDrawGizmos method to quickly get at an editor viz of a new component is super efficient. There are some sharp edges like forgetting to set/reset colors, etc, but I much prefer these little annoyances for the convenience I get in return.
AFAIK, UE relies on a retained mode GUI, but I never got far enough into that version of Narnia to experience it first hand.
How does this compare to a React-like approach (React, Flutter, SwiftUI)?
It seems like those libraries do what IMGUIs do, but in a more structured way.