Oh, they’ll do compression alright, they’ll ship every asset in a dozen resolutions with different lossy compression algos so they don’t need to spend dev time actually handling model and texture downscaling properly. And games will still run like crap because reasons.
Games can’t really compress their assets much.
Stuff like textures generally uses a lossless bitmap format. The compression artefacts you get with lossy formats, while unnoticeable to the human eye, can cause much more visible rendering artefacts once the game engine calculates how light should interact with the material.
That’s not to say devs couldn’t be more efficient, but it does explain why games don’t really compress that well.
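To make that amplification concrete, here’s a minimal self-contained C++ sketch (made-up numbers, no engine API, just the standard specular formula) of how a few-percent error in a normal-map texel can roughly double or halve the rendered highlight:

```cpp
// Minimal sketch (illustrative values, not engine code): a few-percent
// error in a normal-map texel, the kind lossy compression introduces,
// can change the rendered specular highlight by roughly 2x.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

int main() {
    Vec3 original  = normalize({0.00f, 0.00f, 1.00f});
    // The same normal after a small per-channel quantization error.
    Vec3 corrupted = normalize({-0.05f, 0.05f, 1.00f});

    // Blinn-Phong specular near the highlight: pow(dot(N, H), shininess).
    Vec3 half = normalize({0.2f, 0.0f, 1.0f});
    float shininess = 64.0f;

    float specA = std::pow(std::fmax(dot(original,  half), 0.0f), shininess);
    float specB = std::pow(std::fmax(dot(corrupted, half), 0.0f), shininess);

    // Prints roughly 0.285 vs 0.128: the texel error was tiny, the
    // on-screen brightness difference is not.
    std::printf("specular: %.3f vs %.3f\n", specA, specB);
}
```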
When I say “compress” I mean downscale. I’m suggesting they could have dozens of copies of each texture and model in a host of different resolutions (number of polygons, pixels for textures, etc), instead of handling that in the code. I’m not exactly sure how they currently do low vs medium vs high settings, just suggesting that they could solve that using a ton more data if they essentially had no limitations in terms of customer storage space.
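For what it’s worth, “many resolutions of the same texture on disk” is basically a precomputed mip chain. A hedged sketch of the build-time idea, assuming a simple grayscale texture and a 2x2 box filter (not any engine’s actual pipeline):

```cpp
// Sketch: precompute a chain of progressively smaller copies of one
// grayscale texture at build time (a "mip chain").
#include <cstdint>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<uint8_t> pixels; // one byte per pixel, row-major
};

// Halve an image with a 2x2 box filter (assumes even dimensions).
static Image downscale(const Image& src) {
    Image dst;
    dst.width = src.width / 2;
    dst.height = src.height / 2;
    dst.pixels.resize(static_cast<size_t>(dst.width) * dst.height);
    for (int y = 0; y < dst.height; ++y) {
        for (int x = 0; x < dst.width; ++x) {
            int sum = src.pixels[(2 * y) * src.width + 2 * x]
                    + src.pixels[(2 * y) * src.width + 2 * x + 1]
                    + src.pixels[(2 * y + 1) * src.width + 2 * x]
                    + src.pixels[(2 * y + 1) * src.width + 2 * x + 1];
            dst.pixels[y * dst.width + x] = static_cast<uint8_t>(sum / 4);
        }
    }
    return dst;
}

// Build every resolution down to 1x1. Each level is a quarter the size
// of the previous one, so the whole chain costs only ~33% extra storage.
static std::vector<Image> buildMipChain(Image base) {
    std::vector<Image> chain;
    chain.push_back(std::move(base));
    while (chain.back().width > 1 && chain.back().height > 1) {
        chain.push_back(downscale(chain.back()));
    }
    return chain;
}
```

Notably, the extra storage for all the smaller copies is cheap; it’s the full-resolution originals that dominate install size.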
Uuh. That is exactly how games work.
And that’s completely normal. Every modern game has multiple versions of the same asset at various detail levels, all of which are used. And when you choose between “low, medium, high” that doesn’t mean there’s a giant pile of assets that go unused. The game will use them all, rendering a different version of an asset depending on how close to something you are. The settings often just change how far away the game will render at the highest quality, before it starts to drop down to the lower LODs (levels of detail).
That’s why games aren’t much smaller on console, for example. It’s not that they stripped out a bunch of assets that only the different PC graphics settings would use. Those versions are all part of how modern games work.
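Roughly what that selection logic looks like; a sketch with invented distance thresholds and quality multipliers, not any particular engine’s code:

```cpp
// Sketch: distance-based LOD pick, where the quality setting only scales
// how far out the detailed versions are used. All numbers are made up.
#include <vector>

struct Mesh { /* vertex data would live here */ };

struct LodLevel {
    float maxDistance; // use this version while the camera is closer than this
    Mesh mesh;         // progressively fewer polygons at each level
};

enum class Quality { Low, Medium, High };

static float qualityScale(Quality q) {
    switch (q) {
        case Quality::Low:    return 0.5f;
        case Quality::Medium: return 1.0f;
        case Quality::High:   return 2.0f;
    }
    return 1.0f;
}

// Pick the most detailed version whose (scaled) distance budget still
// covers the camera distance; fall back to the coarsest one otherwise.
static const Mesh& selectLod(const std::vector<LodLevel>& levels,
                             float distanceToCamera, Quality q) {
    float scale = qualityScale(q);
    for (const LodLevel& level : levels) { // ordered finest -> coarsest
        if (distanceToCamera < level.maxDistance * scale) {
            return level.mesh;
        }
    }
    return levels.back().mesh; // assumes at least one level exists
}
```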
“Handling that in the code” would still involve storing it all somewhere after “generation”, the same way shaders are better generated in advance, lest you get a stuttery mess.
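Same idea in miniature; `compileShader` below is a hypothetical stand-in for the slow step, not a real graphics-API call:

```cpp
// Sketch of "generate in advance, then store": warm a cache of
// expensive-to-build artifacts during a loading screen so gameplay
// never waits on generation mid-frame.
#include <string>
#include <unordered_map>
#include <vector>

struct CompiledShader { /* driver-specific binary would live here */ };

// Hypothetical placeholder for the slow compilation step that causes
// hitches when it happens during play.
static CompiledShader compileShader(const std::string& /*source*/) {
    return CompiledShader{};
}

class ShaderCache {
public:
    // Run during a loading screen, before gameplay starts.
    void prewarm(const std::vector<std::string>& allShaderSources) {
        for (const auto& src : allShaderSources) {
            cache_.emplace(src, compileShader(src));
        }
    }

    // At draw time this is just a lookup; a miss here is what players
    // experience as shader-compilation stutter.
    const CompiledShader* find(const std::string& src) const {
        auto it = cache_.find(src);
        return it == cache_.end() ? nullptr : &it->second;
    }

private:
    std::unordered_map<std::string, CompiledShader> cache_;
};
```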
And it isn’t how most games do things, even today. Such code does not exist. Not yet, at least. Human artists produce better results, and hence games ship with every version of every asset.
Finally automating this is what Unreal’s Nanite system has only recently promised to do, but it has run into snags.
Yeah, that’s generally the best way to do it for optimal performance. Games sometimes have an adjustable in-game option to control this: LOD (level of detail).