Added a simple onboarding process for first-time users who want to learn their way around the editor. The guide can be reopened at any time via the Help menu.
A big complaint in gmod was CurTime() losing precision after 9 hours of server uptime because it was a single-precision float. We have the same problem in s&box with Time.Now being a single float: as server time increases, gameplay, movement, interpolation and more begin to break down from the inaccuracy. We've exposed Time.NowDouble; this won't noticeably lose precision until at least 200 days, and we've moved all internal systems like interpolation over to it.
We are considering making Time.Now a double by default but this would be a breaking change.
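The scale of the problem is easy to demonstrate outside the engine. This is just a sketch, not engine code: it emulates 32-bit floats by round-tripping through Python's struct module (Python's own floats are doubles), showing how coarse a single-precision timer gets after 9 hours, and how fine a double still is after 200 days.

```python
import math
import struct

def f32(x: float) -> float:
    # Round-trip through a 32-bit float to emulate single precision.
    return struct.unpack("f", struct.pack("f", x))[0]

nine_hours = 9 * 3600.0  # seconds of uptime
# At this magnitude, adjacent float32 values are ~2 ms apart, so adding
# a 1 ms frame step rounds up to the next representable value:
step = f32(f32(nine_hours) + f32(0.001)) - f32(nine_hours)
print(step)  # 0.001953125 -- almost double the 1 ms we actually added

# A double after 200 days of uptime still resolves a few nanoseconds:
print(math.ulp(200 * 86400.0))  # ~3.7e-09 seconds
```

At roughly 2 ms of quantization, a float32 timer can no longer distinguish individual steps at high tickrates, which is why things like interpolation visibly stutter long before the timer looks "wrong".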
When spawning in dupes, we noticed that none of it was replicated properly to other clients. This was because we batch everything -- batching makes all references and Component methods run in order, so things like constraints attach properly. But we were spawning them over the network inside the batch, which meant everything arrived empty.
We've made it so batching also batches network spawns. You can use this with Scene.BatchGroup().
And we've added a nice progress bar when spawning dupes.
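The ordering problem can be sketched in a few lines. This is a hypothetical illustration, not the s&box API (the class name and methods here are made up for the example): a batch that defers network serialization until the whole batch has run sends complete snapshots instead of empty ones.

```python
sent = []

def broadcast(snapshot):
    # Stand-in for sending a spawn message over the network.
    sent.append(snapshot)

class BatchGroup:
    """Illustrative batch: queue network spawns, flush when the batch closes."""
    def __init__(self):
        self._pending = []
    def __enter__(self):
        return self
    def network_spawn(self, obj):
        # Record the object but don't serialize yet -- constraints and
        # component references may only be attached later in the batch.
        self._pending.append(obj)
    def __exit__(self, exc_type, exc, tb):
        # Batch complete: every reference now exists, so snapshot and send.
        for obj in self._pending:
            broadcast(dict(obj))
        return False

with BatchGroup() as batch:
    prop = {"name": "wheel"}
    batch.network_spawn(prop)
    prop["constraint"] = "axle to chassis"  # attached later in the batch

print(sent)  # [{'name': 'wheel', 'constraint': 'axle to chassis'}]
```

Serializing inside the loop instead of at exit is exactly the bug described above: the snapshot would be taken before the constraint existed.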
mat_toolvis was broken for a while and could only be used in the editor. We've now moved it from C++ to C#, made it work in-game again, and added it to the F2 console menu so you can quickly see and change modes.
We've made it so that you can now set the Launch Mode for your game to Dedicated Server Only. When set to this mode, the game can only be hosted on a Dedicated Server and when players select your game in the menu, it will bring up a list of available servers to join.
As a project's code grows, so do its compile times. We want an editor platform where it's quick to iterate and hotload, and large compile times get in the way of that. We've made improvements to incremental compile performance, so small changes should now recompile noticeably faster, especially in large projects.
Previously, our processors and codegen ran on the entire codebase every time you recompiled. Now they only run on what's actually changed, reusing more between compiles and reducing unnecessary re-parsing, allocations, and other work.
The impact of this can be pretty huge. Big projects where the difference between min/max compiles is greatest will see the most gains. In a pretty large project, compile times dropped from >8s to ~2s in my testing - a 75% reduction.
There's still more to look at here, but hopefully this is a good first step. Faster iteration means better games, and I prefer those personally.
Package resources were loaded and reloaded multiple times during game load, which made load times longer than necessary. We've cleaned up a lot of that redundant processing. In testing, games that rely heavily on cloud packages loaded up to 20 seconds faster.
Source Engine had an extensive and overwhelming stats_display command. We've filtered it down and added the most useful parts to the performance overlays accessible through the F1 console.
Primarily this is total frame time and GPU frame time, which are most useful for telling whether you're CPU or GPU limited.
If physics objects fall forever they’ll eventually end up out of bounds in the physics engine. There’s now an event for that, so you can decide what to do. Disable them, or like Sandbox does, just delete the prop.
.NET’s garbage collector is really solid compared to Unity's GC, which means we generally have to worry about allocations a lot less. That said, on lower-end systems (slow memory / small CPU caches) we noticed that GC can still significantly affect performance.
To tackle this, we’ve updated our allocation overlay to show new metrics like stutters, total GC time, and more. These allocations can come from the engine or from your own games. Let us know if you spot anything that looks like an unnecessarily high allocation rate.
With the new overlay, we found some easy wins and shaved a few MB/s away, and we’re also seeing noticeably fewer total garbage collection runs in our Deathmatch benchmark.
This should already reduce stutters caused by GC a bit, especially on lower-end systems.
Reduced Action allocations from Component.ExceptionWrap
Avoid per-frame KeyValuePair[] allocations when using Parallel.ForEach
Avoid allocations when creating and iterating CommandList
Remove need for second lightbinner for fog -- do it all on the shader. Classification now happens on the shader itself; edge cases like disabling fog are too rare to justify an entire new collection of lights.
LoadAllGameResource only loads new resources
Make sure ServerPackages.InstallAll only reloads packages once
Close existing project settings window when opening new one
Use InvariantCulture in various places when parsing floats @MrSoup678
Don't show Max Player slider if Min players == Max players @nixxquality
Dynamic splash screen height to avoid stretching @boxrocket
Verify CreateGameResults cookie before usage @nixxquality
this won't noticeably lose precision until at least 200 days
W/ 52-bits in the significand, it would last much longer than that. At a ~1 micro-second interval (accurate enough for virtually all use-cases), a 64-bit float should last for: 2^52 / 1000 / 1000 / 60 / 60 / 24 = ~52k days.
As a small criticism, if you used integers instead of floats, it would never lose precision (it would just have a much higher hard-cap, rather than a lower "soft-cap" that gradually gets buggier).
When you need to accumulate many small amounts into a much larger amount, floating-point is simply not a good fit, because the advantage of floating-point disappears in that scenario. That's the reason that all the system-time functions across different operating-systems use integers (usually as micro- or nano-seconds). At an interval of 1 micro-second, a 64-bit unsigned integer timer would run for ~200m days.
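For what it's worth, both figures in this comment check out; a quick sanity check of the arithmetic:

```python
# Microseconds in a day: 1000 us/ms * 1000 ms/s * 60 * 60 * 24
US_PER_DAY = 1000 * 1000 * 60 * 60 * 24

# 64-bit float: 2^52 microsecond ticks fit in the significand before
# integer gaps appear between representable values.
print(2**52 / US_PER_DAY)  # ~52125 days, i.e. ~52k

# 64-bit unsigned integer microsecond counter before overflow:
print(2**64 / US_PER_DAY)  # ~2.135e8 days, i.e. roughly 200 million
```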