01 December 2021

November 2021

This month we worked on tools, post processing, VR, water, destruction and a bunch of other stuff.
We've updated everything to use .NET 6 and C# 10.

This lets us use global usings. We're using these in addons for things like the global Log and DebugOverlay. What this means for developers is that these are no longer static classes - so you can extend them with your own functions!
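To illustrate the mechanism, here's a minimal sketch of how a global using can expose an instance like Log. The Logger type and Globals class here are made-up stand-ins, not the engine's actual types:

```csharp
// GlobalUsings.cs (C# 10): makes Globals' static members visible in every file
global using static MyAddon.Globals;

namespace MyAddon;

public class Logger
{
	public void Info( string message ) => System.Console.WriteLine( $"[info] {message}" );
}

public static class Globals
{
	// An instance rather than a static class, so its type can be extended
	public static Logger Log { get; } = new();
}

public static class LoggerExtensions
{
	// Your own addition, callable as Log.Warning( "..." ) anywhere in the project
	public static void Warning( this Logger log, string message )
		=> System.Console.WriteLine( $"[warn] {message}" );
}
```

Anywhere else in the addon you can now write Log.Info( "hi" ) or your own Log.Warning( "uh oh" ) without any using directives at the top of the file.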

I don't think there was any considerable advantage to updating to .NET 6. In fact it probably fucked a bunch of people over since they had to install VS2022.
I put some effort into making it so you can tweak your game's settings while in game, instead of via the website. This has the large advantage that you can see what changes while you're editing it - so everything needs less explaining.
After this we started looking at creating organizations and new games/addons in game too instead of using the website. After about a week of messing around with it we realised that we fucked up and this is a stupid way to do it.
I watch some woodworking channels on YouTube. And you know, most of them don't really do much woodwork. They spend 99% of their time making things to store their tools in, or systems to make moving their machinery about easier, or to make cleaning up easier. The hobby feeds the hobby.

It occurs to me that I do the same thing with game development, where instead of spending time to make games, I spend 99% of my time making tools to make games. And in this case specifically, I'm making tools to make tools.

Tool Addons

So I started work binding up the tools system. There was a choice here: do we use the existing tool framework (Qt), or make our custom UI system work externally? I made the decision that tools should just use Qt. The tools system already uses Qt and we're not going to convert everything to our UI system, so we're always going to have to access Qt - so just use that. Having a Frankenstein system of multiple UI systems is a Unity move.

So I got tool addons working: hotloading C#, hotloading stylesheets, it all feels good.

API Stretch

When designing an API for something I like to do an API stretch. So instead of binding everything for the sake of it, I choose something that I want to make possible and focus on achieving that.

This means that when you're binding you're doing it from an outsider-looking-in point of view, instead of the other way around. The point of binding to C# is to make things a billion times easier and remove all the bullshitting of C++.

Node Graph

So my stretch for a week was to make a node graph editor. This was simple enough because I had the source of the one Valve made as a guide.

This was something fun to iterate on with hotloading C#. It took me about 5 days to make, but keep in mind that I went in knowing barely anything about Qt, its graphics views, or its event system - and had to bind all the widgets and paint stuff in C# (in a safe way that didn't try to access pointers of deleted objects, etc).


The door has been kicked open. We're inside now. This shit is happening. It's inevitable. Hammer Addons.
Something I've yearned for since we got Source 2 was making an editor mode like UE and Unity. Right now it's pretty confusing for people when they start in "-tools" mode and get the regular game window with a separate asset browser window.

With the new tools binds that became fun and possible, so I spent a few days chasing it around. This has got to be the 9th time I've re-made the console.
It's really early days with this, but so far it's feeling like the right decision, and probably something I should have looked into 6 months ago. There's a LOT we can do with this stuff that wasn't exactly possible before.

Previously, when you ran a trace for bullets in your weapons, players would often miss because by the time the input command was processed on the server, the target had moved. So we've hooked up Source's lag compensation system and fixed it up a little.

Lag compensation happens server-side during Simulate and essentially rewinds lag compensated entities to the position they were in when the client being simulated sent their input command. This means that traces will hit entities that the client would expect to be hit from their point-of-view.

I documented it over at the wiki.
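As a rough sketch of how a weapon trace benefits from this - LagCompensation() and DealDamage() here are illustrative names, not the exact engine API, so check the wiki for the real calls:

```csharp
// Server-side weapon fire, inside Simulate. While the (illustrative)
// lag compensation scope is active, lag-compensated entities are rewound
// to where they were when this client's input command was sent.
using ( LagCompensation() )
{
	var tr = Trace.Ray( Owner.EyePosition, Owner.EyePosition + Owner.EyeRotation.Forward * 4096f )
		.Ignore( Owner )
		.Run();

	if ( tr.Entity.IsValid() )
	{
		// This hit matches what the shooter saw on their own screen,
		// even though the target has since moved on the server.
		DealDamage( tr );
	}
}
```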
We added a custom post processing stack to the game. This greatly simplifies the process of making, adding and controlling post process effects.

We also introduced the standard post processing shader which contains a lot of common effects and can be controlled easily from games and addons.

If you want to learn more about it, check out the wiki pages!
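Roughly, using the standard shader from game code looks like this - treat the effect and property names as placeholders, since the wiki pages list the real set of effects and their parameters:

```csharp
// Client-side: add the standard post processing shader and tweak a
// couple of its effects. Names are illustrative, see the wiki.
var standard = new StandardPostProcess();
standard.Saturate.Enabled = true;
standard.Saturate.Amount = 0.9f;
standard.Vignette.Enabled = true;
standard.Vignette.Intensity = 0.4f;

PostProcess.Add( standard );
```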
I have continued to make assets for construct. This month has been a pretty broad selection. Lots of single props, like the alarms and light fixtures and updated broadband cabinets. I have also made some kits of props like the iron fences and the roller security shutter seen in the background. The roller security shutter can be put together in Hammer and scaled to fit any size of opening without texture stretching. This should hopefully be useful for everybody's own maps.
This month I have added support for custom ModelDoc nodes. These are general-purpose nodes that can carry data on each model, which is then accessible from code. They work similarly to custom entities in Hammer - you define a struct or a class in C#:
The node can then be applied in ModelDoc, and accessed in an entity like so:
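Something along these lines - the attribute name and GetData<T> call are guesses at the shape of the API, and the base addon has the real working declarations:

```csharp
// A custom ModelDoc node carrying per-model data (illustrative names).
[ModelDoc.GameData( "speaker_settings" )]
public struct SpeakerSettings
{
	public string Sound { get; set; }
	public float Volume { get; set; }
}

public partial class SpeakerEntity : ModelEntity
{
	public override void Spawn()
	{
		base.Spawn();

		// Read the node's data back from this entity's model.
		var settings = Model.GetData<SpeakerSettings>();
		PlaySound( settings.Sound );
	}
}
```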
I have also added support for a variation of these nodes called model break commands. They work similarly but have a callback for when the model breaks into its break pieces. The base addon has a bunch of examples for both types of these custom nodes.
The past couple of weeks I've been working on getting more VR elements exposed to C#; with this, the main menu is now fully interactable in VR.

This has been done with the new VROverlayPanel API, the exact same way you can in your own games.

VR overlays draw over the top of the 3D scene. They're not affected by anything in the world scene (e.g. lighting, post processing effects), making them ideal for HUDs or menus that should be local to the player's VR space.

Creating them is incredibly simple and you can pass your preexisting RootPanels to it for easy VR compatibility.

new VROverlayPanel( new MainMenuPanel() )
{
	Transform = new Transform( Vector3.Forward * 40.0f + Vector3.Up * 60.0f ),
	Width = 40.0f,
	Curvature = 0.2f,
};
If your panel wants any mouse input, your VR controllers will show automatically and simulate UI input.

First detailing pass and prop placement to ground the map in a more realistic setting. There is still more to add, but the current goal is to bring the rest of the map to this level of detail first.

In the same way, the car park finally has some floor markings and lights, which were made breakable so that people have a basic example.

Same concept in the warehouse: some hanging lights with physics were added so there is a basic setup that people can check out.

Lastly some hotspot materials were added which should help people making maps (more will probably come as needed).

We've also been working on improving the depth of field to make it more physically accurate to a real camera. We've switched up how we're doing it and introduced more options to play with, which leads to a much nicer result. We have also changed how we blur the depth of field, leading to nice-looking bokeh. You can notice the circles on bright areas when they're out of focus.
This month I've been working with Conna to improve the User Interface for his game mode Hover. That includes creating a whole bunch of icons for weapon types and character enhancements.

Character Selection

Weapon Selector
You can switch out the default loadouts as well as upgrade them.
Death Screen
Also acts as a re-deploy view. It shows you who killed you and with what.
Here's what we've been working on for water this month:

Working on water gave an opportunity to solve a lot of problems that will give value to developers: fetching the color/depth buffer from shaders with ease, clip planes, tessellation, offscreen rendering, compute shader dispatch, constant buffers from C# code, etc.
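For a flavour of what compute dispatch from C# could look like, here's a sketch - the class, shader name and method signatures are assumptions about the API this work exposed, not confirmed ones:

```csharp
// Illustrative only: dispatching a ripple simulation compute shader
// from C# code. Names and signatures are assumptions.
var ripple = new ComputeShader( "water_ripple_cs" );
ripple.Attributes.Set( "HeightMap", rippleTexture );
ripple.Attributes.Set( "DeltaTime", Time.Delta );

// One thread per texel, dispatched in 8x8 thread groups
ripple.Dispatch( rippleTexture.Width / 8, rippleTexture.Height / 8, 1 );
```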

I am experimenting with having ripple simulation be a big part of the water simulation; initially the player's movement will affect the reaction that comes from the water.

Underwater fog is all lit using the 3D lightmap volume from the world, and dynamic objects cast nice crepuscular rays; if you go under a dark bridge, you'll see the water's surface go darker if it has thick fog.

This is just the second iteration of how I'd like water to work. It's still heavily work in progress, but I think it's in a state where it can be put into public preview. I'll keep iterating on it and taking feedback from the art team and the community to keep improving it.

One of the main things we were missing in S&Box was a form of real-time dynamic reflections.

Screenspace-based reflection techniques alone look terrible most of the time and I don't like them; they are expensive and we can have a better effect for cheaper.

I've implemented a system that uses an approximation of geometric primitives of the models for reflections.

This allows us to trace a ray in world space very cheaply and with a lot of quality, including accurate rough reflections without any sort of temporal accumulation or shooting multiple rays per pixel.

This means reflections won't disappear when the object is occluded from the screen, and it won't run slowly or look blurry on your old computer; there are plenty of optimizations to make sure the ray is only cast where needed.
Our version of VRAD3 can already bake SDF information for the world, so in the future it might be interesting to use that information so level geometry can be reflected using the same technique too.

Right now this is an experiment, you can enable it in your shaders by adding `D_HIGH_QUALITY_REFLECTIONS` as a combo or as a preprocessor.

Hair looked like crap, and we didn't have post processing effects on UI panels or even proper alpha support.

Terry's million-pound haircut will now look as intended on the customization screen, among other details including proper HDR rendering on panels.
We've started work on our standard set of viewmodel arms.

We talked internally about whether we'd want this to be in the cartoony 4-fingered style of the Citizen, or a more gritty/realistic style, and settled on the latter. We wanted to make something that would work with regular weapons and make sense in every game, even if that game chose not to use the Citizen model.

This doesn't rule out the possibility of us adding a set of hands in the same style as the Citizen, in fact it's likely we will in the future, but this will be the standard viewmodel.

Max is currently working on setting up the rig for this, so there's still a fair amount of work before you'll see it in game!
We've introduced the idea of pathing nodes, and querying them to decide how AIs should be able to move. We can now create smarter AIs that are more aware of their environment. Here's an example of them being able to duck under obstacles:
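The post doesn't show the actual API, but the idea can be sketched like this - nodes advertise what a mover must be able to do to use them, and a query filters by the NPC's capabilities. Everything below is purely illustrative:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Numerics;

// Illustrative pathing-node model, not the engine's real types.
[System.Flags]
public enum MoveCapability { Walk = 1, Duck = 2, Jump = 4 }

public record PathNode( Vector3 Position, MoveCapability Required );

public static class Pathing
{
	// Keep only nodes whose requirements this NPC can satisfy,
	// e.g. a node under a low pipe requires Duck.
	public static IEnumerable<PathNode> Usable( IEnumerable<PathNode> nodes, MoveCapability can )
		=> nodes.Where( n => ( n.Required & ~can ) == 0 );
}
```

An NPC that can walk and jump but not duck would then simply never be routed through the low-clearance nodes.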

For a long time we've been wanting to port over the shatter glass entity from HLA, and I finally got around to it. This is a good test of our model builder API.

I tried another way of doing destruction, using voxels. This could be extended in the future to support glass, wood, bricks, etc. Both of these are also fully networked.
The artists were asking for a way to rotate the skybox in ModelDoc to preview different lighting on their models. I came up with a way to do this by rotating the model and camera at the same time. A simple but very useful feature for artists.
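The trick, sketched out - you can't rotate the skybox itself, but yawing the model and the camera together by the opposite angle looks identical to yawing the sky. Variable names here are illustrative:

```csharp
// Counter-rotate everything except the sky: visually equivalent to
// rotating the skybox by +skyboxAngle degrees of yaw.
var counter = Rotation.FromYaw( -skyboxAngle );

model.Rotation = counter * modelOriginalRotation;
camera.Rotation = counter * cameraOriginalRotation;
camera.Position = counter * cameraOriginalPosition;
```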
Lots of unplanned fun stuff this month. This is the kind of development that I like. Everyone running around, chasing things that interest them, researching and discovering to come up with something new. This is the stuff that excites me about working on s&box.

I'm going to re-evaluate my plans. The tools stretch has made a ton of stuff possible that wasn't previously. We have the potential to do some really crazy stuff. It's fun and it's well worth exploring. 😍