Developer Chat: Detaching Our Engine from Unity
Unity is all that and a bag of potato chips – until it isn’t. When you’re doing something technically interesting or complex, especially on a non-PC platform, you can quickly hit the point where the bag pops and the chips turn to crumbs.
Background
At Boss Fight, we currently use Unity3D for all our active projects. The projects are primarily mobile-based and require various server-related endpoints (simulating game state, collecting stats, etc.). The way we do this today is via a set of Unity-independent C# code that can be compiled with newer versions of Mono or .NET proper. Effectively this is our Core + GameSim framework. Unity only ends up being our presentation (UX) framework. This is a fairly large task that we offload, and the Unity tech holds up quite nicely overall (e.g., rarely do we deal with mundane cross-platform issues at the game level). The main point of this Core + GameSim architecture is that it allows us to share and run the same exact code on the client (device) and the server (which is powered by Mono 3.x on Linux machines). This also permits us to easily execute and profile our Unity-independent code on other .NET runtimes, such as Microsoft's .NET Framework.
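To make the split concrete, here's a minimal sketch of the kind of Unity-free logic this architecture lets us share. The names here are invented for illustration, not our actual API:

```csharp
// Invented example, not our actual API: pure simulation code with no
// UnityEngine references, compiled into its own assembly so the exact
// same DLL runs on the device and on our Mono-powered Linux servers.
namespace GameSim
{
    public class HeroState
    {
        public int Level { get; private set; }
        public long Experience { get; private set; }

        public HeroState()
        {
            Level = 1;
        }

        // Client and server both execute this exact method, so the client
        // can predict results locally and the server can verify them.
        public void GrantExperience(long amount)
        {
            Experience += amount;
            while (Experience >= ExperienceForLevel(Level + 1))
                Level++;
        }

        private static long ExperienceForLevel(int level)
        {
            return 100L * level * level;
        }
    }
}
```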
Prior to Boss Fight Entertainment, the team worked on some very successful games. The more recent release, CastleVille, was implemented a little differently than our tech today. It still required server functionality to operate, but the clients were web based. Unlike our architecture today, the client was developed in Flash while the server was done in PHP. In order for most of the game logic to be shared, it ended up being written in a custom language called zdscript. However, none of the systems outside the game logic could be shared, including the interpreter for said script. The many headaches and bugs this pattern generated led to the ‘pure-C# game code’ mentality that exists at our company today.
Unintentional Benefits
Boss Fight is getting very close to the worldwide launch of our first game, Dungeon Boss. With that, we’ve been taking a lot of time to fine-tune the engine and some of the tools at critical points in our pipeline. One critical component that exists in both is our “BFSerialization” framework, which we use instead of Unity’s built-in serialization API. With the BFSerialization framework we’re able to convert any object graph to and from binary for storage and communication purposes. This framework is used in both our server and client code, so it must be very fast and memory efficient.
There are other serialization solutions out there, but none of them neatly fit our problem space. For example, .NET’s serialization framework relies too heavily on reflection and embeds very strict type information into the output stream. With BFSerialization, we’re able to offload the majority of reflection-based computations to custom code generation. As input, the generator interprets C# attributes on our serializable types during a project build step.
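As a rough illustration of the approach (the attribute names and generated code below are invented for this post, not our actual framework), marking up a type and the code the build step might emit for it look something like this:

```csharp
// Invented attribute and method names; our real framework differs, but the
// shape is the same: attributes mark what to serialize, and a build step
// generates plain C# so no reflection runs at serialization time.
using System;
using System.IO;

[AttributeUsage(AttributeTargets.Class)]
public sealed class SerializableTypeAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Field)]
public sealed class SerializedFieldAttribute : Attribute { }

[SerializableType]
public class LootDrop
{
    [SerializedField] public int ItemId;
    [SerializedField] public int Quantity;
}

// Roughly what the generator might emit for LootDrop: direct field access,
// no reflection, and no heavyweight type metadata in the stream.
public static class LootDropSerializer
{
    public static void Write(BinaryWriter writer, LootDrop value)
    {
        writer.Write(value.ItemId);
        writer.Write(value.Quantity);
    }

    public static LootDrop Read(BinaryReader reader)
    {
        var result = new LootDrop();
        result.ItemId = reader.ReadInt32();
        result.Quantity = reader.ReadInt32();
        return result;
    }
}
```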
Our streams of serialized objects are constantly being transmitted to and from the server. We also use this framework for storing our game-defining data, or “proto-data”. Our engine is highly data driven, so we have a lot of this proto-data. Thus the performance of loading and saving objects is of high concern. The slower our serialization is, the longer the client takes to load and the more servers we have to pay for.
Even before I was tasked with looking into our serializer’s performance, we knew it was sub-optimal. However, it didn’t really become apparent until some big new features were integrated close to our Soft Launch. While one part of the team was celebrating the fact that we had reached a new peak on our way up World Wide Launch Mountain, another part saw that we were only a few steps away from a gaping chasm between us and the next slope. Specifically, they were noticing a fairly large increase in transaction times.
Everything occurs as a transaction between the client and game servers: evolving a character, receiving loot, etc. Some of these new features introduced new types and increased the number of transactions. So the original author of our BFSerializer did some preliminary scouting of this chasm in our metaphorical mountain and confirmed a sneaking suspicion: deserializing game states and other payloads for processing was consuming far too much time.
Having recently helped with other lower-level memory issues in our engine earlier in the year (more on that later), I was brought in to help us cross this obstacle and keep us on track to the next peak. So when it came time to profile this code, I had a handful of options and tools in my pack: Unity’s profiler, building Mono with profiler support enabled, or getting the code running in .NET, where there’s a sea of mature tools.
Unity: The profiler in Unity can get the job done for many tasks, but it provides no means of comparing multiple profiler sessions, is frame based and will drop results after recording for long enough, and doesn’t offer very granular control over which stats are on or off in recordings.
Intensive profiling in general can involve trying many code configurations, but unfortunately Unity’s out-of-the-box build system right now isn’t as versatile as Visual Studio’s or MonoDevelop’s:
- The automatic recompiling of project code whenever Unity regains focus can be annoying.
- There’s no simple means of setting your current build configuration (debug, checked, release, playtest, etc.).
- You’re limited to using prebuilt DLLs or sticking code in one of the four “Assembly-CSharp” projects it automatically generates. Unity creates .meta files for everything, including the .cs files and the folders they reside in. I can imagine a future step toward letting code live in its own assemblies, configured via overridable settings saved in these .meta files.
Mono: Mono has some profiler support, but doesn’t ship with it (at least, the Windows distro for 3.x doesn’t). In order to get profiling to work, I would need to build Mono from source and then use the mprof-report tool for raw analysis of the profiler’s output .mlpd files. There’s also the Xamarin Profiler, which consumes said .mlpd files as well. Since I didn’t end up taking this route for this task, I can’t offer any feedback on it.
.NET: Finally, there’s Microsoft’s .NET proper and a long list of commercial third-party tools to choose from for profiling: JustTrace, Dynatrace, Red Gate’s ANTS, and dotTrace, to name a few.
I chose dotTrace by JetBrains, as I had prior experience with it. dotTrace offers a matrix of tracing options, from typical sampling of call times, to line-by-line stats, to even a timeline capture. Whereas Unity’s profiler will collect all information (self/total time of a method, how many times it was executed, etc.), dotTrace’s configuration options let you choose not to track the number of times a method is called (i.e., sampling vs. tracing). This lowers the skewing of run times in the results (since the profiling overhead otherwise ends up bleeding in).
Now, I know what you may be thinking: “you’re profiling code in .NET that is run on Mono in production?” Yes; the .NET route was mainly to get an initial sense of where to begin improvements. There are obviously going to be platform and API differences in the context of performance. I figured that if we needed to go that low-level on performance, I’d eventually get a build of Mono running, but going with .NET tools felt like the path of least resistance.
The logs generated by dotTrace were extremely helpful in weeding out places where we were calling APIs in an inefficient manner. There was one case where we were iterating over an unsorted list of objects and processing their members if any had a certain attribute applied. The member info was reflected from each object every time. And as it turned out, most objects were of types that didn’t have the attribute at all! So instead, I ended up sorting the objects by their type. From there I could build the needed member info once per type, before even touching the objects. If we were really desperate, we probably could have had the code generator build some static info about which types actually need this processing. However, these days we try not to throw too much into codegen.
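A boiled-down sketch of that change (all names are invented for this post; the attribute stands in for our real one) looks something like this:

```csharp
// Invented names throughout; this is the shape of the fix, not our code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Field)]
public sealed class SerializedFieldAttribute : Attribute { }

public static class AttributeProcessor
{
    // Attributed fields per type, reflected at most once per type.
    private static readonly Dictionary<Type, FieldInfo[]> _fieldCache =
        new Dictionary<Type, FieldInfo[]>();

    public static void Process(IEnumerable<object> objects)
    {
        // Group by concrete type so the reflection cost is paid per type
        // rather than per object.
        foreach (var group in objects.GroupBy(o => o.GetType()))
        {
            FieldInfo[] fields;
            if (!_fieldCache.TryGetValue(group.Key, out fields))
            {
                fields = group.Key
                    .GetFields(BindingFlags.Instance | BindingFlags.Public)
                    .Where(f => f.IsDefined(typeof(SerializedFieldAttribute), true))
                    .ToArray();
                _fieldCache[group.Key] = fields;
            }

            // Most types turned out to have no attributed members at all,
            // so whole groups get skipped without touching a single object.
            if (fields.Length == 0)
                continue;

            foreach (var obj in group)
                foreach (var field in fields)
                    ProcessField(obj, field); // stand-in for the real work
        }
    }

    private static void ProcessField(object obj, FieldInfo field)
    {
        // ... actual per-member processing would go here ...
    }
}
```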
You’re only as good as the company you keep, and that goes for tools too. Had we invested all our money in Unity for climbing this mountain, we would have had far fewer options in how we went about getting across this chasm (i.e., profiling our non-performant C# code). While the decision to leave Unity ties out of our Core + GameSim code didn’t stem from this problem, the choice was certainly a blessing in disguise. We’re at a point on our project’s mountain where we can look down and see the tiny hole that was once a chasm: a time suck in a very critical path in our game. If that time suck and its impact were weighed as a backpack full of extra gear to haul up this mountain, the changes and improvements we made with these smarter tools have us free climbing instead. Or in plain speak: we’re seeing more than 50% less overhead now with the improvements.
Dodging another hole in the Earth
As I hinted near the start of this article, earlier this year we had no choice but to create our own custom memory tools for tracking “leaks” in the managed heap. Unity uses a very old version of Mono, which in turn uses the Boehm garbage collector (aka libgc). While they’re starting to move non-Editor builds to their IL2CPP technology, they still use the Boehm GC (albeit a newer revision). Said GC was originally designed for C programs and has no sane debugging or tracing functionality. It was written in the early 90s, so I imagine there’s a considerable amount of legacy tech relying on it that deters others from modernizing it. Getting around this particular hole was like going into Survivorman mode: using whatever is around you to survive and get out of a mess in the wilderness. We made it back to civilization, but not without our fair share of scratches and bruises.
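Our real tooling is more involved, but as a taste of the kind of thing you end up hand-rolling without GC support, here’s a minimal, hypothetical leak tracker built on weak references. Note that under a conservative collector like Boehm, even this can report false positives, since stray values that look like pointers can keep dead objects alive:

```csharp
// Hypothetical sketch, not our actual tooling: register suspect objects via
// weak references, force a collection, and report whatever is still alive.
using System;
using System.Collections.Generic;

public static class LeakTracker
{
    private static readonly List<KeyValuePair<string, WeakReference>> _tracked =
        new List<KeyValuePair<string, WeakReference>>();

    public static void Track(string label, object instance)
    {
        _tracked.Add(new KeyValuePair<string, WeakReference>(
            label, new WeakReference(instance)));
    }

    public static void Report()
    {
        // Objects reachable only through our weak references get collected;
        // anything still alive afterwards is being held by something else.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        foreach (var entry in _tracked)
        {
            if (entry.Value.IsAlive)
                Console.WriteLine("Still alive (possible leak): " + entry.Key);
        }
    }
}
```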
Conclusion
Hopefully you, the reader, can take some of this experience and consider it when working on your next Unity-based game, even if it’s not strictly mobile. Unity as-is has a lot of fancy machinery, though some of it currently falls short of its potential or can shoot you in the foot if you’re not careful. You even have to be mindful of some of the higher-level C# idioms you use in your Unity games (we’ll have to save those details for another blog post).