These are the docs for 13.5, an old version of SpatialOS. The docs for this version are frozen: we do not correct, update or republish them. 14.5 is the newest version.

Upgrade to the new load balancer

The new load balancer is part of the new Runtime, which is available from SpatialOS version 13.4. Before you upgrade to the new load balancer, we recommend upgrading and testing your project using the new bridge.

  • If you’ve created a SpatialOS project on or before 30th January 2019, and haven’t yet upgraded to the new Runtime, you need to do so now. Otherwise, you’ll get errors when you next try to launch a deployment.

  • If you’ve only created projects after 30th January 2019, you don’t need to upgrade to the new Runtime - you’ll be using it automatically.

GDK for Unity users:

  • If you’ve created a project with the GDK for Unity on a version earlier than 0.1.4, and haven’t yet upgraded to the new Runtime, you need to do so now.

  • If you’ve only created projects on the GDK version 0.1.4 or later, you’ll be using the new Runtime automatically.

Load balancing: background

Load balancing is the process of splitting the world up between managed (server-side) workers so that they can cooperate to simulate different parts of it. It decides which worker instances get authority over which entities, and ensures there is the right number of workers in a deployment.
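As a loose illustration (this is not SpatialOS code, and the world size and grid here are made up), a grid-based strategy can be thought of as a function from an entity's position to the worker responsible for it:

```python
# Illustrative sketch only - not SpatialOS API. Models a 2x2 grid
# load-balancing strategy: the world is split into four square regions
# and each region is assigned to one managed worker.
def assign_worker(x, z, world_size=100.0, grid=2):
    """Map a world position to the index of the worker responsible for it."""
    cell_size = world_size / grid
    col = int((x + world_size / 2) // cell_size)
    row = int((z + world_size / 2) // cell_size)
    # Clamp positions on or beyond the world edge into the outermost cells.
    col = min(max(col, 0), grid - 1)
    row = min(max(row, 0), grid - 1)
    return row * grid + col  # worker index 0..3

# An entity at (-10, -10) lands on worker 0; one at (10, 10) on worker 3.
```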

You choose a load balancing strategy (ie how the Runtime should split the world between workers) as part of setting up a deployment.

What’s changing

Entities are no longer grouped in chunks. Previously, all entities in a chunk were load-balanced together; now, entities can be load-balanced individually.

Load balancing strategy is now determined at a higher level. Previously, the load balancing for each chunk was independent of all the other chunks, so it wasn’t possible to determine what load balancing should look like globally. Now, strategies can be worked out centrally, allowing you to reason about high-level strategies.

A few things will change as part of this:

  • We’re simplifying write ACLs, to pave the way for layers.
  • The load balancing strategies available will change, along with exactly how they load balance. For example, there’s a new load balancing strategy in which authority regions can overlap.
  • The load balancing configuration in a deployment configuration file will change.
  • The presentation of load balancing in the Inspector world view will change.
  • There are two new spatial CLI commands you can use for debugging: spatial local worker replace and spatial project deployment worker replace.


The new load balancer introduces a new concept: a layer.

Learn more about layers

How to upgrade

Before you start

Before you try out the new load balancer, we recommend upgrading and testing your project using the new bridge.

1. Update your spatial CLI version

Open a terminal window and run spatial update to get the latest version of the spatial CLI.

2. Worker attributes

We’ve simplified write ACLs, which were a major source of confusion. Requirement sets and attribute sets used in write ACLs can now only have a single attribute. This means a write ACL for a component can only specify one worker attribute as a requirement.

Similarly, a worker type can only have one attribute. (Read ACLs haven’t changed - you can still have multiple things in those sets.)

If you currently have a worker that has more than one attribute, you need to change this. For example, say you have two workers:

  • Worker 1, which has the attributes “A” and “B”
  • Worker 2, which has the attribute “B”

Because each worker type can only have one attribute, you need to:

  1. Remove the attribute “B” from Worker 1.
  2. Change the write ACL of every entity that requires the attribute “B”, where you want Worker 1 to keep write access, to require “A” instead.
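The single-attribute rule can be pictured like this (an illustrative sketch, not SpatialOS SDK code; the worker and component names are made up):

```python
# Illustrative sketch - not SpatialOS SDK code. Under the simplified
# rules, each worker type has exactly one attribute, and a component's
# write ACL names exactly one required attribute.
worker_attributes = {
    "worker_1": "A",   # previously had "A" and "B"; "B" has been removed
    "worker_2": "B",
}

component_write_acl = {
    "Position": "A",   # only worker types with attribute "A" may write
    "Health": "B",     # only worker types with attribute "B" may write
}

def can_write(worker, component):
    """A worker satisfies a write ACL only if its single attribute matches."""
    return worker_attributes[worker] == component_write_acl[component]
```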

You also need to check whether anything else in your project refers to the removed attribute, and update it if so.

An important consequence of this change is that you can no longer delegate component authority from command handlers to the calling worker by adding the caller’s worker attribute set to the component_write_acl field within an entity’s EntityAcl component. This is because the caller’s worker attribute set in the command request op contains both the calling worker’s layer attribute and its specific attribute of the form workerId:<worker ID>. To delegate a component to a calling worker, construct the specific attribute string directly from the calling worker ID (for example workerId:ClientWorker17).
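For example, the specific attribute string can be built from the caller's worker ID alone (a sketch; the worker ID here is hypothetical, and how you obtain it from the command request op varies by SDK):

```python
# Sketch: build the worker-specific attribute string from a caller's
# worker ID, as described above, rather than using the caller's full
# attribute set from the command request op.
def specific_attribute(worker_id):
    return "workerId:" + worker_id

# Use this string as the requirement in the component_write_acl field
# of the entity's EntityAcl component to delegate the component.
```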

3. Update your load balancing configurations

The format of the configuration to do with load balancing in launch configuration files has changed, and the strategies have changed a bit too.

You can run a deployment without changing the configuration: the Runtime will infer the correct new configuration (as long as you’re not using one of the removed strategies). But it’s still worth switching to the new configuration.

How the configuration structure has changed

  • Load-balancing is a top-level element, rather than inside the workers element.
  • Load-balancing is per layer.
  • When setting load-balancing for a worker, you no longer use the worker type. Instead, you use the worker attribute, which acts as a label for the layer.

Use this as an example to update the structure of your configuration files:


"workers": [
        "worker_type": "example_worker_type",
        "load_balancing": {
            "auto_hex_grid": {
                "num_workers": 4
        "flags": [ ... ],
        "permissions": [ ... ]


"load_balancing": {
    "layer_configurations": [
            "layer": "my_worker_attribute",
            "hex_grid": {
                "num_workers": 4
"workers": [
        "worker_type": "example_worker_type",
        "flags": [ ... ],
        "permissions": [ ... ]

How the strategies have changed

Update your configuration files accordingly:

  • static_hex_grid (“statically allocated workers: manual grid creation”) Removed: this strategy was too prone to user error. Use the hex_grid strategy instead.
  • auto_hex_grid (“statically allocated workers: automatic grid creation”)

    This is now the only hex grid strategy, so it’s now called just hex_grid.

  • points_of_interest There’s been a slight change in behaviour: the distance metric used to be (roughly) Euclidean, and is now (roughly) Manhattan (see the Wikipedia diagram comparing the two metrics for an idea). The behaviour is similar, but if you rely on particular entities being on particular workers, you should double-check that this still works.

  • singleton_worker Removed. Instead of this, if you don’t want the Runtime to start any workers (for example, for clients), just leave that layer out - don’t specify anything for it in the configuration file.

  • dynamic_loadbalancer Removed. We’ve removed this strategy because it wasn’t applicable to many game types, and had confusing behaviour. We’ll be adding new, better-defined and controllable dynamic load-balancing in the future.

  • rectangle_grid (new strategy) This new strategy is easier to reason about than the hex grid. It also balances work more evenly than a hex grid arrangement when there are fewer than 25 workers, so we recommend it for small projects.
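For instance, a rectangle_grid layer configuration might look like the following. This is a sketch: the exact field names are documented on the Load balancer config page, and the cols/rows fields here are an assumption.

```json
"load_balancing": {
    "layer_configurations": [
        {
            "layer": "my_worker_attribute",
            "rectangle_grid": {
                "cols": 2,
                "rows": 2
            }
        }
    ]
}
```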

For more details, see the new Load balancer config page.

5. Check for changes in behavior

5.1. Worker authority

With the new load balancer, you should check your assumptions about what your workers have authority over. Your code might assume that if a worker is authoritative over one of an entity's relevant components, it's authoritative over all of them. When areas of authority overlap (which can happen with hex grid and possibly points of interest), there might be some delay before a worker gains authority over all the components.
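A defensive pattern (illustrative only, not SDK code; the component names are made up) is to track authority per component and only act once every required component is authoritative:

```python
# Illustrative sketch - not SpatialOS SDK code. With overlapping
# authority regions, authority over an entity's components can arrive
# at different times, so gate logic on the full set rather than
# assuming one component implies the rest.
REQUIRED = {"Position", "Rotation", "Health"}

def has_full_authority(authoritative_components):
    """True only once this worker is authoritative over every
    required component of the entity."""
    return REQUIRED <= set(authoritative_components)
```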

5.2. Static number of workers

Previously, if you specified a static number of workers, it was possible that they were not started. This would happen when there was no entity with a write attribute that required a worker of that type. With the new load balancer, if you specify a layer with a particular number of workers, you’ll always get that number of workers.

For the hex_grid strategy, you specify a maximum number of hexagons, but the Runtime may use a lower number to tile the world optimally (this is the same behaviour as before).

5.3. Validate server-worker assembly directory

You can skip this step if you are using generated build scripts for your server-workers.

If you are using the C SDK, the C++ SDK, or custom build scripts for your server-workers, make sure the built workers end up in the correct directory. Previously, server-workers were started in local deployments and uploaded to the cloud from various locations within the build/ directory of your project. The worker assemblies now need to be in the build/assembly/worker/ directory after you run spatial build in order for the deployment to pick them up correctly.

6. Workflow changes

If you want to connect an unmanaged worker to simulate a layer, you need to do this differently now.

There are two ways of doing this, depending on whether:

  • you want an unmanaged worker to be solely responsible for simulating a layer, or
  • you want an unmanaged worker to participate in simulating a layer, alongside managed (server-side) workers

Unmanaged worker solely responsible for simulating a layer

Tell the Runtime not to start any workers for a given layer. You can still specify a load-balancing strategy, but you start and connect the workers yourself. To do this, set the option manual_worker_connection_only for that layer:

"load_balancing": {
    "layer_configurations": [
            "layer": "my_worker_attribute",
            "hex_grid": {
                "num_workers": 4
            "options": {
                "manual_worker_connection_only": true

Then you can start and connect workers yourself.

If you don’t set this option and connect an unmanaged worker, the worker you connect will not be given authority over anything (unless you’ve explicitly specified that worker instance in the ACL). Essentially, the Runtime will treat it like a client.

Unmanaged worker participating in simulating a layer, alongside managed workers

You might want an unmanaged worker to participate in load balancing alongside managed workers (for example, if you’re running a worker from your local machine with a debugger or profiler attached).

In this case, use the new worker replacement command in the spatial CLI:


For a local deployment:

spatial local worker replace --existing_worker_id <UnityWorker1> --replacing_worker_id <MyDebugWorker>

For a cloud deployment:

spatial project deployment worker replace --existing_worker_id <UnityWorker1> --replacing_worker_id <MyDebugWorker> --project_name <MyProject>

Known issues

  • In the Inspector, the coordinates for a worker that isn’t authoritative over anything are 0,0,0 (the center of the world), until the worker becomes authoritative over something.
