
Minor Gripe

2019-08-27 -- A ramble on large-scale virtual worlds

Chris Ertel

Introduction

Going back to Active Worlds, I’ve always been really fascinated by online virtual worlds. When WoW was just being released and Everquest and Asheron’s Call were the big Western MMORPGs, some friends and I had kicked around the idea of a post-apocalyptic FPSMMORPG. We burned a lot of cycles on it, learning a good deal of programming and art and game design and so forth, but ultimately never really got a product out.

We had done some interesting modeling, and figured out that the tech at the time, server-side, could maybe just barely support the size of the world we wanted to simulate: a modest goal of the continental United States (sans Alaska). We figured that the elevation data itself could be found in the .gov somewhere, and sure enough the United States Geological Survey had datasets that looked about right–a resource we’d discovered while spelunking the Virtual Terrain Project. Anyways, you can get data at about 3-10 meters in resolution, all the way down to 1 meter resolution if you trust the methodology. You can also get water survey and temperature maps and back out a reasonable biome for seeding a procedural generator for filling in content. Ah, what could have been.

Anyways, as kind of a fun exercise, let’s step through a super rough estimation of what it would take to simulate the state of Texas. This post is going to be mainly about the storage requirements–a subsequent post is going to be about actual computation.

Note: I’m playing super fast and loose with the math here, and probably am going to commit several arithmetical errors. I’m not going to let that stop me, and will patch up the calculations as I notice errors. Please bear that in mind as you read.

Virtual Texas: Key Observations

So, we sit down with a pile of books and our calculators and some nice tea and decide to model Texas. What key information do we need?

First, how large is the state?

Texas’ physical dimensions are:

- East-west extent: about 1,387 km (1.387e6 m)
- North-south extent: about 1,289 km (1.289e6 m)
- Land area: 696,241 square kilometers
- Highest point: 2,667 m (Guadalupe Peak)

Interesting demographic and economic information:

- People: 29,087,070 (plus 67,580 Aggies)
- Households: 10,611,386, with one estimate of around 300,000 objects per household
- Cows: 10,900,000
- Horses: 1,066,800
- Registered vehicles: 24,000,000
- Registered boats: 582,478
- Wind turbines: 10,700
- Water-pumping windmills: 80,000

(I couldn’t find out how many possums we have, but they’re awesome and we have a hell of a lot. Similarly, armadillos are awesome but I couldn’t find estimates of their population size.)

So, this gives us a bit of a feeling for the size of the task ahead of us.

Virtual Texas: Geographic Storage

First, let’s figure out some gross measurements. We know that the east-west distance is 1.387e6 meters. We know that the north-south distance is 1.289e6 meters. The altitude is at worst 2.667e3 meters (Guadalupe Peak); we tack on another order of magnitude to allow for planes or whatever, giving 2.667e4 meters, and even then we’ll have headroom. We’ll take the largest of these three distances for the next little bit of analysis.

Let’s assume we want to represent a point with millimeter precision. That being the case, we’ll have 1.387e9 unique positions to differentiate between along the longest axis, and that fits handily within 32 bits (2^32 - 1 is 4,294,967,295, or about 4.295e9).

Now, we can decide to use either 32 or 64 bit floating point numbers (IEEE 754 single or double representation), but we might end up with some fuzziness as we get away from the origin (something the Dungeon Siege devs dealt with back in 2002). We could also use unsigned 32-bit integers. The cost of the fixed-point (as opposed to floating-point) version is that it makes the math a bit harder, and if it turns out to be the wrong move it can be a bear to remove.

We’ll go with 32-bit unsigned integers for storage, and something we’ll talk about later for calculations.
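To make that concrete, here’s a minimal sketch of the scheme (the type and field names are mine, just for illustration): positions as unsigned 32-bit millimeter counts from the state’s southwest corner, widened to doubles only when we need to do math.

    // Positions as unsigned 32-bit millimeter offsets from the state's
    // southwest corner. The largest extent (~1.387e9 mm) fits in a u32.
    #[derive(Clone, Copy)]
    struct PositionMm {
        x: u32, // east-west, in millimeters
        y: u32, // north-south, in millimeters
        z: u32, // altitude, in millimeters
    }

    impl PositionMm {
        // Widen to f64 for calculations. Every u32 value is exactly
        // representable in a double (53-bit mantissa), so nothing is lost.
        fn to_meters(self) -> [f64; 3] {
            [self.x as f64 * 1e-3, self.y as f64 * 1e-3, self.z as f64 * 1e-3]
        }
    }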

The size of the state is 696,241 square kilometers, which rounds up to 6.963e11 square meters. Assuming 1 meter by 1 meter patches, each split into two triangles, this is 1.3926e12 triangles (about 1.4 trillion).

Every triangle has 3 vertices, each of which has 3 components (x, y, and z: the longitude, latitude, and altitude of a vertex). We’ll pretend this is just a triangle soup without shared vertices (flagrantly false, since every patch shares edges and thus vertices with adjacent patches), and so we expect 9 32-bit numbers per triangle. This is 36 bytes per triangle, or 5.0134e13 bytes for the whole state. Since there are 1e12 bytes in a terabyte, the dataset is going to be about 5.0134e1 terabytes–call it 50 TB.
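The arithmetic is easy to fumble at this scale (I originally did), so here’s the whole chain as a few lines of Rust:

    fn main() {
        let area_m2 = 6.963e11_f64;         // Texas, in square meters
        let triangles = 2.0 * area_m2;      // two triangles per 1 m x 1 m patch
        let bytes = triangles * 9.0 * 4.0;  // 9 u32 components x 4 bytes each
        println!("{:.4e} triangles, {:.4e} bytes", triangles, bytes);
        // prints: 1.3926e12 triangles, 5.0134e13 bytes -- about 50 TB
    }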

Not too bad, by modern standards–50 TB is too much for any single machine’s RAM, but it’s an unremarkable amount of disk. And again, this is completely without any sort of even modestly interesting compression or exploitation of the geometry of the triangles–basically, the most pessimistic case. Reducing the resolution to 4 meter patches means 1/16 the number of patches, which is about 3.1 TB and comfortably within modern workstation reach.

If you wanted to go and just bang on the idea that you don’t really need the exact positions of the corners of the triangles–specifically, noting that the only thing that actually changes is the altitude, the longitude and latitude being implicit in your sampling/addressing scheme–you can shrink your storage requirements to just the 3 altitudes of the vertices of each triangle, and that’ll get you to around 16.7 TB without additional trickery. That’s for the 1 meter resolution case–if you scale it up to 4 meter patches, you get about 1 TB. If you picked 10-meter patches, it works out to something like 167 GB of RAM. That’s within reach of a single beefy workstation.
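A sketch of that altitude-only layout (the struct and its names are mine, not any real format): a flat array of heights, with the horizontal coordinates implied by the array index and the grid spacing.

    // Heightfield: one altitude per grid vertex, longitude/latitude implicit.
    struct Heightfield {
        width: usize,           // vertices east-west
        spacing_m: f64,         // grid spacing: 1.0, 4.0, or 10.0 meters
        altitudes_mm: Vec<u32>, // row-major, millimeter fixed-point
    }

    impl Heightfield {
        // Recover a full 3D point from a vertex index and the grid spacing.
        fn vertex(&self, col: usize, row: usize) -> [f64; 3] {
            let z = self.altitudes_mm[row * self.width + col] as f64 * 1e-3;
            [col as f64 * self.spacing_m, row as f64 * self.spacing_m, z]
        }
    }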

Again, note that this is all about the simplest and most naive way you could store this information. These numbers are the most pessimal estimate and are just an upper bound.

Virtual Texas: Entity Storage

Storing positions as 32-bit unsigned integers is fine, but let’s say that we’ve picked double-precision (64-bit) floats for the actual calculations we want to do–a double’s 53-bit mantissa can exactly encode every integer value we’re going to care about, but still looks enough like a floating-point number to get the nice benefits of processor SIMD instructions like AVX-512.

In addition to this discussion of representing a point, let’s talk about representing an orientation (rotation) in space. To be lazy, let’s go with a quaternion representation using 32-bit floats. We aren’t compressing it or anything, though lossy compression/quantization of quaternions is definitely a thing.
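For flavor, here’s roughly how one common quantization trick (“smallest three”) works–a sketch of the general idea, not something the numbers below assume. Since a unit quaternion satisfies x² + y² + z² + w² = 1, you can drop the largest component and rebuild it from the other three; by convention you negate the quaternion first if needed so the dropped component is non-negative (q and -q encode the same rotation).

    // Reconstruct the dropped component of a "smallest three" quaternion.
    // largest: which component (0..=3) was dropped; rest: the other three
    // (in a real codec, `rest` would itself be a few quantized bits each).
    fn reconstruct(largest: usize, rest: [f32; 3]) -> [f32; 4] {
        let sum_sq: f32 = rest.iter().map(|c| c * c).sum();
        let dropped = (1.0 - sum_sq).max(0.0).sqrt();
        let mut q = [0.0f32; 4];
        let mut it = rest.iter();
        for (i, slot) in q.iter_mut().enumerate() {
            *slot = if i == largest { dropped } else { *it.next().unwrap() };
        }
        q
    }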

Any object in Texas is going to have a position, an orientation, and the linear and angular velocities associated with it, along with a type (say, an unsigned 32-bit int) and an ID (a 128-bit UUID; we might even get away with just a memory address, but who knows).

As a table, an object looks like this:

Field              Type           Size (bytes)
Position           Uint32 Vec3    12
Orientation        Float32 Quat   16
Linear velocity    Uint32 Vec3    12
Angular velocity   Float32 Vec3   12
Type               Uint32         4
ID                 128-bit UUID   16
Total                             72

Okay, so at a minimum each object takes up 72 bytes. Of course, for items that have some kind of state (say, cow AI) or properties (amount of fluid in a bottle), this goes up. We’ll think about that later.
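Expressed as a struct, the record might look like the following sketch (field names are mine; the UUID is stored as two u64s so the layout comes out to exactly 72 bytes with no padding):

    // The minimal per-object record from the table above.
    #[repr(C)]
    struct Object {
        position: [u32; 3],          // millimeter fixed-point, 12 bytes
        orientation: [f32; 4],       // quaternion, 16 bytes
        linear_velocity: [u32; 3],   // 12 bytes
        angular_velocity: [f32; 3],  // 12 bytes
        kind: u32,                   // object type tag, 4 bytes
        id: [u64; 2],                // 128-bit UUID, 16 bytes
    }

    fn main() {
        // No padding: 12 + 16 + 12 + 12 + 4 + 16 = 72.
        assert_eq!(std::mem::size_of::<Object>(), 72);
    }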

Sentient objects

Given the categories of stuff earlier, we might pool sentient objects (with AI or something) into:

Type     Quantity                Approx. Min. Storage Size (B)
People   29,087,070 (2.9087e7)   2.0943e9
Aggies   67,580 (6.7580e4)       4.8658e6
Cows     10,900,000 (1.09e7)     7.848e8
Horses   1,066,800 (1.0668e6)    7.6810e7
Total    41,121,450 (4.1121e7)   2.9607e9

So, the sentient objects can fit in 2.9607e9 bytes (before interesting state). That translates to about 2.96 GB, which is small enough to fit on a Raspberry Pi 4. We also see there are about 41M actual entities to track–we’ll be using that later for making estimates of processing time.

Non-sentient objects

Let’s look at objects that don’t really have any AI of their own: property, and things that could show up in one’s inventory.

Now, that previously mentioned 300K objects/household figure lends itself to calculation, but if you read the article they count things like books, paperclips, pictures, and so forth. For the sake of making a reasonable simulation, we’ll pretend that every house only has maybe 1,000 objects, tops (by consolidating every paperclip and sheet of paper and photo and so on). Assuming ye olde nucleare familye, that’s 4 people, so we’ll further pretend that each Texan only has 250 objects in their possession (cowboy boots, Old 97’s albums, coat, revolver, “Keep Austin Weird” bumper sticker). We’ll also assume that each house has maybe 100 “house objects” of note–toilet, stove, Texas-shaped throw pillow, gun rack, and so on.

Type             Quantity                      Min. Storage Size (B)
Vehicles         24,000,000 (2.4e7)            1.728e9
Boats            582,478 (5.8248e5)            4.1938e7
House stuff      10,611,386 x 100 (1.0611e9)   7.6402e10
Wind turbines    10,700 (1.07e4)               7.704e5
Water windmills  80,000 (8e4)                  5.76e6
Personal stuff   29,154,650 x 250 (7.2887e9)   5.2478e11
Total            8,374,474,278 (8.3744e9)      6.0296e11
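A quick check that those rows add up, with the quantities pulled straight from the table:

    fn main() {
        let house = 10_611_386u64 * 100;    // households x house objects
        let personal = 29_154_650u64 * 250; // Texans (Aggies included) x stuff
        let other = 24_000_000u64 + 582_478 + 10_700 + 80_000;
        let count = house + personal + other;
        let bytes = count * 72; // 72-byte minimal record per object
        println!("{count} objects, {bytes} bytes");
        // prints: 8374474278 objects, 602962148016 bytes -- 6.0296e11
    }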

Non-sentient objects thus are going to be 6.0296e11 bytes, or 6.0296e2 GB. So, under a terabyte of RAM.

Virtual Texas: Storage Estimate

Making the assumptions above for objects, and then using the 4-meter patch estimate for terrain, let’s look at the damage.

Type                  Storage Size (GB)
Terrain @ 4 m patch   3.1334e3
Sentient objects      2.9607
Non-sentient objects  6.0296e2
Total                 3.7393e3
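And one last sanity check on the grand total, in the same spirit as before:

    fn main() {
        let terrain_gb = 3.1334e3;      // 4 m patches, full triangle soup
        let sentient_gb = 2.9607;       // 41M entities x 72 bytes
        let non_sentient_gb = 6.0296e2; // 8.4B objects x 72 bytes
        println!("{:.4e} GB", terrain_gb + sentient_gb + non_sentient_gb);
        // prints: 3.7393e3 GB -- about 3.74 TB
    }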

Aha!

It looks like we might just be able to fit a basic working set of a static Texas into memory.

However, that’s ignoring additional information we might want to include, things like:

Even with that being noted, I think it’s really really neat that this idea of a virtual Texas isn’t as outlandish as I would’ve thought! The storage requirements fit comfortably into what I’ve got sitting on my desk right now in external hard drives (SSDs no less!). If you prefer cloud provisioning (which you really don’t for our use-case, but let’s say you just want to set VC money on fire), an AWS x1e.32xlarge has 3.9 TB of RAM.

This is totally possible…

…or is it?

Next time, we’re going to estimate the compute resources required to run a simulation, and then the networking issues.


Tags: simulations
