
How Do You Solve a Problem Like 100,000 Uncoordinated Driverless Cars?


What cities can learn from video games about adapting to autonomous vehicles

Image credit: Improbable.io

What you see above is a simulation of a small part of a city. Each dot is a car, and the simulation is doing a pretty boring but important thing: showing how congestion ebbs and flows around junctions as cars enter and other cars leave.

If you live in a city, your life is influenced by these kinds of simulations far more than you might realize. They're used for working out how to phase traffic signals, deciding where to build new roads or railways, and planning exactly how many new homes an area can accommodate. Again, maybe a little boring. But it's the difference between your commute taking half an hour and half a day. (Ever sat in a traffic jam and thought, "Whoever programmed these lights is an idiot"? Trust me, they weren't.)

That said, simulations are not foolproof ways of predicting the future for two big reasons:

  1. They're only as accurate as the models they use, and in situations with completely unprecedented activity, it can be difficult to know whether to trust one simulation over another.
  2. They're really limited in scale. You can model an individual traffic junction, and tweak it so it's just right and everything flows smoothly, only to realize it's had an unintended secondary effect on another junction a mile down the road.

On the first point, it just so happens that a big, big problem is on its way for lots of cities: driverless cars. They don't drive like humans, they don't negotiate intersections like humans, and they're going to interact with human drivers and pedestrians in unexpected ways.

London startup Improbable thinks it might have a solution to both of these issues of precedent and scale, and it all comes from trying to build bigger video games. Here's the company's pitch: pretty soon, we won't have to put up with simulations that are only scale models of real-world activity. We'll be able to have 1:1 simulations of almost anything, even cities with millions of people in them doing tens of millions of individual activities.

Improbable was founded in 2012 by computer-science graduates from the University of Cambridge. As CTO Rob Whitehead describes it, co-founder Herman Narula was "a kind of crazy person who I barely knew there, but he pitched me this video game, a first-person shooter in which the entire world was alive."

They wanted to build something that wouldn't have those annoying moments that can break the sense of immersion in a digital world, like when you get out of a car in Grand Theft Auto V, go into a building, and come out again to find the car isn't there anymore. It's a standard part of gaming that almost everyone accepts as a limitation of the simulation: you can't keep tabs on every single object, forever, because it would be both computationally intensive and require developing much more complex in-game economies. (It's fun to burn down a store, for example, but not so fun if that store's closure means your character starves to death because now they can't buy food.)

Whitehead explained exactly why that kind of game world doesn't exist: "[It] would require potentially thousands of cloud computing resources cooperating together, and we looked around to find the technology to let us do this, to build this game, and it didn't exist. So we kind of thought, 'This could be a very powerful piece of technology. Let's build it.'"

The result is SpatialOS, built by a team of engineers with experience not in games but in the world of high-frequency financial trading, where being able to coordinate massive, decentralized networks is a more common challenge.

Here's how to visualize what SpatialOS does. In a normal online multiplayer game, you've typically got a map, and that map runs on a single server. Each server is its own world, and has the job of keeping tabs on everything that exists in that world ("agents," as Whitehead calls them). Players, weapons, bonus objects, furniture, vehicles, destructible scenery, whatever: these are all equally important to a game simulation, which has to know where they are, and their condition, at any one time. Very quickly, even in the most advanced games, you hit a wall determined by simple computational power. The city simulation game Cities: Skylines, for example, has a hard limit of 60,000 agents; if your city's population grows bigger than that, all those extra people are just abstracted data, or lines on a graph.

That's why, for online gaming against other people, you can typically choose to join one server from a large selection; sticking everyone into one single server just can't work when there's only one machine with the responsibility of knowing what everyone's doing.

SpatialOS goes a different way, making it possible to run lots and lots of servers in parallel: there is no longer a central server somewhere with final authority over everything. Instead, you can build decentralized, patchwork blankets of simulations that run side by side, where each server is only responsible for a single, small square of the map. As a player walks through the game, they're actually jumping from server to server, seamlessly; each server, liberated from having to know exactly where everything is in the game, only has to keep track of what's in its own patch of land.

Very quickly, this makes for world-building that, by orders of magnitude, outpaces traditional simulations. "Instead of 60,000 agents across a single map, you've got 60,000 agents per, let's say, a 100-square-foot [or around 10-square-meter] patch of land. SpatialOS juggles all of this coordination in the background, like a kind of meta-engine that sits over everything else," said Whitehead.
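The partitioning idea itself is easy to sketch. Below is a toy, single-process illustration of the general technique, not SpatialOS's actual architecture; every class and name here is invented for the example. The map is cut into a grid of patches, each owned by a worker that only tracks the agents inside its own square, and an agent is handed between workers when it crosses a patch boundary.

```python
# Toy sketch of grid-based spatial partitioning with agent handoff.
# Illustrative only: not how SpatialOS actually works.

PATCH_SIZE = 100.0  # each worker owns a 100 x 100 square of the map

class Worker:
    """Owns one patch of the map and only tracks agents inside it."""
    def __init__(self, cell):
        self.cell = cell      # (grid_x, grid_y) coordinate of the patch
        self.agents = {}      # agent_id -> (x, y) position

class World:
    def __init__(self):
        self.workers = {}     # (grid_x, grid_y) -> Worker

    def cell_for(self, x, y):
        return (int(x // PATCH_SIZE), int(y // PATCH_SIZE))

    def worker_for(self, x, y):
        cell = self.cell_for(x, y)
        if cell not in self.workers:
            self.workers[cell] = Worker(cell)   # spin up a worker on demand
        return self.workers[cell]

    def spawn(self, agent_id, x, y):
        self.worker_for(x, y).agents[agent_id] = (x, y)

    def move(self, agent_id, x, y, new_x, new_y):
        """Move an agent; hand it to a new worker if it crossed a boundary."""
        old = self.worker_for(x, y)
        new = self.worker_for(new_x, new_y)
        if old is not new:
            del old.agents[agent_id]   # the "seamless" server-to-server handoff
        new.agents[agent_id] = (new_x, new_y)

world = World()
world.spawn("car-1", 5.0, 5.0)             # lands in patch (0, 0)
world.move("car-1", 5.0, 5.0, 150.0, 5.0)  # crosses into patch (1, 0)
```

The key property is the one Whitehead describes: no object in `World` ever holds a global list of agents, so adding map area just means adding workers.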

He's particularly excited about Worlds Adrift, a massively multiplayer online game (MMO) from Bossa Studios that will be one of the first built specifically to showcase SpatialOS; it'll have roughly 10 million agents in each game. The idea is that the game won't ever have to forget about something because of the limits of the simulation, which will be restricted only by the number of servers in the cloud you bolt on. "You know, when you're flying in a ship, and a hundred different panels fall to the ground and they stay there, and later you come back and there are creatures crawling around, laying eggs, and forming a real ecology. It's like color TV versus black-and-white," said Whitehead. "When you make an infrastructure that means you don't have to fake simulating a world, you can simulate real worlds, you know?"

Here's where driverless cars become relevant.

It turns out that SpatialOS is much, much more useful than as just a way to make better multiplayer games; useful enough to attract $20 million in VC funding from Andreessen Horowitz this time last year. Whitehead won't reveal many of the partnerships that Improbable is now involved with, but most of them aren't in the gaming world. They're with universities, government bodies, and private companies looking to dramatically improve their ability to model real-world behavior.

The theory is that SpatialOS will give non-specialists the kind of supercomputer-level modeling that, right now, only exists for experts, in just the same way that cloud computing has made large-scale data analysis available to non-specialists. That's the idea, at least.

Take the junction simulation at the top of this piece. Since SpatialOS can scale up so much, it can patchwork together a 1:1 model of central London, using industry-standard traffic modeling for every real-world traffic junction. But more than that, it can also integrate other models: car traffic, pedestrian traffic, public transit stations, cellphone and internet service, water supplies, and more, with each model running on its own cloud server as part of the patchwork. Just like a video game, each individual server might only be running a single simulation, but scale that up to thousands or millions of servers all running models in parallel, sharing data with each other, and you've got a 1:1, real-world-scale model of the world.

"We looked around [academia and engineering], they just didn't have anything like this," Whitehead said. "We saw these siloed, single simulations. The industry standard for simulating a single junction in a city is just a single junction. You can't scale it up to a city, you can't link them together. You can't unify that junction simulation with an energy model, or a kind of autonomous-vehicle fleet-optimization model. The idea of taking these different bits of siloed data and unifying them together to ask if they can answer bigger questions just didn't exist."

As Whitehead describes it, SpatialOS is best suited for solving simulation problems where you've got objects in a network that interact most strongly with other objects close by, but which can have ripple effects farther away. It's about taking specialized, single simulations, for cancer cells in a tumor, for instance, and daisy-chaining them together to better understand how killing one cell can affect the cells in a completely different region, just like the ripple effects from disturbing an ecosystem in Worlds Adrift. Computationally, this is also the same kind of simulation as a cascading failure that starts at a single node in the internet's trunk network, or as modeling how huge fleets of driverless cars will behave in complex city environments.

Whitehead explained: "Imagine a city sandwich, where in each layer you've got not a flavor but a kind of simulation. Not nice to eat, but you'll be unifying best-in-class transport models, a traffic simulation, a pedestrian simulation, a power-grid simulation, and a telecommunications model on top of it, just to be able to simulate a single junction. And what you'd do to scale up the sandwich in the cloud is run a hundred, maybe a thousand of them, and then we'd stitch them all together like a patchwork quilt and orchestrate them in real time to simulate the entire city."
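The coupling pattern behind that "sandwich" can be sketched in a few lines. This is a deliberately crude illustration with made-up toy models: the layers step in lockstep and exchange state through a shared blackboard each tick, so that a failure in one layer (a power outage) ripples into the others (growing car queues, more pedestrians weaving between stopped cars). Nothing here reflects how SpatialOS actually couples models.

```python
# Toy "city sandwich": independent layers of simulation for one junction,
# coupled only through a shared dict of published state. Illustrative only.

class PowerLayer:
    def __init__(self, healthy=True):
        self.healthy = healthy
    def step(self, shared):
        shared["power_ok"] = self.healthy          # publish grid status

class TrafficLayer:
    def __init__(self):
        self.queue = 10                            # cars waiting at the junction
    def step(self, shared):
        if shared.get("power_ok", True):           # signals only work with power
            self.queue = max(0, self.queue - 3)    # junction clears cars
        else:
            self.queue += 2                        # dead signals: queue grows
        shared["car_queue"] = self.queue

class PedestrianLayer:
    def __init__(self):
        self.crossing = 0
    def step(self, shared):
        # more stopped cars -> more people crossing between them
        self.crossing = shared.get("car_queue", 0) // 2
        shared["pedestrians"] = self.crossing

def run(layers, ticks):
    shared = {}                                    # the state layers exchange
    for _ in range(ticks):
        for layer in layers:                       # power publishes before traffic reads
            layer.step(shared)
    return shared

healthy = run([PowerLayer(True), TrafficLayer(), PedestrianLayer()], 4)
outage  = run([PowerLayer(False), TrafficLayer(), PedestrianLayer()], 4)
```

After four ticks, the healthy city clears its queue entirely, while the one with a power outage ends with 18 queued cars and 9 pedestrians picking their way through them; the point is that no single layer contains that outcome.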

This is extremely important for city planners. The big players in autonomous vehicles, companies like Google, are focusing on the nuts and bolts of getting each car to drive by itself. They're going to be, primarily, reactive machines, which scan their environments and choose actions based on what's safe.

We can predict that a driverless car will brake to avoid running over a child (we hope), but what's harder to predict is how a hundred of them, or a thousand, or a hundred thousand, will behave en masse in a major city. How will things change over time, as first ten percent, then a quarter, then half, then almost all of the cars on the road end up driverless? And how will that shift incidentally affect other networks, like foot traffic and public transit? These are the most critical transport-planning questions of the next few decades.

To work on answering them, Improbable is collaborating with Immense Simulations, a new company launched in February of this year and spun out of the wonderfully named Transport Systems Catapult, a British government research center set up in 2012. (There are nine Catapults, each designed to support and encourage cutting-edge R&D in emerging sectors like renewable energy or cell therapy.) Its CEO, Robin North, was previously a researcher in transport systems and their environmental impact at Imperial College, and he says they're working on modeling the first kinds of industries to be disrupted by autonomous vehicles.

"We realized that the difference between being competitive and making money out of a system (or not) is knowing where you need to be to pick up fares or serve your community," he explained. "Traditionally we sketch bus routes, or taxi drivers know where to go based on experience, but if you take the drivers out of the vehicles, you lose the ability to be strategic about where your vehicle goes, and you have to replace that to some extent with a computer system."

The idea is to develop specific software tools that can be rolled out in cities to mitigate the kinds of changes driverless cars will bring. Here, North's keen to stress that there won't be simple, one-size-fits-all solutions. Every city is going to have its own blend of responses to changing traffic and transit needs.

"Potentially, we can run larger, more complex, nearer real-time scenarios," he said. "Normally what you see is a distinction between planning and operations, things like 'What shape should a city be in the years to come?' or 'Do we want to invest in a new bus line or tube line?' And then you get different types of tools that are used for issues like 'Should we close Victoria station for an hour and divert passengers?' or 'Should we put extra buses on this route, since it's very congested?' In many ways they should be drawing from the same data, and what we're trying to do is bring those things together easily."

This is potentially all very exciting, and it's a nice fit with the smart-cities movement. But could it all be snake oil? It's impossible to hear about this kind of revolution-by-the-cloud without remembering the same kind of hype about big data a decade ago, when we were promised that the barriers between reality and the digital world would dissolve, leaving one big, rational, understandable world of logic and predictability. (That didn't happen, of course.)

Every agent's behavior in a simulation is only as realistic as the model used to predict it, after all. If you've got a problem in just one model, it might be small enough to have little effect when applied to the real world; but when you've got 10 million errors in 10 million models, that's a potential generator of some massively flawed predictions. We can change bus routes if they aren't quite efficient enough, but if we're going to rely on this to choose how to build actual infrastructure, it's a lot harder to rip out a bunch of roads and start again.

Whitehead, rather defensively, doesn't agree: "It's that thing with simulations: all models are wrong, but some models are useful. This will show the consequences of what you think you know, and provide a framework where you can compare against real-world data to refine all the time."

"The whole idea of big data is about looking at the past to ask questions about the future, but the issue is, when you want to plan to build a new airport, or replace 50 percent of London's vehicles with autonomous vehicles, the nature of the disruption in that system is so great that you can't use past data to ask what will happen, because you've changed the system; the data is no longer relevant. This is a thing to generate big data that you can use the same analytics on, but it's a synthetic environment. It's a virtual city in a box."

"With this, you no longer have data, but you have a hypothesis of how things will behave on a smaller level. We know roughly the dynamics of two autonomous vehicles interacting together, but we don't know what the cascading consequences of a hundred thousand autonomous vehicles driving around in swarms will be. You get these weird cascading failures you'd never expect. It's the chaos that emerges from simple rules."
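That phrase, "chaos that emerges from simple rules," has a classic small-scale demonstration: the Nagel-Schreckenberg traffic model, in which every car on a circular road follows three purely local rules (speed up, don't hit the car ahead, occasionally dawdle), and yet phantom jams condense out of nowhere once the road is busy enough. The sketch below is that textbook model, offered as an analogy for the emergent fleet behavior Whitehead describes, not anything Improbable runs.

```python
# Nagel-Schreckenberg cellular-automaton traffic model on a circular road.
# Simple local rules per car; jams emerge globally with no single cause.
import random

ROAD = 100        # cells in the circular road
CARS = 35
V_MAX = 5         # top speed, in cells per tick
P_DAWDLE = 0.3    # chance a driver randomly slows down

def step(positions, speeds):
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % ROAD
        speeds[i] = min(speeds[i] + 1, V_MAX, gap)   # accelerate, keep distance
        if speeds[i] > 0 and random.random() < P_DAWDLE:
            speeds[i] -= 1                           # random dawdling
    for i in range(n):                               # parallel position update
        positions[i] = (positions[i] + speeds[i]) % ROAD

random.seed(1)
positions = sorted(random.sample(range(ROAD), CARS))
speeds = [0] * CARS
for _ in range(200):
    step(positions, speeds)

stopped = sum(1 for v in speeds if v == 0)
print(f"{stopped} of {CARS} cars are stationary, in jams no one caused")
```

At this density the cars can never reach free-flow speed (the total empty space on the road bounds their combined speed), so stop-and-go waves appear and propagate backwards, exactly the kind of collective behavior no single car's rules mention.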

Regardless, we should start to see the physical consequences of analysis performed with SpatialOS over the next few years (and not just in city transport, though the company couldn't discuss some of its more recent, sensitive contract work). Meanwhile, Whitehead confidently expects to soon be able to open his phone and get a city forecast for the coming day, just as we're able to get weather forecasts now.

"Why is there not, 'By the way, based on our simulations, the city will be like this: there's going to be a major sporting event in northeast London which will affect these tube lines like this, so maybe travel like this instead'? It's because you need to unify lots of models. That's what you need this platform for," he said.

"If you were a structural engineer and you didn't simulate your bridge before building it, you'd be called reckless. Right now we have major infrastructure projects like building an airport. Has anyone run a simulation of the city post-airport? No, because there isn't a way to do that. I think that's going to become the norm."

[Edit 05/03/2016: This piece was edited in the introduction to clarify the limitations of some simulations.]
