The Universe integration with Grand Theft Auto V, built and maintained by Craig Quiter's DeepDrive project, is now open-source. To use it, you'll just need a purchased copy of GTA V, and then your Universe agent will be able to start driving a car around the streets of a high-fidelity virtual world.
GTA V in Universe gives AI agents access to a rich, 3D world. This video shows the frames fed to the agent (artificially slowed to 8FPS, top left), diagnostics from the agent and environment (bottom left), and a human-friendly free camera view (right). The integration modifies the behavior of people within GTA V to be non-violent.
To get started, run a GTA V server instance. You'll need the universe Python library installed (no need to upgrade if you've installed it previously). You can then attach an agent by running the following code. As usual, observations are pixels. In addition to keyboard/mouse, the agent can use a simulated joystick:
[Image: An observation supplied to the agent]

```python
import gym
import universe  # register Universe environments into Gym
from universe.spaces import joystick_event

env = gym.make('gtav.SaneDriving-v0')
env.configure(remotes='vnc://$host:$port')  # point to the GTA V Universe server
observation_n = env.reset()
while True:
    steer = joystick_event.JoystickAxisXEvent(-1)    # turn right
    throttle = joystick_event.JoystickAxisZEvent(-1)  # go in reverse
    # Alternatively, use WASD to steer: ('KeyEvent', 'w', True)
    action_n = [[steer, throttle] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```
DeepDrive is a platform for creating open self-driving car AI. DeepDrive uses modding frameworks and memory inspection techniques to repurpose GTA V as a self-driving car simulator; it also provides pre-trained self-driving agents and the datasets used to train them. The existing DeepDrive environment and agent are now built on top of Universe.
Work on DeepDrive started before Universe, and so it provides a good contrast for integrations with modern games before and after Universe. The original DeepDrive implementation required a local Windows PC and took about a day to fully set up the game and the agent. The new DeepDrive can be set up in about twenty minutes, supports agents on Linux or OS X, and is compatible with pre-existing Universe agents (though best results come from using the simulated joystick rather than the more common keyboard/mouse).
Today's release includes the environment integration, the client protocol, and a baseline driving agent, described below.
The integration supports selecting the camera offset and field of view. It also includes reward functions for training via reinforcement learning, including collision avoidance, distance from destination, and staying on the road.
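The exact reward definitions live inside the environment, but the general shape of such a driving reward can be sketched as a weighted combination of per-step signals. The helper arguments and weights below are hypothetical, chosen only to illustrate how collision avoidance, progress toward a destination, and staying on the road might combine into one scalar:

```python
def driving_reward(collided, distance_to_destination, prev_distance, on_road,
                   w_collision=-10.0, w_progress=1.0, w_offroad=-1.0):
    """Combine per-step driving signals into a scalar reward.

    Hypothetical weighting -- the actual GTA V environment defines its own
    reward functions for collision avoidance, distance from destination,
    and staying on the road.
    """
    reward = 0.0
    if collided:
        reward += w_collision  # penalize collisions
    # Reward progress: positive when the agent moved closer to the goal.
    reward += w_progress * (prev_distance - distance_to_destination)
    if not on_road:
        reward += w_offroad  # penalize leaving the road surface
    return reward
```

A reinforcement-learning agent maximizing this sum is pushed toward all three objectives at once; tuning the relative weights trades them off against each other.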
Before Universe, DeepDrive used a DirectX hook for screen capture and required writing agents on Windows against the C++ interface to Caffe. Now the game runs on a Windows virtual machine in the cloud and communicates with Universe via websockets and VNC, so the agent can run on Linux or OS X and be written in any ML framework. Universe transfers pixels, keyboard, and mouse events over VNC, and other information over websockets. To support joystick control of steering and throttle, we send joystick actions to the environment over websockets. [Image: Craig Quiter]
As mentioned in the Universe launch post, the client can sustain up to 20 FPS over the public internet.
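The split between the two channels can be pictured with a small serialization sketch. The message schema below is purely illustrative, not the actual Universe wire format; it only shows the idea of packaging continuous joystick axis values for a websocket channel, alongside the VNC channel that carries pixels and keyboard/mouse events:

```python
import json

def encode_joystick_action(steer, throttle):
    """Illustrative only: package joystick axis values as a JSON message
    for a websocket channel. The real Universe wire format differs."""
    return json.dumps({
        "type": "joystick",
        "axes": {"x": steer, "z": throttle},  # x: steering, z: throttle/reverse
    })

# Hard right turn plus reverse, matching the example agent above.
msg = encode_joystick_action(-1.0, -1.0)
```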
GTA V gives researchers access to a rich, diverse world for testing and developing AI. Its island setting is almost one fifth the size of Los Angeles, offering a broad range of scenarios in which to test systems. Add to that the 257 different vehicles, 7 types of bicycles, and 14 weather types, and it's possible to explore a huge number of permutations using a single simulator. [Image: GTA V's 49 square-mile island of San Andreas gives researchers the ability to train AI agents across bustling metropolitan areas, winding mountain passes, flat deserts, and freeways.]
The environment also enables collecting massive amounts of labelled data: you can use the underlying GTA V engine to collect 2D or 3D bounding boxes and segmentation labels for cars, pedestrians, bicycles, animals, road surface, traffic signs, or any one of GTA V's other 7000+ objects. The environment can also be extended via mods for real-world vehicles, road construction, and even entire cities.
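A labelled frame collected this way might pair an image with the engine-derived annotations. The schema below is a minimal sketch with invented field names; the integration exposes the underlying engine's labels, not this exact structure:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LabeledFrame:
    """One captured frame plus labels pulled from the game engine.

    All field names here are illustrative, not the integration's schema.
    """
    image_path: str
    # 2D boxes as (class_name, x, y, width, height) in pixels.
    boxes_2d: List[Tuple[str, int, int, int, int]] = field(default_factory=list)
    weather: str = "clear"

frame = LabeledFrame("frame_000001.png")
frame.boxes_2d.append(("car", 120, 80, 64, 40))
frame.boxes_2d.append(("pedestrian", 300, 90, 20, 50))
```

Because the labels come straight from game state rather than human annotators, a dataset of such records can be generated at whatever scale the simulator can render.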
This release includes a baseline agent, trained via imitation learning on 21 hours (about 600,000 images) of the game's AI driving. (The built-in game AI is a good initial target: it performs better than a typical human since it can access internal game state, though it still makes mistakes such as making U-turns on the freeway.) The baseline agent can drive in a variety of weather conditions, react to traffic, and keep to its lane. This agent is a starting point, and we invite the community to improve upon it!
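Imitation learning of this kind reduces to behavioral cloning: regress the demonstrator's actions (here, the game AI's steering and throttle) from the observed pixels. The toy sketch below uses a linear model and synthetic data purely to illustrate the training loop; the released baseline agent is of course a far larger model trained on real game frames:

```python
import numpy as np

def behavioral_cloning(frames, actions, lr=0.01, epochs=200):
    """Fit a linear policy mapping flattened frames to (steer, throttle)
    by gradient descent on mean squared error against demonstrator actions.

    A toy stand-in for the baseline agent's training procedure.
    """
    X = frames.reshape(len(frames), -1)   # (N, pixels)
    Y = actions                            # (N, 2): steer, throttle
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        pred = X @ W
        grad = X.T @ (pred - Y) / len(X)   # gradient of MSE loss
        W -= lr * grad
    return W

# Synthetic demonstration data: tiny "frames" whose mean brightness
# plays the role of the demonstrator's steering signal.
rng = np.random.default_rng(0)
frames = rng.random((32, 4, 4))
actions = np.stack([frames.reshape(32, -1).mean(axis=1),  # "steer"
                    np.full(32, 0.5)], axis=1)             # constant "throttle"
W = behavioral_cloning(frames, actions)
pred = frames.reshape(32, -1) @ W
```

The same loop scales up directly: swap the linear map for a convolutional network and the synthetic arrays for recorded (frame, action) pairs from the demonstrator.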
Other researchers have shown that you can train vision systems in GTA V and use those to classify images in the real world. The Universe GTA V integration makes it easy to try out reinforcement learning techniques on a simulated self-driving system.
The GTA V integration into Universe automatically inherits all of the tooling and semantics that Universe provides for comparability and shareability, and makes it easy to benchmark the performance of agents on GTA V. It can be used standalone, or as yet another environment for a general Universe agent to access. We look forward to supporting more community-contributed environments like this in the future.