In the past, when developers wanted to create virtual cityscapes, they had to build every structure individually, which can be a long and painstaking process. NVIDIA, however, thinks it may have a way to speed up the process considerably: AI.
The company recently published research showing off a model that can take real-life video and translate it into a virtual, AI-generated version of that scene, with AI generating the graphics instead of the traditional graphics engine that is used most of the time.
Of course, the end result isn’t quite as good as the graphics you might expect from AAA games, but the ability of AI to generate graphics of this quality is already an impressive feat. NVIDIA achieved this by driving through cities to gather footage, then using a segmentation network to extract high-level semantics from those sequences.
The researchers then used Unreal Engine 4 to create a basic topology of the environment, which means that while AI generates the graphics, the scene’s structure still relies on a traditional engine. According to Bryan Catanzaro, Vice President of Applied Deep Learning at NVIDIA, “One of the main obstacles developers face when creating virtual worlds, whether for game development, telepresence, or other applications is that creating the content is expensive. This method allows artists and developers to create at a much lower cost, by using AI that learns from the real world.”
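The pipeline described above, where a segmentation network produces per-pixel semantic labels and a generative model renders the pixels conditioned on those labels, can be sketched in miniature. The snippet below is a toy illustration only: the class names, palette, and `render_frame` function are all hypothetical, and a fixed label-to-color lookup stands in for the trained neural network, but it shows the same data flow of "semantic map in, rendered frame out."

```python
# Toy sketch of the semantic-map-to-frame data flow. In NVIDIA's
# system a trained neural network performs the rendering step; here
# a fixed color palette stands in for it. All names are hypothetical.

# Per-pixel semantic classes, as a tiny 3x4 label map (what a
# segmentation network might output for one frame).
ROAD, BUILDING, SKY = 0, 1, 2
semantic_map = [
    [SKY,      SKY,      SKY,  SKY],
    [BUILDING, BUILDING, SKY,  BUILDING],
    [ROAD,     ROAD,     ROAD, ROAD],
]

# Stand-in "generator": maps each semantic class to an RGB color.
PALETTE = {
    ROAD:     (90, 90, 90),
    BUILDING: (180, 120, 80),
    SKY:      (135, 206, 235),
}

def render_frame(label_map):
    """Turn a 2-D semantic label map into a 2-D grid of RGB pixels."""
    return [[PALETTE[label] for label in row] for row in label_map]

frame = render_frame(semantic_map)
print(frame[2][0])  # bottom-left pixel is road-colored: (90, 90, 90)
```

In the real system, swapping the lookup table for a learned model is what lets the output look like photographic footage rather than flat colors, while the label map itself can come from a traditional engine's scene layout.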