In the past, when developers wanted to create virtual landscapes of cities, they had to build every structure individually, which can be a long and painstaking process. However, NVIDIA thinks it may have a way to speed up the process considerably: AI.
The company recently published research showing off a model that can take real-life video and translate it into a virtual, AI-generated version of that scene, with AI producing the graphics in place of the traditional graphics engine that is used most of the time.
Of course, the end result isn't quite as good as the graphics you might expect from AAA games, but the fact that AI can generate graphics that look this convincing is already an impressive feat. NVIDIA achieved this by driving through cities to gather footage, then using a segmentation network to extract high-level semantics from those sequences.
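To give a rough sense of what "extracting high-level semantics" means here: a segmentation network assigns each pixel of a video frame a class label (road, building, sky, and so on), and the resulting label map is what conditions the image generator. The sketch below is purely illustrative and uses a hand-made toy label map rather than a real network; the class IDs and the one-hot encoding step are assumptions about a typical pipeline, not NVIDIA's actual code.

```python
import numpy as np

# Toy semantic label map for a tiny 3x4 "frame":
# 0 = road, 1 = building, 2 = sky (class IDs are hypothetical).
# In a real pipeline this map comes from a segmentation network
# run on dashcam footage, one label per pixel.
labels = np.array([
    [2, 2, 2, 2],
    [1, 1, 2, 2],
    [0, 0, 0, 0],
])

num_classes = 3

# One-hot encode the label map into shape (H, W, num_classes).
# Conditional image generators commonly consume semantic maps in
# this form rather than as raw integer labels.
one_hot = np.eye(num_classes, dtype=np.float32)[labels]

print(one_hot.shape)   # (3, 4, 3)
print(one_hot[0, 0])   # top-left "sky" pixel -> [0. 0. 1.]
```

The generator then learns to map such label tensors (plus temporal information from previous frames) to photorealistic pixels, which is why gathering varied real-world driving footage matters: it supplies the paired frames and semantics the model trains on.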