One of the things AI still cannot beat humans at is imagining a scene from a viewpoint it has never actually seen. While humans can build a mental picture of a new place from fragments of past experience, artificial intelligence has been unable to replicate that ability until now. DeepMind, the London-based AI subsidiary of Google's parent company Alphabet, has demonstrated neural networks that can reconstruct 3D environments in vivid graphical detail simply by looking at a handful of 2D images, without requiring any prior 3D input.
DeepMind researchers Ali Eslami and Danilo Rezende lead the team behind the new software, which uses deep neural networks to replicate this human capability, inferring the geometry of a scene without requiring any 3D data as input. Fed only a few 2D snapshots, the software builds a full-fledged mathematical representation of the 3D environment using deep learning. With Google investing a large part of its resources into AI, we see quite a bit of progress: AI is already capable of identifying and categorizing images, playing games, offering surveillance solutions, acting as a virtual assistant, and much more.
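Conceptually, the system (DeepMind calls it a Generative Query Network, or GQN) has two parts: a representation network that compresses each observed image plus its camera viewpoint into a scene vector, and a generation network that renders the scene from a new, unseen viewpoint. The sketch below is a deliberately minimal PyTorch illustration of that two-part idea, not DeepMind's actual model; the class names, layer sizes, and 7-number viewpoint encoding are all assumptions, and the real GQN uses a far richer recurrent latent renderer.

```python
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Encodes one 2D image plus its camera viewpoint into a scene vector.
    (Illustrative architecture; not DeepMind's actual network.)"""
    def __init__(self, view_dim=7, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # pool to a single feature vector
        )
        self.fc = nn.Linear(64 + view_dim, repr_dim)

    def forward(self, image, viewpoint):
        feats = self.conv(image).flatten(1)       # (B, 64)
        return self.fc(torch.cat([feats, viewpoint], dim=1))

class GenerationNet(nn.Module):
    """Renders an image for a query viewpoint from the aggregated scene vector."""
    def __init__(self, view_dim=7, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + view_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_viewpoint):
        h = self.fc(torch.cat([scene_repr, query_viewpoint], dim=1))
        return self.deconv(h.view(-1, 128, 8, 8))  # (B, 3, 64, 64) image

# Aggregate the representations of a few observed views by summing,
# then render the scene from a camera pose the model has never seen.
rep_net, gen_net = RepresentationNet(), GenerationNet()
images = torch.rand(3, 3, 64, 64)   # three observed 2D views of one scene
views = torch.rand(3, 7)            # camera position/orientation per view
scene_repr = rep_net(images, views).sum(dim=0, keepdim=True)
novel_view = gen_net(scene_repr, torch.rand(1, 7))  # image at a new pose
```

Summing the per-view representations is what lets a model like this accumulate knowledge of a scene from multiple partial observations, and the query viewpoint at render time need not match any of the views it was shown.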
All of this is achievable without requiring any hard-coded database of prior knowledge. The AI experiences the new environment much as a human would and renders its output in the form of 3D data. Crucially, perspective, shadows, occlusion, and lighting are not programmed in; the network learns to account for them on its own, which is what allows it to create a realistic 3D environment rich in detail and information.
The AI is capable of producing stellar 3D output despite having no hand-written physics or graphics-engine rules to guide it. It simply 'learns' all of the graphical detail by looking at the images, thanks to the versatility of the neural networks created by Google DeepMind.
What do you think of the latest advancements in AI by Google's DeepMind team? Let us know in the comments below. Also, to get instant tech updates, follow TechNadu's Facebook page and Twitter handle.