NVIDIA is researching tools and techniques to accelerate and simplify digital human creation. Digital humans are already widespread in media and entertainment, in the shape of video game characters or CGI characters in movies. However, creating a digital human is extremely labor-intensive and manual, requiring hundreds of domain experts such as artists, programmers, and technical artists.

What is a digital human?

A digital human is a virtual-world version of ourselves. From the early days of 3D games, Virtua Fighter was one of the first titles to show 3D characters fighting each other. These days, players can experience the journeys of memorable characters across many games. Digital humans have also appeared in films such as Avengers: Endgame and The Curious Case of Benjamin Button, in which a digital double of Brad Pitt portrayed an older version of his character. New cases are also emerging in entertainment through storytelling with digital avatars.

Digital humans are commonly measured along three axes:

  • Realism versus stylization
  • Real-time versus offline
  • AI-driven versus human-driven

Bringing digital humans to life

There are three main components to making a digital human: 

  • Generation

To generate a digital human, teams have to create 3D models, textures, shaders, a skeleton rig, and skin deformation so that the skin follows the skeleton.

  • Animation

For animation and movement, artists have to create the physical elements of the digital human, from the body and face to hair and clothing, usually combining deformation and simulation to achieve the right motion for each part. For realistic performance, there are two main approaches: animating by hand and using performance capture techniques to record motion data. Often, the two are combined.
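Combining hand animation with capture data often comes down to blending two animation curves over the same frame range. The sketch below assumes a simple per-frame linear blend; the function and its inputs are illustrative, not a specific tool's API.

```python
def blend_curves(hand_keyed, mocap, weight):
    """Linearly blend two animation curves (one value per frame).

    weight = 0.0 -> pure hand-keyed animation,
    weight = 1.0 -> pure performance-capture data.
    """
    if len(hand_keyed) != len(mocap):
        raise ValueError("curves must cover the same frame range")
    return [(1.0 - weight) * h + weight * m
            for h, m in zip(hand_keyed, mocap)]

hand = [0.0, 10.0, 20.0]     # hand-keyed elbow angles (degrees) per frame
capture = [2.0, 12.0, 18.0]  # captured angles for the same frames
blended = blend_curves(hand, capture, weight=0.5)  # -> [1.0, 11.0, 19.0]
```

An animator might lean on capture data for broad, naturalistic motion and raise the hand-keyed weight only where a stylized pose matters.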

  • Intelligence

Today, artificial intelligence (AI) handles only specific types of performance, but that is changing rapidly. The goal of any of these approaches is context-based behavior, so that the digital human can act and move in a believable way. Finally, artists bring intelligence to digital humans through bidirectional interaction.
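At its simplest, context-based behavior means mapping what the avatar perceives to a believable reaction. The rules and behavior names below are purely illustrative; real systems use far richer models, but the shape of the mapping is the same.

```python
def choose_behavior(context):
    """Pick a reaction from the observed context (a dict of perceptions)."""
    if context.get("user_speaking"):
        return "listen_and_nod"       # react to being spoken to
    if context.get("user_nearby"):
        return "make_eye_contact"     # acknowledge presence
    return "idle_look_around"         # default idle behavior

# The same avatar reacts differently as the situation changes:
print(choose_behavior({"user_speaking": True}))  # listen_and_nod
print(choose_behavior({"user_nearby": True}))    # make_eye_contact
print(choose_behavior({}))                       # idle_look_around
```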

Every voice will have a face

NVIDIA believes that digital humans are essential for virtual world experiences and that everyone will one day have their own digital version of themselves, whether it’s an accurate or stylized avatar.

With NVIDIA Omniverse, the company wants to create a framework in which different types of digital humans can coexist. Pixar's Universal Scene Description (USD) is becoming the standard format across the 3D industries, and Omniverse is helping drive these efforts toward USD. The NVIDIA team commented:
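USD scenes are plain-text describable in the `.usda` format. A minimal sketch of how a digital human might be laid out as a USD prim hierarchy (the prim names here are illustrative; `Mesh` and `Skeleton` are real USD schema types):

```usda
#usda 1.0

def Xform "DigitalHuman"
{
    def Mesh "Body"
    {
    }

    def Skeleton "Rig"
    {
    }
}
```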

“Over time, the connection between real humans and digital humans will grow. It will go beyond watching a puppet on the computer. Eventually, the computer will read and interact with us, just as we do in real life.”

Editor @ DevStyleR