This new AI that is able to look at a bunch of unstructured motion data, like this, then place a character in a video game and see all the amazing things that it can learn from it

17.01.2023, by admin

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér.

Today we are going to have a look at this new AI that is able to look at a bunch of unstructured motion data, like this, then place a character in a video game, and see all the amazing things that it can learn from it. Walking, running, dancing, you name it. And, look at that! Yes, this works for bipeds and quadrupeds at the same time. Now that is quite a challenge, so let's see what is going on here.

First, why is this a challenging problem? Why do we not just copy the movements from the training data? These were recorded from real humans and dogs, after all. Well, that will not cut it here. You see, in video games, we get to control these characters, which means that we can stop any movement at any time and start a new one. Aha! So this requires neural networks to look at a big soup of motion data. And how big is this soup? I will tell you in a moment, and I think you will be very surprised.

And wait, even learning the essence of these movements is not enough; it needs to learn the transitions too, all by itself. Oh yes, transitions are key. Why? Well, look at this. Here is a previous method, and whenever we change direction, look. These transitions are quite unnatural, and this is not parkour; we are still talking about just running around, a much simpler task to animate. And it is still not working too well. Not good.

And oh my! Are you seeing what I am seeing? What is that? Oh yes, that's right. This is foot sliding, the bane of our existence. More on that in a moment. But now, let's see how the new method solves this problem.
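To get a feel for why transitions are hard, here is a minimal, hypothetical sketch of the naive approach (not the paper's method): linearly cross-fading between two motion clips. Blending joint trajectories this way ignores foot contacts and timing, which is exactly why older techniques produce those floaty, unnatural transitions.

```python
import numpy as np

def crossfade(clip_a, clip_b, blend_frames):
    """Naively blend the end of clip_a into the start of clip_b.

    clip_a, clip_b: arrays of shape (frames, joints, 3) holding joint
    positions. Linear blending ignores contacts and phase alignment,
    which is why it tends to look unnatural in practice.
    """
    fade = np.linspace(0.0, 1.0, blend_frames)[:, None, None]
    tail = clip_a[-blend_frames:]   # last frames of the old motion
    head = clip_b[:blend_frames]    # first frames of the new motion
    blended = (1.0 - fade) * tail + fade * head
    return np.concatenate([clip_a[:-blend_frames], blended,
                           clip_b[blend_frames:]])

# toy example: a "walk" clip and a "run" clip, 30 frames, 20 joints
walk = np.zeros((30, 20, 3))
run = np.ones((30, 20, 3))
motion = crossfade(walk, run, blend_frames=10)
print(motion.shape)  # (50, 20, 3)
```

The blend is smooth in position, but nothing stops a planted foot from drifting mid-fade, which is where the foot sliding we just saw comes from.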

Wow. Now that is fantastic! So much better.

And what about the doggies? Well, previous methods are not too bad here, but the new one is so much more fluid. The movement of the body is now better, but if you don't find that too noticeable, also check the movement of the tail. Previous methods seem a lot more tentative, while the new one is much more lifelike.

This new technique can also perform more advanced actions; I loved how it performs the dribbling here. And note that the ball is controlled by the physics engine, so the character has to react to the ball's movement quickly and convincingly.

And the dance moves it has are really cool too. And it can not only dance, it can dance so much more convincingly than previous methods. Loving it!

But if we wish to get even more crazy, we can even combine different movements for the lower and upper body, and I know from previous papers that we can't just copy-paste two motions together to get this; these combinations are so much more challenging. And the new method pulls it off with flying colors. So good!

One of the many key insights in the paper is that they propose using this diagram, which is the phase space. As we pick a point and move it around in there, our character starts moving. However, not all spaces are this intuitive. Look, with previous techniques, good movements required these crazy paths and were thus more difficult to achieve.
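To make the phase-space idea a bit more concrete, here is an illustrative toy, not the paper's learned phase manifold: suppose we read a 2D phase point in polar form, where the angle is the position within a gait cycle and the radius is the movement amplitude. Moving the point around the origin then advances the cycle.

```python
import math

def phase_features(x, y):
    """Interpret a 2D phase-space point as a cyclic motion state.

    Illustrative sketch only: the angle of the point is treated as the
    position within the gait cycle, and its distance from the origin
    as the movement amplitude.
    """
    amplitude = math.hypot(x, y)  # distance from the origin
    angle = math.atan2(y, x)      # position within the cycle, in radians
    # periodic features a motion controller could consume
    return amplitude, math.sin(angle), math.cos(angle)

# moving the point around the unit circle advances the gait cycle
for step in range(4):
    angle = step * math.pi / 2
    amp, s, c = phase_features(math.cos(angle), math.sin(angle))
    print(f"amplitude={amp:.2f} sin={s:+.2f} cos={c:+.2f}")
```

In this toy picture, a natural-looking movement corresponds to a simple path, such as steadily circling the origin, which is exactly the intuition the video's diagram conveys.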

But wait, we have two more incredible insights about this paper. Number one: hold on to your papers, because this is where I fell off the chair when reading this paper. As promised, let's have a look at how much training data this technique required to learn all these beautiful, fluid movements. What? Are you seeing what I am seeing? That is impossible, right? Look. It learned to animate quadrupeds from just 17 minutes of footage and, perhaps even better, to dance from just 9 minutes. That is absolutely amazing.

And the second insight is about foot sliding. This new technique shows less foot sliding than previous methods in most cases, and even in the worst case, it is comparable. Look, this was a huge problem for previous techniques. But really, whatever metric we use to measure the new one against previous techniques, it performs better in pretty much all of them. Incredible.
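If you are wondering how foot sliding can even be measured, a common generic metric (a sketch of the idea, not necessarily the paper's exact formulation) is to accumulate the horizontal displacement of a foot while it is in contact with the ground, for instance whenever it sits below a height threshold:

```python
import numpy as np

def foot_slide(foot_positions, contact_height=0.05):
    """Accumulate horizontal foot travel during ground contact.

    foot_positions: (frames, 3) array of one foot's world positions,
    with the vertical axis in column 1. A frame counts as a contact
    when the foot is below `contact_height`; any horizontal motion
    during contact is sliding. Generic sketch of the idea, not the
    paper's exact metric.
    """
    heights = foot_positions[:, 1]
    contact = heights[:-1] < contact_height          # contact at frame t
    step = foot_positions[1:] - foot_positions[:-1]  # per-frame displacement
    horizontal = np.linalg.norm(step[:, [0, 2]], axis=1)
    return float(np.sum(horizontal[contact]))

# toy trajectory: a "planted" foot that drifts 1 cm per frame, then lifts
frames = np.array([[0.00, 0.0, 0.0],
                   [0.01, 0.0, 0.0],
                   [0.02, 0.0, 0.0],
                   [0.03, 0.2, 0.0]])  # foot lifts on the last frame
print(foot_slide(frames))  # ~0.03
```

A perfectly planted foot would score zero, so lower is better, which is the sense in which the new technique beats previous methods in most cases.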

And huge respect to Sebastian Starke, first author of this work, who just keeps publishing these incredible papers, one after another, and almost always with the source code as well. And yes, this also means that the source code for this technique is available. So we can soon expect much more lifelike character movements in our games and virtual worlds. What a time to be alive!

Thanks for watching and for your generous support, and I'll see you next time!
