Importing a character asset into Animotive

To import a character into your project, click on the Open button beside your project.

Along the top menu, you'll see the Asset Library for your project; click on this to open it. Here you'll see an inventory of all the current assets in your project (characters, sets & props), and you can also import new assets from here.

Locate your character asset. Once Animotive has finished loading it, you'll be invited to click on the character to open it.

When you do this, you'll see your character in a T-pose. The character will not be textured at this stage, nor will any facial blendshapes be set up. The skeleton, however, will be mapped automatically; you can check this by clicking on the Humanoid Mapping section.

Skeletal Mapping

To drive the movement of your character, Animotive needs to create a reference between your character's joints and its own system. Animotive creates this reference by matching the name of each joint / pivot found in your character against those known to its mapping algorithm.
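Animotive's matcher isn't documented in detail here, but name-based joint mapping generally works along these lines. The following Python sketch is purely illustrative: the canonical joint names and alias lists are hypothetical, not Animotive's real tables.

```python
# Illustrative sketch only: Animotive's actual matcher is internal, and the
# alias lists below are hypothetical examples, not its real keyword tables.
JOINT_ALIASES = {
    "Hips":          {"hips", "pelvis", "root"},
    "Spine":         {"spine", "spine1", "chest"},
    "LeftUpperArm":  {"leftupperarm", "upperarm_l", "l_upperarm", "leftarm"},
    "RightUpperArm": {"rightupperarm", "upperarm_r", "r_upperarm", "rightarm"},
    "Head":          {"head"},
}

def normalise(name):
    """Lower-case a joint name and strip separators so naming variants compare equal."""
    return name.lower().replace("_", "").replace("-", "").replace(" ", "")

def auto_map(skeleton_joints):
    """Map each canonical joint to the first imported joint whose name matches an alias."""
    mapping = {}
    for canonical, aliases in JOINT_ALIASES.items():
        wanted = {normalise(a) for a in aliases}
        match = next((j for j in skeleton_joints if normalise(j) in wanted), None)
        mapping[canonical] = match   # None would show up as a red (unmapped) dot
    return mapping

print(auto_map(["Pelvis", "Spine_1", "UpperArm_L", "UpperArm_R", "Head", "Jaw"]))
```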

Here you'll see a graphical representation of your character's skeleton on the left of the screen. Successfully mapped joints are represented by a green dot; an invalid mapping is represented by a red dot.

If there are any red-coloured joints that require examination, you can simply click on them and re-map them to the appropriate joint by finding it in the pop-up box that appears. If you make a mistake in your mapping, you can click on Revert to Automap.

Once you're happy that the skeleton system is set up and all joints are validated, you can save out the mapping profile by clicking on Save Profile. You can then load the profile to use it on other characters with the same skeleton.

Facial Mapping

If your character comes with facial blendshapes, you can take advantage of Animotive's facial capture system.

Animotive automatically maps a character's imported blendshapes by parsing their names for keywords and scoring them against the keywords of each Animotive Facial Blendshape. For example, a user-imported blendshape named "eye_Blink_L" would score highly for Animotive's eyeBlinkLeft and would be automatically mapped.
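The exact scoring Animotive uses isn't documented here; the Python sketch below just illustrates the general idea of keyword-based scoring. The keyword sets, tokenisation and scoring formula are assumptions made for the example.

```python
# Illustrative sketch only: the keyword sets and scoring below are assumptions,
# not Animotive's actual implementation.
import re

# Hypothetical keyword sets for a few of Animotive's facial blendshapes.
TARGET_KEYWORDS = {
    "eyeBlinkLeft":  {"eye", "blink", "left", "l"},
    "eyeBlinkRight": {"eye", "blink", "right", "r"},
    "jawOpen":       {"jaw", "open"},
}

def tokens(name):
    """Split a blendshape name on underscores, case changes and digits."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)
    return {p.lower() for p in parts}

def best_match(imported_name):
    """Score the imported name against each target and return the highest scorer."""
    name_tokens = tokens(imported_name)
    scores = {
        target: len(name_tokens & keywords) / len(keywords)
        for target, keywords in TARGET_KEYWORDS.items()
    }
    target = max(scores, key=scores.get)
    return target, scores[target]

# With these assumed keywords, "eye_Blink_L" scores 0.75 for eyeBlinkLeft,
# beating every other target, so it would be auto-mapped to that shape.
print(best_match("eye_Blink_L"))
```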

To begin facial mapping, click on Facial Mapping along the top menu.

1 - Capture System: refers to the method by which Animotive utilises headset hardware to drive the facial performance. There are currently two methods. Oculus Lip-Sync uses the microphone on the Oculus Quest to drive simple viseme shapes. The other method, Meta Quest PRO, utilises the cameras in the Meta Quest PRO headset to deliver a more accurate facial performance. To change between the two methods, just click on the dropdown arrow to select one.

2 - Mapped: informs you of how many shapes are mapped to Animotive's system.

3 - Status: confirms how many blendshapes are successfully mapped to Animotive's system.

If any blendshapes weren't picked up and automatically mapped, just click on the dropdown arrow next to the shape. A pop-up box will appear where you can manually search for the shape and attach it.

To check your blendshapes, click on the Preview tab and you'll see a slider for each blendshape. To preview a shape on your mesh, just drag its slider to the right.

As with the skeletal mapping, you can save out a profile that you can use on other characters as long as they have the same suite of blendshapes. To save out a profile, make sure you are back in the Mapping tab and select Save Profile.

Materials and Textures

The materials created in your DCC (digital content creation) application will appear in Animotive's Materials & Textures tab. This feature allows you to configure elements of your character's materials and connect image textures to them. To navigate, hold the right mouse button to orbit around your object and hold the middle mouse button to pan.

Enable Align sunlight with camera POV to have the directional light follow your 3D view.

The different sections of the material attributes are explained below:

Shader: the available shader types are Lit and Unlit. A Lit shader will take into consideration all of the active scene lights, while Unlit will be a flat-shaded material not affected by any lights in the scene.

Surface Options:

Workflow Mode - "Metallic Workflow" and "Specular Workflow" refer to different approaches to handling the reflection properties of materials.

Surface Type - "Opaque" refers to a mesh that doesn't require any transparency, while "Transparent" can be used if transparency is required, for example with a hair texture.

Render Face - determines which side of your mesh is computed for rendering: "Front", "Back" or "Both".

Alpha Clipping - creates a transparent effect with hard edges by using a threshold: any pixels with an alpha value under that threshold are not rendered (see the sketch after this list).

Receive Shadows - enables shadows to be received on the mesh.
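Here is a minimal Python sketch of the alpha-clipping idea described above. The threshold and pixel values are made-up examples; in practice this test happens inside Animotive's shaders, per pixel.

```python
# Minimal illustration of alpha clipping: pixels whose alpha falls below the
# threshold are discarded entirely, giving hard-edged transparency.
# The threshold and pixel values here are made-up examples.
ALPHA_CLIP_THRESHOLD = 0.5

pixels = [
    {"rgb": (200, 160, 120), "alpha": 0.9},   # solid part of the texture -> kept
    {"rgb": (200, 160, 120), "alpha": 0.3},   # wispy edge -> discarded
    {"rgb": (200, 160, 120), "alpha": 0.0},   # fully transparent -> discarded
]

rendered = [p for p in pixels if p["alpha"] >= ALPHA_CLIP_THRESHOLD]
print(f"{len(rendered)} of {len(pixels)} pixels survive the clip")
```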

Surface Inputs:

This is the section where you can connect your image maps. Do this by clicking on the chequered icon next to the map you want to connect.

Detail Inputs:

The Base Map and Detail Normal Map in this section allow you to add another layer of textures on top of the main textures. These are usually smaller and tiled more times across the object's surface than the main maps, which keeps details sharp when you look closely while the main maps still read well from a distance.

A Detail Mask lets you choose where the detail texture shows up on your model. You can decide to have the detail in some areas and not in others. For example, you might not want pores on the lips or eyebrows of a character.
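As a rough sketch of how a detail layer and mask can combine, here is a small Python example. The tiling factor, the "textures" (expressed as functions of UV) and the blend itself (a simple masked multiply) are illustrative choices, not taken from Animotive's shaders.

```python
# Rough illustration of detail texturing: the detail map is tiled at a higher
# frequency than the base map, and a mask controls where it contributes.
def sample(texture_fn, u, v):
    """Sample a texture function at UV coordinates, wrapping into the 0..1 range."""
    return texture_fn(u % 1.0, v % 1.0)

def shade(u, v, base_fn, detail_fn, mask_fn, detail_tiling=8.0):
    base = sample(base_fn, u, v)
    # The detail map repeats 'detail_tiling' times across the same surface area.
    detail = sample(detail_fn, u * detail_tiling, v * detail_tiling)
    mask = sample(mask_fn, u, v)              # 0 = no detail, 1 = full detail
    return base * (1.0 - mask) + base * detail * mask

# Made-up greyscale "textures" expressed as functions of UV.
base_fn   = lambda u, v: 0.8                          # flat skin tone
detail_fn = lambda u, v: 0.9 + ((u + v) % 0.1)        # faint pore-like variation
mask_fn   = lambda u, v: 0.0 if v > 0.7 else 1.0      # no detail near the "lips"

print(shade(0.25, 0.25, base_fn, detail_fn, mask_fn))  # detail applied
print(shade(0.25, 0.80, base_fn, detail_fn, mask_fn))  # masked out -> plain base
```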

Advanced Options:

Environment Reflections and Specular Highlights are enabled by default. To turn them off, just click the box beside the attribute.

Hand Poses

In this tab, you can check your character's hand poses and also tweak and customise them.

Each hand is displayed on either side of the screen and, beside each window, you'll see a list of gestures: Idle, Fist, Point, Thumbs Up & Pinch. To examine these and begin tweaking them, just click on the Edit button for either hand.

When you click on Edit, you'll get a preview of the relevant pose, and green nodes will appear at each joint that can be manipulated / rotated as you need. To edit any of these nodes, click on the node and you'll see a rotation tool appear. Using your left mouse button, click and drag one of the rotation axis handles to move the joint. Press and hold the right mouse button while moving your mouse to orbit around the joint for a different angle.

When you're happy with your customisation, you can click on Mirror to Left and your changes will be reflected on the opposing hand. Pressing the Reset button will reset your pose to the default.

Eye Position

The eye position represents where your viewport will be when you embody a character. If you need to move this position, just grab the reticle in any of the views and drag it using the left mouse button.

Scaling

Animotive works at real-world scale. The scaling feature allows you to resize your character for use within an Animotive session if required.

One of the benefits of Animotive is that users can embody characters across a range of sizes while maintaining eyelines.

If you embody a small character during your session, you will be scaled to the size of that character. For example, if you embody a child character whose real-world height is 4 feet, then you will become 4 feet tall within your Animotive session and see things from their perspective.

To change the scale of your character, type a figure into the Scale Factor section. A figure of 0.5 will reduce your character's height by 50%.
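As a quick worked example of the arithmetic (the original character height here is made up for illustration):

```python
# Quick illustration of the Scale Factor arithmetic; the original height is a made-up example.
original_height_m = 1.8     # character's real-world height in metres
scale_factor = 0.5          # value typed into the Scale Factor field

scaled_height_m = original_height_m * scale_factor
print(f"{original_height_m} m x {scale_factor} = {scaled_height_m} m")   # 0.9 m, i.e. 50% shorter
```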
