Monday, May 5, 2014

Marsh Facial Rigging

Here is the facial rigging for Marsh with its controllers connected to the blend shapes. The jaw and lips use joint-based rigging, while the eyebrows and sneer use blend shapes.



Thursday, April 24, 2014

Technical Difficulties in Other Groups

I also helped solve a few technical difficulties in other groups during the project:

Optimization for Ogre Rig

Problem: Because Set Driven Keys were used in the controllers of the facial Blendshapes, the rig became too heavy to animate and could not scroll through the frames smoothly.

Solution: Changing the Set Driven Keys to Expressions made the rig a lot faster, since the Expressions evaluated more quickly in this setup.

Reconnecting Facial Blendshapes to a New Model for Dragon

Problem: Connecting a model that carries the Blendshapes with the skinned model.

Solution: Prepare a new mesh that is ready for rigging and Blendshapes, connect the Blendshapes to the new mesh through a bridge mesh, and copy the skin weights over to the new mesh.

Problem: The skinned and Blendshape geometry have the wrong UVs.

Solution: Use Transfer Attributes to transfer the UVs from a model with the correct UVs.
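As a rough sketch of that fix in Maya's Python API (maya.cmds), where the mesh names CleanUV_GEO and Skinned_GEO are placeholders rather than the project's actual names:

```python
# Hypothetical illustration only; this runs inside a Maya session.
import maya.cmds as cmds

# Source mesh with the correct UVs first, broken target mesh second,
# mirroring the Transfer Attributes selection order in the viewport.
cmds.transferAttributes(
    'CleanUV_GEO', 'Skinned_GEO',
    transferUVs=2,   # transfer all UV sets
    sampleSpace=4,   # component space, suitable for identical topology
)
```

After checking the result, the transfer node can be removed with Edit -> Delete by Type -> Non-Deformer History so that it doesn't interfere with the skinCluster.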

SDK for Robotic Arm Transformation 

Problem: Linking the transform animation of multiple objects to a single attribute on one controller with a Set Driven Key.

Solution: Create a Set Driven Key from the new attribute to one of the animated objects, then open the Node Editor and connect the attribute to the inputs of the objects' animation curves. The Node Editor automatically creates a unitConversion node, which converts the attribute value into a frame number for the animation.

The default conversion factor (250) converts 1 unit of attribute value into 1 frame of animation. Since the animation is 64 frames in total and the attribute's maximum value is 1, the factor should be 250 × 64 = 16000, so that 1 unit of attribute value counts as 64 frames of animation.

The output of this unitConversion node is linked to the input of every animation curve of the animated objects in the Node Editor, so when the attribute value changes, it drives all of the objects' animations simultaneously.
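The arithmetic above can be sketched in plain Python (outside Maya); the function name and defaults are just illustrative, using the 250 and 16000 factors from this setup:

```python
def attr_to_frame(attr_value, conversion_factor=16000.0, base_factor=250.0):
    """Map a driving attribute value to an animation frame number.

    With the default factor of 250, 1 unit of attribute equals 1 frame;
    raising the factor to 250 * 64 = 16000 makes 1 unit of attribute
    cover the whole 64-frame animation.
    """
    frames_per_unit = conversion_factor / base_factor  # 16000 / 250 = 64
    return attr_value * frames_per_unit
```

So a controller attribute of 0.5 lands on frame 32, halfway through the 64-frame animation.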

This can still be optimized with Expressions, but the SDK works just fine.

______________________________________


Technical Difficulties faced during the Fluffymallow Project

There are a few problems that we encountered throughout the Fluffymallow Project:

Taking the HDRI Lighting in the morning (7.00 a.m.)

Problem: The lighting changed dramatically while we were shooting; the 0 exposure of the 7.00 a.m. sky is 2 stops darker than the 0 exposure of the 7.15 a.m. sky.

Solution: We took our footage and HDRI lighting at 10 a.m., which has a more stable exposure level.

Paint Weighting the Spikes of Marsh 

Problem: The spikes on Marsh's back are painful to paint weight. And only after a week of weight painting did I realize that the spikes are not supposed to deform at all. (They are supposed to be hard.)








However, the spikes in the model were prepared for soft-mesh deformation, and there was no time to readjust the design or the model.

Failed Solution: Create a lot of controllable joints to paint weight the spikes separately.

Solution: Limit all spine animation in the shots showing his back to avoid extreme spike deformations or intersections. Most of the shots are low-angle front views that don't focus on the back, so the spike deformations are not critical.

Referencing Files for Textures

Problem: Textures can't load because of absolute path directories such as C:\Users\Desktop\...etc.

Solution: The texture files should be linked from sourceimages\ so that the paths follow the project instead of the directory structure of each individual computer.

Making the rig scale-able

There are many things to look out for when making the scale of the rig adjustable.

1. Connecting the controller scale attributes to the correct attributes: for global scale, group all the joints together and connect the scale of the Main controller to the joint group. A scaling group is needed because connecting the scale attribute to an individual joint will not affect its children joints, and connecting to all joints separately would make the rig inflexible for individual joint modifications.
2. Only skinned geometries follow the joint scale. Geometries attached with parent constraints need their scale attributes connected as well.
3. Geometries driven by IKSpline joints need an additional group for the control joints controlling the curve.
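A minimal maya.cmds sketch of point 1, assuming a controller named Main_CTRL and a skeleton root named Root_JNT (both names are placeholders, not from the actual rig):

```python
# Hypothetical sketch; runs only inside a Maya session.
import maya.cmds as cmds

# Put the skeleton root under one group so a single scale cascades
# down to every child joint.
jnt_grp = cmds.group('Root_JNT', name='JNT_GRP')

# Connect the main controller's scale to the group, axis by axis,
# so the whole skeleton follows the rig's global scale.
for axis in 'XYZ':
    cmds.connectAttr('Main_CTRL.scale' + axis, jnt_grp + '.scale' + axis)
```

Connecting at the group level keeps the individual joints free for later adjustments, as described in point 1.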

___________________________________

Tuesday, April 1, 2014

Conveying Messages Through Senses: Visual and Sound

This week, I learned more in depth about the importance of sound in movies. Sound is divided into music, sound effects, and voices. The volume of each depends on the genre of the movie and the focus of the scene, and sometimes also on what the director is trying to tell the audience.

For example, in the drama genre, voice is important in the delivery of the storyline. Thank You For Smoking (2005) and Juno (2007), both directed by Jason Reitman, have similar sound design; the voices of the actors and actresses play a huge role, and background music or sounds are considered less important, except in certain scenes. This also applies to The Perks of Being a Wallflower (2012) by Stephen Chbosky and Her (2013) by Spike Jonze.



 


In the action genre, on the other hand, the use of sound effects is more prominent because some scenes have to focus on the events that are happening: explosions, hits, glass breaking, and so on. Sometimes the characters' voices are not even the focus, so the soundtrack is mostly filled with action music and action sounds to build up the atmosphere. Examples would be Elysium (2013) and Pacific Rim (2013).




Some movies with a lot of action scenes and drama scenes use different sound design for different scenes. Examples would be The Matrix (1999) or Cloud Atlas (2012).




I think sound design is supposed to help deliver what the director is trying to tell the audience. Only recently I saw a documentary, Jiro Dreams of Sushi (2011) by David Gelb, and I listened to how they synchronize the music and the sound with the characters talking, and with the cinematography as well. It is designed as if we are experiencing the reality of the movie itself.



In the end, what engages the audience in movies is 50% visual and 50% sound, and it is interesting how this documentary tries to convey the message of 'smell and flavor' using only images and sound. It is as if our brain connects all the senses together and links them up to our emotions.

That is what a good director should do in a movie: make sure the audience experiences the message the movie is trying to convey.

Monday, March 17, 2014

Rigging A Creature: Basics (Incomplete)

Hello there, this time around I'll be touching on the topic of Rigging a Creature.

What are the things that we need to look out for? What are the optimal locations for the joints? What are our considerations?

Here's my take on the group project we've been handling for the past few weeks!

(This Character is modeled by Xin Long, and his name is Marsh)


This post will be divided into:
1. Rigging Preparation
2. Joint Placement
3. Interactive Skin Binding
4. Facial Blend Shapes and Facial Rigging
5. Connecting to the Facial Controllers with Expressions

1. Rigging Preparation

Hello, Marsh

1.1. Since this whole character is a single mesh, we can optimize the poly count for the facial blend shapes by readjusting the vertex numbering. To do this, first detach the head from the body: select the edge loop that separates the head, and go to Edit Mesh -> Detach Component.



1.2. The next step is to delete the history via Edit -> Delete by Type -> History.



1.3. Select the head, then select the body (the order is important; select the head first because we want the vertex numbers to follow the head), and combine the mesh back together using Mesh -> Combine.


1.4. Don't forget to merge the vertices together to make sure the object is one whole object again.

To do this easily, select the two loops of faces connected to the edge detached in Step 1.1.

Once you have these faces selected, hold Ctrl+Right Click and drag to To Vertices.

With these vertices selected, go to Edit Mesh -> Merge [box].

Set the threshold very low so it only merges the doubled vertices that share the same positions. (We're mending the edge we detached in Step 1.1.)

1.5. Delete History

Rename this mesh to CharacterName_GEO. Since my character's name is Marsh, this mesh becomes Marsh_GEO.

And we are done with preparing this mesh for Rigging and Blend Shape.

2. Joint Placement

Even though joint placement seems like quite a simple part of rigging, the basics are the most important to take note of. Wrong positioning of a joint can cost you countless unnecessary corrective blendShapes and inaccurate deformations.



The fatter and softer the object is, the harder it is to place the joints and paint weight accurately.
Rounder, cylindrical objects are slightly easier to paint weight than squarish or jagged ones.

*Note that joints for the spikes are not important, as they do not help much with the rigging.

The steps to joint placement:

2.1. Placing the Locators



Place the locators on main positions:
Root
Pelvis
Head
Neck
Chin
*Choose Right or Left:
Thigh
Foot (Ankle)
Shoulder
Wrist

This is a step many people skip because it seems unnecessary, but I prefer to put down locators so I can manage the positions well before I even place the joints. Once I place the joints down, there's no need to re-orient them.

It is best to put each joint right in the middle of the 'meat', the vertices that move with it.
Examples (Shoulder and Wrist):


Front View

Side View

Top View

There are more things to consider from an anatomical point of view, but they involve very subtle details (for example, the elbow can't actually bend backwards, so you can place the joint slightly nearer to the elbow tip; the thumb actually has an extra joint inside the palm; and the spine and neck joints sit slightly to the back rather than in the middle of the body or neck vertices).

Note that hard objects (like fingernails, shells, carapaces, teeth, and spikes) should not be part of the same geometry as the body. You can't easily skin a hard object together with the soft object it is attached to (especially if the soft area is influenced by 3 or more joints), because the influences will break the hardness of the hard object, rip the soft object apart, or dislocate the hard object from the soft object.

2.2. Getting Spine, Elbow, and Knee joint positions with locators.


Spine

If the character stands upright, the spine joints are quite straightforward to place. (They can be slightly curved to follow the anatomy.) The number of joints depends on how detailed you want the deformation to be (look at how dense the body mesh is). Basically, more joints give you more curvature in the mesh, but too many spine joints for a low-density mesh is rather pointless.



There are about 20 edge loops on the body-neck mesh, so I gave about 7 spine-neck joints.



The spine is located slightly toward the back of the body vertices because the body is more likely to scrunch (bend) forward. The mesh on the back will not scrunch as much, so the joints can be placed nearer to the back.

Joints should be placed closer to the side that doesn't bend a lot, but avoid placing a joint too close to the vertices, as that makes the deformation very stiff on that side and inaccurate on the scrunching side. (This applies to elbows, knees, and fingers as well.)

Elbow and Knees

Place the Elbow joint slightly closer to the outer elbow. Place the Knee joint slightly closer to the knee cap. (For the deformation reasons stated before.)

Because we are preparing for an IK/FK arm and an IK leg, we will need another locator for each pole vector. We need to make sure that the elbow pole vector lies on the same plane as the Shoulder, Elbow, and Wrist joints, and the knee pole vector on the same plane as the Thigh, Knee, and Foot joints.

How to get a Pole Vector accurately placed?

Create a locator and name it ElbowPV_Loc; this will be the pole vector locator. Without maintaining offset, point constrain it between the Shoulder and Wrist locators, then aim constrain it to the Wrist locator.

Delete the constraints, move it only in the X axis, and bring it closer to the elbow joint.

Aim constrain the pole vector locator to the Elbow locator, delete the constraint, and move it in the X axis until it sits behind the Elbow locator.

This also works the same for the Knees.
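The same placement can be expressed as plain vector math (a Python sketch, outside Maya; the function names are mine): project the elbow onto the shoulder-wrist line, then push the locator outward from that point through the elbow, which keeps it on the shoulder/elbow/wrist plane by construction.

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pole_vector_position(shoulder, elbow, wrist, offset=3.0):
    """Return a pole vector position on the shoulder/elbow/wrist plane."""
    arm = sub(wrist, shoulder)                 # shoulder -> wrist axis
    t = dot(sub(elbow, shoulder), arm) / dot(arm, arm)
    mid = add(shoulder, scale(arm, t))         # elbow projected onto the axis
    out = sub(elbow, mid)                      # direction from axis to elbow
    length = dot(out, out) ** 0.5
    out = scale(out, 1.0 / length)
    return add(elbow, scale(out, offset))      # pushed out behind the joint
```

The same function works for the knee: pass the thigh, knee, and foot positions instead.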

Arm Twist

To get the arm twist joint location, select the Elbow locator, duplicate it, and aim constrain it to the Wrist joint. Delete the aim constraint, translate it in the X axis, and place it around the middle of the Elbow and Wrist locators. In this case I put it right in the middle.

3. Interactive Skin Binding, Paint Weights

Interactive Skin Binding is great for cylindrical mesh sections (upper and lower arms, thighs, knees) and for areas blended by multiple joints, like the clavicles, shoulders, and body.

The idea is the same as painting weights, but the joints use cylinders as influence volumes instead of us painting on the vertices.

It takes a little time to get used to, but it is much more dependable and faster than normal weight painting.


4. Facial Blendshapes and Facial Rigging

4.1. To create the facial Blendshapes, first duplicate the main mesh and translate it away. Rename this mesh CharacterName_FB_Target. Since my character's name is Marsh, I'll name it Marsh_FB_Target.

4.2. Select Marsh_FB_Target, select Marsh_GEO, and Create Deformers -> Blend Shape


Rename this blendShape1 to Marsh_FB_Target_BlendShape

4.3. Select Marsh_FB_Target and delete the faces that contained the edge detached in Step 1.1. Basically, we're trying to get just the head mesh out of the whole body mesh.


Select the faces of the body mesh and delete them.

This way, we can save a lot of poly count for the blendshapes.

4.4. Remember to set your input order so that the joint skinning works with the Blendshapes. Select your mesh, click Inputs to Selected Object, then click All Inputs...



Your blendShape node should go below the skinCluster node so that both can work. Middle-mouse-click and drag the blendShape node as shown below.



This is how the nodes should be read.

4.5. Now we can continue with the next step: Setting up Blend Shapes and Corrective Blend Shapes.

Duplicate Marsh_FB_Target and translate the copy up to the left, then duplicate it again and translate the second copy to the right.



Rename the left: Marsh_FB_BS_GEO
This will be the Geometry that takes Blend Shapes

Rename the right: Marsh_FB_CBS_GEO
This will be the Geometry that takes Corrective Blend Shapes

Select both of them, select Marsh_FB_Target, and Create Deformers -> Blend Shape

Rename the blendShape to Marsh_FB_Blend

4.6. The next steps are to Create the Blend Shapes

To create Blend Shapes, we can duplicate the default head and start sculpting it as a facial movement, for example: Smile, Frown, InnerBrowDown


4.7. Connecting the Blend Shapes to Marsh_FB_BS_GEO

When we are done with all the Blendshapes, select every facial movement mesh, then select Marsh_FB_BS_GEO, and go to Create Deformers -> Blend Shape.

This connects all the facial movement meshes to the Blend Shapes head, which in turn links to the base mesh. This way we can separate the skin weight and Blend Shape inputs to keep the rig systematic.
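As a rough maya.cmds sketch of steps 4.6 and 4.7 (the target names here are examples; only Marsh_FB_BS_GEO comes from this setup):

```python
# Hypothetical illustration only; this runs inside a Maya session.
import maya.cmds as cmds

# Sculpted facial movement heads (example names) plus the base head last,
# matching the select-targets-then-base order used in the viewport.
targets = ['Smile_GEO', 'Frown_GEO', 'InnerBrowDown_GEO']
base = 'Marsh_FB_BS_GEO'

# Create one blendShape node carrying every facial movement as a target.
bs_node = cmds.blendShape(*(targets + [base]), name='Marsh_FB_BS_BlendShape')[0]
```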

5. Connecting to the Facial Controllers with Expressions

After we have created the controllers for the Blendshapes, we can start linking them up by adding expressions for the Blendshape weights in Marsh_FB_Blend.
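For example, one such link could be made with maya.cmds like this (the Smile target and the Smile_CTRL controller are hypothetical names, not from the actual rig):

```python
# Hypothetical illustration only; this runs inside a Maya session.
import maya.cmds as cmds

# Drive the Smile target weight on Marsh_FB_Blend from a controller's
# translateY, clamped so sliding the controller 0 -> 1 unit fades the
# shape in fully.
cmds.expression(
    name='Smile_EXP',
    string='Marsh_FB_Blend.Smile = clamp(0, 1, Smile_CTRL.translateY);',
)
```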

Monday, February 3, 2014

Montages: Subjective Experience of Space and Time

The use of montage in films has always been interesting to me. Sometimes it is like a puzzle, and sometimes it is made clear by the director.

Let's take a look at the cosmic sequence and the creation of life on Earth from the movie The Tree of Life.

When I first saw it, I didn't know what was happening until around the 8-minute mark, when I started to notice the microbiology scenes; only then did I realize that this was a flashback montage of the creation of the Earth and of life on Earth.

I think our brain has an amazing power to link up scenes in movies, as well as to link up montages and quickly draw a conclusion.

Here is another flashback montage, from the movie Her.

The scene goes with Theodore talking about his previous relationship while the montages are playing. And the scene ends with Samantha saying, "The past is just a story we tell ourselves."

In this use, the narrative is clearer because the audience is still listening to the character. I think one of the main problems with the Tree of Life sequence is that some people don't have the attention span to sit through 15 minutes of a movie with no interaction, only background music and visuals, which still make only a little sense after a long time.

Her uses many mood montages and rhythmic montages which I can't share here because I couldn't find them online, but I believe the movie shows a great example of the use of montages in film form.

And I just remembered that besides Memento, there is another movie that goes back and forth as well; it has a lot of flashback and flash-forward montages, and it interweaves 6 parallel storylines across different timelines. Cloud Atlas is a movie that requires the audience to solve the story, and not just one story but all of them. However, it is only after about 1.5 hours that the movie starts to become clearer for the audience (some might even need a second viewing).

There are just so many ways to narrate stories in filmmaking that we can look at; there is no single best way, so I think the explorations of film form are unlimited.

Technical Journal on Rigging (Week 4)

Continuing from last week, my explorations will be revolving around Eyad Hussein's Advanced Rigging.

Summer 2013 Review – Advanced Rigging - Eyad Hussein 

And this week, I have started to apply the joint based rigging:



Placement of the joints


Basically, after positioning the joints, the model is ready to be skinned. There are some tips on positioning joints and skinning:

1. Joints are used to influence vertices in skinning, so keep in mind which vertices are going to be affected when you place your joints. (Don't place unnecessary joints unless they are used for something else.)

2. The movement of the joints can vary from rotation to translation. (depending on the desired deformation)

3. Your topology affects the placement of the joints. (As stated in 1)

In this case, I haven't explored the whole range of joint-based facial rigging yet, but here are some lip movements I experimented with using joints.



I made a smile and a frown, but they still look quite raw.

I hid the joints because they looked unclear and messy in the screenshot. The point is: where we would usually use Blendshapes for our facial expressions and deformations, here we use only joint influences to reach the desired result. Blendshapes are used only for extra range of movement (for the lips) and for the final touch, using a method similar to corrective Blendshapes. (A shape activates when the controller reaches a certain point of translation or rotation.)

In the end, this week I found that this method of facial rigging uses a different workflow (or pipeline) than Blendshape facial rigging, because its deformations get more detailed control from the joints. It also gives better in-betweens for changes of emotion.

Furthermore, this kind of facial rigging can still be combined with Blendshapes, because the joints affect the mesh on a different layer.

However, I think this method should be used only when a lot of detail is focused on the face of the character. The heavy use of SDKs might also slow down the rig in use.

Pros of Joint Based Facial Rigging:
Better control of facial features
Better in-betweens from one facial expression to another
Higher Flexibility in Animating
Easier Animation Tweaks
Able to deform with more mathematical control (using translate / rotate to slide the vertices, scale to expand the volume)

Pros of Facial Blendshapes:
Quicker to set up, easily editable
Faster for productions when the rig doesn't necessarily need to do every facial movement
Lighter to calculate
Possible to setup extreme deformations purely by sculpting (some of which joints can't do)

With both approaches, it is still advisable to use corrective Blendshapes to polish up the outcome.