NodeIcon OpticalFlowWrapping

Non-rigidly fits a textured mesh to a textured scan by analyzing similarities between their textures.

Common Use Case

A common use case is to fit a basemesh to a set of scans of facial expressions of the same actor. This process can usually be split into two steps:

  1. fitting a generic basemesh to a neutral expression, which can be done using the Wrapping node. A texture from the scan is then projected onto the basemesh;
  2. fitting the resulting neutral mesh to each facial expression using the OpticalFlowWrapping node.

Virtual Cameras


The node uses a set of virtual cameras that observe both models from different angles. For each camera two images are rendered: one of the basemesh and the other of the scan. Optical flow is computed for each pair of images, producing a set of decisions about where to move each pixel. The decisions from all the cameras are then combined into a global solution.
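The per-camera step can be illustrated with a toy dense optical flow in Python. Real implementations use far more sophisticated solvers; brute-force patch matching is used here purely to show what a per-pixel displacement field looks like:

```python
import numpy as np

def block_matching_flow(img_a, img_b, patch=3, search=2):
    """Toy dense optical flow: for each pixel of img_a, find the offset
    within +/-search pixels whose patch in img_b matches best (smallest
    sum of squared differences). A stand-in for the node's real optical
    flow solver, for illustration only."""
    h, w = img_a.shape
    flow = np.zeros((h, w, 2), dtype=np.int64)  # (dx, dy) per pixel
    r = patch // 2
    pad = r + search
    a = np.pad(img_a, pad, mode="edge")
    b = np.pad(img_b, pad, mode="edge")
    for y in range(h):
        for x in range(w):
            ref = a[y + pad - r:y + pad + r + 1, x + pad - r:x + pad + r + 1]
            best_ssd, best_off = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = b[y + pad + dy - r:y + pad + dy + r + 1,
                             x + pad + dx - r:x + pad + dx + r + 1]
                    ssd = float(((ref - cand) ** 2).sum())
                    if best_ssd is None or ssd < best_ssd:
                        best_ssd, best_off = ssd, (dx, dy)
            flow[y, x] = best_off
    return flow

# A random texture shifted one pixel to the right: away from the borders
# the recovered flow is (dx, dy) = (1, 0), i.e. "move every pixel right".
tex = np.random.default_rng(0).random((16, 16))
flow = block_matching_flow(tex, np.roll(tex, 1, axis=1))
```

In the node, a displacement field like this is produced for every camera pair, and the per-camera decisions are then merged into one global deformation.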

Wrapping vs OpticalFlowWrapping

Even though the Wrapping and OpticalFlowWrapping nodes have similar inputs and parameters, there is an important difference between them. While the Wrapping node only looks for geometric similarities between the models, the OpticalFlowWrapping node also looks for texture similarities. For example, we need to consider texture similarities when wrapping a set of facial expressions of the same actor. For each vertex of the neutral mesh we should find its corresponding position on the target expression. If a given vertex corresponds to a specific skin pore on the neutral texture, we should find exactly the same skin pore on the target texture. The optical flow approach automatically finds hundreds of such correspondences, providing almost pixel-level alignment accuracy.


It’s important that both input meshes have similar textures. To achieve this, the textures should be captured under the same lighting conditions with minimal specularity.

Skin can drastically change its appearance, especially in extreme expressions. Wrinkles and blood flow effects make it hard for optical flow to find a proper solution. It’s recommended to use control points and the retargeting technique in such cases.


The computation is performed in several iterations. Some node parameters come in pairs whose names end with “initial” and “final”. The values of such parameters change with each iteration, starting from the “initial” value and ending at the “final” value. For example, the resolution initial parameter equals the cameras’ render resolution at the first iteration. The camera resolution is then increased until it reaches resolution final at the last iteration.
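As an illustration, the schedule for such a parameter pair might look like the following Python sketch. Linear interpolation is an assumption here; the node’s actual interpolation scheme is internal:

```python
def schedule(initial, final, iterations):
    """Interpolate a parameter from its 'initial' to its 'final' value
    across the iterations. Linear interpolation is an assumption; the
    node's actual schedule is not documented."""
    if iterations == 1:
        return [final]
    step = (final - initial) / (iterations - 1)
    return [round(initial + step * i) for i in range(iterations)]

# e.g. resolution initial = 300, resolution final = 900, 5 iterations:
print(schedule(300, 900, 5))  # [300, 450, 600, 750, 900]
```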


The editor can be used to adjust camera positions and to add or remove cameras. A single click on a camera, or selecting the camera name from the camera list, switches the editor to Camera mode. In Camera mode the viewport switches to the selected camera’s viewpoint. You can pan, zoom and rotate in the viewport to adjust the camera position. Use the Exit camera mode button to return to the standard mode.

CTRL + click on a camera will remove it.

Use the transform gizmo to control the position and orientation of the entire camera system.

Camera Toolbar

Camera list can be used to select a camera by name
Create camera creates a new camera from the current viewpoint
Remove camera removes the current camera. Only enabled when a camera is selected
Exit camera mode exits camera mode


Fitting only works in areas of the mesh that are observed by at least one camera. The rest of the mesh is deformed as rigidly as possible. Zoom in and add new cameras in the areas of interest where you want the fitting to be more accurate.


source geometry
Geometry A basemesh to be deformed. The mesh should have a texture.
target geometry
Geometry A target scan with a texture. The textures are assumed to be similar and should be captured under the same lighting conditions.
point correspondences
PointCorrespondences (optional) A set of point correspondences between the source and the target geometry
free polygons floating
PolygonSelection (optional) A set of polygons that will be excluded from the fitting process, i.e. they will not try to fit the target geometry but will be deformed as rigidly as possible to match the rest of the polygons.


Geometry Fitted mesh


auto-compute:if set, the node will be recomputed each time a parameter or an input is changed
compute:if auto-compute is off, starts the wrapping process in a preview window

Camera Params Tab

global translation:
 translation of the camera system. Can also be controlled using the transform gizmo inside the Visual editor tab.
global rotation:
 rotation of the camera system. Can also be controlled using the transform gizmo inside the Visual editor tab.
resolution initial:
 camera resolution during the first iteration. It controls the size of the rendered images that are then used to compute optical flow.
resolution final:
 camera resolution during the last iteration. Increasing the resolution leads to better fitting quality but at the same time drastically increases computation time and memory consumption. We recommend using values no higher than 900x900. When increasing the resolution it is recommended to also increase the smoothness final parameter.
angle of view:angle of view of the virtual cameras
icon size:size of the camera icons inside the viewport
clipping range:cameras’ clipping range

Camera Generator Tab

Virtual cameras are generated on the surface of a spherical sector defined by the following parameters:

angle:angle of the spherical sector
rings:number of rings of cameras inside the sector
cameras per ring:
 number of cameras in each ring
radius:radius of the sphere
clear existing cameras:
 if set, all existing cameras will be removed when the Generate cameras button is clicked.
generate cameras:
 generates a set of cameras based on the parameters above
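The generator above can be pictured with a short Python sketch. The exact placement scheme is internal to the node; distributing the rings evenly across the sector’s opening angle, with cameras evenly spaced around each ring, is an assumption:

```python
import math

def generate_cameras(angle_deg, rings, cameras_per_ring, radius):
    """Distribute cameras over a spherical sector of the given opening
    angle, centred on the +Z axis, all at the same distance from the
    origin (each camera would look back at the origin)."""
    positions = []
    for i in range(1, rings + 1):
        # polar angle of this ring, from near the axis out to angle/2
        theta = math.radians(angle_deg / 2) * i / rings
        for j in range(cameras_per_ring):
            phi = 2 * math.pi * j / cameras_per_ring  # position on the ring
            positions.append((radius * math.sin(theta) * math.cos(phi),
                              radius * math.sin(theta) * math.sin(phi),
                              radius * math.cos(theta)))
    return positions

positions = generate_cameras(angle_deg=120, rings=3, cameras_per_ring=8, radius=50)
print(len(positions))  # 24 cameras, each 50 units from the origin
```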

Wrapping Params Tab

iterations:number of iterations of the algorithm
smoothness initial:
 mesh rigidity during the first iteration
smoothness final:
 mesh rigidity during the last iteration. Increasing it will make the mesh harder to deform during wrapping, thus producing a smoother result. At the same time it will reduce fitting quality, as small scan details will have less influence over the final result.
fitting initial:
 controls the strength of fitting the basemesh to the scan during the first iteration
fitting final:controls the strength of fitting the basemesh to the scan during the last iteration. The fitting force is counterbalanced by the smoothness force.
optical flow smoothness initial:
 controls the smoothness of the optical flow algorithm during the first iteration
optical flow smoothness final:
 controls the smoothness of the optical flow algorithm during the last iteration. Increasing this value will lead to more robust but less accurate fitting. It is recommended to increase this parameter for extreme expressions, when optical flow can produce wrong correspondences due to big changes in skin appearance.
optical flow sampling:
 a value of 10 means that every 10th pixel correspondence found by the optical flow algorithm will be used; a value of 1 means that every pixel will be used. Decreasing this value leads to better accuracy but greatly increases computation time and memory consumption.
angle threshold:
 discards a pixel correspondence if the angle between the corresponding surface normals is bigger than the specified value
distance threshold:
 discards a pixel correspondence if the distance between the corresponding surface points is bigger than the specified value. Be aware that this parameter is specified in centimeters and is thus scale-dependent. If you use a scale other than centimeters, please adjust this parameter accordingly.
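The three filters above (sampling, angle threshold, distance threshold) can be sketched as a single post-processing pass over the raw correspondences. The data layout used here (`src`, `dst`, `src_n`, `dst_n` fields) is hypothetical; the node’s real representation is internal:

```python
import math

def filter_correspondences(corrs, sampling, angle_threshold_deg, distance_threshold):
    """Keep only correspondences that survive the three filters described
    above. Each correspondence is a dict with 3D points 'src'/'dst' and
    unit surface normals 'src_n'/'dst_n' (a made-up layout)."""
    kept = []
    for i, c in enumerate(corrs):
        if i % sampling != 0:                       # optical flow sampling
            continue
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(c["src"], c["dst"])))
        if dist > distance_threshold:               # distance threshold
            continue
        cos_a = sum(a * b for a, b in zip(c["src_n"], c["dst_n"]))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle > angle_threshold_deg:             # angle threshold
            continue
        kept.append(c)
    return kept

corrs = [
    {"src": (0, 0, 0), "dst": (0, 0, 1.0), "src_n": (0, 0, 1), "dst_n": (0, 0, 1)},
    {"src": (0, 0, 0), "dst": (0, 0, 5.0), "src_n": (0, 0, 1), "dst_n": (0, 0, 1)},
]
print(len(filter_correspondences(corrs, 1, 30, 1.8)))  # 1: the 5 cm pair is discarded
```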


The distance threshold parameter is specified in centimeters and discards all correspondences longer than the specified value. The default value of 1.8 cm is optimized for models in centimeter scale and should be changed when working with models of a different scale.
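Converting the default to another scene scale is a simple multiplication; the helper name below is made up for illustration:

```python
def distance_threshold_for_scale(units_per_cm):
    """Convert the 1.8 cm default threshold to a scene that uses other
    units. Only the arithmetic matters; the name is hypothetical."""
    return 1.8 * units_per_cm

meters = distance_threshold_for_scale(0.01)       # metre-scale scene: ~0.018
millimeters = distance_threshold_for_scale(10.0)  # millimetre-scale scene: 18.0
```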


The most common parameters to adjust are:

resolution final:
 increases quality but greatly increases computation time and memory consumption. It’s recommended to keep this value under 900x900. For big values it’s recommended to also increase smoothness final
optical flow smoothness final:
 can be increased for extreme expressions, when due to drastic changes in skin appearance optical flow fails to find good correspondences. Doesn’t affect computation time. Increase it when the final mesh has distortion artifacts
smoothness final:
 can be increased when using bigger resolution final values. Increase it when the final mesh has distortion artifacts