Learn & Support

Everything you need to answer your questions about all Faceware products.

Tutorials

Our videos detail the workflows, tips, tricks, and more for all of the Faceware products.

Free Assets

Browse our collection of free training assets, including Free Rigs, performance videos, and finished FWR files so you can get straight to animating!

Knowledge Base

Browse the Knowledge Base for all of our products, with detailed technical docs and specifications.

Troubleshooting & Help

Does Studio replace Live Server?

Yes, and so much more! Studio’s initial release, Horizon, ships with realtime functionality that surpasses Live Server in many areas. In future updates, Studio will serve as the platform for all of Faceware’s technology and product updates.

What is the recommended frame rate and resolution for Studio?

The ideal frame rate for working with Studio is 60 FPS or higher. Use whatever resolution your camera or media can deliver while still achieving 60 frames per second.

Can I embed Studio in my game/application?

Studio itself does not come with a publicly available SDK; however, license agreement options are available. Contact sales@facewaretech.com for more details.

Does Studio animate in realtime?

Studio tracks the face and animates in realtime. The realtime animation data may be streamed over TCP/IP to any supported Faceware client plugin or custom-built solution.

Can I record the animation?

As of the Horizon update, recording of the animation data is handled in the client plugins for Unity, MotionBuilder, and Unreal Engine through their native functionality. In a future release of Studio, we hope to expand recording functionality.

What are the minimum requirements of Studio?

Currently, it is recommended that Studio be run on a contemporary i7 processor or better, with at least 16 GB of RAM and a GTX 1060 or above.

Can I use my own character in Studio?

While this is an experimental and unsupported feature, it is possible to use your own character by replacing the current preview character with an FBX of your choosing that mimics the hierarchy and naming conventions of the current character. Write in to support@facewaretech.com for more information!

What does a neutral pose look like for calibration?

A neutral pose is made by relaxing the face and staring directly at, or slightly above, the camera being used to track. For more information, visit the Knowledge Base.

What client plugins are available for Studio?

Currently, client plugins are available for Unity, Unreal Engine, and MotionBuilder.

Can I write my own plugin?

Absolutely! The data streaming from Faceware Studio is in JSON format, streamed over TCP/IP. Connecting to the socket and parsing the data is straightforward. Contact support@facewaretech.com with any questions.
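As a minimal sketch of what such a client might look like, the Python snippet below connects to a TCP socket and parses incoming JSON. The host, port, and message framing (newline-delimited here) are assumptions for illustration; consult the Knowledge Base for Studio's actual streaming settings and schema.

```python
# Minimal sketch of a custom Faceware Studio streaming client.
# Assumptions (not from Faceware docs): Studio streams on
# localhost:9000 and messages are newline-delimited JSON objects.
import json
import socket

HOST, PORT = "127.0.0.1", 9000  # hypothetical streaming address

def stream_animation(host=HOST, port=PORT):
    with socket.create_connection((host, port)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break  # Studio closed the connection
            buffer += chunk
            # Split complete newline-delimited messages off the buffer.
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                if line.strip():
                    yield json.loads(line)

if __name__ == "__main__":
    for frame in stream_animation():
        # Each frame is a dict of animation values; the keys depend on
        # Studio's actual schema, which we do not assume here.
        print(frame)
```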

Can I add more shapes to Faceware Studio?

At this time, there is no user-facing feature to add additional shape data to Studio, though we are very interested in this being an option, in some form, in the future.

How do I try Studio?

Visit our downloads page, create or log in with your Odoo account, and download the latest executable to begin! Check out this quickstart guide to help you get started.

Do I need a headcam for Studio?

Studio supports a variety of video sources, including stationary cameras. While a head-mounted camera will yield the best results, Studio can easily be used with a webcam or cell phone video.

How do I get better tracking?

Adjusting your calibration or making changes to your environment can both improve tracking. Take a look at this support article for more information on getting the best from your realtime tracking.

How do I use Motion Effects?

Motion Effects is a powerful tool for customizing your data and results. Check out this Motion Effects guide for in-depth information.

What is Tuning and how do I do it?

Tuning is the process of dialing in the strength or weight of the given shapes in Studio to best suit your actor’s performance. Use the sliders on each control to make sure you’re getting good activations when your actor performs those expressions.
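To make the idea concrete, here is a tiny sketch of what a tuning weight does conceptually; this is our illustration, not Studio's internal code, which applies tuning via the sliders described above.

```python
def tune(raw_activation: float, weight: float) -> float:
    """Scale a raw shape activation by its tuning weight, clamped to
    the usual 0..1 activation range. Conceptual illustration only."""
    return max(0.0, min(1.0, raw_activation * weight))

# Example: boosting a weak smile activation of 0.4 with a weight of 1.5
print(tune(0.4, 1.5))  # ~0.6
```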

Does Studio require Codemeter?

No! Studio uses a new cloud licensing solution that does not require CodeMeter to be installed!

Does Studio require Matlab?

As of the Horizon update, Studio does not require Matlab to be installed.

Will applying tracking markers to the face help Analyzer track more accurately?

The answer is slightly complicated. Analyzer, without AutoTrack, relies on the accuracy of the training frames you create to perform its tracking as smoothly and accurately as possible. Adding ink markers to the face does not inherently increase tracking accuracy, but markers can be placed in a way that helps you create the training frames. Placing markers in the same positions as Analyzer's landmarks means that, when creating training frames, you can see exactly where each landmark should go and place landmarks quickly while maintaining the consistency needed for accurate tracking, leading to faster training frame creation and more accurate results overall.

How long does it take to learn Analyzer?

In our direct experience, and from feedback from our Analyzer service providers, users can start to track a shot within a few days, but plan on about a week to get new artists completely up to “production speed.”

How many training frames do I need?

The answer really depends on a related question: 'How many unique shapes does my actor make in this shot?' Analyzer requires a training frame for each unique shape, but does not need multiple frames for repeats of the same shape. A 1,000-frame shot where the actor isn't doing much of anything will likely require only a few, whereas a 60-frame shot where the actor is screaming or shaking their head around will often require many more. As you go through the shot and make training frames, start with the minimum you think are necessary, then train and track the shot. Add more to problem areas until the tracking landmarks follow the actor's performance.

How long does it take to process a video / shot?

The time depends on the complexity of the motion in the facial performance and the skill of the user. As shot length increases, however, the user has to create fewer training frames because shapes repeat, so long shots take less relative time than short ones. Additionally, our Studio Plus version allows training frames to be shared between jobs so you don't have to start from scratch each time.

What's the difference between Studio and Studio Plus?

Essentially, our Studio Plus version for Analyzer allows for higher-efficiency use of our software. With Analyzer, your artists can build and export “Tracking Models” specific to an actor's face, which allow each subsequent shot to track more quickly. In addition, Studio Plus offers batch processing capabilities that provide an infrastructure to quickly process hundreds (even thousands) of lines of dialogue for larger projects. Essentially, you can train the software on a specific actor's face and apply quick tracking to large collections of data, dramatically reducing your animation time. Studio Plus also has an entire library of API commands for creating fully automated workflows; most users script in Python.

Lastly, the Analyzer Studio Plus version has the option to go offline and not require a connection to the internet.

You can also combine Studio and Studio Plus versions, and you can always upgrade at any time and pay only the difference in cost. A trial of our software is really the best way to compare features and see which products will work for you.
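As a rough illustration of what a scripted batch workflow might look like, here is a hypothetical Python sketch. The `analyzer_api` module and its function names are placeholders we invented for illustration, not Faceware's actual API; see the Analyzer Studio Plus documentation for the real command set.

```python
# Hypothetical batch-tracking sketch. "analyzer_api" and its functions
# are placeholder names for illustration only, NOT Faceware's actual
# API; consult the Analyzer Studio Plus docs for the real commands.
from pathlib import Path

import analyzer_api  # placeholder module name

def batch_track(shot_dir: str, tracking_model: str, out_dir: str) -> None:
    # Load the shared, actor-specific Tracking Model once, then apply
    # it to every shot so each job doesn't start from scratch.
    model = analyzer_api.load_tracking_model(tracking_model)
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for video in sorted(Path(shot_dir).glob("*.mov")):
        result = analyzer_api.track_shot(str(video), model)
        result.save(str(Path(out_dir) / (video.stem + ".fwr")))

batch_track("shots/", "actor_a_tracking_model", "tracked/")
```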

Do I get better results with higher resolution footage?

We recommend a minimum of 720p for high-quality results. We do not, however, recommend 4K files even for photoreal results, as they add unnecessary processing time without adding to the emotion analysis. Analyzer supports all standard video resolutions and aspect ratios; non-standard resolutions (e.g. 720x496 instead of the standard 720x486) may cause odd behavior in the application. If you experience trouble with a non-standard resolution, we recommend re-exporting your video to a standard size and aspect ratio.
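If you need to re-export a non-standard source, any editing or transcoding tool will do; as one generic sketch (our suggestion, not an official Faceware workflow), a command-line tool such as ffmpeg can be driven from Python:

```python
# Re-export a video to a standard 1280x720 resolution with ffmpeg.
# Generic suggestion, not an official Faceware workflow. Add padding
# or cropping if you need to preserve the original aspect ratio.
import subprocess

def reexport_720p(src: str, dst: str) -> None:
    subprocess.run(
        ["ffmpeg", "-i", src, "-vf", "scale=1280:720", dst],
        check=True,  # raise if ffmpeg fails
    )

reexport_720p("shot_720x496.mov", "shot_1280x720.mov")
```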

Where can I request a trial of the software, is the trial feature locked?

Click “Login/Register” (typically at the top of each page) to register for a trial of any of our software products.

The trial version of Analyzer is functionally equivalent to a Studio Plus license and is fully featured, including access to sharing tracking models and command-line access, but does require an internet connection whenever the software is launched. The trial will begin upon installation and will expire automatically after 30 days.

If you need a trial that does not require a connection to the internet, let us know.

Why is the tracking in Live better/different than the tracking in Analyzer?

For realtime tracking with Live, balancing the processing requirements of tracking and streaming performance data while maintaining a consistent frame rate is of the utmost importance. To maintain this balance, Live lacks some of the under-the-hood advanced analysis that Analyzer can provide, such as texture tracking. Analyzer, as a non-realtime solution, has no frame rate limitations and can add additional layers of data to the resulting performance file.

What type of rigs do you support?

From a technical standpoint, Faceware can work with any type of rig. Joints, blendshapes, and all custom deformers can be driven and animated by our software as long as they can be keyframed. However, from a creative and artistic standpoint, the layout and setup of animation controllers can have a large impact on the quality of animation that can be produced with Faceware.

Rigging Best Practices

What's the difference between Studio and Studio Plus?

Essentially, our Studio Plus version of Retargeter allows for higher-efficiency use of our software. With Studio Plus, your artists can export and share their Pose Libraries. In addition, Studio Plus offers batch processing capabilities that provide an infrastructure to quickly process hundreds (even thousands) of lines of dialogue for larger projects. Retargeter also has an entire library of API commands for creating fully automated workflows; most users script in Python.

How many poses do I need?

The poses that you create in Retargeter correspond to the shapes that your actor is making during their performance. Retargeter requires at least two poses in a group to perform its calculations, but the full number required to finish a shot will vary depending on the length of the shot and how much variety there is in the actor's expressions during their performance. We recommend starting with a few basic poses using the “Get Auto-Poses” feature, then retargeting and viewing the results. Add more poses where the animation is lacking and eventually you will work up to a completed animation without wasting time creating too many unnecessary poses.

Does adding extra face groups (e.g. jaw, cheeks) give better animation results?

The benefit of using additional face groups is that you will have specific control over those areas separate from the standard groups (eyes, brows, mouth). Some users prefer to animate the jaw separately from the rest of the mouth, for example, but the majority of our users at all levels work with the standard three groups and get excellent animation. It is also worth noting that tracking the additional groups in Analyzer is required and can add extra work, so that is an overall workflow consideration.

How long would you recommend we learn Retargeter?

In our direct experience, and from feedback from our Retargeter service providers, users can start to animate a shot within a few days (and have something moving within a few hours), but plan on about a week to get new artists completely up to “production speed.”

What version of Maya, Max, MoBu do you support?

We generally support the same versions of Maya, 3ds Max, and MotionBuilder that Autodesk maintains support for, most commonly the previous four years of releases. We currently support the 2015-2019 versions of each package.

Where can I request a trial of the software? Is the trial feature locked?

Click “Login/Register” (typically at the top of each page) to register for a trial of any of our software products. The trial version of Retargeter is functionally equivalent to a Studio Plus license and is fully featured, including access to sharing pose libraries and command-line access. The trial begins upon installation and expires automatically after 30 days.

What is your support policy?

Click here for more info.

Which AJA Ki Pro recording devices does Shepherd currently support?

AJA Ki Pro models are file-based recording and playback devices that create high-quality files on computer-friendly media. The following Ki Pro devices are supported for use with Faceware Shepherd:

  • Ki Pro Mini
  • Ki Pro Rack
  • Ki Pro Ultra Plus
  • Ki Pro Classic*

* Please note that the Ki Pro Classic is no longer in production.

Supported Ki Pro Devices

What Motion Capture systems can Shepherd Lync to?

Shepherd currently supports the following body motion capture systems:

  • Vicon Blade: https://www.vicon.com/products/software/blade
  • Vicon Shogun: https://www.vicon.com/products/software/shogun
  • Optitrack Motive: http://optitrack.com/products/motive

It is important to note that nearly all other motion capture systems support timecode for cross-system integration, and Shepherd is designed to work alongside motion capture systems that do not have Lync capability.

For more information, visit: Supported Motion Capture Systems

What is Shepherd “Lync”?

Lync is an innovative feature that allows Shepherd to listen for broadcast capture commands on your network from supported Mocap Systems. When Lync is enabled, Shepherd will start and stop face capture recording on your Ki Pro(s) when your Mocap System starts/stops body capture recording.

What information is available for export out of Shepherd?

A JSON or Excel .XLSX file can be exported from Shepherd, which contains the following details:

  • Session data (date, IP address, file location)
  • Ki Pro device list, including model version
  • Timecode, duration, and creation date

Download Example Session Document
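Since the export is plain JSON, it is easy to process in a script. The sketch below is our illustration; the key names ("date", "ip_address", "devices", and so on) are guesses based on the fields listed above, so inspect the example session document for the real structure.

```python
# Sketch of reading a Shepherd session export in Python. Key names are
# assumed from the field list above, not a documented schema.
import json

with open("session.json") as f:
    session = json.load(f)

# Session-level info: date, IP address, file location (assumed keys).
print(session.get("date"), session.get("ip_address"), session.get("file_location"))

for device in session.get("devices", []):  # Ki Pro device list
    print(device.get("model"), device.get("timecode"), device.get("duration"))
```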

How can I try Shepherd to see if it's the right solution for my needs?

We offer a free trial of Shepherd so you can try it for yourself. Sign up by clicking the link below and we’ll send you everything you need to get started.

You will need at least one supported AJA Ki Pro device to get started testing. Don’t have a Ki Pro? Consider test-driving one of our Headcam systems to fully evaluate Shepherd.

How long does it take to set up an actor?

As a new user, it may take 10 to 15 minutes to properly fit an actor's helmet, build the headcam, and frame and focus the actor's face. Once you've mastered the process it gets much faster: experienced Headcam operators take roughly 5 to 7 minutes per actor. Once the actor has a helmet and camera fit and framed, things go faster still; coming back from a break or lunch, it takes about a minute to get the Mark IV back on them.

How long does it take to calibrate the system?

No calibration process is necessary to record with our Mark IV Headcam. If you plan to use Live Studio (Faceware's realtime software), calibration is a one-button, one-second process: simply have the actor hold a neutral face.

What is the battery life of the system?

If the Mark IV System is running in wireless mode, then a single battery will last approximately 5 hours. If the Mark IV System is wired, then a single battery will last at least 12 hours.

Do you recommend applying facial markers to the performer's face?

This is entirely up to the person who will track the footage in Analyzer. Our technology does not require markers, although about half of our user base prefers to apply marks to the face. Markers drawn on the actor's face can help with consistent landmark placement, but they do not directly impact how the software ingests the data. Small, clean dots placed consistently from day to day work best.

Your bar seems long. Why is that?

We use a near-zero-distortion 4.3mm lens. That focal length means we need to push the camera out a little further than cameras with fisheye lenses. The payoff is excellent animator-reference footage that shows a properly proportioned face.

What frame rate and resolution do you recommend?

This depends on the project itself. We usually recommend shooting at the highest capture frame rate that is an even multiple of your animation timeline's fps. For example, if you are working on a game and animating in Maya at 30fps, we would recommend 60fps capture. If you are animating in MotionBuilder at 29.97fps for television delivery, we would recommend 59.94fps. We sell cameras that run at either 24/25/30/50/60 fps or 23.976/29.97/59.94 fps. Here is the full list supported by our cameras:

720p 60fps, 720p 59.94fps, 720p 50fps, 1080p 60fps, 1080p 59.94fps, 1080p 50fps, 1080p 30fps, 1080p 29.97fps, 1080p 25fps, 1080p 24fps, 1080p 23.98fps

If you are unsure, we recommend 720p 60 or 59.94 fps as our general best practice. It is a great balance between file size and detail.
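The "highest even multiple" rule is easy to express in code. Here is a small sketch of ours that picks a capture rate from the supported rates listed above:

```python
# Pick the highest supported capture rate that is an even multiple of
# your animation timeline's fps, per the rule described above.
SUPPORTED_FPS = [23.976, 24, 25, 29.97, 30, 50, 59.94, 60]

def capture_rate(timeline_fps: float) -> float:
    """Return the best capture rate; raises ValueError if no supported
    rate is an even multiple of the timeline fps."""
    candidates = [
        fps for fps in SUPPORTED_FPS
        if fps >= timeline_fps
        and abs(fps / timeline_fps - round(fps / timeline_fps)) < 1e-6
    ]
    return max(candidates)

print(capture_rate(30))     # 60
print(capture_rate(29.97))  # 59.94
```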

Are the cameras infrared?

They are not. For better animator reference and to better work with our tracking algorithms, we use color video in our cameras.

What is actually being recorded?

The Mark IV records color HD video with ProRes or DNxHD compression in a .MOV file.

Does the Mark IV system record audio? How do I integrate my audio?

Our Mark IV headcams do not include a built-in microphone, but audio can be embedded in the face video by connecting an audio feed to the audio inputs on the back of the AJA Ki Pro Rack. The Ki Pro decks have line- or mic-level XLR analog inputs as well as AES digital audio inputs. In PCAP scenarios, most audio engineers prefer to attach microphones to the helmet itself or to the boom arm that holds our camera, run the cable through our cable wrap down to the belt, and clip the transmitter there. The receiver can then be fed into the mixer and on to the Ki Pro, or plugged directly into the Ki Pro. In a VO booth scenario, all that’s needed is to take the feed from your audio engineer’s board and send the signal to the Ki Pro.

Does the Mark IV work with timecode?

Yes, it does. The video is combined with timecode and audio at the AJA Ki Pro Rack when it is recorded. The Ki Pro Rack has an LTC input (and loop-through output) that any external timecode source (a timecode generator or a Sync HD Pro Tools box) can plug into. The timecode is recorded into the metadata of the .MOV as a timecode track.
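To verify the embedded timecode on a recorded file, a generic tool such as ffprobe can read it back. This is our suggestion, not a Faceware utility:

```python
# Read the timecode embedded in a Ki Pro .MOV using ffprobe.
# Generic verification tip, not a Faceware tool. Depending on how the
# file was written, the tag may live on the timecode stream instead
# (try -show_entries stream_tags=timecode).
import subprocess

def read_timecode(path: str) -> str:
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format_tags=timecode",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

print(read_timecode("take_001.mov"))  # e.g. 01:02:03:04
```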

How many actors can I capture simultaneously?

You can capture up to 5 actors wirelessly. For wired setups there is no limit beyond the number of digital video recorders you own. Any number of Ki Pro recording decks can be controlled from a single interface, and timecode and audio can be daisy-chained from deck to deck to keep every component in sync.

What is the range of your wireless system?

We use the Teradek Bolt 500 system for wireless video. The working range for each unit is 500 feet (150 meters) between the transmitter and receiver.

Are you compatible with Vicon, Xsens, Optitrack, Perception Neuron, Motion Analysis, Rokoko, or Qualisys mocap systems?

Yes, all of them. Integrating the systems is simply a matter of attaching markers or sensors to the actor's helmet and positioning the performance capture belt where it doesn't occlude or interfere with the body data. For deeper integration, our Shepherd software package can take file names and start/stop triggers from Vicon, Optitrack, and Xsens systems to automatically and accurately control the recording functionality of the Mark IV system.

Can you attach markers to the helmet?

Yes, markers can be placed anywhere on the helmet. We include some Velcro loop medallions in the kit to place markers anywhere you need on the helmet.

Does your system include a warranty?

Yes, our system includes a one-year end-to-end warranty that covers every part and component. The warranty ensures that any part or component will be replaced or repaired at no additional cost, including round-trip shipping to and from our Austin repair facility.

How long does it take to receive my order?

Depending on our order queue, most systems ship within 7-10 business days. If you need your order sooner, please contact us. As soon as your order is placed, our operations manager will be in touch with shipping timelines.

Request a Free Trial

Click the button below to download one or all of our software products for a free trial.

Request a Trial

Pricing

Explore our different licensing and product options to find the best solution for your facial motion capture needs. If you need a more tailored solution, talk to us about our Enterprise Program.

Pricing Options