Cyrus A Wilson

Digital Production Technology Consultant
Pasadena, CA

Brief bio


PROJECTS

Here are some recent and earlier projects, in order of roughly decreasing
spatial scale: from environments to faces to cells to proteins & DNA. In my case
that also corresponds to reverse-chronological order.


EMOTION CHALLENGE: BUILDING A NEW PHOTOREAL FACIAL PIPELINE FOR GAMES

In AAA games featuring photoreal characters, it is increasingly common to base
the likeness of those characters on actors, and for the performances of the
actors to drive their digital counterparts. A particular challenge is faithful
preservation of nuanced emotional content of such performances when conveyed
in-game. At Activision Central Tech, we made several advances in these areas,
which are covered in detail in this fxguide article. You can also download the
abstract and slides from our DigiPro presentation at research.activision.com.


SEMANTICALLY-AWARE BLENDSHAPE RIGS FROM FACIAL PERFORMANCE MEASUREMENTS

So, you want to create a whole cast of realistic animated faces based on real
people?

If you're going for the highest fidelity to each individual, you independently
create a facial animation rig for each person, incorporating blendshapes derived
from detailed scans of the given individual's facial expressions.

But if you're going for consistent behavior of each face rig in the hands of
animators, you're better off with a generic rig incorporating generic template
blendshapes, which are then manually resculpted to better reflect variation
between individuals while retaining the expressive/emotional "semantic" role
each shape plays in the rig.

Or is there a way to automate the individualization of the template shapes,
while still preserving the intent of the artist who sculpted the original shapes
and built the face rig? Research led by Alex Ma suggests that yes, yes there is!
Paper.
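
If blendshape rigs are new to you: a rig stores a neutral mesh plus per-shape
vertex offsets ("deltas"), and posing the face is just a weighted sum. A
minimal sketch in Python/numpy, purely illustrative and not the method of the
paper:

    import numpy as np

    def evaluate_rig(neutral, deltas, weights):
        """neutral: (V, 3) vertex positions; deltas: (S, V, 3) per-shape
        offsets from neutral; weights: (S,) animation weights in [0, 1]."""
        return neutral + np.einsum("s,svc->vc", weights, deltas)

    # Toy example: one shape that raises every vertex by 1 unit in y.
    neutral = np.zeros((4, 3))
    deltas = np.zeros((1, 4, 3))
    deltas[0, :, 1] = 1.0
    print(evaluate_rig(neutral, deltas, np.array([0.5])))  # y == 0.5 everywhere

The research above is about producing each individual's deltas automatically
from the template shapes; the weighted-sum machinery stays the same.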


AN INTERACTIVE SYSTEM FOR SET RECONSTRUCTION FROM MULTIPLE INPUT SOURCES

A fundamental challenge of visual effects is to integrate the real world with
the digital world, seamlessly. Easier said than done! Among other things, you'll
want digital representations of the geometry of real-world elements, such as the
on-set environment. (After all, your digital characters will have to "perform"
on this set!) So, do you model it by hand? For real production use, that's
actually the most reliable approach... for man-made structures. For organic,
natural environments—like rocks, hills, mountains—not so much! Do you try your
luck at a fully automatic geometry reconstruction method? Too bad you don't have
the option of going back on set to collect more data when that automated
reconstruction fails!

How about an interactive approach which incorporates your direction to
computationally refine a model to conform to input data? That's one of the
capabilities of the system implemented by Mikhail Smirnov and teammates.
Website.
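
To give a flavor of the "refine a model to conform to input data, under
direction" idea (and only a flavor: this is my toy formulation, not the actual
system), treat it as weighted least squares, where artist-pinned constraints
simply get more weight than ordinary scan samples. Here the "model" is just a
plane.

    import numpy as np

    def fit_plane(points, pins=None, pin_weight=100.0):
        """points: (N, 3) scanned samples; pins: (M, 3) artist-specified
        points the surface must pass near. Returns (a, b, c) for the
        plane z = a*x + b*y + c."""
        xyz = points if pins is None else np.vstack([points, pins])
        w = np.ones(len(points))
        if pins is not None:
            w = np.concatenate([w, np.full(len(pins), pin_weight)])
        sw = np.sqrt(w)  # weighted least squares via sqrt-weight scaling
        A = np.column_stack([xyz[:, 0], xyz[:, 1], np.ones(len(xyz))])
        coeffs, *_ = np.linalg.lstsq(A * sw[:, None], xyz[:, 2] * sw,
                                     rcond=None)
        return coeffs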


FACIAL CARTOGRAPHY

So, you want to create a realistic animated face based on a real person?

 * Step 1: scan the subject’s face in the key poses you’ll use to build your
   blendshape rig. No problem: current scanning technologies provide tremendous
   detail.
 * Step 2: correspond the different facial expression scans. Crap! Automatic
   correspondence methods come in many flavors, but they all have one thing in
   common: they sort-of almost work. So... do you clean up a little here,
   sacrifice some detail there? ...no, wait, the mouth’s not quite right; relax
   this, smooth that... Or... instead just sculpt the key poses using the scans
   as reference? Sure, you’re discarding interesting details from all but the
   neutral scan, but at least you’re in control...

There must be a better way! What if we combine the precision that computation
can give us with the skilled direction the artist can provide? That’s the
motivation behind Facial Cartography. Artist and computer work together to
correspond each non-neutral expression scan to the neutral data. They do this
simultaneously, interactively, finding a solution that best lines up fine
features in the detail maps, like skin pores, for a desired animation mesh that
may be at much lower resolution. That means you don’t just get morph targets
for mesh vertices; you get high-resolution detail maps for each expression,
precisely mapped into a common domain so you can blend them together in the
animation rig!

And by the way: no dots!

Website.
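
A sketch of why the common domain matters (my own illustration, not code from
the project): once every expression's detail map is registered into the
neutral's UV space, the rig can blend detail maps exactly the way it blends
vertex positions.

    import numpy as np

    def blend_detail(neutral_map, expr_maps, weights):
        """neutral_map: (H, W) detail (e.g. displacement) map for the
        neutral pose; expr_maps: (S, H, W) per-expression maps, already
        corresponded into the same UV domain; weights: (S,) the rig's
        current blendshape weights."""
        deltas = expr_maps - neutral_map  # offsets from the neutral detail
        return neutral_map + np.einsum("s,shw->hw", weights, deltas)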


CIRCULARLY POLARIZED SPHERICAL ILLUMINATION REFLECTOMETRY

Polarization of light: Some days you think you finally understand the
phenomenon. Those are usually followed by days when you realize you still don't.
Fortunately there are some useful mathematical tools for describing polarization
effects, and they work on both types of days! Polarization state can be
completely described by a Stokes vector. Conveniently, the components are in a
linear basis, which means changes to polarization state (due to reflection off a
surface, transmission through a polarizing filter, etc.) can be expressed as
multiplication with a 4x4 Mueller matrix. Neat! Well, except the matrix will
vary with incoming and outgoing light directions (and let's not even talk about
wavelength!). So how can we apply these tools in real-world scenes where we're
observing an integral? Work led by Abhijeet Ghosh shows that if we use a
spherical field of circularly polarized illumination, then it becomes practical
to relate the combined effect of that illumination field to the Stokes vector of
the observed radiance, as a function of surface orientation, index of
refraction, and a specular roughness parameter. What? Does this mean we can
measure per-pixel refractive index of an object? Yes it does! And more! Website.
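
The bookkeeping itself is pleasantly compact. A numeric sketch using the
standard Mueller matrix of an ideal linear polarizer (generic polarization
math, not the paper's reflectometry derivation):

    import numpy as np

    def linear_polarizer(theta):
        """Mueller matrix of an ideal linear polarizer at angle theta."""
        c, s = np.cos(2 * theta), np.sin(2 * theta)
        return 0.5 * np.array([[1, c, s, 0],
                               [c, c * c, c * s, 0],
                               [s, c * s, s * s, 0],
                               [0, 0, 0, 0]])

    right_circular = np.array([1.0, 0.0, 0.0, 1.0])  # Stokes [I, Q, U, V]
    out = linear_polarizer(np.deg2rad(30)) @ right_circular
    print(out)  # half the intensity, now linearly polarized (V == 0)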


TEMPORAL UPSAMPLING OF PERFORMANCE GEOMETRY USING PHOTOMETRIC ALIGNMENT

When we need to compute motion in image sequences, one of the techniques we
might apply from our toolbox is optical flow. Well, if only it weren’t for that
brightness constancy assumption. This is a real problem for photometric methods
which need to measure a subject under multiple illumination conditions, given
that the subject might be moving; in particular, a live subject, such as a human
face. So do we put optical flow away and try something else? “Sorry, optical
flow; better luck next time.” Not necessarily. What if we have control over the
illumination conditions? Could we design them such that the sum of two
conditions is equal to a third? If so, we can simultaneously compute two optical
flows, aligning each of the first two complementary illumination conditions to
the sum (the third). We call this the complementation constraint, and we’ve
applied it to several tasks, including the capture of multiple modes of data
during a performance without requiring insanely fast capture frame rates.
Website.
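
A rough sketch of the complementation idea, with loud caveats: it leans on
OpenCV's off-the-shelf Farneback flow, the alternation below is my naive
stand-in for the paper's joint solve, and the helper names are mine. Conditions
A and B are designed so that A + B equals condition C.

    import cv2
    import numpy as np

    def warp(img, flow):
        """Backward-warp a float32 image by a dense (H, W, 2) flow."""
        h, w = img.shape
        gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        return cv2.remap(img, gx + flow[..., 0], gy + flow[..., 1],
                         cv2.INTER_LINEAR)

    def complementary_align(a, b, c, iters=3):
        """a, b: uint8 frames lit by complementary conditions A and B;
        c: uint8 frame lit by the summed condition C. Returns flows that
        bring a and b into c's frame, so that warp(a, flow_a) +
        warp(b, flow_b) approximates c."""
        params = dict(pyr_scale=0.5, levels=4, winsize=21, iterations=3,
                      poly_n=5, poly_sigma=1.1, flags=0)
        af, bf, cf = (x.astype(np.float32) for x in (a, b, c))
        flow_b = np.zeros((*b.shape, 2), np.float32)
        for _ in range(iters):
            # What A should look like in C's frame: C minus warped B.
            res_a = np.clip(cf - warp(bf, flow_b), 0, 255).astype(np.uint8)
            flow_a = cv2.calcOpticalFlowFarneback(res_a, a, None, **params)
            res_b = np.clip(cf - warp(af, flow_a), 0, 255).astype(np.uint8)
            flow_b = cv2.calcOpticalFlowFarneback(res_b, b, None, **params)
        return flow_a, flow_b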


MYOSIN II CONTRIBUTES TO CELL-SCALE ACTIN NETWORK TREADMILLING THROUGH NETWORK
DISASSEMBLY

Okay, this one breaks from the order somewhat, but it’s been in the works for a
long time. Let’s get back to the problem of cell motility: below we look at
organization of actin filament growth at the front; but if a cell is to keep
going, the actin network needs to be taken apart somewhere, in order to recycle
the actin subunits. How and where does that happen? What’s going on at the rear
of the cell? The problem of whole-cell-scale coordination was the question for
me. Conventional wisdom says that the cell rear is pulled forward by the myosin
II motor acting on actin filaments much like it does in muscle contraction. But
when I inhibited that process in keratocytes, they kept going. That wasn’t
supposed to happen! If I stabilized actin filaments (inhibiting disassembly) and
then inhibited myosin contraction, then they stopped. That really wasn’t
supposed to happen. What was going on? To get to the bottom of this, I combined
the molecular manipulations with computer vision analysis of the movies taken
through the microscope; all evidence pointed to myosin disassembling the actin
network in the rear of the cells. Really? To make sure it wasn’t something else,
my colleague Mark Tsuchida actually ripped open the cells to get direct access
to the actomyosin network in relative isolation. The experiments again showed
myosin disassembling the actin network; and furthermore, they recapitulated the
spatial organization of myosin-mediated disassembly from live cells. (In other
words, in the rear.) Could myosin-mediated actin network disassembly help
orchestrate the whole-cell-scale organization needed for coordinated movement?
In case you hadn’t guessed from the excessive use of italics, this is
a big deal. That’s why it’s published in Nature. And here's a longer
description.


ESTIMATING SPECULAR ROUGHNESS AND ANISOTROPY FROM SECOND ORDER SPHERICAL
GRADIENT ILLUMINATION

Why decompose reflectance functions in silico when the computation can be done
in situ? Let light (and physics) do the math for you! Previous work demonstrated
that zeroth-order and first-order (linear gradients) computational illumination
can be applied to recover albedo and photometric surface normal measurements
from zeroth- and first-order statistics of a reflectance function, respectively.
In this work, led by Abhijeet Ghosh, second-order gradients are applied in a
computational illumination approach to recover second-order statistics of a
reflectance function, yielding an estimate of specular roughness, assuming that
the specular lobe of a BRDF can be approximated as a Gaussian distribution.
Anisotropy? No problem! The second-order real-valued spherical harmonics form a
steerable basis which can be used to determine major and minor axes of
anisotropy (and associated specular roughness values) after the fact. This
approach leverages calculations performed in both the physical and computational
realms to obtain per-pixel specular roughness estimates, even for anisotropic
materials, from only 9 input photographs. Website.
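
For the zeroth- and first-order step mentioned above, the per-pixel arithmetic
is simple enough to sketch (the standard formulation of photometric normals
from spherical linear gradients; lobe-dependent scale factors are absorbed by
the final normalization):

    import numpy as np

    def normals_from_gradients(i_full, i_x, i_y, i_z, eps=1e-6):
        """i_full: (H, W) image under uniform spherical illumination;
        i_x, i_y, i_z: images under gradient patterns P(w) = (w + 1) / 2
        along each axis. Returns (H, W, 3) unit vectors (surface normals
        for the diffuse component, reflection vectors for the specular
        component)."""
        n = np.stack([2.0 * i / (i_full + eps) - 1.0
                      for i in (i_x, i_y, i_z)], axis=-1)
        return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)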


GLARE AWARE PHOTOGRAPHY: 4D RAY SAMPLING FOR REDUCING GLARE EFFECTS OF CAMERA
LENSES

Lens flare caused by a bright light is a low spatial frequency phenomenon,
right? Not quite. While it is low-frequency in the 2D integral projection
measured by the sensor (or film; remember film?) of a conventional camera,
certain components of lens flare, resulting from reflections off of elements in
the lens, are in fact high-frequency in the 4D ray-space inside the camera body.
In work I performed at Mitsubishi Electric Research Labs, we showed that by
sampling said 4D ray-space at the sensor (whether using a lightfield camera or
another approach) we could distinguish contributions due to glare from those of
the true scene outside the camera. Video. Website. Patent.
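
One way to see the payoff (an illustration of the principle, not the actual
implementation): with the ray samples in hand, glare shows up as outliers among
a pixel's angular samples, so a robust reduction can suppress what a
conventional sensor would simply have summed in.

    import numpy as np

    def reduce_glare(ray_samples):
        """ray_samples: (U, V, H, W) lightfield samples reaching pixel
        (h, w) from angular bins (u, v). A conventional sensor returns
        ray_samples.mean(axis=(0, 1)); a robust statistic like the
        median rejects the few glare-contaminated directions."""
        return np.median(ray_samples, axis=(0, 1))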


DECOMPOSING NON-RIGID CELL MOTION VIA KINEMATIC SKELETONIZATION

When we study the spatial organization of molecular processes inside a moving
cell, we are confronted by the question: in which moving reference frame should
we describe and analyze said processes? If we can approximate the cells as rigid
objects (see below), then the problem is not hard. Well, not so hard. Well, not
SO hard (see below). Anyway, given that most moving eukaryotic (non-bacterial)
cells have no interest in approximating rigidity (and why should they, given
that at the cellular spatial scale there’s nothing rigid about
actin-polymerization-based cell motility?), what do we do? Can we find a
compromise: a representation of non-rigid cell motion which is easy to
understand yet faithfully reconstructs the underlying reality? This is a subject
I started to explore in research presented in a SIGGRAPH 2007 poster, and would
be happy to develop further, one day. Video.


ACTIN-MYOSIN NETWORK REORGANIZATION BREAKS SYMMETRY AT THE CELL REAR TO INITIATE
POLARIZED CELL MOTILITY

The actin polymerization engine pushes forward the leading edge of polarized,
moving keratocytes. The actin polymerization engine is running in symmetric,
stationary keratocytes. So, what’s different? Work led by Patricia Yam details
the sequence of changes, with regard to both molecular processes and larger
scale spatial reorganizations, that engage the machinery in an idling keratocyte
to give rise to concerted directional motion. Paper.


EMERGENCE OF LARGE-SCALE CELL MORPHOLOGY AND MOVEMENT FROM LOCAL ACTIN FILAMENT
GROWTH DYNAMICS

The leading edge of a crawling cell is pushed forward by the addition of actin
subunits to growing filaments, right? Right. But we’re talking thousands of
filaments, and millions of molecules of the protein actin. <Expletive!> How is
this assembly process organized into architecture, and not chaos? Work led by
Catherine Lacayo explores one of the molecular mechanisms responsible for
orchestrating the filament meshwork construction process, and the larger scale
phenomena that emerge as a result. Paper.


A CORRELATION-BASED APPROACH TO CALCULATE ROTATION AND TRANSLATION OF MOVING
CELLS

One of the reasons we use fish keratocytes as a model system for studying
actin-polymerization-based cell motility is that these cells are able to move in
a directionally persistent manner (well, apart from turns) and preserve their
overall shape as they do. Wait a second, that’s somewhat like movement of a
rigid object! Not entirely, but it can be approximated as such. Read the paper
to find out how I managed this approximation, and worked out a method to track
these cells, globally, quickly, and non-iteratively. By computing the
relationship between the stationary reference frame observed through the
microscope and the moving reference frame of the cell, I was then able in later
work to analyze various processes, especially those relevant to dynamic spatial
organization, in both contexts. Paper.
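
The translation part of such a tracker can be sketched with generic phase
correlation, i.e. the Fourier shift theorem in action (treat this as
illustration rather than the published algorithm, which also recovers
rotation):

    import numpy as np

    def phase_correlate(img0, img1):
        """Estimate (dy, dx) such that img1 is approximately
        np.roll(img0, (dy, dx), axis=(0, 1)). One FFT round trip; no
        iteration."""
        f0, f1 = np.fft.fft2(img0), np.fft.fft2(img1)
        cross = np.conj(f0) * f1
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = img0.shape
        # Map wrapped peak locations to signed shifts.
        return (dy - h if dy > h // 2 else dy,
                dx - w if dx > w // 2 else dx)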


NORMAL MODE ANALYSIS OF MACROMOLECULAR MOTIONS IN A DATABASE FRAMEWORK:
DEVELOPING MODE CONCENTRATION AS A USEFUL CLASSIFYING STATISTIC

Some proteins undergo intramolecular motions as part of their function, or
changes of state, etc. Others might not experience conformational changes
themselves, but might differ from related proteins by a similar change in shape.
Werner Krebs led work to compute principal modes of these deformations, and then
assessed the suitability of such modes as a way to classify these proteins by
type of motion. Paper. Website.
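
A sketch in the spirit of elastic-network normal mode analysis (a common
simplification, not necessarily the exact model used in this work; and
mode_concentration below is only my loose, hypothetical reading of the
statistic):

    import numpy as np

    def gnm_modes(coords, cutoff=7.0, n_modes=5):
        """coords: (N, 3) C-alpha positions in Angstroms. Builds the
        Gaussian-network Kirchhoff (Laplacian) matrix and returns the
        n_modes slowest nonzero modes as columns of an (N, n_modes)
        array."""
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        gamma = -(d < cutoff).astype(float)
        np.fill_diagonal(gamma, 0.0)
        np.fill_diagonal(gamma, -gamma.sum(axis=1))
        vals, vecs = np.linalg.eigh(gamma)
        return vecs[:, 1:1 + n_modes]  # skip the zero (rigid) mode

    def mode_concentration(displacement, modes):
        """Fraction of an observed deformation captured by its single
        best-matching mode."""
        coeffs = modes.T @ displacement
        return np.max(coeffs ** 2) / np.sum(coeffs ** 2)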


PARTSLIST: A WEB-BASED SYSTEM FOR DYNAMICALLY RANKING PROTEIN FOLDS BASED ON
DISPARATE ATTRIBUTES, INCLUDING WHOLE-GENOME EXPRESSION AND INTERACTION
INFORMATION

When comparing proteins which aren’t all that closely related, must we assume
them to be different from the ground (the sequence level) up? Not necessarily.
Though the overall diversity of proteins is massive, they share a smaller
library of protein “folds”: subassemblies that are combined and specialized in
different ways to give rise to said diversity. It could therefore be quite
useful in our study of various proteins to be able to consider them in terms of
these mid-level “parts”. Jiang Qian and Brad Stenger led an effort to inventory
and categorize these parts. Paper. Website.


ASSESSING ANNOTATION TRANSFER FOR GENOMICS: QUANTIFYING THE RELATIONS BETWEEN
SEQUENCE, STRUCTURE AND FUNCTION THROUGH TRADITIONAL AND PROBABILISTIC SCORES

What’s in a gene? What does its sequence tell us about the role of the product
(usually a protein) that it codes for? Yes, protein structure is specified by
amino acid sequence (in turn coded by the nucleotide sequence of the gene), and
function is ultimately determined by structure and sequence. But computing the
structure that a sequence will assume (the protein folding problem) is a
significant challenge and will remain that way for some time. In the meantime,
can we say something about a protein’s structure and function by analogy to a
protein of similar sequence with a known (measured) structure and
(experimentally characterized) function? Or more specifically, by homology? If
related proteins have not diverged too much, it is likely that they share the
same structure and function. But how much is “too much”? See figure 7. In this
work I found similarity thresholds beyond which the predictive value of
sequence similarity to indicate functional and structural similarity drops off
considerably. Paper. Website.
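
The thresholding idea can be sketched with hypothetical data (the real scores,
datasets, and cutoffs are in the paper): bin aligned pairs by percent sequence
identity, measure how often annotation transfer would have been correct in each
bin, and push the cutoff above any bin where transfer isn't safe.

    import numpy as np

    def safe_identity_threshold(identity, same_function, target=0.9,
                                bins=20):
        """identity: (N,) percent sequence identity of aligned protein
        pairs; same_function: (N,) bool, True where a pair shares
        function. Returns the lowest identity above which transfer
        stays at or above the target accuracy."""
        edges = np.linspace(0.0, 100.0, bins + 1)
        which = np.clip(np.digitize(identity, edges) - 1, 0, bins - 1)
        threshold = edges[0]
        for b in range(bins):  # scan from low to high identity
            members = which == b
            if members.any() and same_function[members].mean() < target:
                threshold = edges[b + 1]  # unsafe bin: raise the cutoff
        return threshold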