VFX RnD

2023_Aug14

RnD VFX: Creative lens maps

Broke Dropbox limits, so here are the new links:

Missed SIGGRAPH last week, but I’ve always appreciated the feeling of community and sharing it brings.

In that spirit…

I’ve been thinking about datasets and have come to the conclusion that the strength of a library is not its contents but the curation and usage of the data.

Recently I’ve been diving back into RnD of cinema lenses for consistent workflows with Houdini, Unreal and Flame.

Lens distortion is one of many factors that give different lenses a particular personality.

Over the last couple of decades I’ve accumulated a wide range of lens grids, so I decided to make a coherent set of stMaps that are easy to plug in.

Each set includes distort/undistort maps in 32-bit EXR. These maps will work in any app that supports the workflow.
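
If it helps to see the plug-in step concretely, here’s a minimal sketch of wiring a pair in Nuke via Python (file names are placeholders; swap in the undistort map to go the other way):

    import nuke

    plate = nuke.nodes.Read(file="plate.####.exr")
    lens_map = nuke.nodes.Read(file="lens_distort_stMap.exr")

    # STMap warps the plate by sampling it at the normalized
    # coordinates stored in the map's red/green channels.
    stmap = nuke.nodes.STMap(uv="rgb")
    stmap.setInput(0, plate)     # src: the image to warp
    stmap.setInput(1, lens_map)  # stmap: the 32-bit EXR lens map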

Cinema lenses are largely handmade, so DO NOT assume these will replace shooting production grids for your tracking team.

Even lenses of the same type/manufacturer can vary.

If you want your 3d tracks to stick, trust the stMaps that come back from your tracking team.

These ARE helpful for creating looks for MoGFX or for adding to CG renders to make them feel more cinematic.

This set of 133 spherical and 83 anamorphic lens maps should work for such creative pursuits.

These are only the initial building blocks for creating true lens models, which would include breathing, flares/glares, vignettes and bokeh, as well as mappings for focus, iris and zoom pulls.

An example shot of my personal database:

WIP example of procedural access via Houdini:

Additional links worth checking out for more RnD about lenses:

https://yedlin.net/NerdyFilmTechStuff/index.html

https://vimeo.com/search?q=five%20pillars%20of%20anamorphic

https://www.sharegrid.com/learn/lens-sets

https://www.cinelensmanual.com/

2022_Sept08

RnD Flame: Quick Shotgun Demo

Quick demo linking Flame to Shotgun using Sequence Publish.

Music: The Showdown High Noon – P5

Sample TOKENS REFERENCE

ALL TIMELINE EXPORTS DONE AT JOB LEVEL OF PROJECT

refSEQ:

export: Movie
Video Format: Movie (Chosen instead of Sequence Publish so audio is correctly included)
Shared Preset:
PUBLISH_refSEQ
PATTERN:
SHOT/refSEQ/<name><YYYY>_<MM><DD>.

shotPUB:
export: Sequence Publish
Shared Preset:
PUBLISH_shotPUB
Use Top Video Track: On
Export in FG: On (default but set to taste)
Video Format: File Sequence
Media: Original Media
Handles: 10 frames
MultiChannel and Alpha: ON
Set to Generate Media. (Needs to be on to resolve issues w PNGs and alphas working correctly)
PATTERN:

<shot name>/<track name>/<segment name>/<segment name>.
openClip PATTERN
openClip/_elem/<shot name>/<segment name>.
NO SHOT SETUP ON PUBLISH. (Although this is tempting to use, I’ve found it to be more hassle than it’s worth. Creating v000 shot templates w the published openClips is a better solution.)
Frame Pad: 4
Start Frame: 1001
Resolution: Same as Clip
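
As a worked example (shot, track and segment names here are hypothetical): a segment named sh010_comp on track vfx in shot sh010 would publish frames to

    sh010/vfx/sh010_comp/sh010_comp.1001.exr

(extension follows the source media), with its openClip landing at openClip/_elem/sh010/sh010_comp.clip.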

out_DPX:
export: Sequence Publish
Shared Preset:
PUBLISH_outDPX
Video Format: File Sequence
Media: Media w FX (will include any repos)
Format: DPX
Uncompressed (important for 10-bit DPX)
PATTERN:
out/<YYYY><MM><DD>_01/dpx/<segment name>/<segment name>.
Bit Depth: 10-bit (If the format above is left as Packed instead of Uncompressed, this will always spit out 12-bit)
Note:
DPX out to client. Defaults w number “01”. Change to 02 manually for 2nd delivery of the day, etc… Export at season level.

out_QT:
export: Sequence Publish
Shared Preset:
PUBLISH_outQT
Video Format: Movie
Format: Quicktime
Compression: Apple ProRes 422 LT (or other client pref)
Use LUT: ON
Tag Only as rec709. (Otherwise it seems to look at the source EXR file, which hasn’t been changed by the CDL and Cube.)
PATTERN:
out/<YYYY><MM><DD>_01/editorial/<segment name>.
Note:
QT out to client. Defaults w number “01”. Change to 02 manually for 2nd delivery of the day, etc… Export at job level.

ElemLibrary PUB:
IMPORTANT: ALWAYS PUBLISH ELEMENTS TO THE SHOW, DO NOT LEAVE LINKED TO SOURCE LIBRARY.
This ensures that the element will always live in the project structure and will remain unbroken if cleanup happens in the source library.
export: Sequence Publish
Shared Preset:
PUBLISH_elemLibrary
PATTERN:
assets/element_library/<segment name>/<segment name>.
openClip PATTERN:
openClip/assets/element_library/<segment name>

2020_Apr22

RnD Flame 2021:

Particle generated game assets: http://imag4media.com/2020/04/22/vfx-rnd-particle-gen-assets-in-flame-2021/


2020_Apr08

Flame setup for realistic Depth of Field:

https://imag4media.com/2020/04/08/realistic-depth-of-field-calculator-in-flame/
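
For anyone who’d rather read the math than the setup, the calculator rests on the standard thin-lens formulas, which fit in a few lines of Python (a sketch; the 0.025mm circle of confusion is just an illustrative Super 35 value):

    def depth_of_field(focal_mm, f_stop, focus_mm, coc_mm=0.025):
        """Return the (near, far) acceptable-focus limits in mm."""
        # Hyperfocal distance for this focal length / stop / CoC.
        h = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
        near = focus_mm * (h - focal_mm) / (h + focus_mm - 2 * focal_mm)
        far = (focus_mm * (h - focal_mm) / (h - focus_mm)
               if focus_mm < h else float("inf"))
        return near, far

    # e.g. a 50mm at T2.8 focused at 3m:
    print(depth_of_field(50, 2.8, 3000))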


2019_Mar11

VFX Sketchbook – reference St_Maps for anamorphic lenses

I’ve been deconstructing cameras for my own RnD and decided I needed a diverse set of lenses for testing. Since I couldn’t find an online reference set, I set aside an afternoon and processed the grids in my reference library.

Using ST Maps is a great way to distort and undistort plates and CG elements so that they match your lenses and feel more cinematic. These sets are all anamorphic, which means the lens squashes the image onto the sensor, in this case 2:1, and projection uses a lens that widens it back out to appear normal. This is the tech that brought us CinemaScope, among others, and allowed for a great wide-screen theater experience starting in the 1950s-1960s. Anamorphic lenses have distinctive qualities that make them notably different from spherical, or flat, lenses. They can also be a lot more work for tracking, so it’s important to recognize the extra effort required.
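
To make the mechanics concrete, here’s a small NumPy sketch of what an ST map encodes: each pixel stores the normalized (s, t) source coordinate it should sample from. The fake 2:1 de-squeeze at the end is purely illustrative; real lens maps are measured from shot grids, not synthesized:

    import numpy as np

    h, w = 1620, 2880  # the Alexa Mini grid size used for these sets
    s = (np.arange(w, dtype=np.float32) + 0.5) / w
    t = (np.arange(h, dtype=np.float32) + 0.5) / h
    st = np.stack(np.meshgrid(s, t), axis=-1)  # identity map, shape (h, w, 2)

    # Crude 2:1 de-squeeze: stretch the sample positions around center.
    st[..., 0] = (st[..., 0] - 0.5) * 0.5 + 0.5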

The nature of anamorphic lenses means the distortion can change depending on how the lens locks into the camera body. Even a few degrees off means the distortion along the width can give different results. Because of this, these should be used as reference only. They were all shot on an Alexa Mini at 2880×1620. In production, you’d want the camera dept to grid the lenses on the camera body used, which will help your tracking dept.

In all there are 13 manufacturers, 14 sets, and 59 pairs of lens distort/undistort maps. If peeps want to share additional grids to add, send me a PM.

I did the work using 3DEqualizer, Nuke and Flame. The setups for 3DE and the Nuke distort nodes it created are uploaded along w the 32-bit EXR distort/undistort pairs of ST_Maps.

Here’s a place to download the goods:

https://www.dropbox.com/sh/epg4a51k2j6dre2/AAAcNtc59Xq20RJcOmemaG22a?dl=0


2019_Jan28

VFX Sketchbook – Houdini w Mapbox

Houdini 17 has a bunch of fun tools, among them a suite of GameDev tools I’ve found helpful for VFX workflows. One such tool uses Mapbox, which seems to be designed for AR-type work but could be very interesting for location scouts, etc.
By entering Lat/Long coordinates, it’ll search and download the height maps, texture images and OSM data, which can contain building heights, roads, etc. Currently there is a manual global offset for aligning the tiles; ideally I’ll solve this procedurally. Another aspect I’m trying to solve is auto-aligning the height of the tiles to their neighbors, which is also currently manual.
This is a WIP Houdini 17 setup that will not only search for the selected tile but also build the ones around it procedurally. Apologies for not getting the kinks worked out yet, but figured it’s worth a share for anyone interested.
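
For the procedural tile-building, the underlying math is the standard Web Mercator (slippy-map) scheme; a minimal sketch, assuming Mapbox’s z/x/y tiling:

    import math

    def latlon_to_tile(lat_deg, lon_deg, zoom):
        """Indices of the tile containing a given Lat/Long."""
        n = 2 ** zoom
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
        return x, y

    def tile_matrix(x, y, rings=2):
        """The HERO tile plus `rings` rows of tiles in every direction."""
        return [(x + dx, y + dy)
                for dy in range(-rings, rings + 1)
                for dx in range(-rings, rings + 1)]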

I’ve included pre-made setups to play around with for:
-Venice Beach
-GrandCanyon
-ElCapitan
-NiagaraFalls
-MonumentValley

Dropbox link:
https://www.dropbox.com/…/ourt51…/AAAc5J-kuMfVpatJxuaq4sEfa…

Houdini will automatically create the directories based on the name of the file. If you want to change to EmpireStateBldg, enter the LatLong, and then as you download the textures and build the FBXs they will be sorted accordingly.

Like all things CG, it’ll take some patience.
When updating to a new location, turn AutoUpdate off so you can move faster.
When you enter the new location, this is what I’ve found to expect on my creaky trashcan Mac:

15 min.
Download the Mapbox textures for the whole matrix; I think it’s the HERO tile plus two rows outside in all directions. This is unfortunately manual at the moment; I’m trying to automate it.

45 min.
Render out the FBX for all the tiles.
These are high res by default; expect FBXs on the order of 1.5 GB.
Again, currently annoyingly manual, but you can select ALL the FBX nodes and hit render and they’ll all queue up.

You may want to make the FBXs lower res so they load easier.
They will load into Flame as is, but because of the size, expect around 10-15 min to load each, whether in Houdini or Flame.


2018_Jun09

Flame RnD:  Modular/procedural approach to problem solving allows extra time for creativity.

Spend less time connecting things so you can spend more time creating. Adapting your workflow to a modular/procedural approach can reap great creative rewards. Boiling tasks down to manageable pieces brings clarity and allows sections w/o animation to be locked down, so you can continue sketching and creating. Hidden connections make this seamless, especially w multi-output nodes like Action. Something to note w the OUT/IN mux nodes: I group them so they are clearly visible from a far zoomed-out schematic. The groups can also show proxies, meaning they can appear a bit more like clips if it helps.
The setup comes from a lively discussion w my favorite Danish Viking. I’ve labeled things to accommodate his native language. Sketching freeform involves exploring ideas, which is more/less randomly trying things. Some ideas, like color correcting Z-depth, are honestly not the preferred approach, but the demo shows how the workflow can encourage experimentation, warts and all.
"Train in the Way of the Sword with your hands." -Musashi Miyamoto
When working on a complex problem, which map is easier to follow? Time spent cleaning schematics is important to maintain clarity, and critical if someone else needs to open it. If you don’t need to see all the connections, the time saved can be used to play around w creative stuff. Which map is more intimidating?

Flame 2019 setup to play with:
demo movie:

2018_Jun06

Flame RnD:  ColorTransform for ARRI footage to ARRI_sceneLinear

I’ve been kicking this around for a while until I got the final CIE-XYZ piece of the puzzle.
The correct colorTransform is to create two viewing rules under colorManage prefs. First, create one for Alexa Rendering and enable it for any log; this will display the Alexa files as they should be. Second, create a rule for Linear (gamma corrected) and allow it for any linear; this will give you viewing options in the viewport. Next, create a colorManage node as a ColorTransform. Set the Tagged space to scene-linear Alexa Wide Gamut. Hit Custom and add two layers: camera and primaries. Set camera to LogC to CIE-XYZ. Set primaries to CIE to AlexaWideGamut. The sceneLinear results will then match the Alexa render of the log file.
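
For reference, here’s the camera-curve piece of that chain in isolation, using ARRI’s published LogC (v3, EI 800) constants. This is only LogC to linear; the primaries layer (CIE-XYZ to AlexaWideGamut) described above still applies on top:

    CUT, A, B = 0.010591, 5.555556, 0.052272
    C, D, E, F = 0.247190, 0.385537, 5.367655, 0.092809

    def logc_to_linear(t):
        """Decode a LogC (EI 800) code value to scene-linear."""
        if t > E * CUT + F:
            return (10 ** ((t - D) / C) - B) / A
        return (t - F) / E
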
I’ve pumped the saturation to make the differences more obvious. Also included some of the many wrong answers I encountered.
The full rez ref pics are also with the setup.

2018_May22

Camera/Lens RnD

During recent research into lenses and cameras, these links were helpful:

The Five Pillars of Anamorphic

The Five Pillars are great explanations from Panavision’s Dan Sasaki. The anamorphic look goes beyond flares and bokeh. Although it’s painful additional VFX work, the look can certainly be worth the effort.

Depth of Field and Bokeh Zeiss PDF

DoF/Bokeh PDF from Zeiss clearly explains the general science of lenses. Worth wrapping your head around.  Lots of diagrams and pictures in a dense 45 pages.

Ultimate Vintage Lens Test

Ultimate Anamorphic Lens Test

Especially helpful for comparing lenses.  Same stage/setup/cameras with many dozens of lenses tested.

VFX Camera Database

Huge amounts of great technical info on this site.


2018_Apr10

Flame/Nuke: Optical Flares wrap-up

Optical Flares thru Pybox wrap-up:
Wanted to leave this in a useful state. I think Pybox can be somewhat useful, but many ideas would be better solved thru a Python script instead of Pybox. (Example: I’d like to figure out how to export an FBX, unwrap the UVs in Houdini, and reimport the results back into a batch script. It’s not a situation where I’d want interactivity, so I’m guessing the script is likely more efficient than the Pybox. Also, it’s only needed for one frame.)
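
As a sketch of that script-over-Pybox idea (paths are placeholders): once the Nuke script is saved, a plain Python driver can batch the single frame with no interactive link at all:

    import subprocess

    # -x executes the script without the GUI; -F limits the render
    # to the one frame this case needs.
    subprocess.run(
        ["nuke", "-x", "-F", "1001", "/jobs/show/nuke/opticalFlares_setup.nk"],
        check=True,
    )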

Field notes:
– It was important to run Flame from a terminal shell so I could see where the Python errors were happening.
– Occasionally I would just create colorBars in the Nuke script and have Pybox return that image to troubleshoot connectivity.
– My use case was over black since I wanted to comp additively in Flame. For this reason, I chose not to pass useful imagery to Nuke, only to enable the Pybox connection.
– Although it took overnight, I was able to set up all 193 presets and let it run.
– It’s important to set the adsk_result node explicitly to EXR to prevent clipping issues.

Overall, Pybox is pretty abysmal in its responsiveness. That said, I think it’s helpful for enabling workflows using plugins that aren’t available to Flame: in this case, set up the flare performance and export to Nuke to run the plugin, accessing its deep selection of presets. It’s actually somewhat useful to be able to have Flame open the script in Nuke, tweak the Nuke script, and then reload it into Flame. It seemed most handy to be able to launch the Nuke script from Flame, as it was already connected to the pipeline.

The Pybox controls from Flame to Nuke were wonky. I had problems getting a 3-vector axis to pass data correctly between the programs. For my needs, I chose to just export as FBX from Action: an Axis for light control as well as a Camera. I then imported the FBX data into the Nuke script. This was better for me as I could compare the results of Flame and Nuke from a similar baseline.

I’ve uploaded some stuff to play with:
download setup

Included:
– Nuke script used (Nuke 10.5.5 to make it useful to a wider audience)
– Nuke scripts prepped to load all the presets
Important for me, as the browser in the plugin is irritating and I wanted to access the creative tools easily.
(These are Nuke 11.2, which I was running for my RnD. Simple but tedious to recreate in 10.5.5 if someone wanted to.)
– Half-rez previews of the presets for reference.

Workflow:
– Create Action in Batch w an Axis and 3dCam.
– Select the Axis and 3dCam and export as FBX. Note: exported geo seems to bake in transforms, but that’s not the case w lights.
– Load a Pybox into batch.
– Choose the nuke_px.py setup.
– Hit the Nuke Composition button and choose the desired Nuke Script.
– Load the FBX data into Nuke.
– The result can be rendered in Nuke or piped back thru Pybox to view in Flame.

Also:
Here’s a ref clip of the 193 opticalFlares presets run thru Flame:


2018_Mar25

Flame/Nuke: Pybox RnD w Optical Flares

Wanted to use the Action flares for interactivity, but then have the info render thru Nuke so I can use the OpticalFlares plugin.
Currently, I’ve exported thru FBX so I can verify camera, flare position, etc., which works great.
It’s a little sluggish, but the adsk_controller knobs do indeed work and Pybox updates appropriately. With all the presets avail in OpticalFlares, this seemed like a good solution: use Flame for interactivity, Pybox/Nuke for tweaking the presets. Updating the comp is as easy as reloading the Nuke script into Pybox.

Ideally I’d like to build out the custom knobs to drive pivot, scale, brightness, etc., as well as the 3d position of the flare from the adsk_controller.


2018_Jan30

Flame: Stab/Unstab

stab_UNstab.png

Demo of using perspGrid in 2D mode for stabilizing moves, w the inverse for the retrack. This tool is a Swiss Army knife: obviously good for screens, but also useful for clean reflections, far bg fixes, etc. If the track doesn’t completely lock, adding an Action in the “FIX” area w a little old-school tracking can get you really close quickly. It’s tedious to link w expressions; instead, copy/paste new perspGrid nodes and turn “invert” on/off for refining the track.

It takes longer to watch it track than to do the work. Here’s hoping for a new Mac Pro that can take an nVidia card.

flame 2018.3 setup:

download setup


2018_Jan29

Flame: expressions let you drive an axis from Tangent Panels.

tangentAxis

Although there are a few nodes that let you use the wheels and knobs, only the Color Corrector gives enough usable animation channels for expressions.
Since you need to be on a CC node, you must view the result in a Context. Further complications come from Offset, Gamma and Gain reacting differently, so each required a slightly different expression.
It would be great to wrap this in a GLSL “UI only” shader, allowing renaming of the panel readouts, but currently we’re stuck w the default CC displays. If any Matchbox/Autodesk/Tangent wizards have insight into a way to “trick” a UI_Only setup so that the panels think it’s a CC node, please chime in.

Color Wheels = x, y, z translate
RGB Gamma knobs = x, y, z rotate
RGB Offset knobs = x, y, z scale
Contrast knob = proportional scale

Keep it in the UserBin to drag out, then copy/paste the slave axis to your 3D scene and parent it.
If you drag multiple times from the UserBin, each slave axis will retain its connections by default. Renaming the CTL axis will require updating the expressions. If the CTL axis is deleted, the animation keyframes are baked in.

Aside from being a novelty, I think the wheels and knobs are a better creative tool than sliders where they can be implemented. That’s especially true w cameras, where controlling focus w knobs seems more intuitive.

flame 2018.3 setup:

download setup


2018_Jan29

Flame ROI for big plates updated

ROI fix for big plates.

flame 2018.3 setup:

download setup


2018_Jan25

ARRI sensor crop

sensorCrop

More good info from that same ARRI page.
Shows how the different ARRI formats crop on the openGate sensor.

flame 2018.3 setup:
download setup


2018_Jan24

ARRI lens illumination

The ARRI website has lots of great info, and among it is the Lens Illumination Guide. It shows how different lenses are expected to vignette, and they created a handy web app so you can preview and even download the image.

OpenClips are a really handy way of sorting arrays of data.
Downloaded the focalLength and lens variations from the ARRI site and placed them in directories ready to be imported into Flame as openClips. After creating the openClips, reimport them into Flame and the video versions will point to the different focalLength renders, or any renders you choose.
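
Illustrative only (the root path is hypothetical): since the openClip versions simply mirror the directory layout, a quick scan shows what the video versions will point at:

    import os

    root = "/assets/ARRI_lensIllum"
    for focal in sorted(os.listdir(root)):
        if os.path.isdir(os.path.join(root, focal)):
            print(focal)  # one render directory per focalLength variation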

After creation, I’ve moved the openClips to sort them and they retain their connections.

Flame 2018.3 archive with the intact openClips in it. Contains all the variations, so it’s big.
https://www.dropbox.com/…/a9xltw…/AADnLwB9sfHwwtxPDenH8dfQa…

sourceImages: The original ARRI jpgs sorted in directories. A couple of the images aren’t good, but that’s what they have. This is what you’d use to create your own openClips.


2018_Jan10

Flame – drag nodes across the viewports.

The boundary is only in your mind. Brought to you by the hotkeys “Shift + A”, “Shift + F” and “Alt + 2”


2018_Jan05

The Tao of CTL+SHIFT+D.

schema_selectALL


Duplicate w upstream connections intact. Use it. It’ll change yer life. If you delete the geo and media in the action, it’s a handy tool to drop in yer userBin.

flame 2018.3 setup:
download setup


2018_Jan04

Flame DoF continued…

Dof_Fix_REV

Here’s a solution that addresses some of the aliasing issues w Z-depth after discussion. The settings need to be tweaked, but this is where I’d start. Works well w DoF but not Blur3d, which can take the kernel.

flame 2018.3 setup:

download setup


2018_Jan03

Flame DoF

Made a setup to compare similar settings between the DepthOfField node and the 3DBlur node. (Also added a bokeh kernel to 3DBlur, since it’s a useful option.)

Basically, if you use the DoF node in the yellow box, it’s expression-linked to the nodes below it. Then you can compare.

FYI, something that peeps might not be aware of:
Camera near/far clipping planes are a great way to control your Z-Depth pass coming out of Action. Like many out there, I used to use CC to get into a manageable range, but I’ve found this is a better solve since you can see it clearly from the Action Top View.
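
In one line, that’s all the clip planes are doing: once near/far bracket the scene, depth drops into a predictable 0-1 range (a sketch of the principle, not Flame’s exact internals):

    def normalized_z(z, near, far):
        """Map raw scene depth into 0-1 using the camera clip planes."""
        return min(max((z - near) / (far - near), 0.0), 1.0)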

flame 2018.3 setup:

download setup


2017_Nov13

Flame ROI for large images

Sample setup for the ROI fix. Keeping the ROI resolution divisible by 4 seems to avoid softening. All that’s needed is to set the ROI crop and t-Click to match the orig size.
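
If you’re scripting the crop, snapping a dimension down to a multiple of 4 is trivial (a throwaway helper, not part of the setup):

    def roi_snap(px):
        """Round an ROI dimension down to the nearest multiple of 4."""
        return px - (px % 4)

    print(roi_snap(4099))  # -> 4096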

flame 2018.3 setup:

download setup


2017_Oct26

Flame note on OpenClip

Interesting note for Mac Flame guys creating openClips: the openClip creator app seems to use macOS keystrokes instead of Flame’s. CMD+C vs. Alt+C for copy, as an example. Also, the up/down arrow shortcut for naming doesn’t work. FYI.


2017_Oct26

Camera RnD

ARRI_lensIllum

Doing some camera RnD. Here’s a setup that uses data from the ARRI website to emulate lens illumination. OpenClip tracks contain the tStops, and Mux switches select lens, focal distance and sensor crop. Trying to decide whether the openClip image track is better used to sort the matrix by tStop or by lens.

flame 2018.3 setup:

download setup


2017_Oct23

Substance texturing in Flame

Last tests were with Mantra. These are from my Mac Flame. IBL and PBS are pretty cool tech.


2017_Oct23

Substance texture RnD in Houdini

substanceSkull
substanceSkull2