A hybrid production environment where keyframes, node graphs, expressions, and generative automation converge into a single property evaluation pipeline.
Manual editing and procedural generation share the same properties, the same timeline, the same render path. No translation layers.
Layer-based editing with viewport handles, Bezier keyframes, time remapping, masks, effects, and a multi-panel workspace that adapts to your thinking.
Node graphs, shape pipelines, field-based distribution, block automation, and expressions — all feeding the same property evaluation system.
Every animatable value is a multi-input computation — keyframes layered with modulators, expressions, and node outputs — evaluated per-frame into a final result.
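A minimal sketch of what "multi-input computation" means in practice, assuming a property that holds a keyframed base value plus a stack of modulator callables — the `Property` class and its members are illustrative, not this tool's actual API:

```python
import math

class Property:
    """Hypothetical animatable property: keyframe base + modulator offsets."""

    def __init__(self, keyframes):
        # keyframes: sorted list of (frame, value) pairs
        self.keyframes = keyframes
        self.modulators = []  # callables: frame -> additive offset

    def base(self, frame):
        # linear interpolation between the surrounding keyframes
        kfs = self.keyframes
        if frame <= kfs[0][0]:
            return kfs[0][1]
        if frame >= kfs[-1][0]:
            return kfs[-1][1]
        for (f0, v0), (f1, v1) in zip(kfs, kfs[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)
                return v0 + (v1 - v0) * t

    def evaluate(self, frame):
        # per-frame resolution: base value, then each modulator in order
        value = self.base(frame)
        for mod in self.modulators:
            value += mod(frame)
        return value

x = Property([(0, 0.0), (24, 100.0)])
x.modulators.append(lambda f: 5 * math.sin(f / 4))  # a simple "wiggle" layer
final = x.evaluate(12)  # keyframe base plus modulator offset
```

The point of the shape: keyframes never get baked into the modulated result, so manual and procedural inputs stay independently editable.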
Bezier, spring, bounce, elastic, overshoot. Per-dimension control with independent curves on each axis.
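Per-dimension control can be sketched with the standard "back" overshoot formula on one axis and a linear curve on the other — function names here are illustrative:

```python
def ease_out_back(t, s=1.70158):
    # classic overshoot ease: passes 1.0 before settling at t = 1
    t -= 1.0
    return t * t * ((s + 1) * t + s) + 1.0

def ease_linear(t):
    return t

def interp2d(p0, p1, t, ease_x, ease_y):
    # independent curves on each axis: x overshoots, y stays linear
    x = p0[0] + (p1[0] - p0[0]) * ease_x(t)
    y = p0[1] + (p1[1] - p0[1]) * ease_y(t)
    return x, y

pos = interp2d((0, 0), (100, 100), 0.8, ease_out_back, ease_linear)
```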
Wiggle, noise, pendulum, sequencer, envelope, random walk — stackable in any order with blendable weights.
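A sketch of stack order and blendable weights, assuming each modulator reads the running value and its output is blended in by weight — the modulator factories here are illustrative stand-ins:

```python
import math, random

def wiggle(freq, amp, seed=0):
    # sine-based wiggle with a seeded random phase
    rng = random.Random(seed)
    phase = rng.uniform(0, math.tau)
    return lambda t, v: v + amp * math.sin(t * freq + phase)

def envelope(attack):
    # scales the running value up from 0 over `attack` seconds
    return lambda t, v: v * min(t / attack, 1.0)

def evaluate(base, stack, t):
    v = base
    for weight, mod in stack:
        # blend the running value toward each modulator's output by weight
        v = v + weight * (mod(t, v) - v)
    return v

stack = [(1.0, wiggle(2.0, 10.0)), (0.5, envelope(1.0))]
```

Because each modulator sees the value produced by the ones before it, reordering the stack changes the result — which is why order is a first-class control.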
Reference other properties, read time and layer context, compute values with a lightweight scripting engine.
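A toy version of the idea, assuming expressions are small snippets evaluated against a scope containing `time` and other property values — the scope names are assumptions, and real sandboxing is elided:

```python
import math

def run_expression(src, time, props):
    # evaluate a snippet with time, math, and sibling properties in scope;
    # builtins are stripped as a minimal (not production-grade) guard
    scope = {"time": time, "math": math, **props}
    return eval(src, {"__builtins__": {}}, scope)

props = {"opacity": 80.0}
value = run_expression("opacity * 0.5 + 10 * math.sin(time)", 0.0, props)
```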
Pipe any node output into a layer property. DAG validation, topological sort, partial evaluation, copy-paste.
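DAG validation and topological sort are standard graph techniques; a sketch using Kahn's algorithm, where a cycle leaves nodes unprocessed and fails validation:

```python
from collections import deque

def topo_sort(nodes, edges):
    # edges: list of (src, dst), meaning dst depends on src's output
    indeg = {n: 0 for n in nodes}
    outs = {n: [] for n in nodes}
    for src, dst in edges:
        outs[src].append(dst)
        indeg[dst] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in outs[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        # some nodes never reached in-degree zero: the graph has a cycle
        raise ValueError("cycle detected: graph is not a DAG")
    return order
```

The returned order doubles as an evaluation schedule, and partial evaluation falls out naturally: re-run only the suffix downstream of a changed node.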
Circular, grid, Fibonacci, path-follow spatial rules — plus value staggers, palette sequences, and timing offsets.
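The Fibonacci rule is typically a phyllotaxis layout — golden-angle rotation with square-root radial growth. A sketch (parameter names illustrative):

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # ~137.5 degrees

def fibonacci_points(count, spacing=10.0):
    # instance i: angle steps by the golden angle, radius grows with sqrt(i),
    # giving the evenly packed sunflower-seed distribution
    points = []
    for i in range(count):
        r = spacing * math.sqrt(i)
        a = i * GOLDEN_ANGLE
        points.append((r * math.cos(a), r * math.sin(a)))
    return points
```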
Visual programming that executes per-frame. Loops, conditionals, layer ops — readable by humans and AI agents alike.
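One way to picture per-frame block execution: a declarative block list interpreted recursively. The block schema below is an illustrative assumption, chosen to show why the format stays readable to both humans and agents:

```python
def run_blocks(blocks, layers):
    # interpret a list of automation blocks against a mutable layer list
    for block in blocks:
        kind = block["type"]
        if kind == "repeat":
            for _ in range(block["count"]):
                run_blocks(block["body"], layers)
        elif kind == "create_layer":
            layers.append({"name": block["name"], "opacity": 100.0})
        elif kind == "if_layer_count_over":
            if len(layers) > block["threshold"]:
                run_blocks(block["body"], layers)
    return layers

program = [
    {"type": "repeat", "count": 3,
     "body": [{"type": "create_layer", "name": "dot"}]},
]
```

Because blocks are plain data rather than opaque code, an agent can emit, inspect, or rewrite a program without a parser.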
UI shell → connectors → core model → procedural layer → render backend. The model owns the truth. The views are thin.
Render queue, undo branches, auto-save, command palette, multi-format I/O, audio sync — the full surface of a shipping tool.
Layers become textured planes in 3D. Cameras, lights, materials — same property system, same evaluation.
Trim, merge, offset, repeat, warp, scatter — a dedicated DAG with instance-aware context propagation.
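As one concrete operator, trim on a polyline can be sketched as arc-length clipping — keep only the span between start and end fractions of total path length (helper names illustrative):

```python
import math

def path_lengths(points):
    # cumulative arc length along a polyline
    total = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        total.append(total[-1] + math.hypot(x1 - x0, y1 - y0))
    return total

def point_at(points, lengths, d):
    # point at arc-length d, linearly interpolated along its segment
    for i in range(1, len(lengths)):
        if d <= lengths[i]:
            seg = lengths[i] - lengths[i - 1]
            t = 0.0 if seg == 0 else (d - lengths[i - 1]) / seg
            (x0, y0), (x1, y1) = points[i - 1], points[i]
            return (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
    return points[-1]

def trim(points, start, end):
    # keep the sub-path between start/end fractions of total length
    lengths = path_lengths(points)
    total = lengths[-1]
    d0, d1 = start * total, end * total
    inner = [p for p, l in zip(points, lengths) if d0 < l < d1]
    return [point_at(points, lengths, d0)] + inner + [point_at(points, lengths, d1)]
```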
Full manipulation controller. Transform handles, path editing, gradient control, 3D gizmos, snap, measurements.
Waveform analysis, beat detection, volume envelopes. Bind audio data to any property for sound-driven motion.
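A volume envelope is typically RMS per frame-sized window; a sketch over a raw sample buffer, with real waveform decoding elided:

```python
import math

def volume_envelope(samples, sample_rate, fps):
    # one RMS value per video frame: the window covers sample_rate / fps samples
    window = max(1, sample_rate // fps)
    env = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        env.append(rms)
    return env
```

The resulting per-frame list slots straight into the property pipeline as a modulator source, which is all "sound-driven motion" requires.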
Natural language → tool calls. Build layers, wire graphs, set properties, create animations — not text, actions.
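"Actions, not text" usually means the model emits structured tool calls that the host dispatches to editor operations. A sketch of that dispatch layer — the tool name and call shape are illustrative assumptions:

```python
TOOLS = {}

def tool(name):
    # register a function as a callable tool by name
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("set_property")
def set_property(doc, layer, prop, value):
    # hypothetical editor action: write a property value on a layer
    doc.setdefault(layer, {})[prop] = value
    return doc

def dispatch(doc, call):
    # route a structured call {"name": ..., "args": {...}} to its tool
    return TOOLS[call["name"]](doc, **call["args"])

doc = dispatch({}, {"name": "set_property",
                    "args": {"layer": "title", "prop": "opacity", "value": 50}})
```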
Compute shaders, SDF rendering, 3D compositing when available. CPU fallback when not. No dead ends.
Be the first to work where every property is a pipeline and every workflow converges.