Computer Vision Tuft Detection for Aerodynamic Video Analysis

I led an R&D engagement to determine whether computer vision could automate tuft-test video interpretation and produce repeatable aerodynamic insights without manual review.

To prove feasibility under real-world conditions (small, thin tufts; variable lighting/backgrounds), I delivered a working prototype combining a baseline CV approach with an ML segmentation model, supported by a scalable synthetic-data pipeline to reduce labeling cost and accelerate iteration.
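The synthetic-data idea can be illustrated with a minimal sketch: render thin bright line segments ("tufts") at random positions and orientations over a noisy background, emitting the image together with its ground-truth mask so no manual labeling is needed. This is a hypothetical toy generator, not the actual pipeline from the engagement; all function and parameter names here are illustrative.

```python
import numpy as np

def make_synthetic_tuft_image(size=64, n_tufts=3, rng=None):
    """Render thin line segments ("tufts") on a noisy background.
    Returns the grayscale image and its ground-truth binary mask,
    giving free pixel-perfect labels for segmentation training."""
    rng = rng or np.random.default_rng(0)
    img = rng.uniform(0.2, 0.5, (size, size))      # noisy background
    mask = np.zeros((size, size), dtype=np.uint8)  # ground-truth labels
    for _ in range(n_tufts):
        x0, y0 = rng.integers(5, size - 5, 2)      # random anchor point
        angle = rng.uniform(0, np.pi)              # random orientation
        length = int(rng.integers(8, 16))
        for t in np.linspace(0, length, length * 2):
            x = int(x0 + t * np.cos(angle))
            y = int(y0 + t * np.sin(angle))
            if 0 <= x < size and 0 <= y < size:
                img[y, x] = 0.9                    # bright tuft pixel
                mask[y, x] = 1
    return img, mask

img, mask = make_synthetic_tuft_image()
```

A production generator would add domain randomization (lighting, backgrounds, camera blur) to close the synthetic-to-real gap, but the image/mask pairing shown here is the core of the labeling-cost saving.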

Example of CV-based tuft analysis

  1. Deliverables: Research report, feasibility assessment, synthetic-dataset pipeline concept plus an initial dataset, baseline CV approach, prototype U-Net segmentation model, and example results on real images.

  2. Value Proposition: Validated a credible path to a product workflow: Ingest tufting videos → segment tufts → extract per-tuft direction/turbulence metrics → output heatmap overlays, plots over time, and A/B comparisons between vehicle setups.

  3. Commercial Output: Established technical viability and a concrete roadmap (data scaling, per-tuft feature extraction, video pipeline + visualization, and speed optimization for near-real-time processing).

  4. Core Achievement: Demonstrated that a compact (<10MB) segmentation model trained primarily on synthetic data can detect tufts on real vehicles across multiple scales/environments, with a clear path to full per-tuft aerodynamic analytics.
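The per-tuft direction metric in the workflow above can be sketched in a few lines: given a binary segmentation mask for one tuft, PCA on its pixel coordinates yields the dominant orientation. This is an illustrative sketch under that assumption, not the engagement's actual feature-extraction code; the function name is hypothetical.

```python
import numpy as np

def tuft_direction(mask):
    """Estimate a tuft's dominant orientation (degrees, in [0, 180))
    from a binary mask via PCA on its foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                 # center the point cloud
    cov = pts.T @ pts                       # 2x2 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)] # principal axis
    return np.degrees(np.arctan2(vy, vx)) % 180.0

# A horizontal streak of pixels should read ~0 degrees.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[5, 2:9] = 1
angle = tuft_direction(mask)
```

Aggregating this angle per tuft per frame is what would feed the heatmap overlays, time-series plots, and A/B comparisons; frame-to-frame angle variance would serve as a simple turbulence proxy.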
