EncodeBox

A few years ago, a post-production company approached me with a quiet frustration.

They were finishing every project in high-end tools (Avid, Resolve, etc.) and exporting clean ProRes 422 HQ or DNxHR masters. Those masters looked perfect. The problem came when those files hit the studio’s existing enterprise transcoding system.

The “official” automated pipeline was built for traditional broadcast delivery. It was slow, rigid, and (most painfully) produced noticeably worse-looking H.264 files than the editors could achieve manually on their own machines. The output had ringing around edges, mosquito noise in shadows, and lost fine texture, even at relatively high bitrates.

They needed a new system that:
– Took production-grade masters straight from the edit bay
– Required zero manual tweaking
– Delivered a full bitrate ladder for streaming
– Looked visibly cleaner than the raw master (yes, cleaner)
– Achieved dramatically lower bitrates than anything they’d seen before

That’s when I built EncodeBox.

Why Off-the-Shelf Solutions Couldn’t Touch It

Most enterprise encoders at the time (and honestly still today) use generic filter chains and one-size-fits-all x264 presets. They don’t really “see” the image the way an editor or colorist does.

My background is in signal processing and imaging, so I approached it differently:
– Treat noise as a signal, not just something to obliterate
– Separate true detail from compression artifacts and sensor noise
– Sharpen only what the eye actually perceives as edges
– Preserve or synthesize natural-looking grain where it belongs
– Use psychovisual models that actually match modern perceptual research

The result was a preprocessing chain (built in VapourSynth) that made the source look better before a single bit of compression was applied. Then a heavily customized x264 build did the rest.
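As an illustration of what "content-aware" means here, the chain's behavior can be sketched as a mapping from measured source statistics to filter strengths: denoise only as hard as the noise warrants, sharpen in proportion to real edge content, and deband only on clean sources. This is a minimal stdlib-only Python sketch with made-up thresholds, not EncodeBox's actual filter code:

```python
def choose_filter_params(noise_sigma: float, edge_density: float) -> dict:
    """Map measured source statistics to pre-filter strengths.

    noise_sigma: estimated noise std-dev on a 0-255 scale (e.g. from a
    flat-region analysis pass). edge_density: fraction of pixels on
    perceptual edges. All thresholds here are illustrative.
    """
    # Denoise only as hard as the noise actually warrants, so real
    # texture is not scrubbed away along with sensor noise.
    if noise_sigma < 1.0:
        denoise = 0.0                       # clean source: leave grain alone
    elif noise_sigma < 3.0:
        denoise = 0.3 * noise_sigma
    else:
        denoise = min(1.5, 0.5 * noise_sigma)

    # Sharpen in proportion to edge content, backing off on noisy
    # sources to avoid amplifying whatever the denoiser left behind.
    sharpen = max(0.0, edge_density * 1.2 - denoise * 0.25)

    # Deband only where banding can actually show: on clean material.
    # Grainy sources are already dithered by their own grain.
    deband = noise_sigma < 0.8

    return {"denoise": round(denoise, 3),
            "sharpen": round(sharpen, 3),
            "deband": deband}
```

The real chain ran inside VapourSynth with wavelet/3D denoisers and QTGMC, but the decision logic follows the same shape: analyze first, then filter only what the analysis justifies.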

The Numbers That Made Everyone Stop Talking

Across their real-world library:

– 60% average bitrate reduction compared to their previous enterprise encoder
– Higher VMAF scores (typically 96–98), measured against the lightly filtered source
– No visible compression artifacts (ringing, banding, blocking) even under frame-by-frame scrutiny
– Fine texture (skin, fabrics, hair, film-like grain) preserved or enhanced
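Scores like these can be reproduced with FFmpeg's libvmaf filter; the key detail is that the reference is the filtered source, matching how the numbers above were measured. File names here are placeholders, not paths from the actual pipeline:

```shell
# Distorted input first, reference second (current FFmpeg libvmaf convention).
ffmpeg -i deliverable_1080p.mp4 -i filtered_source.mov \
       -lavfi libvmaf -f null -
```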

One representative example they still reference:
– Source master: ProRes 422 HQ @ ~140 Mbps
– EncodeBox 1080p deliverable: ~9.2 Mbps average
– Visibly cleaner shadows, sharper text, no ringing, better color gradation
– VMAF 97.6
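The arithmetic behind that example is worth spelling out: the 60% figure is relative to the previous encoder's output, while against the ProRes master itself the reduction is far larger. A quick check, assuming this title tracked the 60% library average:

```python
master_mbps = 140.0      # ProRes 422 HQ source bitrate
deliverable_mbps = 9.2   # EncodeBox 1080p deliverable

# Reduction relative to the master itself.
ratio = master_mbps / deliverable_mbps            # ~15x smaller
saving_vs_master = 1 - deliverable_mbps / master_mbps

# If this title matched the 60% average reduction, the previous
# enterprise encoder's comparable deliverable would have been roughly:
implied_old_mbps = deliverable_mbps / (1 - 0.60)

print(f"{ratio:.1f}x smaller than the master "
      f"({saving_vs_master:.1%} saved)")
print(f"implied previous deliverable: ~{implied_old_mbps:.0f} Mbps")
```

In other words, a roughly 15x reduction from the mezzanine master, and about 23 Mbps down to 9.2 Mbps against the old pipeline.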

The head of post literally asked, “How is this possible? You made it smaller and better.”

How EncodeBox Actually Worked

1. Drop master into watched folder
2. Automatic analysis (resolution, cadence, interlacing, noise profile, dynamic range, scene complexity)
3. Intelligent, content-aware pre-filtering (my own signal-processing filters):
– Adaptive wavelet/3D denoising
– Detail-preserving sharpening
– Debanding only where banding exists
– QTGMC deinterlacing when needed
4. Per-title encoding with custom x264 + patched rate control
5. Automated quality gate (VMAF/SSIM checked against filtered source → re-encode with higher bitrate if needed)
6. Multi-rendition ladder + metadata + rsync upload to delivery server
7. Simple web dashboard for monitoring
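The quality gate in step 5 can be sketched as a simple retry loop: encode, score against the filtered source, and bump the bitrate until the target clears. The function names, thresholds, and the toy rate-quality curve below are illustrative stand-ins, not EncodeBox internals:

```python
from typing import Callable

def quality_gated_encode(encode: Callable[[float], str],
                         score_vmaf: Callable[[str], float],
                         start_kbps: float,
                         vmaf_target: float = 95.0,
                         bump: float = 1.25,
                         max_attempts: int = 4):
    """Encode at increasing bitrates until the VMAF score (measured
    against the filtered source, as in step 5) clears the target."""
    kbps = start_kbps
    for attempt in range(max_attempts):
        path = encode(kbps)          # run the encode at this bitrate
        vmaf = score_vmaf(path)      # score vs. the filtered source
        if vmaf >= vmaf_target:
            return path, kbps, vmaf  # passed the gate
        if attempt < max_attempts - 1:
            kbps *= bump             # give the re-encode more bits
    return path, kbps, vmaf          # best effort after all retries

# Stand-in encode/score functions to show the control flow; the real
# ones would shell out to x264 and libvmaf.
def fake_encode(kbps: float) -> str:
    return f"out_{kbps:.0f}.mp4"

def fake_score(path: str) -> float:
    kbps = float(path.split("_")[1].split(".mp4")[0])
    return min(99.0, 80.0 + kbps / 200.0)  # toy rate-quality curve
```

Starting `quality_gated_encode(fake_encode, fake_score, 2000.0)` walks 2000 → 2500 → 3125 kbps before the toy curve clears the 95.0 target; the real gate also checked SSIM and ran per rendition in the ladder.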

The whole thing ran 24/7 on a single Linux workstation they already owned.

It wasn’t built to be a product. It was built to solve a real pain point using actual image science instead of marketing presets.

And it proved something I still believe: with the right signal-processing foundation, you can make files dramatically smaller and measurably better-looking, fully automatically.

Sometimes the best tool isn’t the most expensive one. It’s the one someone wrote because the existing million-dollar system just wasn’t good enough.