Back in early 2020, before the words “diffusion” and “latent space” were on every hacker’s lips, I got obsessed with one question: Why do all upscalers turn crisp pixel art and old photos into oily plastic?
So I disappeared for six months and came out with Newton, my personal super-resolution tool. GitHub repo is private, but here are the receipts.
The 2020 Origin Story
Mid-pandemic. I was working to restore an old film of my father’s from the early 90s, and every single AI upscaler at the time (ESRGAN, EnhanceNet, whatever Topaz was calling itself) either melted faces or invented creepy teeth. I wanted something that respected edges, film grain, and the fact that real detail often repeats at different scales — without hallucinating extra fingers.
Newton was born on a 2010 MacBook Pro and trained on Google Colab.
The total dataset was under 12,000 images, but every single one was curated by hand. Quality over quantity, 2020 style.
What Makes Newton Different
Most modern upscalers chase perceptual metrics on pretty landscapes. Newton was trained to never, ever break four things:
- Straight lines stay straight (no wobbly telephone wires)
- Film grain and sensor noise stay stochastic, not blotchy
- 1-bit art stays 1-bit clean at any scale
- Text and UI elements remain readable even at 16×
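Newton's actual loss terms aren't public, but the 1-bit constraint is easy to illustrate in isolation: nearest-neighbour resampling preserves binary pixel values exactly, while anything that interpolates (bilinear, bicubic, most learned upsamplers) introduces intermediate greys. A minimal sketch with NumPy (all function names hypothetical, not Newton's code):

```python
import numpy as np

def binary_purity(img: np.ndarray) -> float:
    """Fraction of pixels that are exactly 0 or 1 (hypothetical metric)."""
    return float(np.mean((img == 0) | (img == 1)))

def upscale_nn(img: np.ndarray, scale: int) -> np.ndarray:
    """Nearest-neighbour upscale via a Kronecker product: every source
    pixel becomes a scale x scale block, so binary values survive exactly."""
    return np.kron(img, np.ones((scale, scale), dtype=img.dtype))

def upscale_bilinear(img: np.ndarray, scale: int) -> np.ndarray:
    """Naive bilinear upscale -- blends neighbours, creating grey values."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    return a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx \
         + c * wy * (1 - wx) + d * wy * wx

art = np.array([[0., 1.], [1., 0.]])  # tiny checkerboard "1-bit" sprite
nn = upscale_nn(art, 8)        # purity stays 1.0
bl = upscale_bilinear(art, 8)  # purity drops below 1.0
```

A training-time version of this idea would penalize the model whenever `binary_purity` drops on sources detected as 1-bit, rather than post-processing the output.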
The architecture is ancient by today’s standards, but the constraints I baked in still beat most public diffusion-based upscalers when the source has any geometric structure.
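The "straight lines stay straight" constraint can likewise be scored without any network at all: trace an edge down the image, take second finite differences of its horizontal position, and penalize curvature. A straight (or uniformly slanted) line scores zero; a wobbly telephone wire does not. A hedged sketch of that scoring idea (names hypothetical, not Newton's implementation):

```python
import numpy as np

def straightness_penalty(edge_xs: np.ndarray) -> float:
    """Mean squared second difference of an edge's x-position per row.
    Zero for any straight line (constant slope), positive for wobble."""
    d2 = np.diff(edge_xs.astype(float), n=2)
    return float(np.mean(d2 ** 2))

# A perfectly straight diagonal edge: x advances by 1 each row.
straight = np.arange(32)

# The same edge with per-row jitter, as an over-eager upscaler might emit.
rng = np.random.default_rng(0)
wobbly = straight + rng.normal(0.0, 0.5, size=32)
```

Used as an auxiliary loss over detected edge traces, a term like this pushes the upscaler toward outputs where lines keep their slope instead of acquiring per-row jitter.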
Maybe One Day…
I’ve been tempted to clean it up and drop a compiled binary with a simple GUI.
Until then, Newton remains the 2020 side project that accidentally aged like fine wine while the rest of the world chased ever-larger Stable Diffusion checkpoints.