LeastAverageImage
Andrew Eckel, November 14, 2020, last updated March 18, 2021


In July 2019, I went to a park across a bay from Logan Airport with a camera and tripod. With the camera in a stationary position, I snapped photos every time a plane went overhead.

These would be the test images for a program I was writing called LeastAverageImage.


I took 408 photos. Here are a few of them.


I started by calculating the average.

Averaging images is a common technique, super useful to photographers and artists. In the average of the photos I took, the details that changed from one image to the next got "averaged out", leaving only an empty sky, the land, and a waveless bay.
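
As a minimal sketch of the averaging step (this is an illustration in Python with NumPy and Pillow, not the actual LeastAverageImage code, and the filenames are made up):

    import numpy as np
    from PIL import Image

    # Hypothetical filenames -- any same-sized RGB photos will do.
    paths = ["plane_%03d.jpg" % i for i in range(1, 409)]

    # Stack all the photos into one (num_photos, height, width, 3) array.
    stack = np.stack([np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
                      for p in paths])

    # The average image is the per-pixel, per-channel mean across the photos.
    average = stack.mean(axis=0)
    Image.fromarray(average.round().astype(np.uint8)).save("average.png")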


What I really wanted to produce was the opposite of an average.

At each discrete location in the image (each pixel location), my program would consider all of the colors that existed at that location in all of the photos and choose the one that was the furthest from the average.
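
Continuing the same sketch (again, an illustration rather than the real program's code), the selection step might look like this, using plain Euclidean distance in RGB space as the measure of "furthest" for now -- more on "difference" below:

    # How far is each photo's color from the average, at each pixel?
    distances = np.sqrt(((stack - average) ** 2).sum(axis=-1))  # (num_photos, H, W)

    # At each pixel, find the photo whose color is furthest from the average...
    winners = distances.argmax(axis=0)  # (H, W)

    # ...and take that photo's color at that pixel.
    rows, cols = np.indices(winners.shape)
    least_average = stack[winners, rows, cols]  # (H, W, 3)
    Image.fromarray(least_average.round().astype(np.uint8)).save("least_average.png")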

Let's look at a specific location near the center of the image. In all of the photos except one, the color at this location is sky blue. The one outlier -- the "least average" color at this location -- is black, from the underside of a plane.


In locations where planes never appear, the least average color isn't very different from the average color.

For example, the green of this tree from the lower part of the image changed only slightly with the fading sunlight. The average color there is some shade of green, and the least average color is also some shade of green.


This is where we get technical.

If you don't like thinking, scroll down to all the airplanes.

So, "least average" means "most different" from the average. But how do you measure the "difference" of one color from another?

Black should be considered more different from white than from gray, right? But what about red or blue? Colors are represented in code by a trio of values -- Red, Green, Blue -- RGB, each from 0 to 255. Pure red is (255, 0, 0). Orange is (255, 165, 0). White is (255, 255, 255) and black is (0, 0, 0). I thought up and implemented several different definitions of "difference".

Given two colors, C1 and C2, with values (R1, G1, B1) and (R2, G2, B2) respectively, the most basic definition ("regular") is:

    difference(C1, C2) = sqrt((R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2)

Why did I square the three differences and then take the root, instead of just summing the absolute values? I don't know. Anyway, that "regular" definition works great. Another notable one I call "color ratio":

    difference(C1, C2) = |R1/G1 - R2/G2| + |G1/B1 - G2/B2| + 2 * |R1/B1 - R2/B2|

(I'm leaving out a little extra calculation that's needed to avoid the possibility of dividing by zero.)

The green channel of a color has a greater effect on the perceived difference than the red or the blue. So why did I multiply the only part of the equation that doesn't involve green by 2? I tried my color ratio equation a bunch of ways, and this way gave the best result. Simple as that.
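
In code, one reading of that definition might look like this sketch (the epsilon here is a stand-in for the divide-by-zero guard mentioned above; the real program's guard may differ):

    def color_ratio_difference(c1, c2, eps=1e-6):
        # c1 and c2 are (R, G, B) triples of floats.
        # eps stands in for the divide-by-zero guard; the R/B term,
        # the only one with no green in it, gets the 2x weight.
        r1, g1, b1 = c1
        r2, g2, b2 = c2
        return (abs(r1 / (g1 + eps) - r2 / (g2 + eps))
                + abs(g1 / (b1 + eps) - g2 / (b2 + eps))
                + 2 * abs(r1 / (b1 + eps) - r2 / (b2 + eps)))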

With the test images, it didn't much matter which difference definition I used; they all chose basically the same stuff. The color ratio version turned out to be the most versatile for other types of input, which I'll get to in a moment.

Now I'm going to skip a whole bunch of stuff about how to deal with the "runners-up" -- the colors that are almost as different as the most different -- whether to merge them in and how much weight to give them, etc.

Here's the result of running my program, LeastAverageImage, on the airplane photos:


Aaaaaaa!!!!

By selecting the least average colors at each location, the algorithm produced an image of a very crowded sky.

Having refined the algorithm on a set of photos for which I knew exactly what I wanted (a sky full of planes), it was finally time to unleash LeastAverageImage on input for which I had no idea what to expect.

I gathered up every photo I had taken over the past decade and numbered them all, 1 to 10,300. That includes photos of friends, family, nature, art -- everything went in.


With such a large number of photos, the average is just a gray rectangle.


The least average image, on the other hand, has a lot going on. I see some flowers, some block letters, a spiral, and some streaks of light.

(For those who didn't skip the mathy part: I used the color ratio difference definition for this.)


For my brand-new book of lyrics and artwork, I started with that same collection of 10,300 photos, but instead of running LeastAverageImage on the whole collection at once, I wrote a second program to choose random combinations of photos from the collection and use those as the input.

I had the program do that 5,000 times, creating 5,000 images.
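
That second program amounts to something like this sketch, where run_least_average is a hypothetical helper wrapping the steps shown earlier, and the size of each combination is a made-up number:

    import random

    NUM_OUTPUTS = 5000
    PHOTOS_PER_COMBO = 50   # arbitrary -- the size of each combination is a guess

    for i in range(NUM_OUTPUTS):
        combo = random.sample(all_photo_paths, PHOTOS_PER_COMBO)
        image = run_least_average(combo)   # hypothetical helper
        image.save("combo_%04d.png" % i)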


I selected my favorites to include in the book. Here are a few of them.







The book is called Fignerales and it's in the Andrew Eckel Store, along with a few larger art prints.

The "Fignerales, Part 1" music video was also made using LeastAverageImage. The input for each frame of the video is one frame from a video of me performing the song, plus 50 randomly chosen photos from the collection of 10,300. Every half a second, one of those 50 photos is removed and another (also randomly chosen) is added.


I have a few other ideas of ways to apply LeastAverageImage to processing video, but if I write about them here and then never do them, I'll look like a jerk.

The LeastAverageImage program is now open source and free for anyone to use!
Visit the LeastAverageImage GitHub page.

More interesting results from my program are here:
LeastAverageImage: Inflatable Boat(s) and More [Imgur]
LeastAverageImage: "Treehouse of Horror V" [Imgur]

More cool images to come, maybe.


[Andrew Eckel Store]

[Andrew Eckel main page]