Is there an algorithm I can use to analyse the accuracy of an image representation? Do people such as compression algorithm designers have an objective way of comparing two image representations?
Say I'm trying to display a circle as a raster image; the higher the resolution, the closer the image comes to a perfect circle, so the representations clearly become more accurate as the resolution increases.
Now, how can I measure how close a particular representation of the circle is to the true circle? One method I came up with was to measure the area of the regions where the high-res and low-res images disagree (effectively the XOR of the two silhouettes).
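In code, that metric might look like this (a minimal sketch, assuming both renderings are boolean NumPy masks and that the low-resolution one has been upsampled nearest-neighbour to the reference grid; the circle geometry is an arbitrary choice for illustration):

```python
import numpy as np

def xor_mismatch_area(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Fraction of pixels on which two binary silhouettes disagree."""
    if reference.shape != candidate.shape:
        raise ValueError("masks must be compared at a common resolution")
    mismatch = np.logical_xor(reference, candidate)
    return float(mismatch.sum() / mismatch.size)

def circle_mask(size: int, blocks: int) -> np.ndarray:
    """Rasterise a centred disc of radius 0.5 on a blocks x blocks grid,
    then upsample (nearest-neighbour) to size x size for comparison."""
    coords = (np.arange(blocks) + 0.5) / blocks - 0.5   # cell centres in [-0.5, 0.5]
    yy, xx = np.meshgrid(coords, coords, indexing="ij")
    coarse = (xx ** 2 + yy ** 2) <= 0.25
    scale = size // blocks
    return np.repeat(np.repeat(coarse, scale, axis=0), scale, axis=1)

high = circle_mask(256, 256)  # near-perfect reference rendering
low = circle_mask(256, 16)    # blocky 16x16 rendering, upsampled
print(xor_mismatch_area(high, low))  # mismatched area as a fraction of the image
```

Dividing by the total pixel count makes the score resolution-independent, so renderings at different block sizes can be compared directly.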
But how would I apply this to a non-silhouette image such as a photo or an anti-aliased image?
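I suppose the greyscale analogue of the XOR area would be to sum per-pixel differences instead, something like the mean squared error (a rough sketch, assuming 8-bit greyscale NumPy arrays; for 0/1 silhouettes this reduces to exactly the XOR mismatch fraction above):

```python
import numpy as np

def mse(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Mean squared per-pixel difference; for 0/1 masks this equals
    the XOR mismatch fraction."""
    diff = reference.astype(np.float64) - candidate.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, candidate: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (the form usually reported in the
    compression literature); higher means a closer match."""
    err = mse(reference, candidate)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```

But is that actually what practitioners do?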

I assume that you are not thinking of mosaic images, which are easy to detect from the pattern of repeated values.
For a natural image, the question does not really make sense: the image is already as accurate as it can be, since it is produced by area sampling (and in any case you have no ground truth to compare against).
This is your anti-aliased image: each pixel's grey level is simply the fraction of its area covered by the shape.
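A minimal sketch of that area sampling, approximated by supersampling (the resolution, sample count, and radius here are arbitrary choices):

```python
import numpy as np

def antialiased_circle(size: int = 64, samples: int = 8) -> np.ndarray:
    """Approximate area sampling: each pixel's grey level is the fraction
    of its sub-samples that fall inside a centred disc of radius 0.4."""
    n = size * samples
    coords = (np.arange(n) + 0.5) / n - 0.5          # sub-sample centres in [-0.5, 0.5]
    yy, xx = np.meshgrid(coords, coords, indexing="ij")
    inside = (xx ** 2 + yy ** 2) <= 0.4 ** 2
    # Average each samples x samples block down to one pixel coverage value.
    return inside.reshape(size, samples, size, samples).mean(axis=(1, 3))

img = antialiased_circle()
print(img.min(), img.max())  # 0.0 outside, 1.0 deep inside, fractions at the edge
```

Pixels straddling the boundary get fractional grey levels, and that coverage fraction *is* the ground truth for that pixel at that resolution, which is why there is nothing more accurate to compare it against.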