Image scaling is a computer graphics process that increases or decreases the size of a digital image. An image can be scaled explicitly with an image viewer or editing software, or it can be scaled automatically by a program to fit it into a differently sized area. Reducing an image, as is done to create thumbnail pictures, can use several methods but largely relies on a form of sampling called downsampling, which discards or combines pixels while preserving as much of the original quality as possible. Increasing the size of an image is more complex, because the number of pixels required to fill the larger area is greater than the number of pixels in the original image. When image scaling is used to enlarge an image, one of several algorithms is used to approximate the color of the additional pixels in the larger image.
There are three main types of algorithms that can be used in image scaling to increase the size of an image. The simplest version takes each original pixel in the source image and copies it to its corresponding position in the larger image. This leaves gaps between the pixels in the larger image, which are filled by assigning each empty pixel the color of the nearest source pixel. This, in effect, multiplies an image and its data into a larger area. While this method, called nearest-neighbor interpolation, is effective at preventing data loss, the resulting quality after image scaling usually suffers, because the enlarged blocks of individual pixels become clearly visible.
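The nearest-neighbor approach can be sketched in a few lines of plain Python. This is an illustrative, minimal implementation operating on a grayscale image stored as a list of rows; the function name and structure are assumptions for this example, not the API of any particular library (real libraries such as Pillow expose this through resize filters instead).

```python
def nearest_neighbor_scale(src, new_w, new_h):
    """Scale a 2D grid of pixel values (src[row][col]) to new_w x new_h.

    Each destination pixel takes the value of the closest source pixel,
    which is what produces the characteristic blocky enlargement.
    """
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(new_h):
        # Map the destination coordinate back into the source grid.
        sy = min(int(y * src_h / new_h), src_h - 1)
        row = []
        for x in range(new_w):
            sx = min(int(x * src_w / new_w), src_w - 1)
            row.append(src[sy][sx])
        out.append(row)
    return out

# Doubling a 2x2 image duplicates each pixel into a 2x2 block.
tiny = [[10, 20],
        [30, 40]]
big = nearest_neighbor_scale(tiny, 4, 4)
```

Because every output pixel is a verbatim copy of some input pixel, no new colors are introduced, which is why the method preserves the original data but magnifies its blockiness.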
Other image scaling algorithms work by filling in the empty spaces in an enlarged image with pixels whose color is determined by the colors of the surrounding pixels. These algorithms, called bilinear interpolation and bicubic interpolation, compute a weighted average of the source pixels around a given location (the four nearest pixels for bilinear, sixteen for bicubic) and fill the empty spaces in the larger image with the calculated average. While the results are smoother than nearest-neighbor image scaling, images that are scaled too large can become blurry and full of indistinct blocks of color.
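The bilinear case can be illustrated with another small sketch, again on a grayscale list-of-rows image with illustrative names chosen for this example. Each output pixel is a weighted average of the four source pixels that surround its back-mapped position.

```python
def bilinear_scale(src, new_w, new_h):
    """Bilinear upscaling sketch for a grayscale image src[row][col]."""
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(new_h):
        # Fractional source coordinate for this destination row.
        fy = y * (src_h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(fy)
        y1 = min(y0 + 1, src_h - 1)
        wy = fy - y0
        row = []
        for x in range(new_w):
            fx = x * (src_w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(fx)
            x1 = min(x0 + 1, src_w - 1)
            wx = fx - x0
            # Weighted average of the four surrounding source pixels.
            top = src[y0][x0] * (1 - wx) + src[y0][x1] * wx
            bot = src[y1][x0] * (1 - wx) + src[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out
```

Scaling a two-pixel-wide black-to-white gradient produces an intermediate gray between the originals, which is exactly the smoothing (and, at large factors, the blurring) described above. Bicubic interpolation follows the same pattern but fits a cubic curve through a 4x4 neighborhood instead of a linear blend of 2x2.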
A third type of image scaling algorithm uses a form of pattern recognition to identify the different regions of an image being enlarged, and then attempts to reconstruct the missing pixels to match those patterns. This method can yield good results, but it also can introduce visual artifacts into an image the more times the algorithm is applied. Scaling images in this way is potentially computationally expensive for full-color photographic images and also can require more memory than other types of scaling.
Image scaling also can be used to reduce the size of a digital image. The smaller image will have fewer pixels than the source image, so most algorithms provide fairly good results. Algorithms that reduce the size of an image are similar to those used to increase it, although the process is performed in reverse. The pixels in the source image are averaged over an area and combined into a single pixel that is placed at the appropriate location in the new, smaller image.
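The area-averaging reduction described above, often called box filtering, can also be sketched briefly. This minimal example assumes an integer reduction factor that divides the image dimensions evenly; the function name is illustrative only.

```python
def box_downscale(src, factor):
    """Reduce a grayscale image src[row][col] by averaging each
    factor x factor block of source pixels into one output pixel.

    Assumes both dimensions divide evenly by `factor`; a sketch,
    not a production implementation.
    """
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(0, src_h, factor):
        row = []
        for x in range(0, src_w, factor):
            block = [src[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# Averaging a 4x4 image down to 2x2: each output pixel is the
# mean of one 2x2 block of the source.
img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [2, 2, 6, 6],
       [2, 2, 6, 6]]
small = box_downscale(img, 2)
```

Because every source pixel contributes to exactly one average, detail is discarded gracefully, which is why reduction tends to look good with almost any of the algorithms discussed above.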