
23rd August 2011

Obtaining pixel data from a UIImage


Bridging a UIImage into other APIs such as OpenCV can be painful, however there is one format that every image processing library understands, and that is "raw", where each pixel is represented as an unsigned byte in an array. Unless you are dealing with video formats, which can use YUV, you will either be using an array where each index holds a grayscale value between 0 and 255 for some point in the image, or an array where every group of 3 or 4 indices holds the red, green, blue and alpha values for a certain point. To make the process of extracting pixels from UIImages easy, I have made a category called UIImage+Pixels that attaches methods such as grayscalePixels, rgbPixels and rgbaPixels to the UIImage class:
UIImage+Pixels.h
UIImage+Pixels.m
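The two files above were provided as downloads. In case they are not available, here is a minimal sketch of what such a category can look like, built on Core Graphics. The exact pixel formats are assumptions, and for brevity it ignores the UIImage's imageOrientation and only covers grayscalePixels and rgbaPixels:

// UIImage+Pixels.h
#import <UIKit/UIKit.h>

@interface UIImage (Pixels)
// Both methods return buffers allocated with calloc() that the caller must free()
- (unsigned char *)grayscalePixels;
- (unsigned char *)rgbaPixels;
@end

// UIImage+Pixels.m
#import "UIImage+Pixels.h"

@implementation UIImage (Pixels)

- (unsigned char *)grayscalePixels
{
    CGImageRef cgImage = self.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    // One byte per pixel
    unsigned char *pixels = calloc(width * height, sizeof(unsigned char));
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width, gray, kCGImageAlphaNone);
    CGColorSpaceRelease(gray);
    // Drawing into an 8-bit gray context converts the image to grayscale for us
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    return pixels;
}

- (unsigned char *)rgbaPixels
{
    CGImageRef cgImage = self.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    // Four bytes per pixel: red, green, blue, alpha
    unsigned char *pixels = calloc(width * height * 4, sizeof(unsigned char));
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4, rgb, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(rgb);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    return pixels;
}

@end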
To cycle through the pixels in grayscale you could do something like this:

unsigned char *pixels = [image grayscalePixels];
for (int i = 0; i < (int)(image.size.width * image.size.height); i++) {
    printf("0x%X ", pixels[i]);
}

For a black image this would print a bunch of 0x0's, and for a white image a bunch of 0xFF's. This is because 0x0 represents no light, and 0xFF (or 255) represents the maximum amount of light a pixel can hold, otherwise known as white.
To cycle through the RGBA pixels you could use something like this:

unsigned char *pixels = [image rgbaPixels];
for (int i = 0; i < (int)(image.size.width * image.size.height) * 4; i += 4) {
    printf("r = 0x%X g = 0x%X b = 0x%X a = 0x%X\n", pixels[i], pixels[i + 1], pixels[i + 2], pixels[i + 3]);
}

On a red image you would see the following in your console:
r = 0xFF g = 0x0 b = 0x0 a = 0xFF
and for a blue image you would see:
r = 0x0 g = 0x0 b = 0xFF a = 0xFF
One pixel is represented by a red, green, blue and alpha value, and together they can express the vast majority of the colours you will ever need. For those of you who are confused about what alpha is, it is simply the transparency value; I find it best to ignore it in most image processing tasks.
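As an illustration (this snippet is my own sketch, not part of the category), you can collapse an RGBA pixel down to a single intensity value by averaging the colour channels and skipping the alpha byte entirely:

// pixels and i are assumed to come from the rgbaPixels loop above
unsigned char intensity = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3;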
Now let's say you wanted to access the pixel at a certain coordinate in the image. Depending on what kind of pixel data you have (grayscale or RGBA), this is done differently. In grayscale it is simply a matter of multiplying the y coordinate by the image's width and then adding the x coordinate, because the pixel data is laid out row by row: the first index is the coordinate (0,0), and the second index is (1,0):

int x = 10;
int y = 2;
// Index = (row * width) + column
unsigned char pixel = pixels[(y * ((int)image.size.width)) + x];

For RGBA it is the same principle, you simply have to account for the fact that RGBA uses 4 bytes rather than 1 to represent a pixel:

int x = 10;
int y = 2;
// This index points at the red component; green, blue and alpha follow at +1, +2 and +3
unsigned char pixel = pixels[(y * ((int)image.size.width) * 4) + (x * 4)];

Finally, once you are done using the data these methods have given you, be sure to clean it up with free(), otherwise it will just stick around hogging memory (this is what we call a memory leak).
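For example, assuming pixels holds the buffer returned by one of the category methods:

free(pixels);
pixels = NULL; // guard against accidentally using the freed buffer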
Now, as useful and straightforward as working with pixel data is, people developing on iOS should not use it as a shortcut to perform image tasks that are already well supported by the platform's APIs, such as cropping or overlaying lines. The reason I used an array of unsigned chars to store this data is that it is much faster than an NSArray. Since I primarily made this category for image processing, I would not recommend it for trivial tasks, only for ones that require lots of processing or speed.
