Copyright 2004 by M. Uli Kusterer
Comments on article "blog-removing-transparency-from-nsimage" at Zathras.de (http://www.zathras.de/angelweb/blog-removing-transparency-from-nsimage.htm)

Comment 2 by Ken Ferry (http://www.zathras.de/angelweb/blog-removing-transparency-from-nsimage.htm#comment2):

Hey Uli!

The recommended way to do something like this is to make a new NSBitmapImageRep in a _known_pixel_format_, draw the image into it, then examine the data of that bitmap. This is fast because you aren't churning through making objects or indirecting through pointers or anything like that. This is safe because the drawing machinery is basically canonicalizing your abstract image into a single format you understand - no need to deal specially with arbitrary depths or arbitrary anything.
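A minimal sketch of that approach, assuming a 32-bit non-planar RGBA format (the function name and exact parameter choices here are illustrative, not from the comment):

```objc
#import <Cocoa/Cocoa.h>

// Sketch: render an NSImage into an NSBitmapImageRep whose pixel format
// we control, so -bitmapData can be examined directly afterwards.
NSBitmapImageRep *BitmapRepFromImage(NSImage *image)
{
    NSSize size = [image size];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes: NULL
                      pixelsWide: (NSInteger)size.width
                      pixelsHigh: (NSInteger)size.height
                   bitsPerSample: 8
                 samplesPerPixel: 4
                        hasAlpha: YES
                        isPlanar: NO
                  colorSpaceName: NSCalibratedRGBColorSpace
                     bytesPerRow: 0   // let AppKit pick a row stride
                    bitsPerPixel: 32];

    // Let the drawing machinery canonicalize the image into this format.
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithBitmapImageRep: rep]];
    [image drawInRect: NSMakeRect(0, 0, size.width, size.height)
             fromRect: NSZeroRect
            operation: NSCompositeCopy
             fraction: 1.0];
    [NSGraphicsContext restoreGraphicsState];

    // [rep bitmapData] now points at pixels in a known RGBA layout.
    return [rep autorelease];
}
```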

There's a discussion in the AppKit release notes under "NSBitmapImageRep: CoreGraphics impedance matching and performance notes".

"So, to sum up:
(1) Drawing is fast. Playing with pixels is not.
(2) If you think you need to play with pixels, (a) consider if there's a way to do it with drawing or (b) look into CoreImage.
(3) If you still want to get at the pixels, draw into a bitmap whose format you know and look at those pixels."

This is case 3.
Comment 1 by Peter Hosey (http://www.zathras.de/angelweb/blog-removing-transparency-from-nsimage.htm#comment1):

Peter Hosey writes:
> I tried creating an NSBitmapImageRep with a fixed depth and examining its pixels using -getPixel:atX:y:, but that didn't seem much faster.

The fastest way would be to create a CGBitmapContext with a pixel format of your choice, draw the source image into it, iterate directly over its backing buffer to determine the bounding rect, and then use CGBitmapContextCreateImage and CGImageCreateWithImageInRect to crop out the desired image.
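A hedged sketch of that pipeline in C with CoreGraphics (the function name and the premultiplied-ARGB layout are my assumptions; error handling and the all-transparent case are left to the caller):

```objc
#include <ApplicationServices/ApplicationServices.h>
#include <stdlib.h>

// Sketch: draw `source` into a bitmap context with a known pixel layout
// (premultiplied ARGB, 8 bits per component), scan the backing buffer
// for pixels with nonzero alpha, and crop to that bounding rect.
// Assumes the image contains at least one non-transparent pixel.
CGImageRef CreateImageCroppedToOpaqueBounds(CGImageRef source)
{
    size_t width  = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);
    uint8_t *data = calloc(width * height, 4);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8,
                                             width * 4, space,
                                             kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(space);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);

    // Row 0 of the buffer is the top scanline; with ARGB, byte 0 of
    // each pixel is the alpha component.
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++)
        for (size_t x = 0; x < width; x++)
            if (data[(y * width + x) * 4] != 0) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }

    CGImageRef full = CGBitmapContextCreateImage(ctx);
    // CGImageCreateWithImageInRect measures from the image's top-left,
    // which matches the buffer's row order, so no flipping is needed.
    CGImageRef cropped = CGImageCreateWithImageInRect(full,
        CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1));
    CGImageRelease(full);
    CGContextRelease(ctx);
    free(data);
    return cropped;
}
```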

I don't think you can use Core Image for this. You already have the alpha channel, and there's nothing a CIFilter can do with it that would help solve the problem; and since the CI Filter Language has no data-dependent loops, it can't solve the complete problem on its own. You would have to dynamically generate the CIFL code to (looplessly) find the bounds of the image.