Channel: FlexMonkey

Fast Core Image Filter Chaining in Swift with Shinpuru Image

The first iteration of Shinpuru Image did a great job of simplifying the implementation of Core Image and vImage filters, but was a little suboptimal when chaining filters together. This was because each invocation of a filter converted the input from UIImage to CIImage and, after filtering, converted the filter output back to UIImage.

A better approach is to keep the inputs and outputs of each step of the chain as CIImages and only do the conversion to UIImage at the end of the chain. That way, Core Image accumulates all the filters and does the image processing work in a single step.
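The principle can be sketched with Core Image directly. This is an illustrative example, not Shinpuru Image's actual API: the filter names are from Apple's built-in set, and the function and variable names are my own.

```swift
import UIKit

// A context backed by an EAGL context, as used for fast rendering.
let context = CIContext(EAGLContext: EAGLContext(API: EAGLRenderingAPI.OpenGLES2),
    options: [kCIContextWorkingColorSpace: NSNull()])

func filteredImage(source: UIImage) -> UIImage?
{
    // Each step produces a lazily evaluated CIImage; no pixels are
    // processed while the chain is being built.
    let input = CIImage(image: source)

    let sepia = CIFilter(name: "CISepiaTone")
    sepia.setValue(input, forKey: kCIInputImageKey)
    sepia.setValue(0.8, forKey: kCIInputIntensityKey)

    let blur = CIFilter(name: "CIGaussianBlur")
    blur.setValue(sepia.valueForKey(kCIOutputImageKey), forKey: kCIInputImageKey)
    blur.setValue(2.0, forKey: kCIInputRadiusKey)

    // Only this call does any actual image processing: Core Image
    // concatenates the filters and renders them in one pass.
    let output = blur.valueForKey(kCIOutputImageKey) as! CIImage
    let cgImage = context.createCGImage(output, fromRect: output.extent())

    return UIImage(CGImage: cgImage)
}
```

The one-UIImage-per-step overhead in the original Shinpuru Image came from repeating that final `createCGImage()` render for every filter rather than once per chain.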

To support that technique, I've extended Shinpuru Image with fast filtering support. The first part of this is a slightly different CIContext, created from an EAGLContext. This is Apple's recommended approach for real-time performance and looks like this:

    static let ciContextFast = CIContext(EAGLContext: EAGLContext(API: EAGLRenderingAPI.OpenGLES2), options: [kCIContextWorkingColorSpace: NSNull()])

For each of my existing filter functions, which were extensions to UIImage, I've created a duplicate version as an extension to CIImage, for example:

    func SIWhitePointAdjust(#color: UIColor) -> CIImage
    {
        let inputColor = KeyValuePair(key: "inputColor", value: CIColor(color: color)!)
        
        let filterName = "CIWhitePointAdjust"
        
        return ShinpuruCoreImageHelper.applyFilter(self, filterName: filterName, keyValuePairs: [inputColor])
    }

The CIImage version of applyFilter() doesn't do any conversion; it simply sets the values on the filter using the supplied key-value pairs:

    static func applyFilter(image: CIImage, filterName: String, keyValuePairs: [KeyValuePair]) -> CIImage
    {
        let ciFilter = CIFilter(name: filterName)
        
        let inputImage = KeyValuePair(key: kCIInputImageKey, value: image)
        ciFilter.setValue(inputImage.value, forKey: inputImage.key)
        
        keyValuePairs.map({ ciFilter.setValue($0.value, forKey: $0.key) })
        
        return ciFilter.valueForKey(kCIOutputImageKey) as! CIImage
    }

At the end of the filter chain, the toUIImage() method does the final conversion to a UIImage:

    func toUIImage() -> UIImage
    {
        let filteredImageRef = ShinpuruCoreImageHelper.ciContextFast.createCGImage(self, fromRect: self.extent())
        let filteredImage = UIImage(CGImage: filteredImageRef)!
        
        return filteredImage
    }

The demo application's Histogram demo has been fleshed out to chain together five filters: SIWhitePointAdjust, SIColorControls, SIGammaAdjust, SIExposureAdjust and SIHueAdjust, and there's a toggle button to turn fast filtering on and off. 

The code with fast filtering turned off is:

    image = UIImage(named: "tram.jpg")?
        .SIWhitePointAdjust(color: targetColor)
        .SIColorControls(saturation: saturationSlider.value, brightness: brightnessSlider.value, contrast: contrastSlider.value)
        .SIGammaAdjust(power: gammaSlider.value)
        .SIExposureAdjust(ev: exposureSlider.value)
        .SIHueAdjust(power: hueSlider.value)

...and with fast filtering turned on is:

    image = SIFastChainableImage(image: UIImage(named: "tram.jpg"))
        .SIWhitePointAdjust(color: targetColor)
        .SIColorControls(saturation: saturationSlider.value, brightness: brightnessSlider.value, contrast: contrastSlider.value)
        .SIGammaAdjust(power: gammaSlider.value)
        .SIExposureAdjust(ev: exposureSlider.value)
        .SIHueAdjust(power: hueSlider.value)
        .toUIImage()

The speed difference is pretty impressive: with the 'regular' code each step takes around 0.055 seconds and with fast filtering, each step takes around 0.015 seconds.
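Timings like these can be captured with a sketch along the following lines. The figures above come from the demo application; `applyFilterChain()` here is a hypothetical stand-in for either of the chains shown earlier.

```swift
import QuartzCore

// Hypothetical stand-in for the filter chain being measured.
func applyFilterChain() { /* ... */ }

let startTime = CFAbsoluteTimeGetCurrent()

applyFilterChain()

let elapsed = CFAbsoluteTimeGetCurrent() - startTime
println(String(format: "Filter chain took %.3f seconds", elapsed))
```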

Shinpuru Image is an open source project and is available at my GitHub repository here.
