Convolution Filters in Swift with Accelerate and vImage

Convolution filters are a commonly used image processing technique typically used for effects such as blurring, sharpening, embossing and edge detection. A convolution filter uses a kernel: a matrix of numbers that define the value by which pixels are affected by their neighbours. 
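
For example, a simple 3×3 box blur gives a pixel and each of its eight neighbours a weight of one and uses a divisor of nine, so every output pixel is simply the average of its neighbourhood:

    // A 3x3 box blur kernel: every weight is 1 and the divisor is the sum of
    // the weights, so each output pixel is the mean of nine input pixels.
    let boxBlurKernel: [Int16] = [
        1, 1, 1,
        1, 1, 1,
        1, 1, 1]
    
    let boxBlurDivisor = 9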

There are at least three obvious ways to implement a convolution filter in Swift: using a Core Image CIFilter, writing your own filter with Metal compute shaders, or using the vImage framework, which is part of Accelerate and is what this post discusses. 

Apple’s own recommendation is that vImage is best used for processing very high resolution images or for repeating an operation successively. For processing a single, average-sized photograph, you should look at Core Image.

Unlike, for example, Metal, Accelerate is CPU based, but it still manages to achieve impressive performance by using the vector processor. It uses a class of parallel processing known as SIMD (single instruction, multiple data), which allows a single instruction to act upon multiple data points.

The Accelerate API isn’t super friendly to Swift developers. If you want to work with arrays under Accelerate, I’d suggest a look at Mattt Thompson’s Surge library, which wraps up all the mathematical functions in a nice, Swift-friendly layer. According to Mattt, a piece of code that sums 100,000 Doubles in an array takes 251 seconds in Swift and only 0.02 seconds using Accelerate via Surge - roughly a 9,000× increase in speed!
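
As a rough sketch of what that looks like in practice (assuming Surge has been added to the project), summing an array becomes a single call:

    import Surge
    
    // Surge hands the summation to Accelerate rather than looping in Swift.
    let values: [Double] = [1.5, 2.5, 3.0, 4.0]
    let total = sum(values) // 11.0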

When I started looking at using vImage under Swift, I found one resource, AImageFilters, by Jannis Mouthing which I’ve used for this project - my Convolution Explorer.

Jannis’s code was pre-Swift 1.2, so I’ve updated it and simplified it to a single, reusable method which accepts a UIImage, a one dimensional representation of the kernel and a divisor:

    func applyConvolutionFilterToImage(image: UIImage, #kernel: [Int16], #divisor: Int) -> UIImage

Convolution Explorer allows users to select kernel items by painting over them and then adjusting their values with a slider. Each adjustment immediately updates a vImage convolution filter whose result is displayed in a UIImageView on screen. The user can also adjust the divisor and the size of the convolution kernel. 

My user interface is built entirely with Shinpuru Layout components. The KernelEditor class contains an SLVGroup and each row is an SLHGroup containing seven cells.

As the user moves their touch across the KernelEditor, I use hitTest() to see if the touch is over a cell. If it is and that cell’s selected status hasn’t changed during this gesture, I toggle it:

    override func touchesMoved(touches: Set<NSObject>, withEvent event: UIEvent)
    {
        super.touchesMoved(touches, withEvent: event)
        
        if let touch = touches.first as? UITouch
        {
            let obj = hitTest(touch.locationInView(self), withEvent: event)
            
            if let obj = obj as? KernelEditorCell where obj.enabled && find(touchedCells, obj) == nil
            {
                obj.selected = !obj.selected
                touchedCells.append(obj)
            }
        }
    }

When the touch gesture ends, KernelEditor sends a ValueChanged event which is consumed by the main view controller.

When the selection changes, the slider in the view controller sets itself to the average of the selected cells and any user changes to the slider are applied to each selected cell.
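
A rough sketch of that two-way binding is shown below; the cell type and property names here are purely illustrative rather than the actual names used in the project:

    import UIKit
    
    // Illustrative cell type standing in for the project's kernel cells.
    class Cell { var selected = false; var value = 0 }
    
    func selectionDidChange(cells: [Cell], slider: UISlider)
    {
        let selected = cells.filter { $0.selected }
        
        if !selected.isEmpty
        {
            // The slider takes the average value of the selected cells...
            let total = selected.reduce(0) { $0 + $1.value }
            slider.value = Float(total) / Float(selected.count)
        }
    }
    
    func sliderDidChange(cells: [Cell], slider: UISlider)
    {
        // ...and any slider change is written back to each selected cell.
        for cell in cells.filter({ $0.selected })
        {
            cell.value = Int(slider.value)
        }
    }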

Inside the view controller, when the UI controls that affect the convolution filter’s parameters change, applyKernel() is invoked. This function builds a one dimensional array of values from the KernelEditor and passes it, via applyConvolutionFilterToImage(), to vImage:

    imageView.image = applyConvolutionFilterToImage(image!, kernel: kernel, divisor: divisor)
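
Purely as an illustration of that step (kernelEditor.cells and divisorSlider are placeholder names rather than the project’s actual properties), flattening the editor’s values might look something like this:

    // Illustrative only: flatten the editor's grid of values into the one
    // dimensional [Int16] that applyConvolutionFilterToImage() expects.
    let kernel: [Int16] = kernelEditor.cells.map { Int16($0.value) }
    let divisor = Int(divisorSlider.value)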

The main points of applyConvolutionFilterToImage() are to create two vImage_Buffer instances - one for the input image and one for the output - then to pass both buffers into vImageConvolve() and, finally, using an extension to UIImage, to convert the output buffer into a UIImage that I can display to the user in a UIImageView.
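
Reduced to its essentials, the vImage call looks something like the sketch below. This is a simplified outline rather than the project’s exact code: it assumes the input and output vImage_Buffer instances have already been created elsewhere, and the exact pointer and type conversions can vary between Swift versions:

    import Accelerate
    import Foundation
    
    // Sketch: convolve an ARGB8888 input buffer into a pre-allocated output
    // buffer using a flat [Int16] kernel and an integer divisor.
    func convolve(inBuffer: vImage_Buffer, outBuffer: vImage_Buffer, kernel: [Int16], divisor: Int) -> vImage_Error
    {
        var input = inBuffer
        var output = outBuffer
        
        // vImage needs the kernel's width and height - e.g. 3 for a 3x3 kernel.
        let kernelSide = UInt32(sqrt(Double(kernel.count)))
        
        // Colour used for pixels that fall outside the image's edges.
        let backgroundColor: [UInt8] = [0, 0, 0, 0]
        
        // Apply the kernel to every pixel of the input and write the result
        // into the output buffer; a non-zero return value indicates an error.
        return vImageConvolve_ARGB8888(&input, &output, nil,
            0, 0,
            kernel, kernelSide, kernelSide,
            Int32(divisor),
            backgroundColor,
            UInt32(kvImageBackgroundColorFill))
    }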


Of course, all of the layout in the view controller is also handled with a few Shinpuru Layout controls. This has been a nice little test project to prove it works outside of a few demonstrations! 

The entire source code for this project is available at my GitHub repository here.
