
Core Image for Swift Available for Pre-Order!


I'm super excited to be able to announce that Core Image for Swift, my guide to Apple's awesome framework for image processing, is now available for pre-order on the iBooks Store. This is the first book in a two-part series: it introduces the framework and concentrates on working with still images. The second book will look at working with video and integrating with other frameworks such as SpriteKit.

The book is over 200 pages long with supporting Swift projects and playgrounds. The chapters are:


  1. Introducing Core Image:
  2. Core Image Filters in Detail: (this is the sample chapter you can read now!)
  3. Displaying Filtered Images: looks at different approaches for displaying Core Image output and covers UIImageView, using OpenGL with GLKView and using Metal with MTKView.
  4. Custom Composite Filters: looks at creating new filters with composites of existing built-in filters
  5. Custom Filters with Core Image Kernel: looks at writing filters from scratch with GLSL
  6. Color Kernels: A detailed look at CIColorKernel
  7. Warp Kernels: A detailed look at CIWarpKernel
  8. General Kernels: A detailed look at CIKernel
  9. Advanced Kernels: Discusses advanced kernels for creating fluid dynamic and reaction diffusion systems
  10. Custom Filters with Metal: Discusses wrapping Metal compute shaders in a Core Image filter to create effects impossible with CIKernels
The book is due for release March 1st, but feel free to place a pre-order now!

Properly Typed Selectors in Xcode 7.3 beta 4



How many times have you had some innocuous looking code like this:

    let button = UIButton(type: .InfoDark)
    button.addTarget(self, action: "showInfo", forControlEvents: .TouchDown)

...which compiles without a problem but throws a runtime error when the hapless user touches the button:

    Terminating app due to uncaught exception 'NSInvalidArgumentException', 
    reason: '-[__lldb_expr_20.SelectorDemo showInfo]: unrecognized selector 
    sent to instance 0x7ffb3a414ac0'

The runtime error is caused by the runtime not being able to find a showInfo method - or perhaps there is a showInfo, but it expects an argument. The root cause is, of course, that the action argument of addTarget is being passed the string literal showInfo rather than a properly typed object.

Well, with Xcode 7.3 beta 4, that problem is resolved! Selectors can now be constructed directly with the #selector expression. In practice, this means that if your code has a function that looks like this:

    func showInfo(button: UIButton)
    {
        print("showInfo(_:)", button.debugDescription)
    }

We can add a target to the button with the following syntax:

    let selector = #selector(showInfo(_:))
    button.addTarget(self, action: selector, forControlEvents: .TouchDown)

If showInfo takes no arguments...

    func showInfo()
    {
        print("one: showInfo()")

    }

...the syntax is slightly different:

    let selector = #selector(SelectorDemo.showInfo as (SelectorDemo) -> () -> ())

Either way, the great advantage is that if the method name doesn't exist or the arguments are different, we get an error at compile time rather than run time:

    let selectorTwo = #selector(displayInfo(_:))

    // results in:

    Playground execution failed: SelectorPlayground.playground:20:37: error: use of unresolved     
    identifier 'displayInfo'
        let selector = #selector(displayInfo(_:))
                                 ^~~~~~~~~~~

I've created a small playground to demonstrate this new feature - it obviously needs Xcode 7.3 beta 4 - which is available at my GitHub repository.  Enjoy!

Core Image for Swift: Advanced Image Processing for iOS


I'm delighted to announce that my book, Core Image for Swift, is now available for sale in Apple's iBooks Store.

Core Image is Apple's awesome image processing and analysis framework. My book is a comprehensive guide that discusses all aspects of still image processing:  from simple image filtering for blurring and applying color treatments, right through to creating advanced custom kernels with optical effects such as refraction (see above!). 

Whether you're developing games or writing an image editing app, Core Image is the technology that allows you to apply amazing visual effects with high performance, GPU-based filters.

All the examples are backed with Swift Playgrounds or specific projects. Many of the filters created are hosted in my Filterpedia app. The book is targeted at iOS development, but much of the content is suitable for OS X.

You can download a sample chapter and buy an electronic copy by following this link. Enjoy!


Sweetcorn: A Node Based Core Image Kernel Builder


While I've had some free time during a fortnight of conferences (try! Swift and RWDevCon were both amazing!), I've taken a small break from writing my book to create Sweetcorn - an open source OS X application to create Core Image color kernels using a node based interface.

The generated Core Image Shading Language code can be used to create CIColorKernels for both iOS and OS X and the node based user interface allows complex shaders to be written with the minimum of typing. 

The current version in my GitHub repository has mainly been written on planes and trains or in hotel rooms with little or no internet access. My knowledge of writing OS X apps in Cocoa is pretty limited - so I suspect some of my approaches may be sort of "special". However, the application is pretty solid and even with fairly complex networks, the UI is responsive and Core Image does a fantastic job of building and applying the filter in next to no time.

Of course, Sweetcorn would benefit from more functionality. I'm hoping to add support for warp and general kernels and implement the entire supported GLSL "vocabulary".

Sweetcorn can save and load projects and comes bundled with a few examples including this shaded tiling kernel:



If you want to learn more about Core Image filters using custom kernels, I heartily recommend my book, Core Image for Swift. Sweetcorn is available in my GitHub repository here - enjoy! 

Creating a Bulging Eyes Purikura Effect with Core Image



One of my fellow speakers at try! Swift, Mathew Gillingham, suggested I look at the "bulging" eyes effect used in Japanese Purikura photo booths. Always up for a challenge, I thought I'd spend a quick moment repurposing my cartoon eyes experiment to simulate the effect using Core Image. The final project is available in my Purikura GitHub repository.

The main change to this project is in the eyeImage(_:) method. After ensuring that the Core Image face detector has data for the left and right eye positions:

    if let features = detector.featuresInImage(cameraImage).first as? CIFaceFeature
        where features.hasLeftEyePosition && features.hasRightEyePosition
    {
    [...]

...I use a small CGPoint extension to measure the distance between the eye positions on screen:

    [...]
    let eyeDistance = features.leftEyePosition.distanceTo(features.rightEyePosition)
    [...]
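
The extension itself isn't shown in the post; here's a minimal sketch of what distanceTo(_:) and the toCIVector() helper used below might look like (the method names come from the calls in these snippets, the implementations are my assumption):

    extension CGPoint
    {
        // Euclidean distance between two points
        func distanceTo(point: CGPoint) -> CGFloat
        {
            let dx = point.x - x
            let dy = point.y - y
            
            return sqrt(dx * dx + dy * dy)
        }
        
        // Package the point as a CIVector for use as a filter parameter
        func toCIVector() -> CIVector
        {
            return CIVector(x: x, y: y)
        }
    }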

I use that eye distance, divided by 1.25, as the radius for two Core Image Bump Distortion filters which are centred over the eye positions:

    [...]
    return cameraImage
        .imageByApplyingFilter("CIBumpDistortion",
            withInputParameters: [
                kCIInputRadiusKey: eyeDistance / 1.25,
                kCIInputScaleKey: 0.5,
                kCIInputCenterKey: features.leftEyePosition.toCIVector()])
        .imageByCroppingToRect(cameraImage.extent)
        .imageByApplyingFilter("CIBumpDistortion",
            withInputParameters: [
                kCIInputRadiusKey: eyeDistance / 1.25,
                kCIInputScaleKey: 0.5,
                kCIInputCenterKey: features.rightEyePosition.toCIVector()])
        .imageByCroppingToRect(cameraImage.extent)
    }

If the detector finds no eye positions, eyeImage(_:) simply returns the original image.

This project has been tested on my iPad Pro and iPhone 6s and works great on both!

If you'd like to learn more about the awesome power of Apple's Core Image, my book, Core Image for Swift, is now available through the iBooks Store and now through Gumroad.

Creating a Selective HSL Adjustment Filter in Core Image


The other day, while idly browsing Stack Overflow, I came across this question: the questioner was asking whether they could use Core Image to adjust the hue, saturation and luminance of an image within defined bands. For example, could they darken the blues while desaturating the greens and shifting the hue of the yellows to be redder. Well, I know of no built in filter that can do that, but I'm always up for a challenge. The final result is, in my opinion, quite an interesting Core Image exercise and deserves a little blog post.

The Filter In Use

Before diving into the technical explanation of how I put the filter together, let's look at some results. Sadly, Timothy Hogan was busy, so I took it upon myself to photograph this beautiful still life:


My filter has attributes to change the hue, saturation and lightness for eight color bands. Each attribute is a CIVector, with x setting an additive hue shift, and y and z setting saturation and lightness multipliers. The default value for each color attribute is (x: 0, y: 1, z: 1). So, the following settings will desaturate red and shift the blues and purples:

        filter.inputRedShift = CIVector(x: 0, y: 0, z: 1)
        filter.inputAquaShift = CIVector(x: 0.2, y: 1, z: 1)
        filter.inputBlueShift = CIVector(x: 0.4, y: 1, z: 1)
        filter.inputPurpleShift = CIVector(x: 0.2, y: 1, z: 1)

And give this result:


Or, we could shift the reds and greens and desaturate and lighten the blues with:

        filter.inputRedShift = CIVector(x: 0.1, y: 1, z: 1)
        filter.inputOrangeShift = CIVector(x: 0.2, y: 1.1, z: 1)
        filter.inputYellowShift = CIVector(x: 0.3, y: 1.2, z: 1)
        filter.inputGreenShift = CIVector(x: 0.2, y: 1.1, z: 1)
        filter.inputAquaShift = CIVector(x: 0.1, y: 1, z: 1.5)
        filter.inputBlueShift = CIVector(x: 0.05, y: 0, z: 2)
        filter.inputPurpleShift = CIVector(x: 0, y: 1, z: 1.5)
        filter.inputMagentaShift = CIVector(x: 0.05, y: 1, z: 1)

Which gives:


We could even desaturate everything apart from red - which we'll shift to blue with:

        filter.inputRedShift = CIVector(x: 0.66, y: 1, z: 1.25)
        filter.inputOrangeShift = CIVector(x: 0.25, y: 0, z: 1)
        filter.inputYellowShift = CIVector(x: 0, y: 0, z: 1)
        filter.inputGreenShift = CIVector(x: 0, y: 0, z: 1)
        filter.inputAquaShift = CIVector(x: 0, y: 0, z: 1)
        filter.inputBlueShift = CIVector(x: 0, y: 0, z: 1)
        filter.inputPurpleShift = CIVector(x: 0, y: 0, z: 1)
        filter.inputMagentaShift = CIVector(x: 0.25, y: 0, z: 1)

With this result:


Note that there are some artefacts caused by JPEG compression.

Creating The Filter

Although Core Image for iOS now has over 170 filters, to create this effect we'll need to take a deep dive into Core Image and write a custom filter using a kernel program written in Core Image Kernel Language (CIKL). CIKL is a dialect of GLSL, and programs written in it are passed as a string into a CIColorKernel which, in turn, is wrapped in a CIFilter and can be used like any other built-in filter.

First things first, the eight colors for the bands need to be defined. It's the hue of these eight colors which the kernel will use to apply the "shift values" above. I've written a small extension to UIColor which returns the hue:

    extension UIColor
    {
        func hue() -> CGFloat
        {
            var hue: CGFloat = 0
            var saturation: CGFloat = 0
            var brightness: CGFloat = 0
            var alpha: CGFloat = 0
            
            self.getHue(&hue,
                        saturation: &saturation,
                        brightness: &brightness,
                        alpha: &alpha)
            
            return hue
        }
    }

Using that, we can define eight constants for the required hues using familiar UIColor RGB values:

    let red = CGFloat(0)
    let orange = UIColor(red: 0.901961, green: 0.584314, blue: 0.270588, alpha: 1).hue()
    let yellow = UIColor(red: 0.901961, green: 0.901961, blue: 0.270588, alpha: 1).hue()
    let green = UIColor(red: 0.270588, green: 0.901961, blue: 0.270588, alpha: 1).hue()
    let aqua = UIColor(red: 0.270588, green: 0.901961, blue: 0.901961, alpha: 1).hue()
    let blue = UIColor(red: 0.270588, green: 0.270588, blue: 0.901961, alpha: 1).hue()
    let purple = UIColor(red: 0.584314, green: 0.270588, blue: 0.901961, alpha: 1).hue()
    let magenta = UIColor(red: 0.901961, green: 0.270588, blue: 0.901961, alpha: 1).hue()

With those Swift constants, we can start building the kernel string:

    var shaderString = ""

    shaderString += "#define red \(red) \n"
    shaderString += "#define orange \(orange) \n"
    shaderString += "#define yellow \(yellow) \n"
    shaderString += "#define green \(green) \n"
    shaderString += "#define aqua \(aqua) \n"
    shaderString += "#define blue \(blue) \n"
    shaderString += "#define purple \(purple) \n"
    shaderString += "#define magenta \(magenta) \n"

The color passed into the kernel will be RGB, but we're going to need to convert this to HSL and back again to work with it. Luckily, Patricio Gonzalez Vivo has this Gist which contains the code to do exactly that. So, those functions are added to the shader string:

    shaderString += "vec3 rgb2hsv(vec3 c)"
    shaderString += "{"
    shaderString += "    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);"
    shaderString += "    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));"
    shaderString += "    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));"

    shaderString += "    float d = q.x - min(q.w, q.y);"
    shaderString += "    float e = 1.0e-10;"
    shaderString += "    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);"
    shaderString += "}"

    shaderString += "vec3 hsv2rgb(vec3 c)"
    shaderString += "{"
    shaderString += "    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);"
    shaderString += "    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);"
    shaderString += "    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);"
    shaderString += "}"

Now for the magic. My nattily titled function, smoothTreatment(), accepts five arguments:
  • hsv - is of type vec3 (a vector containing three elements) and is the color of the pixel currently being computed by the kernel.
  • hueEdge0 and hueEdge1 - are both of type float and are lower and upper bounds of the hue "band" the current pixel is in. 
  • shiftEdge0 and shiftEdge1 - are both of type vec3 and are the values of the two filter shift attributes for the edge hue values above.

    shaderString += "vec3 smoothTreatment(vec3 hsv, float hueEdge0, float hueEdge1, vec3 shiftEdge0, vec3 shiftEdge1)"

The first job of the function is to use smoothstep() to get a Hermite interpolated value between zero and one that describes the current pixel's hue in relation to its position between hueEdge0 and hueEdge1 (smoothstep returns zero at or below hueEdge0, one at or above hueEdge1, and the smooth curve 3t² - 2t³ in between, where t is the normalised position between the two edges):


    shaderString += " float smoothedHue = smoothstep(hueEdge0, hueEdge1, hsv.x);"

Then it's some simple maths to figure out what needs to be added (or multiplied) to the current pixel's HSV based on the shift edges:

    shaderString += " float hue = hsv.x + (shiftEdge0.x + ((shiftEdge1.x - shiftEdge0.x) * smoothedHue));"
    shaderString += " float sat = hsv.y * (shiftEdge0.y + ((shiftEdge1.y - shiftEdge0.y) * smoothedHue));"
    shaderString += " float lum = hsv.z * (shiftEdge0.z + ((shiftEdge1.z - shiftEdge0.z) * smoothedHue));"

Finally, the function returns a vec3 by building a vector from those values:

    shaderString += " return vec3(hue, sat, lum);"

Core Image will invoke the kernel function which is declared with the kernel keyword. It will need the value of the current pixel which is of type __sample (this is actually a vec4 - with components for red, green, blue and alpha) and values for each of the shift parameters:

    shaderString += "kernel vec4 kernelFunc(__sample pixel,"
    shaderString += "  vec3 redShift, vec3 orangeShift, vec3 yellowShift, vec3 greenShift,"
    shaderString += "  vec3 aquaShift, vec3 blueShift, vec3 purpleShift, vec3 magentaShift)"

Using the utility function above, pixel is converted to HSV:


    shaderString += " vec3 hsv = rgb2hsv(pixel.rgb); \n"

Using the current pixel's hue, we can set the correct parameters for smoothTreatment():


    shaderString += " if (hsv.x < orange){ hsv = smoothTreatment(hsv, 0.0, orange, redShift, orangeShift);} \n"
    shaderString += " else if (hsv.x >= orange && hsv.x < yellow){ hsv = smoothTreatment(hsv, orange, yellow, orangeShift, yellowShift); } \n"
    shaderString += " else if (hsv.x >= yellow && hsv.x < green){ hsv = smoothTreatment(hsv, yellow, green, yellowShift, greenShift);  } \n"
    shaderString += " else if (hsv.x >= green && hsv.x < aqua){ hsv = smoothTreatment(hsv, green, aqua, greenShift, aquaShift);} \n"
    shaderString += " else if (hsv.x >= aqua && hsv.x < blue){ hsv = smoothTreatment(hsv, aqua, blue, aquaShift, blueShift);} \n"
    shaderString += " else if (hsv.x >= blue && hsv.x < purple){ hsv = smoothTreatment(hsv, blue, purple, blueShift, purpleShift);} \n"
    shaderString += " else if (hsv.x >= purple && hsv.x < magenta){ hsv = smoothTreatment(hsv, purple, magenta, purpleShift, magentaShift);} \n"
    shaderString += " else {hsv = smoothTreatment(hsv, magenta, 1.0, magentaShift, redShift); }; \n"

Finally converting the updated HSV back to RGB and returning it:


    shaderString += "return vec4(hsv2rgb(hsv), 1.0);"

That lengthy string is passed into a CIColorKernel's initialiser to create an executable Core Image kernel:

    let multiBandHSVKernel = CIColorKernel(string: shaderString)! 

To actually invoke the kernel, we wrap it in a CIFilter and override the outputImage getter where applyWithExtent is invoked:

    override var outputImage: CIImage?
    {
        guard let inputImage = inputImage else
        {
            return nil
        }
        
        return multiBandHSVKernel.applyWithExtent(inputImage.extent,
          arguments: [inputImage,
            inputRedShift,
            inputOrangeShift,
            inputYellowShift,
            inputGreenShift,
            inputAquaShift,
            inputBlueShift,
            inputPurpleShift,
            inputMagentaShift])
    }
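
For context, the shift parameters passed to the kernel above are plain CIVector properties on the CIFilter subclass. A minimal sketch of how they might be declared (the class name is an assumption on my part; the defaults follow the (x: 0, y: 1, z: 1) convention described earlier):

    class MultiBandHSV: CIFilter
    {
        var inputImage: CIImage?
        
        var inputRedShift = CIVector(x: 0, y: 1, z: 1)
        var inputOrangeShift = CIVector(x: 0, y: 1, z: 1)
        var inputYellowShift = CIVector(x: 0, y: 1, z: 1)
        var inputGreenShift = CIVector(x: 0, y: 1, z: 1)
        var inputAquaShift = CIVector(x: 0, y: 1, z: 1)
        var inputBlueShift = CIVector(x: 0, y: 1, z: 1)
        var inputPurpleShift = CIVector(x: 0, y: 1, z: 1)
        var inputMagentaShift = CIVector(x: 0, y: 1, z: 1)
    }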

The final filter is available in my Filterpedia GitHub repository.

More on Core Image!


If this post has whetted your appetite and you want to dive deeper into Core Image, may I suggest my book, Core Image for Swift. Along with a comprehensive look at the built in filters, it discusses custom kernels in "proper" detail including the other kernel types: general and warp.

Core Image for Swift is available from the iBook Store and from Gumroad.











Core Image for Swift v1.2 Released!

I'm pleased to announce that today sees the release of version 1.2 of my book, Core Image for Swift. To the best of my knowledge, Core Image for Swift is the only book dedicated to Apple's awesome image processing framework. It starts from the beginning, discussing the fundamentals of applying filters to images, singly and in chains, and moves to discussing advanced topics including writing custom kernels and integrating Metal with Core Image.

Don't just take my word for it! There's a great review at sketchyTech here.

Version 1.2 includes two new chapters:


Convolution



At the heart of many filters, such as blurs, edge detections and embossing filters, is a convolution kernel. From simple 3x3 box blurs to a look at separable kernels for use in Core Image's one dimensional kernels: Core Image for Swift now has an entire chapter dedicated to the subject.


Model I/O Texture Generation


Introduced in iOS 9, Model I/O is Apple's framework for managing 3D assets and resources. It also includes some great tools for creating procedural textures such as atmospheric skies and smooth color noise. Core Image for Swift includes a chapter discussing how to use these tools with Core Image.


Available from iBooks and Gumroad

Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.



Creating a Custom Variable Blur Filter in Core Image


Last night, I gave a talk at NSLondon on Apple's Core Image framework. Speaking to attendees afterwards, there were two frequent comments: "Core Image is far easier than I thought" and "Core Image is far more powerful than I thought". 

One of the topics I covered was creating a custom filter to create a variable blur effect - something that may initially sound quite complex but is actually fairly simple to implement. I thought a blog post discussing the custom filter in detail may help spread the word on Core Image's ease of use and awesome power.

This post will discuss the kernel code required to create the effect and the Swift code required to wrap up the kernel in a CIFilter. It's worth noting that Core Image includes its own CIMaskedVariableBlur filter which we'll be emulating in this post.
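
For comparison, the built-in filter can be applied in a single call. A quick sketch, where image and maskImage are assumed to be existing CIImages (check the filter's attributes dictionary if you're unsure of the parameter names):

    let blurred = image
        .imageByApplyingFilter("CIMaskedVariableBlur",
            withInputParameters: [
                "inputMask": maskImage,
                kCIInputRadiusKey: 15])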


Core Image Kernels

At the heart of every built-in Core Image filter is a kernel function. This is a small program that's executed against every single pixel in the output image and is written in a dialect of OpenGL Shading Language called Core Image Kernel Language. We'll be looking at Core Image's CIKernel class which manages a kernel function and expects code that looks a little like:


    kernel vec4 kernelFunction(sampler image)
    {
        return sample(image, samplerCoord(image));
    }

In this example, the function accepts a single argument, image, which is of type sampler and contains the pixel data of the source image. The sample function returns the pixel color of the source image at the co-ordinates of the pixel currently being computed, and simply returning that value gives us a destination image identical to the source image.

Variable Blur Kernel

Our filter will be based on a simple box blur. The kernel function will sample neighbouring pixels in a square centred on the pixel currently being computed and return the average color of those pixels. The size of that square is a function of the blur image - the brighter the corresponding pixel in the blur image, the bigger the square.

The code is a little more involved than the previous "pass-through" filter:


    kernel vec4 lumaVariableBlur(sampler image, sampler blurImage, float blurRadius) 
    { 
        vec3 blurPixel = sample(blurImage, samplerCoord(blurImage)).rgb; 
        float blurAmount = dot(blurPixel, vec3(0.2126, 0.7152, 0.0722)); 

        int radius = int(blurAmount * blurRadius); 

        vec3 accumulator = vec3(0.0, 0.0, 0.0); 
        float n = 0.0; 

        for (int x = -radius; x <= radius; x++) 
        { 
            for (int y = -radius; y <= radius; y++) 
            { 
                vec2 workingSpaceCoordinate = destCoord() + vec2(x,y); 
                vec2 imageSpaceCoordinate = samplerTransform(image, workingSpaceCoordinate); 
                vec3 color = sample(image, imageSpaceCoordinate).rgb; 
                accumulator += color; 
                n += 1.0; 
            } 
        } 

        accumulator /= n; 

        return vec4(accumulator, 1.0); 
    }

In this kernel we:

  • Calculate the blur amount based on the luminosity of the current pixel in the blur image and use that to define the blur radius.
  • Iterate over the surrounding pixels in the input image, sampling and accumulating their values.
  • Return the average of the accumulated pixel values with an alpha (transparency) of 1.0.

To use that code in a filter, it needs to be passed in the constructor of a CIKernel:


    let maskedVariableBlur = CIKernel(string:"kernel vec4 lumaVariableBlur....")


Implementing as a Core Image Filter

With the CIKernel created, we can wrap up that GLSL code in a Core Image filter. To do this, we'll subclass CIFilter:


    class MaskedVariableBlur: CIFilter
    {
    }

The filter needs three parameters: the input image which will be blurred, the blur image which defines the blur intensity and a blur radius which, as we saw above, controls the size of the sampled area:


    var inputImage: CIImage?
    var inputBlurImage: CIImage?
    var inputBlurRadius: CGFloat = 5

The execution of the kernel is done inside the filter's overridden outputImage getter by invoking applyWithExtent on the Core Image kernel:


    override var outputImage: CIImage!
    {
        guard let
            inputImage = inputImage,
            inputBlurImage = inputBlurImage else
        {
            return nil
        }
        
        let extent = inputImage.extent

        let blur = maskedVariableBlur?.applyWithExtent(
            inputImage.extent,
            roiCallback:
            {
                (index, rect) in
                return rect
            },
            arguments: [inputImage, inputBlurImage, inputBlurRadius])
        
        return blur!.imageByCroppingToRect(extent)
    }

The roiCallback is a function that answers the question, "to render the region, rect, in the destination, what region do I need from the source image?". My book, Core Image for Swift, takes a detailed look at how this can affect both performance and the effect of the filter.
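
Returning rect unchanged works here, but since the kernel samples up to blurRadius pixels away from each destination coordinate, a more accurate callback would grow the requested region by the radius. A sketch of that variation (my own assumption, not code from the project):

    let blur = maskedVariableBlur?.applyWithExtent(
        inputImage.extent,
        roiCallback:
        {
            (index, rect) in
            // index 0 is the image being blurred; pad its region of interest by the blur radius
            return index == 0
                ? rect.insetBy(dx: -self.inputBlurRadius, dy: -self.inputBlurRadius)
                : rect
        },
        arguments: [inputImage, inputBlurImage, inputBlurRadius])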

Note that the arguments array mirrors the kernel function's declaration. 

Registering the Filter

To finish creating the filter, it needs to be registered. This is done with CIFilter's static registerFilterName method:


    CIFilter.registerFilterName("MaskedVariableBlur",
                            constructor: FilterVendor(),
                            classAttributes: [kCIAttributeFilterName: "MaskedVariableBlur"])

The filter vendor, which conforms to CIFilterConstructor, returns a concrete instance of our filter class based on a string:


    class FilterVendor: NSObject, CIFilterConstructor
    {
        func filterWithName(name: String) -> CIFilter?
        {
            switch name
            {
            case"MaskedVariableBlur":
                returnMaskedVariableBlur()

            default:
            returnnil
            }
        }
    }

The Filter in Use

Although the filter can accept any image as a blur image, it might be neat to create a radial gradient procedurally (this could even be controlled by a Core Image detector to centre itself on the face!). 


    let monaLisa = CIImage(image: UIImage(named: "monalisa.jpg")!)!


    let gradientImage = CIFilter(
        name: "CIRadialGradient",
        withInputParameters: [
            kCIInputCenterKey: CIVector(x: 310, y: 390),
            "inputRadius0": 100,
            "inputRadius1": 300,
            "inputColor0": CIColor(red: 0, green: 0, blue: 0),
            "inputColor1": CIColor(red: 1, green: 1, blue: 1)
        ])?
        .outputImage?
        .imageByCroppingToRect(monaLisa.extent)

Core Image's generator filters, such as this gradient, create images with infinite extent - hence a crop at the end. Our radial gradient looks like:


We can now call our funky new filter using that gradient and an image of the Mona Lisa:



    let final = monaLisa
        .imageByApplyingFilter("MaskedVariableBlur",
            withInputParameters: [
                "inputBlurRadius": 15,
                "inputBlurImage": gradientImage!])

Which yields this result, where the blur applied to the source is greater where the blur image is lighter:




Core Image for Swift

My book, Core Image for Swift, takes a detailed look at custom filters and covers the high performance warp and color kernels at length. Hopefully, this article has demonstrated that with very little code, even the simplest GLSL can provide some impressive results.

Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.




A Look at Perspective Transform & Correction with Core Image


Hidden away in Core Image's Geometry Adjustment category are a set of perspective related filters that change the geometry of flat images to simulate them being viewed in 3D space. If you work in architecture or out-of-home advertising, these filters, used in conjunction with Core Image's rectangle detector, are perfect for mapping images onto 3D surfaces. Alternatively, the filters can synthesise the effects of a perspective control lens.


Project Assets

This post comes with a companion Swift playground which is available here. The two assets we'll use are this picture of a billboard:



...and this picture of The Mona Lisa:



The assets are declared as:

    let monaLisa = CIImage(image: UIImage(named: "monalisa.jpg")!)!
    let backgroundImage = CIImage(image: UIImage(named: "background.jpg")!)!

Detecting the Target Rectangle

Our first task is to find the co-ordinates of the corners of the white rectangle and for that, we'll use a CIDetector. The detector needs a Core Image context and will return a CIRectangleFeature. In real life, there's no guarantee that it won't return nil; in the playground, with known assets, we can live life on the edge and force unwrap it with a !.


    let ciContext =  CIContext()

    let detector = CIDetector(ofType: CIDetectorTypeRectangle,
        context: ciContext,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    let rect = detector.featuresInImage(backgroundImage).first as! CIRectangleFeature

Performing the Perspective Transform

Now we have the four points that define the corners of the white billboard, we can apply those, along with the background input image, to a perspective transform filter. The perspective transform moves an image's original corners to a new set of coordinates and maps the pixels of the image accordingly: 


    let perspectiveTransform = CIFilter(name: "CIPerspectiveTransform")!


    perspectiveTransform.setValue(CIVector(CGPoint:rect.topLeft),
        forKey: "inputTopLeft")
    perspectiveTransform.setValue(CIVector(CGPoint:rect.topRight),
        forKey: "inputTopRight")
    perspectiveTransform.setValue(CIVector(CGPoint:rect.bottomRight),
        forKey: "inputBottomRight")
    perspectiveTransform.setValue(CIVector(CGPoint:rect.bottomLeft),
        forKey: "inputBottomLeft")
    perspectiveTransform.setValue(monaLisa,
             forKey: kCIInputImageKey)

The output image of the perspective transform filter now looks like this:




We can now use a source atop compositing filter to simply composite the perspective transformed Mona Lisa over the background:


    let composite = CIFilter(name: "CISourceAtopCompositing")!

    composite.setValue(backgroundImage,
        forKey: kCIInputBackgroundImageKey)
    composite.setValue(perspectiveTransform.outputImage!,
        forKey: kCIInputImageKey)

The result is OK, but the aspect ratio of the transformed image is wrong and The Mona Lisa is stretched:




Fixing Aspect Ratio with Perspective Correction

To fix the aspect ratio, we'll use Core Image's perspective correction filter. This filter works in the opposite way to a perspective transform: it takes four points (which typically map to the corners of an image subject to perspective distortion) and converts them to a flat, two dimensional rectangle.

We'll pass the corner coordinates of the white billboard into a perspective correction filter, which will return a version of the Mona Lisa cropped to the aspect ratio of the billboard as if we were looking at it head on:


    let perspectiveCorrection = CIFilter(name: "CIPerspectiveCorrection")!

    perspectiveCorrection.setValue(CIVector(CGPoint:rect.topLeft),
        forKey: "inputTopLeft")
    perspectiveCorrection.setValue(CIVector(CGPoint:rect.topRight),
        forKey: "inputTopRight")
    perspectiveCorrection.setValue(CIVector(CGPoint:rect.bottomRight),
        forKey: "inputBottomRight")
    perspectiveCorrection.setValue(CIVector(CGPoint:rect.bottomLeft),
        forKey: "inputBottomLeft")
    perspectiveCorrection.setValue(monaLisa,
        forKey: kCIInputImageKey)



A little bit of tweaking centres the crop rectangle over the Mona Lisa:


    let perspectiveCorrectionRect = perspectiveCorrection.outputImage!.extent
    let cropRect = perspectiveCorrection.outputImage!.extent.offsetBy(
        dx: monaLisa.extent.midX - perspectiveCorrectionRect.midX,
        dy: monaLisa.extent.midY - perspectiveCorrectionRect.midY)


    let croppedMonaLisa = monaLisa.imageByCroppingToRect(cropRect)

...and we now have an output image of a cropped Mona Lisa at the correct aspect ratio:



Finally, using the original perspective transform filter, we pass in the new cropped version rather than the original version to get a composite with the correct aspect ratio:


    perspectiveTransform.setValue(croppedMonaLisa,
        forKey: kCIInputImageKey)

    composite.setValue(perspectiveTransform.outputImage!,
        forKey: kCIInputImageKey)

Which gives the result we're probably after:




Core Image for Swift

Although my book doesn't actually cover detectors or perspective correction, Core Image for Swift, does take a detailed look at almost every aspect of still image processing with Core Image.

Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.




New Custom Core Image Filters


While writing Core Image for Swift, I've created quite a few custom Core Image filters all of which have been added to my Filterpedia app. To demonstrate the potential of filters based on custom kernel routines, here's a little roundup of some of the recent ones I've written.

If you're writing a photo editing app or want to add a post effect to your SceneKit or SpriteKit based game or app, writing custom kernels can really help your product stand out from the crowd. 

Color Directed Blur



This filter borrows a technique from the Kuwahara Filter: it creates four square "windows" around the current pixel in the North East, North West, South East and South West directions and calculates the average color of each. The shader then finds and returns the averaged color with the closest color match to the current pixel by measuring the distance in a 3D configuration space based on red, green and blue.

Color Directed Blur's final result is quite "painterly" - areas of similar colors are blurred, but edges are preserved. 

Difference of Gaussians



My Difference of Gaussians filter subtracts one Gaussian blurred version of the input image from another. Although the radius of both blurs is the same, the sigma differs between the two. The blurring weight matrices are generated dynamically and the blur filters are implemented as separable horizontal and vertical 9 pixel Core Image convolution kernels.

This filter uses existing Core Image filters in a composite filter rather than a custom kernel.
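
As a rough illustration of the idea, here's a simplified sketch that uses two CIGaussianBlur passes and a difference blend rather than the dynamically generated convolution kernels the filter actually uses (image is assumed to be an existing CIImage):

    let narrow = image.imageByApplyingFilter("CIGaussianBlur",
        withInputParameters: [kCIInputRadiusKey: 2])
    let wide = image.imageByApplyingFilter("CIGaussianBlur",
        withInputParameters: [kCIInputRadiusKey: 4])

    let differenceOfGaussians = narrow
        .imageByApplyingFilter("CIDifferenceBlendMode",
            withInputParameters: [kCIInputBackgroundImageKey: wide])
        .imageByCroppingToRect(image.extent)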

Polar Pixellate 




This filter uses code written by Jake Gundersen and Brad Larson from GPUImage. Rather than creating square pixels, it creates arc shaped pixels surrounding an arbitrary center.

Compound Eye



My Compound Eye filter simulates the eye of an insect or crustacean by creating multiple reflected images in a hexagonal grid. The lens size, lens refractive index and the background color can all be controlled. 

This filter is discussed at length in Core Image for Swift.

Refracted Text


My Refracted Text filter accepts a string parameter which it renders as a virtual lens over a background image. It uses Core Image's Height Field from Mask filter which is passed to a custom kernel that uses the height field's slope at every point to calculate a refraction vector. Both the refracted image and the background image can be individually blurred.

This filter is discussed at length in Core Image for Swift.

CMYK Registration Mismatch


Some color printing is based on four ink colors: cyan, magenta, yellow and black. If the registration of these colors isn't accurate, the final result displays blurred color ghosting. My CMYK Registration Mismatch filter simulates this effect by sampling cyan, magenta, yellow and black from surrounding pixels and merging them back to RGB colors for a final output.

CMYK Levels


My CMYK Levels filter applies individual multipliers to the cyan, magenta, yellow and black channels of an RGB image converted to CMYK. 

Core Image for Swift

If you want to learn more about creating custom Core Image filters, may I recommend my book, Core Image for Swift. It discusses different kernel types (color, warp and general) and is a great introduction to Core Image Kernel Language. There are template filters with all the boilerplate code to start writing custom kernels straight away. 

If you're already familiar with writing shader code, you may want to take a look at my open source Sweetcorn app - it creates kernel routines using a node based user interface and now supports both color and warp kernels:



Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.




Creating Procedural Normal Maps for SceneKit



Normal and bump mapping are techniques used in 3D graphics to fake additional surface detail on an object without adding any additional polygons. Whilst bump mapping uses simple greyscale images (where light areas appear raised), normal mapping, used by SceneKit, requires RGB images where the red, green and blue channels define the displacement along the x, y and z axes respectively.

Core Image includes tools that are ideal candidates to create bump maps: gradients, stripes, noise and my own Voronoi noise are examples. However, they need an additional step to convert them to normal maps for use in SceneKit. Interestingly, Model I/O includes a class that will do this, but we can take a more direct route with a custom Core Image kernel. 

The demonstration project for this blog post is SceneKitProceduralNormalMapping.

Creating a Source Bump Map


The source bump map is created using a CIGaussianGradient filter which is chained to a CIAffineTile:


    let radialFilter = CIFilter(name: "CIGaussianGradient", withInputParameters: [
        kCIInputCenterKey: CIVector(x: 50, y: 50),
        kCIInputRadiusKey : 45,
        "inputColor0": CIColor(red: 1, green: 1, blue: 1),
        "inputColor1": CIColor(red: 0, green: 0, blue: 0)
        ])

    let ciCirclesImage = radialFilter?
        .outputImage?
        .imageByCroppingToRect(CGRect(x:0, y: 0, width: 100, height: 100))
        .imageByApplyingFilter("CIAffineTile", withInputParameters: nil)
        .imageByCroppingToRect(CGRect(x:0, y: 0, width: 500, height: 500))

Creating a Normal Map Filter

The kernel to convert the bump map to a normal map is fairly simple: for each pixel, the kernel compares the luminance of the pixels to its immediate left and right and, for the red output pixel, returns the difference of those two values added to one and divided by two. The same is done for the pixels immediately above and below for the green channel. The blue and alpha channels of the output pixel are both set to 1.0:


        float lumaAtOffset(sampler source, vec2 origin, vec2 offset)
        {
            vec3 pixel = sample(source, samplerTransform(source, origin + offset)).rgb;
            float luma = dot(pixel, vec3(0.2126, 0.7152, 0.0722));
            return luma;
        }
            
        kernel vec4 normalMap(sampler image)
        {
            vec2 d = destCoord();
            
            float northLuma = lumaAtOffset(image, d, vec2(0.0, -1.0));
            float southLuma = lumaAtOffset(image, d, vec2(0.0, 1.0));
            float westLuma = lumaAtOffset(image, d, vec2(-1.0, 0.0));
            float eastLuma = lumaAtOffset(image, d, vec2(1.0, 0.0));
            
            float horizontalSlope = ((westLuma - eastLuma) + 1.0) * 0.5;
            float verticalSlope = ((northLuma - southLuma) + 1.0) * 0.5;
            
            return vec4(horizontalSlope, verticalSlope, 1.0, 1.0);
        }

Wrapping this up in a Core Image filter and bumping up the contrast returns a normal map:
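
A minimal sketch of what that wrapping might look like - the filter is registered under the name "NormalMap", which is the name used below, but the class itself and its registration (which would follow the usual CIFilterConstructor pattern) are assumptions on my part:

    class NormalMapFilter: CIFilter
    {
        var inputImage: CIImage?
        
        // normalMapKernelString is assumed to contain the kernel code listed above
        let normalMapKernel = CIKernel(string: normalMapKernelString)
        
        override var outputImage: CIImage?
        {
            guard let inputImage = inputImage else { return nil }
            
            return normalMapKernel?.applyWithExtent(inputImage.extent,
                roiCallback:
                {
                    (index, rect) in
                    // the kernel samples one pixel in each direction, so pad the region of interest
                    return rect.insetBy(dx: -1, dy: -1)
                },
                arguments: [inputImage])
        }
    }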




Implementing the Normal Map

A SceneKit material's normal content can be populated with a CGImage instance, so we can update the code above to chain the tiled radial gradients to the new filter and, should we want to, a further color controls filter to tweak the contrast:


    let ciCirclesImage = radialFilter?
        .outputImage?
        .imageByCroppingToRect(CGRect(x:0, y: 0, width: 100, height: 100))
        .imageByApplyingFilter("CIAffineTile", withInputParameters: nil)
        .imageByCroppingToRect(CGRect(x:0, y: 0, width: 500, height: 500))
        .imageByApplyingFilter("NormalMap", withInputParameters: nil)
        .imageByApplyingFilter("CIColorControls", withInputParameters: ["inputContrast": 2.5])
    
    let context = CIContext()

    let cgNormalMap = context.createCGImage(ciCirclesImage!,
                                            fromRect: ciCirclesImage!.extent)

Then, simply define a material with the normal map:


    let material = SCNMaterial()
    material.normal.contents = cgNormalMap

Core Image for Swift

All the code to accompany this post is available from my GitHub repository. However, if you'd like to learn more about how to wrap the Core Image Kernel Language code in a Core Image filter or explore the awesome power of custom kernels, may I recommend my book, Core Image for Swift.
Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.




Recreating Kai's Power Tools Goo in Swift

If, like me, you're old enough to fondly remember Kai's Power Tools Goo software, here's a fun little Swift and Core Image demo that will take you back to those heady days.

SwiftGoo uses a custom Core Image warp kernel to push an image's pixels around the screen based upon a user's touch. There are alternative techniques, such as mesh deformation which is used in PhotoGoo written by fellow Swifter Jameson Quave.

SwiftGoo's code isn't complicated - in fact, I rustled it up on a plane journey on my trusty MacBook Pro. 

A Goo Engine in Under Ten Lines

The engine that drives SwiftGoo is a short Core Image warp kernel. A warp kernel is designed solely for changing the geometry of an input image: given the coordinates of the pixel it's currently computing in the output image, it returns the coordinates of where the filter should sample from in the input image. For example, a warp kernel that returns the destination coordinates with the 'y' component subtracted from the image height will turn an image upside down.
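
As a concrete version of that upside-down example, a warp kernel really can be that small. A quick sketch (my own illustration, not code from SwiftGoo; the image height is passed in as a parameter):

    // samples each output pixel from the vertically mirrored coordinate in the input image
    let flipKernel = CIWarpKernel(string:
        "kernel vec2 flipVertical(float height)" +
        "{" +
        "    return vec2(destCoord().x, height - destCoord().y);" +
        "}")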

The SwiftGoo kernel accepts arguments for the effect radius, the force to be applied (i.e. how much distortion to apply), the location of the user's touch and the direction that the touch is travelling in.

It calculates the distance between the current pixel coordinate (destCoord()) and the touch location. If that distance is less than the effect radius, it returns the current coordinate offset by the direction multiplied by the distance smoothly interpolated between zero and the radius.  

The Core Image Kernel Code used to create the kernel is passed to a CIWarpKernel as a string, so the code required is:


    let warpKernel = CIWarpKernel(string:
        "kernel vec2 gooWarp(float radius, float force,  vec2 location, vec2 direction)" +
        "{ " +
        " float dist = distance(location, destCoord()); " +
        "  if (dist < radius)" +
        "  { " +
        "    float normalisedDistance = 1.0 - (dist / radius); " +
        "    float smoothedDistance = smoothstep(0.0, 1.0, normalisedDistance); " +
            
        "    return destCoord() + (direction * force) * smoothedDistance; " +
        "  } else { " +
        "  return destCoord();" +
        "  }" +
        "}")!

Implementation 

The kernel is executed inside my view controller's touchesMoved function. In here, I loop over the touch's coalesced touches and use the difference between the locationInView and previousLocationInView to create a CIVector - which is the direction of the touch movement. The radius and force are, depending on whether 3D touch is available, either hard coded or based on the touch's force.
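
That loop isn't reproduced in the post; here's a sketch of how the direction might be derived inside it (touch, event and imageView are assumed names, and any conversion between UIKit and Core Image coordinate spaces is omitted):

    for coalescedTouch in event?.coalescedTouchesForTouch(touch) ?? [touch]
    {
        let current = coalescedTouch.locationInView(imageView)
        let previous = coalescedTouch.previousLocationInView(imageView)
        
        // where the goo effect is centred...
        let location = CIVector(x: current.x, y: current.y)
        
        // ...and the direction in which the pixels are pushed
        let direction = CIVector(x: current.x - previous.x,
                                 y: current.y - previous.y)
    }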

With these values in place, the warp kernel defined above is invoked with:

    let arguments = [radius, force, location, direction]
    
    let image = warpKernel.applyWithExtent(
        accumulator.image().extent,
        roiCallback:
        {
            (index, rect) in
            return rect
        },
        inputImage: accumulator.image(),
        arguments: arguments)

Throughout the loop, the updated image is accumulated with a CIImageAccumulator and the final image is displayed using OpenGL with a GLKView.
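
The accumulation step itself is only a couple of lines; a sketch, assuming image is the warp kernel's (optional) output from the call above:

    // feed each warped frame back into the accumulator so successive touches build up
    if let image = image
    {
        accumulator.setImage(image)
    }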

The final project is, as always, available in the SwiftGoo GitHub repository.

Core Image for Swift

If you'd like to learn more about custom image filters using Core Image kernels or how to display filter output images using OpenGL, may I suggest my book, Core Image for Swift.

Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.




New Core Image Procedural Noise Generators for Filterpedia



One of the great advantages of Core Image Kernel Language being such a close relative of GLSL is that I can scour through sites such as Shadertoy and GLSL Sandbox and copy some of the amazing work produced by their contributors pretty much verbatim to create new Core Image filters. 

Two such examples are my CausticNoise (based on this Shadertoy project) and VoronoiNoise (based on this GLSL Sandbox project) filters. Both of these generator filters have now been included in Filterpedia and can be used like any other Core Image filter.

Caustic Noise


Caustic Noise generates a tileable and animatable noise reminiscent of the caustic patterns you may see at the bottom of a swimming pool. On its own, it creates a nice effect; however, I've chained it together with the height field refraction kernel that powers my RefractedTextFilter to create a CausticRefraction filter:



Since the filter is animatable by its inputTime attribute, it works very nicely with live video. My NemoCam project applies caustic refraction to a live video feed from an iOS device to give a wobbly, underwater effect:


Voronoi Noise


Voronoi noise (also known as Worley or cellular noise) is created by setting the color of each pixel based on its distance to the nearest randomly distributed point. This filter is also animatable (as you can see above) by incrementing its inputSeed attribute. 

Using my normal map filter, both of these noise generators make excellent sources for normal maps for SceneKit materials:


Core Image for Swift 

If you'd like to learn more about the power of Core Image for generating textures, including how to leverage Metal compute shaders to create other types of noise, may I recommend my book, Core Image for Swift.

Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.




Histogram Functions in Accelerate vImage


Whilst Core Image has an amazing range of image filters, the iOS SDK's Accelerate framework's vImage API includes some impressive histogram functions that CI lacks. These include histogram specification, pictured above, which applies a histogram (e.g. calculated from an image) to an image, and contrast stretch which normalises the values of a histogram across the full range of intensity values. 

vImage's API isn't super friendly to Swift developers, so this blog post demonstrates how to use these functions. These examples originate from a talk I gave at ProgSCon and you can see the slide show for the talk here.

Converting Images to vImage Buffers and Vice Versa

Much like Core Image has its own CIImage format, vImage uses its own format for image data: vImage_Buffer. Buffers can easily be created from a Core Graphics CGImage with a few steps. First of all, we need to define a format: this will be 8 bits per channel and four channels per pixel: red, green, blue and, lastly, alpha:


let bitmapInfo:CGBitmapInfo = CGBitmapInfo(
    rawValue: CGImageAlphaInfo.Last.rawValue)

var format = vImage_CGImageFormat(
    bitsPerComponent: 8,
    bitsPerPixel: 32,
    colorSpace: nil,
    bitmapInfo: bitmapInfo,
    version: 0,
    decode: nil,
    renderingIntent: .RenderingIntentDefault)

Given a UIImage, we can pass its CGImage to vImageBuffer_InitWithCGImage(). This method also needs an empty buffer which will be populated with the bitmap data of the image:


let sky = UIImage(named: "sky.jpg")!


var inBuffer = vImage_Buffer()

vImageBuffer_InitWithCGImage(
  &inBuffer,
  &format,
  nil,
  sky.CGImage!,
  UInt32(kvImageNoFlags))

Once we're done filtering, converting a vImage buffer back to a UIImage is just as simple. This code is nice to implement as an extension to UIImage as a convenience initialiser. Here we can accept a buffer as an argument, create a mutable copy and pass it to vImageCreateCGImageFromBuffer to populate a CGImage:


extension UIImage
{
  convenience init?(fromvImageOutBuffer outBuffer: vImage_Buffer)
  {
    var mutableBuffer = outBuffer
    var error = vImage_Error()
    
    let cgImage = vImageCreateCGImageFromBuffer(
      &mutableBuffer,
      &format,
      nil,
      nil,
      UInt32(kvImageNoFlags),
      &error)
    
    self.init(CGImage: cgImage.takeRetainedValue())
  }
}

Contrast Stretching

vImage filters generally accept a source image buffer and write the result to a destination buffer.  The technique above will create the source buffer, but we'll need to create an empty buffer with the same dimensions as the source as a destination:


let sky = UIImage(named: "sky.jpg")!

let imageRef = sky.CGImage

let pixelBuffer = malloc(CGImageGetBytesPerRow(imageRef) * CGImageGetHeight(imageRef))

var outBuffer = vImage_Buffer(
  data: pixelBuffer,
  height: UInt(CGImageGetHeight(imageRef)),
  width: UInt(CGImageGetWidth(imageRef)),
  rowBytes: CGImageGetBytesPerRow(imageRef))

With the inBuffer and outBuffer in place, executing the stretch filter is as simple as:


vImageContrastStretch_ARGB8888(
  &inBuffer,
  &outBuffer,
  UInt32(kvImageNoFlags))

...and we can now use the initialiser above to create a UIImage from the outBuffer:


let outImage = UIImage(fromvImageOutBuffer: outBuffer)

Finally, the pixel buffer created to hold the bitmap data for outBuffer needs to be freed:


free(pixelBuffer)

Contrast stretching can give some great results. This rather flat image:



Now looks like this:




Conclusion

vImage, although suffering from a rather unfriendly API, is a tremendously powerful framework. Along with a suite of histogram operations, it has functions for de-convolving images (e.g. de-blurring) and some interesting morphology functions (e.g. dilating with a kernel which can be used for lens effects such as star burst and bokeh simulation). 

My Image Processing for iOS slide deck explores different filters and the companion Swift project contains demonstration code.

The code for this project is available in the companion project to my image processing talk. I've also wrapped up this vImage code in a Core Image filter wrapper which is available in my Filterpedia app.

It's worth noting that the Metal Performance Shaders framework also includes these histogram operations.


Addendum - Contrast Stretching with Core Image

Many thanks to Simonas Bastys of Pixelmator who pointed out that Core Image does indeed have contrast stretching through its auto-adjustment filters.

With the sky image, Core Image can create a tone curve filter with the necessary values using this code:


let sky = CIImage(image: UIImage(named: "sky.jpg")!)!

let toneCurve = sky
    .autoAdjustmentFiltersWithOptions(nil)
    .filter({ $0.name == "CIToneCurve"})
    .first

The points create a curve that looks like this:



Which, when applied to the sky image:


if let toneCurve = toneCurve
{
    toneCurve.setValue(sky, forKey: kCIInputImageKey)
    
    let final = toneCurve.outputImage
}

....yields this image:



Thanks again Simonas!

FYI, the histograms for the images look like this (thanks to Pixelmator). vImage stretches the most!

Original image:


vImage contrast stretch


Core Image contrast stretch





vImage Histogram Functions Part II: Specification




In my last post, Histogram Functions in Accelerate vImage, I looked at using vImage's contrast stretching function to improve the contrast of an image. In this post, I'll look at another vImage histogram function: specification. This function allows us to apply an arbitrary histogram (typically calculated from a supplied image) to an image. In the example above, the histogram from the leftmost photograph is applied to the sunset image in the centre to create the final blue tinted image on the right.

Technically, the histogram that was calculated from the leftmost image has been specified as the new histogram for the centre image.

Histogram Calculation

The first step to recreate the example above is to calculate the histogram. We'll use the same technique described in my previous post to convert a supplied CGImage to a vImage buffer. However, this time we're not populating a target image; we're populating four arrays of unsigned integers that will contain the histogram values for each of the four colour channels (red, green, blue and alpha).

vImage's API shows its C roots: to calculate the histogram of inBuffer, we initialise four arrays, create pointers to those arrays and then create a pointer to an array of those pointers:

let alpha = [UInt](count: 256, repeatedValue: 0)
let red = [UInt](count: 256, repeatedValue: 0)
let green = [UInt](count: 256, repeatedValue: 0)
let blue = [UInt](count: 256, repeatedValue: 0)

let alphaPtr = UnsafeMutablePointer<vImagePixelCount>(alpha)
let redPtr = UnsafeMutablePointer<vImagePixelCount>(red)
let greenPtr = UnsafeMutablePointer<vImagePixelCount>(green)
let bluePtr = UnsafeMutablePointer<vImagePixelCount>(blue)

let rgba = [alphaPtr, redPtr, greenPtr, bluePtr]


let histogram = UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>>(rgba)

The histogram object, along with the source image, are passed to vImageHistogramCalculation_ARGB8888(_:_:_:):

vImageHistogramCalculation_ARGB8888(&inBuffer, histogram, UInt32(kvImageNoFlags))

The original arrays, alpha, red, green and blue, are now populated with the histogram data. 

Histogram Specification

Given the histogram data, we now need to specify it against another image (the sunset in our case). In my example, I've separated these two steps into two functions and my specification function requires a histogram tuple of the form:

histogram: (alpha: [UInt], red: [UInt], green: [UInt], blue: [UInt])

The vImage specification function, vImageHistogramSpecification_ARGB8888(_:_:_:_:), requires an unsafe mutable pointer to an unsafe pointer of the pixel counts for each bucket for each channel. The following code prepares histogram to pass into vImage (vImagePixelCount is a type alias for UInt):

let alphaPtr = UnsafePointer<vImagePixelCount>(histogram.alpha)
let redPtr = UnsafePointer<vImagePixelCount>(histogram.red)
let greenPtr = UnsafePointer<vImagePixelCount>(histogram.green)
let bluePtr = UnsafePointer<vImagePixelCount>(histogram.blue)


let rgba = UnsafeMutablePointer<UnsafePointer<vImagePixelCount>>([alphaPtr, redPtr, greenPtr, bluePtr])

Using the inBuffer and outBuffer technique from the previous post, we can now specify the histogram for a source image and populate a vImage buffer:

vImageHistogramSpecification_ARGB8888(&inBuffer, &outBuffer, rgba, UInt32(kvImageNoFlags))

Wrapping in a Core Image Filter


If all that logic seems a bit daunting to implement every time you want to calculate and specify a histogram, it can all be wrapped up in a Core Image filter. Take a look at my VImageFilters.swift file under Filterpedia to see it in action. The final filter can be used in the same way as any other Core Image filter with one caveat: vImage sometimes crashes when run in background threads, so Filterpedia always runs the filter in the main thread. 

Conclusion

Histogram specification is a great tool for colorising monochrome images and matching the colours of images for composition. vImage offers tools lacking in Core Image and, with a little work, its slightly arcane API can be hidden away inside a CIFilter wrapper. 

The demo code to accompany this post is available here.



Loading, Filtering & Saving Videos in Swift



If you've ever used an application such as Adobe's After Effects, you'll know how much creative potential there is in adding and animating filters to video files. If you've worked with Apple's Core Image framework, you may well have added filters to still images or even live video feeds, but working with video files and saving the results back to a device isn't a trivial coding challenge. 

Well, my VideoEffects app solves that challenge for you: VideoEffects allows a user to open a video file, apply a Core Image Photo Effects filter and write the filtered movie back to the saved photos album. 

VideoEffects Overview

The VideoEffects project consists of four main files; the two most interesting are the filtered video vendor and the filtered video writer, which we'll look at below.


The first action a user needs to take is to press "load" in the bottom left of the screen. This opens a standard image picker filtered for the movie media type. Once a movie is opened, it's displayed on the screen where the user can either play/pause or use the slider as a scrub bar. If any of the filters are selected, the save button is enabled which will save a filtered version of the video back to the file system.

Let's look at the vendor and writer code in detail.

Filtered Video Vendor

The first job of the vendor class is to actually open a movie from a URL supplied by the "load" button in the control panel:

  func openMovie(url: NSURL){
    player = AVPlayer(URL: url)
    
    guard let player = player,
      currentItem = player.currentItem,
      videoTrack = currentItem.asset.tracksWithMediaType(AVMediaTypeVideo).first else {
        fatalError("** unable to access item **")
    }
    
    currentURL = url
    failedPixelBufferForItemTimeCount = 0

    currentItem.addOutput(videoOutput)
    
    videoTransform = CGAffineTransformInvert(videoTrack.preferredTransform)
    
    player.muted = true
  }

There are a few interesting points here: firstly, I reset a variable named failedPixelBufferForItemTimeCount - this is a workaround for what I think is a bug in AVFoundation with videos that would occasionally fail to load with no apparent error. Secondly, to support both landscape and portrait videos, I create an inverted version of the video track's preferred transform.

The vendor contains a CADisplayLink which invokes step(_:):

  func step(link: CADisplayLink) {
    guard let player = player,
      currentItem = player.currentItem else {
        return
    }
    
    let itemTime = videoOutput.itemTimeForHostTime(CACurrentMediaTime())
    
    displayVideoFrame(itemTime)
    
    let normalisedTime = Float(itemTime.seconds / currentItem.asset.duration.seconds)
    
    delegate?.vendorNormalisedTimeUpdated(normalisedTime)
    
    if normalisedTime >= 1.0
    {
      paused = true
    }
  }
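The creation of the display link itself isn't shown above; setting one up might look something like this (a sketch - the class and property names are assumptions based on the post, so the real project may differ):

    // Sketch: a display link that drives step(_:) on the main run loop.
    lazy var displayLink: CADisplayLink =
    {
        let displayLink = CADisplayLink(
            target: self,
            selector: #selector(FilteredVideoVendor.step(_:)))

        displayLink.addToRunLoop(
            NSRunLoop.mainRunLoop(),
            forMode: NSDefaultRunLoopMode)

        return displayLink
    }()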

With the CADisplayLink, I calculate the time for the AVPlayerItem based on CACurrentMediaTime. The normalised time (i.e. between 0 and 1) is calculated by dividing the player item's time by the asset's duration; this is used by the UI components to set the scrub bar's position during playback. Creating a CIImage from the movie's frame at itemTime is done in displayVideoFrame(_:):

  func displayVideoFrame(time: CMTime) {
    guard let player = player,
      currentItem = player.currentItem
      where player.status == .ReadyToPlay && currentItem.status == .ReadyToPlay else {
        return
    }
    
    if videoOutput.hasNewPixelBufferForItemTime(time) {
      failedPixelBufferForItemTimeCount = 0
      
      var presentationItemTime = kCMTimeZero
      
      guard let pixelBuffer = videoOutput.copyPixelBufferForItemTime(
        time,
        itemTimeForDisplay: &presentationItemTime) else {
          return
      }
      
      unfilteredImage = CIImage(CVImageBuffer: pixelBuffer)
      
      displayFilteredImage()
    }
    else if let currentURL = currentURL where !paused {
      failedPixelBufferForItemTimeCount += 1
      
      if failedPixelBufferForItemTimeCount > 12 {
        openMovie(currentURL)
      }
    }
  }

Before copying a pixel buffer from the video output, I need to ensure one is available. If that's all good, it's a simple step to create a CIImage from that pixel buffer. However, if hasNewPixelBufferForItemTime(_:) fails too many times (12 seems to work), I assume AVFoundation has silently failed and I reopen the movie.

With the populated CIImage, I apply a filter (if there is one) and return the rendered result back to the delegate (which is the main view) to be displayed:

  func displayFilteredImage() {
    guard let unfilteredImage = unfilteredImage,
      videoTransform = videoTransform else {
        return
    }
    
    let ciImage: CIImage
    
    if let ciFilter = ciFilter {
      ciFilter.setValue(unfilteredImage, forKey: kCIInputImageKey)
      
      ciImage = ciFilter.outputImage!.imageByApplyingTransform(videoTransform)
    }
    else {
      ciImage = unfilteredImage.imageByApplyingTransform(videoTransform)
    }
    
    let cgImage = ciContext.createCGImage(
      ciImage,
      fromRect: ciImage.extent)
    
    delegate?.finalOutputUpdated(UIImage(CGImage: cgImage))
  }

The vendor can also jump to a specific normalised time. Here, rather than relying on the CACurrentMediaTime, I create a CMTime and pass that to displayVideoFrame(_:):

  func gotoNormalisedTime(normalisedTime: Double) {
    guard let player = player else {
      return
    }

    let timeSeconds = player.currentItem!.asset.duration.seconds * normalisedTime
    
    let time = CMTimeMakeWithSeconds(timeSeconds, 600)
    
    player.seekToTime(
      time,
      toleranceBefore: kCMTimeZero,
      toleranceAfter: kCMTimeZero)
    
    displayVideoFrame(time)
  }

Filtered Video Writer

Writing the result is not the simplest coding task I've ever done. I'll explain the highlights; the full code is available here.

The writer class exposes a function, beginSaving(player:ciFilter:videoTransform:videoOutput:), which begins the writing process.

Writing is actually done to a temporary file in the documents directory and given a file name based on the current time:

  let documentDirectory = NSFileManager
    .defaultManager()
    .URLsForDirectory(
        .DocumentDirectory,
        inDomains: .UserDomainMask)
    .first!

  videoOutputURL = documentDirectory
      .URLByAppendingPathComponent("Output_\(timeDateFormatter.stringFromDate(NSDate())).mp4")

  do {
    videoWriter = try AVAssetWriter(URL: videoOutputURL!, fileType: AVFileTypeMPEG4)
  }
  catch {
    fatalError("** unable to create asset writer **")
  }

The next step is to create an asset writer input using H264 and of the correct size:

    let outputSettings: [String : AnyObject] = [
      AVVideoCodecKey: AVVideoCodecH264,
      AVVideoWidthKey: currentItem.presentationSize.width,
      AVVideoHeightKey: currentItem.presentationSize.height]
    
    guard videoWriter!.canApplyOutputSettings(outputSettings, forMediaType: AVMediaTypeVideo) else {
      fatalError("** unable to apply video settings ** ")
    }
    
    videoWriterInput = AVAssetWriterInput(
      mediaType: AVMediaTypeVideo,
      outputSettings: outputSettings)

The video writer input is added to an AVAssetWriter:

    videoWriterInput = AVAssetWriterInput(
      mediaType: AVMediaTypeVideo,
      outputSettings: outputSettings)
    
    ifvideoWriter!.canAddInput(videoWriterInput!) {
      videoWriter!.addInput(videoWriterInput!)
    }
    else {
      fatalError ("** unable to add input **")

    }

The final set up step for initialising is to create a pixel buffer adaptor:

    let sourcePixelBufferAttributesDictionary = [
      String(kCVPixelBufferPixelFormatTypeKey) : Int(kCVPixelFormatType_32BGRA),
      String(kCVPixelBufferWidthKey) : currentItem.presentationSize.width,
      String(kCVPixelBufferHeightKey) : currentItem.presentationSize.height,
      String(kCVPixelBufferOpenGLESCompatibilityKey) : kCFBooleanTrue
    ]
    
    assetWriterPixelBufferInput = AVAssetWriterInputPixelBufferAdaptor(
      assetWriterInput: videoWriterInput!,

      sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)

We're now ready to actually start writing. I'll rewind the player to the beginning of the movie and, since that is asynchronous, call writeVideoFrames in the seek completion handler:

    player.seekToTime(
      CMTimeMakeWithSeconds(0, 600),
      toleranceBefore: kCMTimeZero,
      toleranceAfter: kCMTimeZero)
    {
      _ in self.writeVideoFrames()
    }

writeVideoFrames writes the frames to the temporary file. It's basically a loop over each frame, incrementing the frame with each iteration. The number of frames is calculated as:

    let numberOfFrames = Int(duration.seconds * Double(frameRate))
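Here, duration is the asset's duration and frameRate comes from its video track; as a sketch, they might be derived something like this:

    // Sketch: derive duration and frame rate from the current player item's asset.
    let duration = currentItem.asset.duration

    let frameRate = currentItem.asset
      .tracksWithMediaType(AVMediaTypeVideo)
      .first?
      .nominalFrameRate ?? 30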

There was an intermittent bug where, again, hasNewPixelBufferForItemTime(_:) failed. This is fixed with a slightly ugly sleep:

    NSThread.sleepForTimeInterval(0.05)

In this loop, I do something very similar to the vendor: convert a pixel buffer from the video output to a CIImage, filter it and render it. However, rather than rendering to a CGImage for display, I render back to a CVPixelBuffer to append to the asset writer's pixel buffer adaptor. The adaptor has a pixel buffer pool from which I take pixel buffers that are passed to the Core Image context as a render target:

    ciFilter.setValue(transformedImage, forKey: kCIInputImageKey)
    
    var newPixelBuffer: CVPixelBuffer? = nil
    
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &newPixelBuffer)
    
    self.ciContext.render(
      ciFilter.outputImage!,
      toCVPixelBuffer: newPixelBuffer!,
      bounds: ciFilter.outputImage!.extent,
      colorSpace: nil)

transformedImage is the filtered CIImage rotated based on the original asset's preferred transform.  

Now that the new pixel buffer contains the rendered filtered image, it's appended to the pixel buffer adaptor:

    assetWriterPixelBufferInput.appendPixelBuffer(
      newPixelBuffer!,

      withPresentationTime: presentationItemTime)

The final part of the loop kernel is to increment the frame:

    currentItem.stepByCount(1)
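Pulling those fragments together, the overall shape of writeVideoFrames is roughly this - a simplified sketch with assumed property names and a hard-coded frame rate; the rendering details are in the snippets above:

    func writeVideoFrames()
    {
      guard let currentItem = player?.currentItem,
        videoWriterInput = videoWriterInput else
      {
        return
      }

      let frameRate = 30.0 // assumption: derived from the video track in the real code
      let numberOfFrames = Int(currentItem.asset.duration.seconds * frameRate)

      for _ in 0 ..< numberOfFrames
      {
        // Workaround for hasNewPixelBufferForItemTime(_:) intermittently failing.
        NSThread.sleepForTimeInterval(0.05)

        // Copy the current frame from videoOutput, filter it, render it into a
        // pooled CVPixelBuffer and append it to the pixel buffer adaptor
        // (see the snippets above).

        // Advance the player item to the next frame.
        currentItem.stepByCount(1)
      }

      videoWriterInput.markAsFinished()

      // The video writer's finishWritingWithCompletionHandler(_:) then saves the
      // result, as shown below.
    }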

Once I've looped over each frame, the video write input is marked as finished and the video writer's finishWritingWithCompletionHandler(_:) is invoked. In the completion handler, I rewind the player back to the beginning and copy the temporary video into the saved photos album:

    videoWriter.finishWritingWithCompletionHandler {
      player.seekToTime(
        CMTimeMakeWithSeconds(0, 600),
        toleranceBefore: kCMTimeZero,
        toleranceAfter: kCMTimeZero)
      
      dispatch_async(dispatch_get_main_queue()) {
        UISaveVideoAtPathToSavedPhotosAlbum(
          videoOutputURL.relativePath!,
          self,
          #selector(FilteredVideoWriter.video(_:didFinishSavingWithError:contextInfo:)),
          nil)
      }
    }

...and once the video is copied, I can delete the temporary file:

  func video(videoPath: NSString, didFinishSavingWithError error: NSError?, contextInfo info: AnyObject)
  {
    if let videoOutputURL = videoOutputURL where NSFileManager.defaultManager().isDeletableFileAtPath(videoOutputURL.relativePath!)
    {
      try! NSFileManager.defaultManager().removeItemAtURL(videoOutputURL)
    }
    
    assetWriterPixelBufferInput = nil
    videoWriterInput = nil
    videoWriter = nil
    videoOutputURL = nil
    
    delegate?.saveComplete()
  }

Easy! 

Conclusion

I've been wanting to write this code for almost two years and it proved a lot more "interesting" than I anticipated. There are two slightly hacky workarounds in there, but the end result is the foundation for a tremendously powerful app. At every frame, the normalised time is available; this can be used to animate the attributes of filters and opens the way for a powerful After Effects style application.





Creating a Lens Flare Filter in Core Image



Everybody loves a lens flare, don't they? If you want to give your iOS game or app that distinctive JJ Abrams look, you may be a tad disappointed that Core Image doesn't come bundled with a nice lens flare filter and be scratching your head wondering how to create one - especially one with some nice hexagonal artefacts.

Well stop scratching and start flaring! My Core Image lens flare filter may just be the solution. 

Lens Flare Basics

Lens flare is caused by internal reflections between the lens elements of a camera lens. Often the flares appear as polygonal shapes caused by the shape of the iris. My filter is a simple implementation where the flare artefacts are all the same colour and are all hexagonal. 

The line of flares begins at a light source, passes through the centre of the image and ends at the point opposite the origin, reflected through the centre. The filter displays eight reflection artefacts - one at that furthest point and the remaining seven positioned anywhere along the line. The size of each is also user defined. 

To render the light source (at the origin), I use a Core Image Sunbeams generator. To render the hexagonal artefacts I use a little bit of Core Image Kernel Language (CIKL) magic:

Rendering Hexagonal Reflection Artefacts

A Core Image color kernel function is invoked for every pixel in a destination image. To render a hexagon, the kernel function needs to ask the question, "are the coordinates of the pixel currently being computed within a hexagon of some size at some point?". The kernel function returns a colour based on the answer to that question.

Luckily, I'm not the first person to ask this question and there's a fantastic solution at playchilla.com

You may notice that the hexagons in the image above aren't a solid colour: they're brighter towards the edges. To achieve this, I took the playchilla.com code and tweaked the return value to be a function of the distance from the hexagon centre. 

My final CIKL function, brightnessWithinHexagon(), accepts the co-ordinates of the current pixel, the co-ordinates of the centre of a hexagon and the size of a hexagon. It returns a normalised value that is the brightness the current pixel should be:


        float brightnessWithinHexagon(vec2 coord, vec2 center, float v)
        {
           float h = v * sqrt(3.0);
           float x = abs(coord.x - center.x); 
           float y = abs(coord.y - center.y); 
           float brightness = (x > h || y > v * 2.0) ? 
               0.0 : 
               smoothstep(0.5, 1.0, (distance(destCoord(), center) / (v * 2.0))); 
           return ((2.0 * v * h - v * x - h * y) >= 0.0) ? brightness  : 0.0;

        }

The main kernel function itself accepts the coordinates and sizes of eight hexagons, and the colour and brightness to render the hexagons. With those values, it progressively checks whether the current pixel is in any of the eight hexagons, adding the values with each step:


        kernel vec4 lensFlare(vec2 center0, vec2 center1, vec2 center2, vec2 center3, vec2 center4, vec2 center5, vec2 center6, vec2 center7,
           float size0, float size1, float size2, float size3, float size4, float size5, float size6, float size7, 
           vec3 color, float reflectionBrightness) 
        {
           float reflection0 = brightnessWithinHexagon(destCoord(), center0, size0); 
           float reflection1 = reflection0 + brightnessWithinHexagon(destCoord(), center1, size1); 
           float reflection2 = reflection1 + brightnessWithinHexagon(destCoord(), center2, size2); 
           float reflection3 = reflection2 + brightnessWithinHexagon(destCoord(), center3, size3); 
           float reflection4 = reflection3 + brightnessWithinHexagon(destCoord(), center4, size4); 
           float reflection5 = reflection4 + brightnessWithinHexagon(destCoord(), center5, size5); 
           float reflection6 = reflection5 + brightnessWithinHexagon(destCoord(), center6, size6); 
           float reflection7 = reflection6 + brightnessWithinHexagon(destCoord(), center7, size7); 
            
           return vec4(color * reflection7 * reflectionBrightness, reflection7); 

        }


Implementing as a Core Image Filter

With the CIKL written, the next step is to wrap the kernel in a Core Image filter. The filter has a lot of attributes to control the position and size of each reflection, plus the shared colour and brightness:


    var inputOrigin = CIVector(x: 150, y: 150)
    var inputSize = CIVector(x: 640, y: 640)
    
    var inputColor = CIVector(x: 0.5, y: 0.2, z: 0.3)
    var inputReflectionBrightness: CGFloat = 0.25
    
    var inputPositionOne: CGFloat = 0.15
    var inputPositionTwo: CGFloat = 0.3
    var inputPositionThree: CGFloat = 0.4
    var inputPositionFour: CGFloat = 0.45
    var inputPositionFive: CGFloat = 0.6
    var inputPositionSix: CGFloat = 0.75
    var inputPositionSeven: CGFloat = 0.8
    
    var inputReflectionSizeZero: CGFloat = 20
    var inputReflectionSizeOne: CGFloat = 25
    var inputReflectionSizeTwo: CGFloat = 12.5
    var inputReflectionSizeThree: CGFloat = 5
    var inputReflectionSizeFour: CGFloat = 20
    var inputReflectionSizeFive: CGFloat = 35
    var inputReflectionSizeSix: CGFloat = 40
    var inputReflectionSizeSeven: CGFloat = 20
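Somewhere in the filter's initialisation, the CIKL source needs to be compiled into a colour kernel. As a sketch, assuming kernelSourceString is a constant holding the concatenation of the brightnessWithinHexagon() and lensFlare() functions shown above:

    // Sketch: compile the CIKL source into a CIColorKernel for use in outputImage.
    let colorKernel = CIColorKernel(string: kernelSourceString)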

The filter's overridden outputImage getter calculates the position of reflectionZero to be opposite the inputOrigin:


    let center = CIVector(x: inputSize.X / 2, y: inputSize.Y / 2)
        
    let localOrigin = CIVector(x: center.X - inputOrigin.X, y: center.Y - inputOrigin.Y)

    let reflectionZero = CIVector(x: center.X + localOrigin.X, y: center.Y + localOrigin.Y)

The remaining seven reflections simply interpolate between the origin and reflectionZero:


    let reflectionOne = inputOrigin.interpolateTo(reflectionZero, value: inputPositionOne)
    let reflectionTwo = inputOrigin.interpolateTo(reflectionZero, value: inputPositionTwo)
    [...]
    let reflectionSeven = inputOrigin.interpolateTo(reflectionZero, value: inputPositionSeven)

The interpolateTo(_:value:) method is a simple extension to CIVector:


    func interpolateTo(target: CIVector, value: CGFloat) -> CIVector
    {
        return CIVector(
            x: self.X + ((target.X - self.X) * value),
            y: self.Y + ((target.Y - self.Y) * value))
    }

With the reflection centres defined, the kernel containing the CIKL can be executed and a little bit of blur applied:


        let arguments = [
            reflectionZero, reflectionOne, reflectionTwo, reflectionThree, reflectionFour, reflectionFive, reflectionSix, reflectionSeven,
            inputReflectionSizeZero, inputReflectionSizeOne, inputReflectionSizeTwo, inputReflectionSizeThree, inputReflectionSizeFour,
            inputReflectionSizeFive, inputReflectionSizeSix, inputReflectionSizeSeven,
            inputColor, inputReflectionBrightness]
        
        let lensFlareImage = colorKernel.applyWithExtent(
            extent,
            arguments: arguments)?.imageByApplyingFilter("CIGaussianBlur", withInputParameters: [kCIInputRadiusKey: 2])

This lens flare image is then composited over the sunbeams image generated by a CISunbeamsGenerator.
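As a sketch, that final compositing might be written like this (the sunbeams parameters here are illustrative, not the filter's actual values):

    // Sketch: generate sunbeams at the light source and composite the flares over them.
    let sunbeamsImage = CIFilter(
        name: "CISunbeamsGenerator",
        withInputParameters: [
            kCIInputCenterKey: inputOrigin,
            "inputSunRadius": 15])?
        .outputImage?
        .imageByCroppingToRect(extent)

    let compositedImage = lensFlareImage?.imageByApplyingFilter(
        "CISourceOverCompositing",
        withInputParameters: [kCIInputBackgroundImageKey: sunbeamsImage!])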

Conclusion

Yet again, the power of custom kernels shows that almost anything is possible with Core Image. Once I'd found the playchilla.com code, writing the filter took no more than an hour or so. 

The image above shows the output from my lens flare filter composited over a photograph of a sunset using CIAdditionCompositing. Since both SceneKit and SpriteKit support Core Image filters, this effect can easily be integrated into projects using either framework.

Of course, this filter has been added to Filterpedia:




If you'd like to learn more about writing custom Core Image kernels or the framework in general, may I recommend my book, Core Image for Swift.

Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.







Simulating Bokeh with Metal Performance Shaders


Bokeh is an optical effect where out of focus regions of an image take on the shape of a camera's iris - often a polygon such as a hexagon or octagon. This effect can be simulated with both vImage and the Metal Performance Shaders (MPS) framework using the morphology operator, dilate. In this post, I'll look at a simulation of bokeh using MPS.

Creating a Hexagonal Probe

Dilate operators use a probe (also known as a kernel or structure element) to define the shape that bright areas of an image expand into. In my recent talk, Image Processing for iOS, I demonstrated an example using vImage to create a starburst effect. In this project, I'll create a hexagonal shaped probe using the technique I used recently to create lens flare.

MPSImageDilate accepts a probe in the form of an array of floats which is treated as a two dimensional grid. Much like a convolution kernel, the height and width of the grid need to be odd numbers. So, the declaration of my MPS dilate operator is:


    lazy var dilate: MPSImageDilate =
    {
        var probe = [Float]()
        
        let size = 45
        let v = Float(size / 4)
        let h = v * sqrt(3.0)
        let mid = Float(size) / 2
        
        for i in 0 ..< size
        {
            for j in 0 ..< size
            {
                let x = abs(Float(i) - mid)
                let y = abs(Float(j) - mid)
                
                let element = Float((x > h || y > v * 2.0) ?
                    1.0 :
                    ((2.0 * v * h - v * x - h * y) >= 0.0) ? 0.0 : 1.0)
                
                probe.append(element)
            }
        }
        
        let dilate = MPSImageDilate(
            device: self.device!,
            kernelWidth: size,
            kernelHeight: size,
            values: probe)
   
        return dilate
    }()

Executing the Dilate

Metal performance shaders work with Metal textures, so to apply the dilate to a UIImage, I use MetalKit's texture loader to convert an image to a texture. The syntax is pretty simple:


    lazy var imageTexture: MTLTexture =
    {
        let textureLoader = MTKTextureLoader(device: self.device!)
        let imageTexture: MTLTexture
        
        let sourceImage = UIImage(named: "DSC00773.jpg")!
        
        do
        {
            imageTexture = try textureLoader.newTextureWithCGImage(
                sourceImage.CGImage!,
                options: nil)
        }
        catch
        {
            fatalError("unable to create texture from image")
        }
        
        return imageTexture
    }()

Because Metal's coordinate system is upside-down compared to Core Graphics, the texture needs to be flipped. I use another MPS shader, MPSImageLanczosScale, with a y scale of -1:


    lazy var rotate: MPSImageLanczosScale =
    {
        let scale = MPSImageLanczosScale(device: self.device!)
        
        var tx = MPSScaleTransform(
            scaleX: 1,
            scaleY: -1,
            translateX: 0,
            translateY: Double(-self.imageTexture.height))
        
        withUnsafePointer(&tx)
        {
            scale.scaleTransform = $0
        }
        
        return scale
    }()

The result of the dilation benefits from a slight Gaussian blur, which is also an MPS shader:


    lazy var blur: MPSImageGaussianBlur =
    {
        return MPSImageGaussianBlur(device: self.device!, sigma: 5)
    }()

Although MPS supports in-place filtering, I use an intermediate texture between the scale and the dilate. newTexture(width:height:) simplifies the process of creating textures:


    func newTexture(width width: Int, height: Int) -> MTLTexture
    {
        let textureDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(
            MTLPixelFormat.RGBA8Unorm,
            width: width,
            height: height,
            mipmapped: false)
        
        let texture = device!.newTextureWithDescriptor(textureDescriptor)

        return texture
    } 

...which is used to create the destination textures for the scale and blur shaders:


    let rotatedTexture = newTexture(width: imageTexture.width, height: imageTexture.height)

    let dilatedTexture = newTexture(width: imageTexture.width, height: imageTexture.height)

To begin using MPS shaders, a command queue and a buffer need to be created:


    let commandQueue = device!.newCommandQueue()
    let commandBuffer = commandQueue.commandBuffer()

...and now I'm ready to encode the scale and dilate, and pass that result to the blur. The blur targets the MTKView's current drawable:


    rotate.encodeToCommandBuffer(
        commandBuffer,
        sourceTexture: imageTexture,

        destinationTexture: rotatedTexture)

    dilate.encodeToCommandBuffer(
        commandBuffer,
        sourceTexture: rotatedTexture,
        destinationTexture: dilatedTexture)
    
    blur.encodeToCommandBuffer(
        commandBuffer,
        sourceTexture: dilatedTexture,
        destinationTexture: currentDrawable.texture)

Finally, the command buffer presents the MTKView's drawable and is committed for execution:


    commandBuffer.presentDrawable(imageView.currentDrawable!)

    commandBuffer.commit()

There's a demo of this code available here.

Bokeh as a Core Image Filter

The demo is great, but to use my bokeh filter is more general contexts, I've wrapped it up as a Core Image filter which can be used like any other filter. You can find this implementation in Filterpedia:
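Used from code, the wrapped filter behaves like any built-in filter; a sketch (the filter name and source image here are hypothetical - check Filterpedia for the registered name):

    // Sketch: using the Metal-backed bokeh wrapper like any other CIFilter.
    let bokehFilter = CIFilter(name: "HexagonalBokeh")!
    bokehFilter.setValue(CIImage(image: sourceImage)!, forKey: kCIInputImageKey)

    let filteredImage = bokehFilter.outputImage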





If you'd like to learn more about wrapping up Metal code in Core Image filter wrappers, may I suggest my book Core Image for Swift. Although I don't discuss MPS filters explicitly, I do discuss using Metal compute shaders for image processing. 

Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.



Simulating Depth-of-Field with Variable Bokeh in Core Image


Circular Masked Variable Bokeh
Hexagonal Masked Variable Bokeh
Continuing on from my recent post on simulating bokeh with Metal Performance Shaders, I thought it would be interesting to create a filter similar to Core Image's built-in Masked Variable Blur but applying bokeh intensity rather than blur based on the luminance of a second image.

I've created two filters, MaskedVariableHexagonalBokeh and MaskedVariableCircularBokeh to do just this. Both have an inputBokehMask attribute which is of type CIImage. Where the inputBokehMask is black no effect is applied, and where it is white, the maximum effect is applied:


If you're wondering about the photograph: yes, we do have a shop in London that sells sterling silver lids for your mustard, Marmite and other condiment jars. 

To produce these filters, I had to write my own dilate operator in Core Image Kernel Language. The approach is very similar to a masked variable box blur I wrote for my book, Core Image for Swift.

In a nutshell, my kernel iterates over a pixel's neighbours. While a box blur averages out the values of those neighbouring pixels, a dilate simply returns the brightest pixel. The great thing about writing my own dilate is that I can code in a formula for the probe rather than passing in an array. That formula is based on the luminance of the bokeh mask image - allowing for a variable bokeh intensity at each pixel. 

The only practical difference between the circular and hexagonal versions of my masked variable bokeh is that formula. My base class is the circular version and the hexagonal version simply overrides withinProbe() which is used in the construction of the CIKL:

    // MaskedVariableHexagonalBokeh
    override func withinProbe() -> String
    {
        return "float withinProbe = ((xx > h || yy > v * 2.0) ? 1.0 : ((2.0 * v * h - v * xx - h * yy) >= 0.0) ? 0.0 : 1.0);"
    }


    // MaskedVariableCircularBokeh
    func withinProbe() -> String
    {
        return "float withinProbe = length(vec2(xx, yy)) < float(radius) ? 0.0 : 1.0; "
    }

Easy!

With the clever use of a mask, these filters are excellent for simulating depth-of-field or tilt-shift with a more lens-like look than a simple Gaussian blur.
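For example, applying the hexagonal version with a radial gradient as the mask might look like this - a sketch where the gradient parameters and sourceImage are illustrative, while the class name and inputBokehMask attribute come from the post:

    let bokehFilter = MaskedVariableHexagonalBokeh()

    // A radial gradient mask: white at the centre fading to black at the edges,
    // giving maximum bokeh in the middle of the image and none at the borders.
    let mask = CIFilter(
        name: "CIRadialGradient",
        withInputParameters: [
            kCIInputCenterKey: CIVector(x: 320, y: 320),
            "inputRadius0": 50,
            "inputRadius1": 320,
            "inputColor0": CIColor(red: 1, green: 1, blue: 1),
            "inputColor1": CIColor(red: 0, green: 0, blue: 0)])?
        .outputImage?
        .imageByCroppingToRect(sourceImage.extent)

    bokehFilter.setValue(sourceImage, forKey: kCIInputImageKey)
    bokehFilter.setValue(mask, forKey: "inputBokehMask")

    let filteredImage = bokehFilter.outputImage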

Both of these filters have been added to Filterpedia. I've also added a radial gradient image to Filterpedia, which is a great way to illustrate the effect of these filters.

Core Image for Swift 

If you'd like to learn more about Core Image, may I recommend my book, Core Image for Swift.

Core Image for Swift is available from both Apple's iBooks Store or, as a PDF, from Gumroad. IMHO, the iBooks version is better, especially as it contains video assets which the PDF version doesn't.




A Histogram Display Component in Swift for iOS


While I'm writing some new content for Core Image for Swift discussing histograms, I've found Core Image's own histogram display filters slightly wanting. Always eager for a challenge, I thought I'd write my own component for displaying histograms and plug it into Filterpedia. The component has no dependencies on Filterpedia, so can easily be reused in any other projects. 

The component is named HistogramDisplay and uses vImage to calculate the histogram data for a given CGImage. The histogram data consists of four arrays (one array for each color channel) of 256 unsigned integers - each containing the number of pixels in the image with that particular value. 

With that data, I use my Hermite smoothed drawing code to create Bezier paths for the three color channels (ignoring alpha) and simply use those paths to draw onto CAShapeLayers. 

The vertical scale is based on the maximum pixel count for any color / tone "bucket". Since there are occasionally outliers that make the scaling a bit crazy, my component allows the user to set the vertical scale with a touch gesture. 
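As a sketch, the default scale might be derived from the tallest bucket across the colour channels (assuming the red, green and blue arrays produced by vImage, as above):

    // Sketch: the tallest bucket across the RGB channels defines the vertical scale.
    let maximumBucketCount = max(
        red.maxElement() ?? 0,
        green.maxElement() ?? 0,
        blue.maxElement() ?? 0)

    let verticalScale = CGFloat(maximumBucketCount)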

To open the histogram display as a popover in Filterpedia, toggle the UISwitch in the top right corner of the user interface:


Maybe not the most elegant user interface solution, but it works for my immediate requirements. 

The source code for HistogramDisplay is available here.