Working with UIGestureRecognizers

October 14th, 2010 Posted by: Collin - posted under: Featured » Tutorials

Hey iCoders! Today we are going to make a fun project that takes advantage of UIGestureRecognizers, which were introduced in iOS 3.2, back when it was still called iPhone OS. UIGestureRecognizer is an abstract class that several concrete classes extend, e.g. UITapGestureRecognizer and UIPinchGestureRecognizer. Today we are going to be building a simple photo board application. You will be able to add photos to your board, then move, rotate, and zoom them in and out around the board. We will also build in some simple physics to give a sense of the photos being thrown around the board. Here is a short video of what our final product will look like.

[Video demo of the finished photo board app]

GitHub

You can find this project on GitHub. Please let me know any issues you may have. Happy coding!

Creating the Project

Let's get a project ready that can handle all of this functionality. Open up Xcode and start a View-based iPad application called DemoPhotoBoard. Once the project window has come up, go to the Frameworks group in the left bar and right click on it. Select Add -> Existing Framework… A large modal view will come down listing all of the frameworks that can be added to the project. Add in the framework "MobileCoreServices". Now go into DemoPhotoBoardViewController.h and add in the following line:
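(This line appears as an image in the post because WordPress mangles code containing < or >; it is almost certainly the MobileCoreServices import, which exposes the kUTTypeImage constant we use later.)

#import <MobileCoreServices/MobileCoreServices.h>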

We might as well finish filling in the rest of the header now too. Don't worry about what these properties will be used for yet; just include them in the header for the moment. They will be what we use to keep track of our scaling, movement, and rotation. The addPhoto method will be called from a button we put in our interface in the next step.

@interface DemoPhotoBoardViewController : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate, UIPopoverControllerDelegate, UIGestureRecognizerDelegate> {
 
        CGFloat lastScale;
        CGFloat lastRotation;
 
        CGFloat firstX;
        CGFloat firstY;
}
 
-(IBAction)addPhoto:(id)sender;
 
@end

Filling in the XIB

The next thing we are going to do is add a toolbar and a toolbar button to our XIB. Double click on DemoPhotoBoardViewController.xib. Once it has opened, drag in a UIToolbar, then put a UIBarButtonItem with a Flexible Space element to the left of it. Make the UIBarButtonItem the system item "Add". Now if you right click on the File's Owner below, you should see a received action for the method "addPhoto". Connect this to the Add button we have. As a final step, select the UIToolbar and look at its size panel in the inspector. Make sure the Autosizing settings match the settings seen below so that things don't get screwy when the app is in other orientations.

Implementing the Photo Picker

Go ahead and open up DemoPhotoBoardViewController.m. The first thing we are going to do is implement the addPhoto: method. Insert the following code into your implementation.

-(IBAction)addPhoto:(id)sender {
 
        UIImagePickerController *controller = [[UIImagePickerController alloc] init];
        [controller setMediaTypes:[NSArray arrayWithObject:(NSString *)kUTTypeImage]];
        [controller setDelegate:self];
 
        UIPopoverController *popover = [[UIPopoverController alloc] initWithContentViewController:controller];
        [popover setDelegate:self];
        [popover presentPopoverFromBarButtonItem:sender permittedArrowDirections:UIPopoverArrowDirectionUp animated:YES];
}

This method creates a UIImagePickerController and tells it to only display images. Note the cast of kUTTypeImage to NSString*: the constant is a CFStringRef, which has a zero-cost conversion to NSString, and without the cast the compiler warns about an incompatible pointer type. Next we create a UIPopoverController, instantiated with our UIImagePickerController as the content view controller. We set the delegate to ourself and present it from the bar button item sender, which refers to the Add button in our interface. We know the popover will always be below our button, so we force the arrow direction to always point up. With this done, we can now run the app and see a UIImagePickerController appear in a UIPopoverController below our Add button.

Setting up the Gesture Recognizers

Now we need to implement the delegate method for our UIImagePickerController and add the image to our view when it is selected. We do this with the imagePickerController:didFinishPickingMediaWithInfo: delegate method. This method provides us a dictionary where the key @"UIImagePickerControllerOriginalImage" returns a UIImage object of the image the user selected. We are going to create a UIImageView and then put it inside a UIView holder. The reason we do this is that standard UIImageViews, despite being UIView subclasses, do not react to gesture recognizers added to them: userInteractionEnabled defaults to NO on UIImageView, so touches never reach the recognizers. Wrapping the image view in a plain UIView is the solution I have used in my testing. We are going to create four different kinds of UIGestureRecognizers and attach them to our holder view.
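(As a commenter, Steve, points out below, an alternative to the holder view, untested in this project, is to flip that flag directly on the image view:)

[imageview setUserInteractionEnabled:YES];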

We will first create a UIPinchGestureRecognizer. This object doesn't require any customization; we simply set its target to self with the scale: selector and assign this class as its delegate. With this done we add it to the holder view we created.

Next we create a UIRotationGestureRecognizer. This object doesn't require much customization either. We simply set it to call the rotate: method in our class and set its delegate.

Next we create the UIPanGestureRecognizer, which we set up to call the move: method upon being activated. We tell the pan gesture recognizer that we only care when a single touch is panning by setting the maximum and minimum number of touches to 1. We once again add this to the holder view we created.

The final UIGestureRecognizer we create is the UITapGestureRecognizer. It will be used to stop an object that has been "thrown" before it travels all the way to its stopping point; essentially, it catches an object while it is still moving. We set the required number of taps to 1 and set the delegate. We add this final UIGestureRecognizer to our holder view, then add the holder view as a subview of our main view. You can see the code below.

 
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
 
        UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
 
        UIView *holderView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, image.size.width, image.size.height)];
        UIImageView *imageview = [[UIImageView alloc] initWithFrame:[holderView frame]];
        [imageview setImage:image];
        [holderView addSubview:imageview];
 
        UIPinchGestureRecognizer *pinchRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scale:)];
        [pinchRecognizer setDelegate:self];
        [holderView addGestureRecognizer:pinchRecognizer];
 
        UIRotationGestureRecognizer *rotationRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotate:)];
        [rotationRecognizer setDelegate:self];
        [holderView addGestureRecognizer:rotationRecognizer];
 
        UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(move:)];
        [panRecognizer setMinimumNumberOfTouches:1];
        [panRecognizer setMaximumNumberOfTouches:1];
        [panRecognizer setDelegate:self];
        [holderView addGestureRecognizer:panRecognizer];
 
        UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapped:)];
        [tapRecognizer setNumberOfTapsRequired:1];
        [tapRecognizer setDelegate:self];
        [holderView addGestureRecognizer:tapRecognizer];
 
        [self.view addSubview:holderView];
}
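One loose end the post leaves open: the popover is still visible after a photo is picked. A minimal sketch for dismissing it, assuming you promote the UIPopoverController to a retained ivar (hypothetically named popover) instead of the local variable used in addPhoto:, would be to add this at the end of the same delegate method:

// Hypothetical: assumes the popover created in addPhoto: was kept in an ivar.
[popover dismissPopoverAnimated:YES];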

Let's also quickly define each of these methods as stubs so we can see them all firing as we touch an object that we add to the view. Add this code in and run the application; you can click around on the objects you add to the board and see the log messages appear in the console. In the Simulator you may not be able to activate all of these, because multi-touch is simulated in a pretty limited way (hold Option to simulate a two-finger pinch or rotation), but you can run the code and try this out.

-(void)scale:(id)sender {
        NSLog(@"See a pinch gesture");
}
 
-(void)rotate:(id)sender {
        NSLog(@"See a rotate gesture");
}
 
-(void)move:(id)sender {
        NSLog(@"See a move gesture");
}
 
-(void)tapped:(id)sender {
        NSLog(@"See a tap gesture");
}

UIGestureRecognizer Action Methods

All UIGestureRecognizers have a state property of type UIGestureRecognizerState. This is because a UIGestureRecognizer calls its action method repeatedly throughout the time a gesture is being performed. When the gesture first begins, the state of the calling UIGestureRecognizer is UIGestureRecognizerStateBegan; all subsequent calls have the state UIGestureRecognizerStateChanged; and the final call has the state UIGestureRecognizerStateEnded. We can use this to our advantage to do housekeeping in each of our gesture action methods. Another important thing to note is that the properties a recognizer reports about a gesture, such as scale for UIPinchGestureRecognizer and rotation for UIRotationGestureRecognizer, are always relative to the state of the gesture when it began, not to the previous call. So as a pinch is happening, the scale may be reported as 1.1, 1.2, 1.3, 1.4, and 1.5 on subsequent calls. These values are not incremental deltas; each is measured against the start of the gesture.
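A minimal sketch of the shape every one of our action methods will follow (the method name handleGesture: is hypothetical; our real methods are scale:, rotate:, move:, and tapped:):

-(void)handleGesture:(UIGestureRecognizer *)recognizer {

        if([recognizer state] == UIGestureRecognizerStateBegan) {
                // record starting values (e.g. the view's original center)
        }
        else if([recognizer state] == UIGestureRecognizerStateChanged) {
                // apply the incremental change to the view
        }
        else if([recognizer state] == UIGestureRecognizerStateEnded) {
                // reset bookkeeping for the next gesture
        }
}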

Implementing Scaling

The first thing we will do is implement the scale: method. The method receives an id sender, which will actually be a UIPinchGestureRecognizer object. If we look at the documentation for UIPinchGestureRecognizer, we will see that it includes a scale property that is a CGFloat. This scale is provided every time the scale: method is called, and, as described above, it is always relative to the beginning of the gesture rather than to the previous call. Because of this, as we make our photo grow, we must make sure we only scale the view by the change since the last call. For example, if the first scale: call reports a scale of 1.1 and the next call reports 1.2, we should scale by 1.1 and then by roughly another 1.1. To handle this we have the class property CGFloat lastScale. It keeps track of the last reported scale so that on the next call we can apply only the difference between the two.
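(A side note: the code below computes the incremental factor as 1.0 - (lastScale - currentScale), which is an additive approximation; applying 1.1 twice yields 1.21 rather than 1.2, so a tiny drift accumulates within a gesture. An exact alternative, not used in this post, is to divide, assuming lastScale has been initialized to 1.0 rather than left at its default of 0.0:)

// Hypothetical exact variant: the true incremental factor is the ratio of scales.
CGFloat scale = [(UIPinchGestureRecognizer *)sender scale] / lastScale;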

So now that we can tell how much to scale an item upon being pinched, we need to look at the mechanism that will actually scale the view. Every UIView has a CGAffineTransform property called transform, which describes much of the geometry of how the view is drawn. Using functions provided by Core Graphics (Quartz), we will change this property to scale the view as needed. Let's first take a look at our whole scaling method.

-(void)scale:(id)sender {
 
        [self.view bringSubviewToFront:[(UIPinchGestureRecognizer*)sender view]];
 
        // lastScale defaults to 0.0 before the very first pinch, which would make
        // the image jump; reset it to 1.0 whenever a new pinch begins.
        if([(UIPinchGestureRecognizer*)sender state] == UIGestureRecognizerStateBegan) {
 
                lastScale = 1.0;
        }
 
        if([(UIPinchGestureRecognizer*)sender state] == UIGestureRecognizerStateEnded) {
 
                lastScale = 1.0;
                return;
        }
 
        CGFloat scale = 1.0 - (lastScale - [(UIPinchGestureRecognizer*)sender scale]);
 
        CGAffineTransform currentTransform = [(UIPinchGestureRecognizer*)sender view].transform;
        CGAffineTransform newTransform = CGAffineTransformScale(currentTransform, scale, scale);
 
        [[(UIPinchGestureRecognizer*)sender view] setTransform:newTransform];
 
        lastScale = [(UIPinchGestureRecognizer*)sender scale];
}

The first thing we do in this method is bring the touched view to the front. We do this by accessing the view property of our sender, which in this case is the UIPinchGestureRecognizer. Next we check the gesture's state. When a pinch begins we reset lastScale to 1.0; the ivar defaults to 0.0, which would otherwise make the image jump on the very first pinch, as a commenter noticed below. If this is the final call of the pinch motion, we also reset lastScale to 1.0 and return, since a scale of 1.0 applied to a view does not change it; the ended pinch thus becomes a clean starting point for the next pinch sequence. On every other call we subtract the difference between the last reported scale and the current one from 1.0, giving (approximately) the scale change between this call and the last. We want to apply this to the current CGAffineTransform of the view this gesture recognizer is attached to, so we get the view's current transform and pass it into the CGAffineTransformScale() function. The first parameter is the current transform and the following two are the x and y scales to apply to it. The output is the new transform for the view. We apply it and record the reported scale in lastScale.

Implementing Rotation

The next thing we handle is rotation. This method has a very similar structure to the scaling method. We use another class property, lastRotation, and a slightly different Core Graphics function, but the code overall should make sense. Check it out below.

-(void)rotate:(id)sender {
 
        [self.view bringSubviewToFront:[(UIRotationGestureRecognizer*)sender view]];
 
        if([(UIRotationGestureRecognizer*)sender state] == UIGestureRecognizerStateEnded) {
 
                lastRotation = 0.0;
                return;
        }
 
        CGFloat rotation = 0.0 - (lastRotation - [(UIRotationGestureRecognizer*)sender rotation]);
 
        CGAffineTransform currentTransform = [(UIRotationGestureRecognizer*)sender view].transform;
        CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform,rotation);
 
        [[(UIRotationGestureRecognizer*)sender view] setTransform:newTransform];
 
        lastRotation = [(UIRotationGestureRecognizer*)sender rotation];
}

Implementing Movement

Now we handle movement, which is a bit different than the rotation and scaling transformations. Although you could move an object around using the transform property too, we are instead going to continuously reset the center of the view. We use the UIPanGestureRecognizer's translationInView: method to get how far the view has been moved relative to its starting point. When the gesture begins, we record the view's center in our class properties firstX and firstY. We then calculate the translated point by adding the original center coordinates to the translation reported in the view, and we set the view's center to this newly calculated point. You can see the code below.

-(void)move:(id)sender {
 
        [[[(UIPanGestureRecognizer*)sender view] layer] removeAllAnimations];
 
        [self.view bringSubviewToFront:[(UIPanGestureRecognizer*)sender view]];
        CGPoint translatedPoint = [(UIPanGestureRecognizer*)sender translationInView:self.view];
 
        if([(UIPanGestureRecognizer*)sender state] == UIGestureRecognizerStateBegan) {
 
                firstX = [[sender view] center].x;
                firstY = [[sender view] center].y;
        }
 
        translatedPoint = CGPointMake(firstX+translatedPoint.x, firstY+translatedPoint.y);
 
        [[sender view] setCenter:translatedPoint];
 
        if([(UIPanGestureRecognizer*)sender state] == UIGestureRecognizerStateEnded) {
 
                CGFloat finalX = translatedPoint.x + (.35*[(UIPanGestureRecognizer*)sender velocityInView:self.view].x);
                CGFloat finalY = translatedPoint.y + (.35*[(UIPanGestureRecognizer*)sender velocityInView:self.view].y);
 
                if(UIDeviceOrientationIsPortrait([[UIDevice currentDevice] orientation])) {
 
                        if(finalX < 0) {
 
                                finalX = 0;
                        }
                        else if(finalX > 768) {
 
                                finalX = 768;
                        }
 
                        if(finalY < 0) {
 
                                finalY = 0;
                        }
                        else if(finalY > 1024) {
 
                                finalY = 1024;
                        }
                }
 
                else {
 
                        if(finalX < 0) {
 
                                finalX = 0;
                        }
                        else if(finalX > 1024) {
 
                                finalX = 1024;
                        }
 
                        if(finalY < 0) {
 
                                finalY = 0;
                        }
                        else if(finalY > 768) {
 
                                finalY = 768;
                        }
                }
 
                [UIView beginAnimations:nil context:NULL];
                [UIView setAnimationDuration:.35];
                [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
                [[sender view] setCenter:CGPointMake(finalX, finalY)];
                [UIView commitAnimations];
        }
}

Implementing Momentum

The second half of the above method calculates the momentum the object will have after being released. This makes the object appear as if it is being thrown across a table and slowly coming to a stop. To do this, we use UIPanGestureRecognizer's velocityInView: method, which tells us the velocity of the pan touch within the provided view. With this we can do an easy position calculation for both the x and y coordinates of our object: we project the current position forward by the velocity multiplied by a time input, in this case .35 seconds, so a fling with a horizontal velocity of 1000 points per second carries the view an extra 350 points. While this is not truly momentum-and-friction based physics, it provides a nice effect for our interaction. With the final resting place calculated, we clamp it against the bounds of the screen for the current orientation, ensuring the object ends up within the visible surface of the iPad. The final step is to animate the view moving to this final location over the same .35 second period, using an ease-out curve so it decelerates.

Implementing Taps

We have one final gesture recognizer implementation to do, and that is the tapped: method. This method is used when a user taps an object that is in the midst of sliding after being thrown; we essentially want to stop the movement mid-slide. To do that, we tell the CALayer layer property of our view to cancel all current animations. The short piece of code can be seen below.

-(void)tapped:(id)sender {
 
        [[[(UITapGestureRecognizer*)sender view] layer] removeAllAnimations];
}
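Note: removeAllAnimations is declared on CALayer in the QuartzCore framework. If you see the warning "No '-removeAllAnimations' method found" (as a commenter below did), link against QuartzCore and add its import at the top of the file:

#import <QuartzCore/QuartzCore.h>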

Implementing the UIGestureRecognizerDelegate

If you run the code now you will be able to perform each of the gestures described in this post, but you will notice that you cannot perform several at the same time. For instance, you cannot pinch-zoom and rotate a view simultaneously. This is because we still need to implement the UIGestureRecognizerDelegate method gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer:. We want any gesture recognizers to be able to work together except for the pan gesture recognizer. To do this we simply check that the recognizer is not a UIPanGestureRecognizer and return YES in that case. See the short code below.

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
 
        return ![gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]];
}
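(As a commenter, John Reimer, notes below, UIGestureRecognizer's requireGestureRecognizerToFail: method is another handy tool for arbitrating between recognizers; it makes one recognizer wait until another has failed. A one-line sketch with hypothetical recognizer names:)

// Hypothetical: singleTap fires only after doubleTap has failed to recognize.
[singleTap requireGestureRecognizerToFail:doubleTap];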

GitHub

You can find this project on GitHub. Please let me know any issues you may have. Happy coding!

Follow me on Twitter


  • Zeeshan Khan

    nice man, great work, thanks for sharing.

  • http://www.shawnsbits.com Shawn Grimes

    Article is great. Thanks for sharing.

    I think there is an typo in your post, it is missing some code….

    In the section “Setting Up the Gesture Recognizers”, between the last paragraph and the one above it, i believe the code is missing, there is this line instead:
    “c7043c77473484a4d8f7ca3b58ef5ffa002″

  • http://sunflowerapps.com SFA

    Hi,
    I got the warning on this line:
    [controller setMediaTypes:[NSArray arrayWithObject:kUTTypeImage]];

    It says: “Passing argument 1 of ‘arrayWithObject’ from incompatible pointer type.

    Can you look into the issue?

    Thank you.

  • http://www.rightsprite.com Collin

    Hey SFA,

    The reason for this warning is that the constant kUTTypeImage is a CFString which Objective C doesn’t see as an object. CFString is a foundation level string. These CFString have a “zero cost” conversion to NSString. So the way we can solve this is by simply casting the kUTTypeImage constant to NSString like so.

    [controller setMediaTypes:[NSArray arrayWithObject:(NSString*)kUTTypeImage]];

    Wouldn’t surprise me to see Apple make this something the compiler recognizes in future releases. Thanks for reading and happy coding.

  • David

    It’s Git or git, never GIT.

    Similarly, it’s Xcode, never xCode.

  • huanvn

    Can we use this tutorial on IPhone? I have tried, but failed because there’re no definition for such ****Recognizer class :( Or am I missing something?

    Any comment?

  • huanvn

    Oh, about last comment. I made some typos mistake on class name :(

    continue coding.. ^^

  • huanvn

    I found my own answer. IPhone doesn’t support UIPopoverController –> this can’t run on IPhone :( Will keep in mind the idea of gestures.

    Thanks for sharing ^^

  • sulfide

    It’s called get a life.

  • http://thekinetik.com PRCode

    Nice! thanks you!

  • Andrea

    HI… awesome tutorial…
    I would put my addimage in an imageview… how can do it?
    because i have other object in the view and this code create a new subview that stay in top of all… tnk you

  • http://www.skilline.com ffa

    it’s too dificulte but
    thank you

  • Tommy Myers

    i get an error about the kUTTypeImage

  • banana

    from someone who is probably incapable of understanding why attention to detail is important

  • banana

    At the end of the Implementing the Photo Picker section it suggests we can run the app to test the image picker, but we can’t because we have not implemented the protocols for UINavigationControllerDelegate and UIPopoverControllerDelegate.

  • banana

    nvm it does run despite the warnings.

  • http://www.i-brr.com John Reimer

    Um…. actually UIGestureRecognizer was introduced in iOS 3.2

    and the -requireGestureRecognizerToFail: method can be pretty handy in making sure that you don’t trigger more than one gesture at a time….

  • http://www.smithmedia.co.uk clive

    Doesnt seem to work on iphone as you cant clear the selector window when it loads up the image.


  • Gus Gorman

    Just starting to learn this. Thanks for posting this.

    I’m receiving the warning “No ‘-removeAllAnimations’ method found” both times its used and my emulator is crashing whenever I try to add an image with the add button (It’s receiving the signal “SIGABRT”) Any clue as to what could be causing either of these issues?

  • Gus Gorman

    I found the problem. I had a typo on this line: UIImageView *imageview = [[UIImageView alloc] initWithFrame:[holderView frame]];

    I’d written: UIImageView *imageview = [[UIView alloc] initWithFrame:[holderView frame]];

    Thanks again!

  • http://cairouniversity Ali Mahmoud

    the first zoom when click by finger and use the second , for the first time it zoom in as if the image jumped any suggestion

  • http://cairouniversity Ali Mahmoud

    another question
    the zooming and panning is so fast ,any suggestion to slow it down please

  • Steve

    Uiimageview can be used … Be default userinteractionenabled is NO. Jet set it to yes…

  • Aaron Griffith

    I think you’re finalX and finalY values are transposed in the else ifs of your else clause for orientation.