
I recently implemented a swipe gesture for Fuse. The gesture recognition itself, though complex, was not hard to implement; I had most of the needed pieces already. Designing a user-friendly, declarative API for the typical use-cases was a bit more involved, however. Here’s what I did.

With this gesture I revisited and generalized my earlier solution for edge swiping.

The two use-cases

There are two basic use-cases for a swipe gesture: to reveal an action panel behind an item or to remove something from a list. Obviously one technically “swipes” in a scrollable region as well, but in terms of implementation that’s a distinct gesture (though it has a role here I’ll explain later).

Revealing a panel with a side-swipe is common in mobile apps. Given a list of items, such as a chat history, you can swipe left to reveal a delete action. To provide good feedback, the amount of the panel revealed is tied directly to how far the user moves their finger. This gives the feeling of sliding a real object out of the way.

The other use-case is swiping to remove an item. On both iOS and Android this is used in the app listing to close an app: flinging the item up/down or left/right depending on the device orientation. In this gesture the item keeps moving away and disappears when the user releases their finger, but typically only if they swipe fast enough.

One gesture we don’t cover yet is the combined behaviour: first swipe to reveal an action and then continue swiping to invoke a default action. I personally question whether this is a good UX: it can lead to invoking an action when the user just meant to reveal a panel. Nonetheless, it shows up commonly so I’ll implement it to give designers the option.

Components of a swipe

The “reveal” use-case posed a bit of a structural issue. It’s not simply a gesture one responds to; it also requires visual feedback during the swipe and upon activation. It became one of those situations where the API design is trickier than the implementation.

We settled on a system that requires a distinct swipe declaration and a variety of dependent triggers.

<Swipe Type="Active" Direction="Left" LengthNode="ActionPanel" ux:Name="ActionSwipe"/>
<SwipingAnimation Source="ActionSwipe">
    <Move X="-1" RelativeTo="ActionPanel"/>
</SwipingAnimation>

The SwipingAnimation defines what happens while the panel is being revealed. Both the movement and the length of the swipe are tied to the size of the ActionPanel (this avoids the need for explicit values, though those are also possible).

Instead of a reveal panel, specified with Type="Active", we can use a Simple gesture. A <Swiped/> trigger then responds when the user completes a swipe. An animation is generally still required, since the user should see some feedback during the swipe.

Internal setup and conflicts

A big issue with gestures is that we can’t know what gesture the user is making until after they’ve moved their finger a bit. Are they going to swipe left, right, maybe scroll the view up/down, or simply make a tapping gesture?

In Fuse we created a soft-capturing system to deal with this. Every gesture that could be possible at the moment tracks the user’s finger until one of them positively identifies the gesture. For swiping this means a finger movement far enough in the direction of the swipe.
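The soft-capture idea can be sketched roughly as follows. The class names, the threshold value, and the arbitration loop here are illustrative assumptions, not Fuse’s actual internals:

```python
class SwipeRecognizer:
    """Candidate recognizer that positively identifies the gesture once
    the finger has moved far enough along its axis. Names and the
    threshold are hypothetical, for illustration only."""
    def __init__(self, name, direction, threshold=10.0):
        self.name = name
        self.direction = direction  # unit vector, e.g. (-1, 0) = leftward
        self.threshold = threshold

    def test(self, dx, dy):
        # Progress along the swipe axis; movement against or across
        # the axis does not count toward identification.
        along = dx * self.direction[0] + dy * self.direction[1]
        return along >= self.threshold

def arbitrate(recognizers, moves):
    """Soft capture: every possible gesture tracks the finger until one
    positively identifies itself; that one wins the hard capture."""
    dx = dy = 0.0
    for mx, my in moves:
        dx += mx
        dy += my
        for r in recognizers:
            if r.test(dx, dy):
                return r.name  # hard capture: other candidates are cancelled
    return None  # never identified (e.g. it was just a tap)

candidates = [SwipeRecognizer("swipe-left", (-1, 0)),
              SwipeRecognizer("swipe-right", (1, 0)),
              SwipeRecognizer("scroll-up", (0, -1))]
```

Until the threshold is crossed, no candidate has committed, which is exactly why nothing visible happens during those first few pixels of movement.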

This unfortunately causes a delay in responding to gestures, one noticeable in all mobile apps. The alternative is for every gesture to start responding immediately and then revert if another takes over. I tried this approach, and it results in the display shaking, since the user’s initial finger press tends to jitter in several directions.

For the swipe gesture I had an extra requirement to satisfy: if the user starts swiping left but then swipes right, they should be able to switch between the two swipe gestures. For example, if the left and right sides each have a panel to reveal, the user can start revealing the left one and then switch to revealing the right one without lifting their finger.

For this, and other performance reasons, an element has only one swipe gesture recognizer attached. All of the possible swipes on the one element are handled by a single controller that decides which swipe region is active and coordinates the transition between regions.
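A minimal sketch of such a controller might look like this, with a signed offset standing in for the finger’s horizontal travel; the names and the sign convention are assumptions, not Fuse’s code:

```python
class SwipeRegionController:
    """One controller per element coordinates both swipe regions, so the
    user can slide from revealing one panel into revealing the other
    without lifting their finger. Hypothetical sketch."""
    def __init__(self, left_length, right_length):
        self.left_length = left_length    # length of the Direction="Left" swipe
        self.right_length = right_length  # length of the Direction="Right" swipe
        self.offset = 0.0                 # signed horizontal finger travel

    def move(self, dx):
        self.offset += dx

    @property
    def active(self):
        # The sign of the travel decides which swipe region is active.
        if self.offset < 0:
            return "left"
        if self.offset > 0:
            return "right"
        return None

    @property
    def progress(self):
        # Fraction of the active region's length covered, clamped to [0, 1];
        # this is what the SwipingAnimation would play back.
        if self.offset < 0:
            return min(-self.offset / self.left_length, 1.0)
        if self.offset > 0:
            return min(self.offset / self.right_length, 1.0)
        return 0.0
```

Because one controller owns the full signed offset, crossing zero transitions cleanly from one region to the other instead of fighting between two independent recognizers.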

This is similar to the gesture handler for clicks, which also handles tap, double-click, double-tap, pointer pressing and long-press. I’m hoping this is sufficient for complex apps. It’s possible that I might need to switch to a model where all gestures, across all elements, are handled by a single handler.

That velocity thing again

One more feature of swipe panels is that a simple flick opens the panel. The user shouldn’t need to drag their finger the full distance; swiping fast enough is sufficient to open it. Fortunately I already had code to detect finger velocity, so I added a couple of if statements with a velocity threshold to detect these flicks.
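The release logic might look something like this; the threshold value and the half-way snap rule are assumptions for illustration:

```python
def should_open(progress, velocity, flick_threshold=300.0):
    """Decide whether releasing the finger opens the panel.
    progress: fraction revealed in [0, 1]; velocity: px/s along the
    swipe direction (positive = opening). Threshold is made up."""
    if velocity >= flick_threshold:
        return True   # a fast flick opens regardless of distance covered
    if velocity <= -flick_threshold:
        return False  # a fast flick the other way always closes
    return progress >= 0.5  # otherwise snap to the nearer rest state

def estimate_velocity(samples):
    """Crude velocity estimate from recent pointer samples (t in
    seconds, x in px); real trackers smooth over several samples."""
    (t0, x0), (t1, x1) = samples[0], samples[-1]
    return (x1 - x0) / (t1 - t0) if t1 > t0 else 0.0
```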

Since the opening is no longer tied to the user’s finger, something else needs to finish the animation. Here again I could use our existing pseudo-physics system to do the job: the opening/closing animation uses the same code as the scroller’s end snapping.
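That snap could be as simple as an exponential approach to the rest position, in the spirit of a scroller’s end snapping; the constants here are invented for the sketch:

```python
def snap(position, target, dt, speed=12.0):
    """One pseudo-physics step: move a fixed fraction of the remaining
    distance toward the rest position. `speed` is an assumed constant."""
    return position + (target - position) * min(speed * dt, 1.0)

def animate_to(position, target, dt=1 / 60, eps=0.5):
    """Run snap steps at a fixed frame time until the panel is
    effectively at rest."""
    while abs(target - position) > eps:
        position = snap(position, target, dt)
    return position
```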

All together we get a visually pleasing, responsive, and versatile swipe gesture.
