Multitouch Gains Momentum

Windows 7 support will push touch technology to the next level.

InformationWeek Staff, Contributor

November 12, 2009


How you interact with computing devices is about to change. Don't think so? Then consider Apple iPhone, BlackBerry Storm, Nokia N97, Palm Pre, and Motorola CLIQ. They're all touch-screen devices with multitouch, gesture-based user interfaces where you use your fingers to manipulate objects on the screen, with no mouse or keyboard required.

People love them. Shipments of smartphones with touch-screens will more than double in 2009, predicts market-research firm In-Stat. The shift to touch-screens won't stop with mobile phones, as people get used to doing certain tasks with their fingers. And that trend is getting a boost from Microsoft Windows 7, which supports multitouch and gestures.

As touch-screen hardware and tools for developing multitouch applications become more prevalent, businesses of all kinds will want to leverage the technology. Get ready to see it in all sorts of apps, including those used in retail, stock trading, manufacturing, inventory management, healthcare, appliance repair, and delivery services.

Touch-sensing interfaces aren't new -- operations as diverse as the U.S. Postal Service and McDonald's use them. But these systems are based on users making a single point of contact with the screen, and they don't support gestures. Compare that with the emerging class of multitouch sensing, which lets users interact with devices using more than one finger and supports direct manipulation such as drag-and-drop. For instance, users pinch thumb and forefinger together to shrink a photo.

Users of MacBook Pro, with its multitouch trackpad for manipulating objects, are familiar with multitouch, but the technology is just gaining traction on other platforms. Besides Microsoft, Qt Software is supporting it with QTouchEvent and QGestureEvent classes in the Qt 4.6 framework.

Microsoft's Approach

Microsoft incorporated multitouch and gesture support into Windows 7 through its WM_TOUCH and WM_GESTURE window messages. They provide multitouch support in the following ways:

  • Gestures that are trackable as single and multicontact events on digitizers -- the hardware that converts touch input into digital information.

  • High-resolution support that lets touch-target elements -- what the user touches on the screen -- render more cleanly, making them easier to access.

  • Window management that includes taskbar buttons, Jump Lists, Aero Peek/Aero Snap, and other means of controlling your desktop, managing windows, and launching touch-based apps.

  • A touch keyboard whose keys glow as you touch letters on the onscreen soft keyboard.

  • Support for Media Center, Internet Explorer, and other programs that have been optimized for touch features such as panning, scrolling, and zooming.

Since gestures are fundamental to the Windows 7 OS, there's no software-based shell (as there is in Microsoft Surface, the company's tabletop computer that supports gestures) needed to enable these features. All an application needs for multitouch behavior is a digitizer that recognizes multitouch, such as Gateway's ZX Series or Hewlett-Packard's TouchSmart. But there are extra steps involved: For example, you have to ensure that Jump List items on Windows 7 multitouch machines are spaced farther apart than those on Windows 7 machines that don't support multitouch.
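
Detecting such a digitizer at runtime is straightforward. Here's a minimal C# sketch calling GetSystemMetrics with the digitizer metrics Windows 7 added; the constant values come from WinUser.h, and the TouchCapabilities class name is ours:

using System.Runtime.InteropServices;

static class TouchCapabilities
{
    const int SM_DIGITIZER = 94;       // bit flags describing digitizer support
    const int SM_MAXIMUMTOUCHES = 95;  // number of simultaneous contacts supported
    const int NID_MULTI_INPUT = 0x40;  // digitizer supports multiple contacts
    const int NID_READY = 0x80;        // digitizer is ready to receive input

    [DllImport("user32.dll")]
    static extern int GetSystemMetrics(int nIndex);

    public static bool IsMultiTouchReady()
    {
        int caps = GetSystemMetrics(SM_DIGITIZER);
        return (caps & NID_MULTI_INPUT) != 0 && (caps & NID_READY) != 0;
    }

    public static int MaxTouches()
    {
        return GetSystemMetrics(SM_MAXIMUMTOUCHES);
    }
}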

Consider the following when designing applications that use multitouch:

  • Create big targets surrounded by white space to avoid click confusion.

  • Avoid hover actions, which are prone to misinterpretation depending on the quality and responsiveness of the hardware.

  • Consider the form factor of the machine -- netbooks' smaller screens and lower resolutions could affect applications' default rendering and behavior.

  • Target applications that consume information, rather than ones focused on creating it.

Multitouch is also good for quick action and input tasks such as reordering a list. At this point, we wouldn't recommend building a transactional data entry application around it.

To reduce misinterpretation of basic gestures, Microsoft defined a default set of gestures in Windows 7 with standard responses. Drag your finger downward and you get a scrolling response. Tap a menu header and a menu drops down in an application or pops up in the Start menu.

Windows 7 default gestures behave similarly across applications and in the OS. (See Table 1 for a list of gestures that Windows 7 supports.)

Table 1: Gestures supported by Windows 7.

Microsoft has supplemented this with a "Good, Better, Best" approach to implementation of multitouch in applications; see Table 2.

Table 2: Good, Better, Best approaches to multitouch development.

Multitouch is a great feature of Windows 7 for building rich user experiences. With default legacy support for scrolling, panning, and zooming, applications get an improved experience without any developer effort. Using the Windows API and convenient interoperability libraries, you can extend the experience your applications deliver even further.

Developer APIs

There are several options for writing multitouch-enabled code: the Win32 SDK for unmanaged code; the Windows 7 Integration Library (available on CodePlex) for managed code; and, once .Net 4.0 ships, Windows Presentation Foundation controls such as ScrollViewer with native multitouch support.

By default, all applications receive WM_GESTURE messages. In normal scenarios, the operating system sends up to about 30 WM_GESTURE messages per second.

WM_GESTURE carries two parameters. wParam provides information identifying the gesture command and gesture-specific argument values -- the same information passed in the ullArguments member of the GESTUREINFO structure. lParam provides a handle to that gesture information, which is retrieved by calling the GetGestureInfo function.

An application that processes the message should return zero and close the gesture information handle with CloseGestureInfoHandle; one that doesn't process it should pass the message to DefWindowProc, which closes the handle on the application's behalf. Doing neither causes the application to leak memory, because the touch input handle is never closed and its associated process memory is never freed.

The GESTUREINFO structure, which stores the actual gesture interaction, looks like this:

typedef struct _GESTUREINFO {
   UINT cbSize;            // size of the structure, in bytes
   DWORD dwFlags;          // state of the gesture (begin, inertia, end)
   DWORD dwID;             // identifier of the gesture command
   HWND hwndTarget;        // window being targeted by the gesture
   POINTS ptsLocation;     // coordinates associated with the gesture
   DWORD dwInstanceID;     // internally used identifier
   DWORD dwSequenceID;     // internally used identifier
   ULONGLONG ullArguments; // gesture-specific arguments
   UINT cbExtraArgs;       // size, in bytes, of any extra arguments
} GESTUREINFO, *PGESTUREINFO;

Table 3 lists the gesture commands indicated by the dwID value in the GESTUREINFO structure.

Table 3: Gesture commands indicated by the dwID value.

For an example of unmanaged code that intercepts the WM_GESTURE message and handles the gesture, see Listing One. Keep in mind that an application always receives WM_GESTURE messages unless it opts out of them (for example, by registering for raw touch input, which replaces gesture messages with WM_TOUCH).

LRESULT DecodeGesture(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam){
   // Create a structure to populate and retrieve the extra message info.
   GESTUREINFO gi;
   ZeroMemory(&gi, sizeof(GESTUREINFO));
   gi.cbSize = sizeof(GESTUREINFO);
   BOOL bResult = GetGestureInfo((HGESTUREINFO)lParam, &gi);
   BOOL bHandled = FALSE;

   if (bResult){
      // Now interpret the gesture.
      switch (gi.dwID){
         case GID_ZOOM:
            // Code for zooming goes here
            bHandled = TRUE;
            break;
         case GID_PAN:
            // Code for panning goes here
            bHandled = TRUE;
            break;
         case GID_ROTATE:
            // Code for rotation goes here
            bHandled = TRUE;
            break;
         case GID_TWOFINGERTAP:
            // Code for two-finger tap goes here
            bHandled = TRUE;
            break;
         case GID_PRESSANDTAP:
            // Code for roll over goes here
            bHandled = TRUE;
            break;
         default:
            // A gesture was not recognized
            break;
      }
   }else{
      DWORD dwErr = GetLastError();
      if (dwErr > 0){
         // MessageBoxW(hWnd, L"Could not retrieve a GESTUREINFO structure.",
         //             L"Error!", MB_OK);
      }
   }
   if (bHandled){
      // The message was processed, so close the gesture
      // information handle and return 0.
      CloseGestureInfoHandle((HGESTUREINFO)lParam);
      return 0;
   }else{
      // Not processed: let DefWindowProc close the handle.
      return DefWindowProc(hWnd, message, wParam, lParam);
   }
}

Listing One
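
You can also intercept WM_GESTURE from managed code without a helper library by overriding WndProc in Windows Forms and declaring the interop signatures yourself. The sketch below is illustrative rather than production interop code -- the GestureForm class is hypothetical, the constants come from WinUser.h, and the struct layout mirrors the native GESTUREINFO shown earlier:

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class GestureForm : Form   // hypothetical form for illustration
{
    const int WM_GESTURE = 0x0119;  // from WinUser.h
    const int GID_ZOOM = 3, GID_PAN = 4, GID_ROTATE = 5;

    // Managed mirror of the native GESTUREINFO structure.
    [StructLayout(LayoutKind.Sequential)]
    struct GESTUREINFO
    {
        public uint cbSize;
        public uint dwFlags;
        public uint dwID;
        public IntPtr hwndTarget;
        public short ptsLocationX;
        public short ptsLocationY;
        public uint dwInstanceID;
        public uint dwSequenceID;
        public ulong ullArguments;
        public uint cbExtraArgs;
    }

    [DllImport("user32.dll")]
    static extern bool GetGestureInfo(IntPtr hGestureInfo, ref GESTUREINFO pGestureInfo);

    [DllImport("user32.dll")]
    static extern bool CloseGestureInfoHandle(IntPtr hGestureInfo);

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_GESTURE)
        {
            GESTUREINFO gi = new GESTUREINFO();
            gi.cbSize = (uint)Marshal.SizeOf(typeof(GESTUREINFO));
            if (GetGestureInfo(m.LParam, ref gi))
            {
                switch (gi.dwID)
                {
                    case GID_ZOOM:   /* zoom using gi.ullArguments */   break;
                    case GID_PAN:    /* pan using gi.ptsLocationX/Y */  break;
                    case GID_ROTATE: /* rotate using gi.ullArguments */ break;
                }
                CloseGestureInfoHandle(m.LParam); // we processed it, so close the handle
                m.Result = IntPtr.Zero;           // return 0 to mark the message handled
                return;
            }
        }
        base.WndProc(ref m); // unhandled: DefWindowProc closes the handle for us
    }
}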

The Windows 7 Integration Library gives managed code (on .Net 3.5 SP1) access to an entire multitouch API through Windows7.Multitouch.dll and Windows7.Multitouch.WPF.dll. To get started, first check whether a digitizer that supports multitouch is present:

if (!Windows7.Multitouch.TouchHandler.DigitizerCapabilities.IsMultiTouchReady)
{
    MessageBox.Show("Multitouch is not available");
    Environment.Exit(1);
}

This checks whether the hardware supports multitouch. If it does, the application can intercept the stylus events raised by the digitizer, which are enabled with the Factory.EnableStylusEvents method from the Windows 7 Integration Library. You then write code against the library to track and manage the touch-ids carried by the stylus events, which translate into objects moving on the screen. Building on the previous capability check, you register for the StylusDown, StylusUp, and StylusMove events instead of decoding gesture messages by hand:

public MainWindow()
{
    InitializeComponent();
    if (!Windows7.Multitouch.TouchHandler.DigitizerCapabilities.IsMultiTouchReady)
    {
        MessageBox.Show("Multitouch is not available");
        Environment.Exit(1);
    }
    this.Loaded += (s, e) =>
        { Factory.EnableStylusEvents(this); LoadObjects(); };
    // Register for stylus (touch) events
    StylusDown += ProcessDown;
    StylusUp += ProcessUp;
    StylusMove += ProcessMove;
}

In this example, the ProcessDown, ProcessUp, and ProcessMove functions track the location of the object being manipulated by the touch events. As each finger touches the digitizer, a unique touch-id becomes available that you can track to decide how to handle the events that follow. In the Windows 7 Integration Library example, this touch-id management is handled by two classes:

  • The ObjectTracker class, which contains the ProcessDown, ProcessUp, and ProcessMove functions that track the location of the object on the screen.

  • The ObjectTrackerManager class, which actually handles the touch events, and contains a Dictionary object which maps the current touch event to the correct ObjectTracker instance which represents the actual object being interacted with on the screen.

In the ObjectTrackerManager class, the following scenarios need to be considered when determining which ObjectTracker (the concrete object being touched on the screen) to forward each touch event to; a sketch of the lookup these handlers rely on appears after the scenarios.

  • ProcessDown

    • The finger touches an empty spot, so nothing should happen.

    • The finger touches a new object (for instance, a Picture object that is being acted upon), so a new ObjectTracker instance is created and a new Dictionary item is registered for this touch-id.

    • A second (or more) finger touches an already tracked object, and a new touch type is correlated with the existing touch type.

    public void ProcessDown(object sender, StylusEventArgs args)
    {
        Point location = args.GetPosition(_canvas);
        ObjectTracker objectTracker =
            GetObjectTracker(args.StylusDevice.Id, location);
        if (objectTracker == null)
            return;
        objectTracker.ProcessDown(location);
    }
     
    

  • ProcessMove

    • The finger touch-id is not correlated with an ObjectTracker, so nothing should happen.

    • The finger touch-id is correlated with an ObjectTracker instance, so we need to forward the event to it.

    public void ProcessMove(object sender, StylusEventArgs args)
    {
        ObjectTracker objectTracker = GetObjectTracker(args.StylusDevice.Id);
        if (objectTracker == null)
            return;
        Point location = args.GetPosition(_canvas);
        objectTracker.ProcessMove(location);
    }

  • ProcessUp

    • One finger touch-id is removed, but there is at least one more correlated touch-id; the entry for the lifted finger must be removed from the Dictionary.

    • The last correlated touch-id is removed, so this entry must be removed from the Dictionary.

    public void ProcessUp(object sender, StylusEventArgs args)
    {
        Point location = args.GetPosition(_canvas);
        ObjectTracker objectTracker = GetObjectTracker(args.StylusDevice.Id);
        if (objectTracker == null)
            return;
        objectTracker.ProcessUp(location);
        _objectTrackerMap.Remove(args.StylusDevice.Id);
    }
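
The GetObjectTracker helper these handlers call isn't shown in the excerpts. Here's a minimal sketch of what the lookup might look like, assuming the Dictionary field is named _objectTrackerMap (as in ProcessUp above) and that a hypothetical FindObjectAt helper hit-tests the canvas:

private readonly Dictionary<int, ObjectTracker> _objectTrackerMap =
    new Dictionary<int, ObjectTracker>();

// Used by ProcessMove and ProcessUp: return the tracker already correlated
// with this touch-id, or null so the caller does nothing.
private ObjectTracker GetObjectTracker(int touchId)
{
    ObjectTracker tracker;
    return _objectTrackerMap.TryGetValue(touchId, out tracker) ? tracker : null;
}

// Used by ProcessDown: correlate a new touch-id with the object under the
// finger. Returns null when the finger lands on an empty spot.
private ObjectTracker GetObjectTracker(int touchId, Point location)
{
    ObjectTracker tracker;
    if (!_objectTrackerMap.TryGetValue(touchId, out tracker))
    {
        UIElement element = FindObjectAt(location); // hypothetical hit test
        if (element == null)
            return null;
        tracker = new ObjectTracker(element);       // assumed constructor
        _objectTrackerMap[touchId] = tracker;
    }
    return tracker;
}

The library example also correlates a second finger that lands on an already-tracked object with that object's existing ObjectTracker instance, which this sketch omits for brevity.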

In both examples, it takes work to do something more interesting with multitouch than simple default legacy support. If you want a richer experience and don't want to write too much code, we recommend that you consider the controls for WPF that will ship with .Net 4.0.
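
To give a sense of that route, here's a minimal sketch using the manipulation events slated for WPF in .Net 4.0. The TouchWindow class and its layout are hypothetical; the point is that a single ManipulationDelta event delivers pan, zoom, and rotate deltas with no gesture decoding:

using System.Windows;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Shapes;

public class TouchWindow : Window
{
    public TouchWindow()
    {
        Rectangle rect = new Rectangle
        {
            Width = 200, Height = 150, Fill = Brushes.SteelBlue,
            IsManipulationEnabled = true,  // opt the element in to manipulation
            RenderTransform = new MatrixTransform()
        };
        // Report manipulation coordinates relative to the window.
        rect.ManipulationStarting += (s, e) => e.ManipulationContainer = this;
        rect.ManipulationDelta += (s, e) =>
        {
            MatrixTransform transform = (MatrixTransform)rect.RenderTransform;
            Matrix matrix = transform.Matrix;
            ManipulationDelta d = e.DeltaManipulation;
            matrix.RotateAt(d.Rotation, e.ManipulationOrigin.X, e.ManipulationOrigin.Y);
            matrix.ScaleAt(d.Scale.X, d.Scale.Y, e.ManipulationOrigin.X, e.ManipulationOrigin.Y);
            matrix.Translate(d.Translation.X, d.Translation.Y);
            transform.Matrix = matrix;  // one event carries all three deltas
        };
        Content = rect;
    }
}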
