Apple has applied for a patent that outlines a method of iPhone voice control which would significantly improve on the limited functionality available today. While iOS 4.x lets you make calls, control music playback, and ask the time by voice — much like Microsoft's Windows Mobile-era Voice Command software — Apple envisions a far more robust system that reaches deeper into the operating system, marrying touch input with voice prompts to enable what Apple calls contextual voice command.

For example, instead of immediately launching an application, touching its icon on the iPhone home screen could bring up a menu of voice commands specific to that program — perhaps allowing simple functions to be performed without leaving the launcher at all. Another use case, illustrated below, ties commands to touching a particular email element: in this example, tapping an image attachment gives the user a visual menu of speech-triggered actions that could be performed on the photo, such as editing, saving, or viewing it full screen.

The application goes on to suggest that input methods other than touch could be combined with voice, using the device's built-in motion or light sensors, for instance. An API is also mentioned that would let developers leverage this module in their third-party applications.
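To make the idea concrete, here is a minimal Swift sketch of how such a contextual voice command registry might look from a developer's perspective. Everything here is an assumption for illustration — the type names, the string-keyed elements, and the registration API are invented, not Apple's actual interfaces — but it mirrors the flow described in the filing: touch an element, surface its voice menu, then dispatch a spoken phrase within that context.

```swift
import Foundation

/// A voice command valid only in a particular touch context.
/// (Hypothetical type; not an Apple API.)
struct VoiceCommand {
    let phrase: String          // spoken trigger, e.g. "save"
    let action: () -> String    // what the command does; returns a result string for demo purposes
}

/// Maps a touched UI element (identified by a string key here) to its commands.
final class ContextualVoiceController {
    private var contexts: [String: [VoiceCommand]] = [:]
    private var activeContext: String?

    /// Third-party apps could register commands for their own elements —
    /// the patent mentions an API for exactly this kind of extension.
    func register(element: String, commands: [VoiceCommand]) {
        contexts[element] = commands
    }

    /// Called when the user touches an element: instead of the default
    /// action, surface that element's voice menu.
    func touch(element: String) -> [String] {
        activeContext = element
        return contexts[element]?.map { $0.phrase } ?? []
    }

    /// Called when speech recognition produces a phrase while a context is active.
    func speak(_ phrase: String) -> String? {
        guard let context = activeContext,
              let command = contexts[context]?.first(where: { $0.phrase == phrase })
        else { return nil }
        return command.action()
    }
}

// Demo mirroring the email-attachment example from the filing:
let controller = ContextualVoiceController()
controller.register(element: "photoAttachment", commands: [
    VoiceCommand(phrase: "save") { "photo saved to library" },
    VoiceCommand(phrase: "edit") { "opening photo editor" },
    VoiceCommand(phrase: "full screen") { "showing photo full screen" },
])

let menu = controller.touch(element: "photoAttachment")
print(menu)                          // ["save", "edit", "full screen"]
print(controller.speak("save")!)     // photo saved to library
```

Note how the same spoken word could mean different things in different contexts — "save" on a photo attachment saves the image, while "save" on a draft might store the message — which is the core of what the patent calls contextual voice command.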