Google Project Glass API Details Emerge
A demo at the SXSW conference showed how developers will be able to write Project Glass applications for Google's Internet-connected eyewear.
Google has revealed more details about how Project Glass will work, providing developers with a sense of the kinds of applications they will be able to build for Google's Internet-connected eyewear.
Project Glass developer evangelist Timothy Jordan on Monday delved into the workings of Project Glass at the SXSW conference in Austin, Texas. The event was documented by Engadget in a live blog.
Google has been teasing developers and the public with glimpses of Project Glass over the past few months, in preparation for the launch of the first iteration of its spectacles, the Glass Explorer Edition, in "early 2013."
In late January and early February, the company held invitation-only Glass Foundry events for developers in San Francisco and New York to introduce its Mirror API, used to write code that connects third-party apps and services to Google's servers, which communicate with Glass devices.
Though Google previously disclosed that the Project Glass Mirror API would rely on RESTful data transport, Jordan's presentation delved into more detail.
The Project Glass interface is built around the concept of a timeline. Users can swipe backward and forward along the touchpad on the Glass frame to navigate through timeline cards associated with Glass events. Those old enough to have seen Apple's HyperCard technology (1987-2004) should recognize the concept immediately. As a matter of UI design, there's also some similarity to Apple's Cover Flow and Mirror Worlds' Scopeware.
A timeline card, which can contain an image, text, audio or video, can be inserted into a Glass user's timeline with an HTTP POST command. GET and update (PUT) actions are also supported. Messages are encoded as JSON. In code, a sample text message would be sent to Google in the form { "text" : "Your ad here" }, preceded by the appropriate HTTP headers. HTML markup can be sent as well, to provide a more visually appealing design.
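As a rough illustration of that flow, the sketch below posts the article's sample card as JSON over HTTP. The timeline endpoint URL and OAuth token are placeholders assumed for the example, not values confirmed in Jordan's presentation.

    # Minimal sketch: insert a timeline card with an HTTP POST.
    # The endpoint and OAuth token below are assumed placeholders.
    import json
    import urllib.request

    TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"  # assumed endpoint
    ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"  # placeholder OAuth bearer token

    card = {"text": "Your ad here"}  # the sample payload from the presentation

    request = urllib.request.Request(
        TIMELINE_URL,
        data=json.dumps(card).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        print(response.status, response.read().decode("utf-8"))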
Timeline cards support a few parameters that affect the mode of presentation. Formatted thus -- { "text" : "Your ad here", "cardOptions" : [{ "action" : "READ_ALOUD" }] } -- a Glass user would hear the message read back using a text-to-speech voice. If nothing else, Project Glass has a bright future as an accessory for guided museum tours.
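Continuing the sketch above, the read-aloud variant changes only the payload; posting this dictionary to the same (assumed) timeline endpoint would ask Glass to speak the text rather than merely display it.

    # Same POST flow as in the previous sketch; only the card payload differs.
    spoken_card = {
        "text": "Your ad here",
        "cardOptions": [{"action": "READ_ALOUD"}],
    }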
Google is also providing a shareEntities function for distributing content viewed in Glass to Google+.
The mechanism by which Glass users communicate with third-party services is called Subscriptions. A Subscription command sends a timeline collection -- a set of Glass timeline cards -- back to Google, which relays it to the subscribed service. The transmission includes a callback URL that triggers a processing function at the receiving service.
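A subscription registration might look something like the sketch below. The /subscriptions endpoint, the field names and the callback URL are all assumptions made for illustration; the talk described the mechanism but did not publish an exact request format.

    # Speculative sketch: register a subscription so a third-party service is
    # notified about user actions on a timeline collection. The endpoint, token,
    # field names and callback URL are all assumed placeholders.
    import json
    import urllib.request

    SUBSCRIPTIONS_URL = "https://www.googleapis.com/mirror/v1/subscriptions"  # assumed
    ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"  # placeholder OAuth bearer token

    subscription = {
        "collection": "timeline",  # the timeline collection to watch
        "callbackUrl": "https://example.com/glass/notify",  # where Google calls back
    }

    request = urllib.request.Request(
        SUBSCRIPTIONS_URL,
        data=json.dumps(subscription).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        print(response.status, response.read().decode("utf-8"))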
Among the third-party applications shown communicating with Project Glass were Gmail, Evernote, Path and Skitch.
Jordan stressed the need to design applications for Glass rather than attempting to port them from, say, a mobile phone or tablet app. Glass apps should communicate selectively, without overloading the user or getting in the way. Good luck getting marketers to stick to subtlety.