Tuesday, 16 September 2014

Developing for Google Glass

I have worked on a demo project for Glass. Before this project I already had some experience with Android, which came in handy (I will explain why later). In this post I will explain what Google's guidelines for developing Glassware are, what the platform looks like, the app that I've built, and a few lessons learned. This post can also be used to get started with building your own Glass app.

Google Guidelines for Glassware

When I first started developing for Glass, I searched for where to begin. What I first found was all sorts of information about design guidelines for Glass apps (Glassware). I noticed that Google considers the design even more important than the technology/platform itself.

Nowadays there are many different platforms, e.g. laptop, smartphone, tablet and now also Google Glass. When smartphones were relatively new, I noticed that a lot of apps simply followed the designs of other platforms (e.g. the web). That didn't work out very well. Now we have Glass, which is a platform of its own and very up close and personal.

The guidelines are:
  • Design for Glass
  • Don’t get in the way
  • Keep it relevant/timely
  • Avoid the unexpected
  • Build for people [1]

For Glass it's important to keep the design as simple as possible. Show small amounts of information, clear to the user and only when the user needs it. The screen is off when the user isn't using it and can only be activated by the user. This means the app you build must still work if the user missed an update, for example. Think of this as a pull mechanism: when the user needs something, he/she will 'pull' the information from your app, not the other way around. When you develop for Glass it's very important to keep your end user in mind. How will he/she use the app? When you want to test whether an app really works, wear it and use it as an end user would.

Glass is a platform focused on now, this moment. You might use a smartphone for information from across the day, a laptop for the last month, and external storage for the last couple of years. The idea of Glass is that the user wears it during the day and only uses it at the moments he/she needs information right then. I wouldn't want to receive an email from last week, but I would want to read an email that just came in and is important to me.

Be transparent to the user; in other words, deliver the functionality that you promised. For Glass this is even more important than for other platforms, because the user is wearing it, which makes it up close and personal. When I'm sharing a photo to a website, I don't want to be shown that website's new features; I'd rather read about those in a newsletter. Unexpected behavior can give the user an overall negative user experience (UX). So for a positive user experience it's very important to do as you say. [2]

Glass SDK overview

The Glass development platform consists of the Android SDK plus the Glass Development Kit (GDK), an add-on that provides the extra functionality needed for Glass, e.g. voice triggers for the voice commands.

 
(Platform overview diagram from the GDK documentation. [3])

Because it builds on the Android SDK, developing for Glass is basically the same as developing for Android. Things like Activities, Adapters and the like can all be used in the application; there are just some extras on top. If you are already familiar with Android, you can start developing for Glass immediately. If you are not, Google advises following a few Android tutorials first. [3]

Eclipse/ADT

In Eclipse with ADT you can create a Glass app by creating a new Android project and selecting the Glass SDK as the build target.

File > New Project > Android Project



And select theme: none. The default theme created by the wizard doesn't work on Glass.
Although 'none' is selected as the theme, ADT usually still assigns one. You can remove the android:theme attribute from your manifest and delete the generated theme resource (styles.xml under res/values).

Manifest

Glass makes heavy use of voice triggers/commands. These voice triggers are defined in the Android manifest file: an intent-filter marks the activity as launchable by a voice trigger, and a meta-data element points to the XML resource where the voice trigger is defined.
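A minimal sketch of what that looks like in AndroidManifest.xml (the activity name and the resource file name are illustrative, not taken from the actual project; note that no android:theme attribute is set):

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name">

        <activity
            android:name=".ScanActivity"
            android:label="@string/app_name">

            <!-- Marks the activity as launchable through a voice trigger -->
            <intent-filter>
                <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
            </intent-filter>

            <!-- Points to the XML resource that defines the voice trigger -->
            <meta-data
                android:name="com.google.android.glass.VoiceTrigger"
                android:resource="@xml/voice_trigger" />
        </activity>
    </application>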



The contents of the voice trigger definition file look like this.
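(A minimal sketch, assuming the RECOGNIZE_THIS command and the camera/network constraints described below.)

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Voice trigger for "ok glass, recognize this". The constraints make the
         command available only when a camera and a network connection are present. -->
    <trigger command="RECOGNIZE_THIS">
        <constraints
            camera="true"
            network="true" />
    </trigger>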





This application uses the voice trigger RECOGNIZE_THIS and requires camera and network permissions.

A complete list of voice triggers for Glass can be found at https://developers.google.com/glass/develop/gdk/reference/com/google/android/glass/app/VoiceTriggers.Command. [5]

It is also possible to define your own voice triggers and use them in your app. When you want to put an app with a custom voice trigger in the Glass store, the voice trigger first needs to be approved by Google; it is then added to the GDK and released in the next version. One of the requirements for approval is that the voice trigger is generic and not specific to your application. For more details on the guidelines for voice triggers, take a look at https://developers.google.com/glass/distribute/voice-checklist. [6]
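During development you can, as far as I know, already test a custom command by declaring it as an unlisted keyword and requesting the GDK development permission. A minimal sketch (the string resource name is illustrative):

    <!-- AndroidManifest.xml: allow unlisted voice commands during development -->
    <uses-permission
        android:name="com.google.android.glass.permission.DEVELOPMENT" />

    <!-- res/xml/voice_trigger.xml: a custom keyword instead of a built-in command -->
    <trigger keyword="@string/my_voice_trigger" />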

API level

API level 19 (KitKat) was used for GlassScanner (the demo project I built). At the time of writing there is only one Glass version, so the minimum and target SDK levels are the same. [4]
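In the manifest that comes down to something like this (a minimal sketch):

    <uses-sdk
        android:minSdkVersion="19"
        android:targetSdkVersion="19" />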

Activity

Activities in Android are used to define the UI and the behavior attached to the different UI components (basically the user interaction). The UI is defined in an XML file and the behavior in a class file.
The Glass add-on (see the GDK overview) introduces a new class called Card, which can be used to lay out your application. When you have a custom layout that cannot be created with the Card class, a layout can also be defined in an activity XML file; there are, however, a few things to take into account, because not all XML layouts will work. Google has a few examples that can be found at https://developers.google.com/glass/develop/gdk/ui-widgets. [7]
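A minimal sketch of an activity that shows a result on a Card (I'm quoting the Card methods as I remember them from the GDK at the time, so double-check against the GDK reference; the class name and texts are illustrative):

    import android.app.Activity;
    import android.os.Bundle;

    import com.google.android.glass.app.Card;

    public class ResultActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            // Build a standard Glass card: main text plus a footnote.
            Card card = new Card(this);
            card.setText("Scanned: 8712345678906");   // illustrative result
            card.setFootnote("GlassScanner");

            // Use the card's view as the content of this activity.
            setContentView(card.getView());
        }
    }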

GlassScanner

GlassScanner is an application that scans barcodes and QR codes and displays the result. It could be used to display information about a package, e.g. when it needs to go through customs to be allowed for import or export. The app could display information like what is in the package, the quantity and how much it may weigh; the package can be scanned and immediately be weighed. Or it could display information about products for allergies: the user could configure the app with his/her own allergies, or those of somebody who comes over for dinner. When shopping for groceries, the user can scan a product and see a red screen if the product contains one of those ingredients and a green screen if the product is OK. This way the user doesn't have to read all the ingredients on each product.

In the screenshots below the first example shows a barcode that is scanned and the result from that scan. In the second example a QR code is scanned and the result from that is shown.


These are fairly simple samples for demo purposes, just enough to give people an idea of what is possible.

Workflow




The app starts immediately in scan mode. When a code has been scanned, the app either opens a URL in the browser (for a QR code) or looks the code up in a private database in the cloud (for a barcode). If the barcode is found in the database, the data is displayed to the user; otherwise a message is shown that no data could be found.
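A sketch of this decision logic (the class, method and helper names are hypothetical, not the actual Scandit callback used in GlassScanner):

    import android.app.Activity;
    import android.content.Intent;
    import android.net.Uri;

    public class ScanActivity extends Activity {

        // Called when the scanning library reports a result; the signature
        // is illustrative, not the real Scandit callback.
        private void handleScanResult(String code, boolean isQrCode) {
            if (isQrCode) {
                // A QR code contains a URL: open it in the browser.
                startActivity(new Intent(Intent.ACTION_VIEW, Uri.parse(code)));
            } else {
                // A barcode is looked up in the private database in the cloud.
                lookUpBarcode(code);
            }
        }

        private void lookUpBarcode(String barcode) {
            // Query the Fusion Table and show the result or a
            // "no data found" message (see the Dependencies section).
        }
    }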

Dependencies

GlassScanner uses several libraries and other dependencies: the GDK, the Scandit SDK for scanning, ion for networking, GSON for JSON parsing, and a Google Fusion Table as the cloud database. The dependencies are built up like this:



The GDK is the foundation. Right above that is Scandit, which does the actual scanning and gives us back the code and the type of code (QR or barcode). When the type of code is barcode, ion is used to retrieve data from the Fusion Table, and that data is parsed with GSON. The result is then displayed to the user. If images are used in the result, they are retrieved from a URL afterwards, also via ion. Displaying the result happens in a custom activity.
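A hedged sketch of what the lookUpBarcode method from the workflow sketch above could look like with ion and GSON (the Fusion Table query URL, the Product class and the helper methods are assumptions, not taken from the actual project):

    // Extra imports: com.google.gson.Gson, com.google.gson.JsonObject,
    // com.koushikdutta.async.future.FutureCallback, com.koushikdutta.ion.Ion

    private void lookUpBarcode(String barcode) {
        String sql = "SELECT * FROM <tableId> WHERE barcode = '" + barcode + "'";
        String url = "https://www.googleapis.com/fusiontables/v1/query"
                + "?key=<apiKey>&sql=" + Uri.encode(sql);

        Ion.with(this)
                .load(url)
                .asJsonObject()
                .setCallback(new FutureCallback<JsonObject>() {
                    @Override
                    public void onCompleted(Exception e, JsonObject result) {
                        if (e != null || result == null) {
                            showNotFound();   // hypothetical helper
                            return;
                        }
                        // The Fusion Tables response actually has a columns/rows
                        // structure; simplified here to a direct GSON mapping.
                        Product product = new Gson().fromJson(result, Product.class);
                        showResult(product);  // hypothetical helper
                    }
                });
    }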

 

The colors in the XML are linked to the matching colors on the card; the card is an actual result of a sample barcode scanned with GlassScanner.

The layout basically consists of three main parts: header, body and footer. In the header there is a title image, a title and a subtitle. The body contains a description and the footer contains some additional information, in this case review information. In the bottom right there is also an orange space which does not contain anything, but could contain a small image.
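A rough sketch of such a layout (structure only; the ids, sizes and styling are assumptions, not the actual GlassScanner XML):

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical">

        <!-- Header: title image, title and subtitle -->
        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="horizontal">

            <ImageView
                android:id="@+id/title_image"
                android:layout_width="48dp"
                android:layout_height="48dp" />

            <LinearLayout
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:orientation="vertical">

                <TextView
                    android:id="@+id/title"
                    android:layout_width="match_parent"
                    android:layout_height="wrap_content" />

                <TextView
                    android:id="@+id/subtitle"
                    android:layout_width="match_parent"
                    android:layout_height="wrap_content" />
            </LinearLayout>
        </LinearLayout>

        <!-- Body: description -->
        <TextView
            android:id="@+id/description"
            android:layout_width="match_parent"
            android:layout_height="0dp"
            android:layout_weight="1" />

        <!-- Footer: review information and an optional small image -->
        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="horizontal">

            <TextView
                android:id="@+id/review_info"
                android:layout_width="0dp"
                android:layout_height="wrap_content"
                android:layout_weight="1" />

            <ImageView
                android:id="@+id/footer_image"
                android:layout_width="48dp"
                android:layout_height="48dp" />
        </LinearLayout>
    </LinearLayout>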

Lessons learned


Easy to get started
Since I already had some experience with Android it was easy for me to get started with Glass development.

Missing emulator
When developing for Android there are all sorts of emulators for different versions and devices; for Glass there is no emulator. This can make development a little difficult, because you need the actual device. I do find it understandable, because of the up close and personal nature of Glass: the only way to truly test whether Glassware works is to use it as an end user would, which makes a physical Glass essential.

There are some unofficial Glass emulators on the web, but they are not fully functional.

Voice recognition
If you have worn Glass before, you might have noticed that it captures all sounds, including those from your surroundings. So if somebody else says a command, Glass executes it. When you're dictating a message (tweet, mail, etc.) and people around you are talking, what they say might also end up in the message.
I would find it desirable to have something like voice/speaker recognition built in, so that Glass only responds to its own user.

Sources

Google Guidelines for Glassware:
[1] https://developers.google.com/glass/design/principles
[2] http://glass-apps.org/glass-developer

Glass SDK platform overview:
[3] https://developers.google.com/glass/develop/overview

GDK Quick Start:
[4] https://developers.google.com/glass/develop/gdk/quick-start

Voice Triggers overview:
[5] https://developers.google.com/glass/develop/gdk/reference/com/google/android/glass/app/VoiceTriggers.Command

Voice checklist:
[6] https://developers.google.com/glass/distribute/voice-checklist

UI:
[7] https://developers.google.com/glass/develop/gdk/ui-widgets