Microsoft Surface Hub promises to be a “powerful team collaboration device designed to advance the way people work together naturally” and to “unlock the power of the group”. From what we’ve seen, Microsoft Surface Hub looks a whole lot like the failed Microsoft Surface “table computer” that we saw several years ago. (No, you read that correctly, “table computer”, not “tablet computer”.)

The original Microsoft Surface was essentially a relatively high-powered computer built into a table. The tabletop was the monitor, and several cameras watched the underside of the screen for fingers, cups, barcodes, and other objects. The OS would then intelligently interact with those objects. Restaurants and bars could install them to let patrons order meals and drinks, then pay whenever they wanted, even splitting the bill, right there through the table. Cell phone companies could install these as “smart kiosks” in their stores so customers could plop two (or more) new phones on the table and compare their similarities and differences through an interactive interface. It was really a cool concept, and the demonstration unit I went hands-on with back in the day was very impressive. That was as far as the tech ever got, though. Microsoft axed the project and even recycled the “Surface” name for use with its tablets (not to be confused with tables).


Fast-forward to today and we’ve got Microsoft resurrecting the Surface brand (again), only this time it’s hanging on the wall instead of embedded in a table (or tablet). In fact, you could think of Microsoft Surface Hub as a HUGE tablet hanging on the wall and you wouldn’t be too far off from what the product does. Participants in a meeting can interact with apps on the screen using their fingers, or even pick up a stylus and draw on the screen. Afterward, whatever was worked on in the meeting can be emailed to the collaborators. It’s a pretty cool concept.

How can Android compete?

We already have lots of ways to get content from our Android-powered smartphones and tablets (not tables) onto a big screen. Currently, though, it’s a one-person-at-a-time affair, and each method limits what can and cannot be “cast” to the screen.

An app could connect to a Chromecast, one of Amazon’s Fire TV devices, Android TV, Google TV, or anything else that supports wireless screencasting, and allow single-person interaction with the TV. In this scenario, only the person casting can manipulate the data being sent to the screen.
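To make that single-presenter limitation concrete, here’s a minimal sketch using Android’s stock Presentation and MediaRouter APIs, which is roughly how apps put content on a wirelessly connected display today. The CastDeck class and presentOnTv function are names made up for illustration; the point is simply that exactly one device owns the remote screen.

```kotlin
import android.app.Presentation
import android.content.Context
import android.media.MediaRouter
import android.os.Bundle
import android.view.Display
import android.widget.TextView

// A Presentation owns its own window on a secondary display, such as one
// exposed by a Chromecast or a Miracast-capable TV.
class CastDeck(outerContext: Context, display: Display) :
    Presentation(outerContext, display) {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // `context` here is the Presentation's own display-themed context.
        // Whatever is set here renders on the TV, not on the phone.
        setContentView(TextView(context).apply {
            text = "Only the presenter's device can update this screen."
        })
    }
}

// Show the deck on whichever display the user has routed live video to.
fun presentOnTv(context: Context) {
    val router = context.getSystemService(Context.MEDIA_ROUTER_SERVICE) as MediaRouter
    val route = router.getSelectedRoute(MediaRouter.ROUTE_TYPE_LIVE_VIDEO)
    route?.presentationDisplay?.let { display ->
        CastDeck(context, display).show()
    }
}
```

Notice that nothing in this model gives a second phone in the room any way to touch what’s on the TV, which is exactly the gap described above.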


What’s needed is a “hub”. Android TV or Google TV could be that hub, but any Android device could also serve in that capacity – with the right app to power it. In this scenario the meeting host could cast his or her screen to the TV and allow “collaboration” from any other user in the room. Those people would connect to the host device using WiFi Direct or some other protocol, and their collaborative input would be relayed through the host device to the TV for all to see. This would allow everyone to participate from wherever they were sitting in the room, rather than requiring individuals to stand and interact with the TV itself. Using Android TV as the “hub” for this app would probably make things easier from a technological standpoint, but it really shouldn’t be necessary. Extending this concept, an Internet-connected participant could even join the meeting remotely, connecting to the host device using Internet protocols rather than local wireless ones.
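Here’s a rough sketch of what the hub side of such an app might look like, assuming Android’s stock WifiP2pManager for local discovery and a made-up wire format of one comma-separated stroke per line. The function names, the port, and the protocol are all illustrative, and a real app would also need the usual Wi-Fi and location permissions, plus connection setup and teardown handling.

```kotlin
import android.content.Context
import android.net.wifi.p2p.WifiP2pManager
import java.net.ServerSocket
import kotlin.concurrent.thread

// Step 1: the host discovers collaborators' devices over Wi-Fi Direct.
fun discoverCollaborators(context: Context) {
    val manager = context.getSystemService(Context.WIFI_P2P_SERVICE) as WifiP2pManager
    val channel = manager.initialize(context, context.mainLooper, null)
    manager.discoverPeers(channel, object : WifiP2pManager.ActionListener {
        override fun onSuccess() {
            // Discovered peers arrive via the WIFI_P2P_PEERS_CHANGED_ACTION broadcast.
        }
        override fun onFailure(reason: Int) {
            // e.g. Wi-Fi Direct unsupported, or the radio is busy.
        }
    })
}

// Step 2: once connected (locally or over the Internet), each collaborator
// streams drawing events to the host, which applies them to the canvas that
// is being cast to the TV. Hypothetical format: one "x1,y1,x2,y2" per line.
fun runHub(port: Int, onStroke: (String) -> Unit) {
    val server = ServerSocket(port)
    while (true) {
        val client = server.accept()                  // one socket per collaborator
        thread {
            client.getInputStream().bufferedReader().forEachLine { line ->
                onStroke(line)                        // host renders; the TV mirrors it
            }
        }
    }
}
```

The nice property of relaying everything through the host is that the TV end stays dumb: whether ten collaborators are sitting in the room or dialing in over the Internet, the dongle only ever sees one cast stream.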

In this manner, any TV with an inexpensive dongle (or a smart TV with WiFi built in) could function in much the same way as Microsoft’s Surface Hub. IT wouldn’t have to worry about managing and maintaining another computer (as it would with Microsoft’s solution), and the cost of a TV plus a dongle (Chromecast, Fire TV Stick, etc.) should be significantly lower than that of the Surface Hub. All we need now is for a developer to build it.

Evernote, are you listening?