Notable Products

Google Glass

[Image: Google Glass]1

Development History

Google Glass was developed by Google X, Google's "moonshot" research and development wing responsible for Google's self-driving car and other futuristic technologies. The product was a closely guarded secret until it was publicly announced in April 2012. Google Glass was heralded as the first "real" augmented reality product, with useful features and computing fast enough to support them. Public response to the beta program, in which select engineers wore Google Glass around the city to fine-tune its features, was mixed: some lauded the device's merits, while others considered it a dystopian invasion of privacy (see "Controversies"). Beta tests ended in January 2015, and Google is currently developing a consumer version of Google Glass, the release date of which has yet to be announced.2

How it Works

Hardware
Google Glass uses a combination of sensors, cameras, and interfaces to create its augmented reality.3 Its camera can take photos and record 720p HD video, allowing Glass to sense the world around it and generate the display. The Explorer version of Google Glass uses a liquid crystal on silicon (LCoS) display embedded in the eyeglass that overlays digital images on the wearer's view. A touchpad on the side of the glasses lets users control the device by swiping through an interface displayed on the screen. Wearers communicate with the Internet via natural-language voice commands, dictating responses to notifications, status updates, and various other functions to a microphone embedded in the device. Google Glass is a tethered device, meaning it must be connected to an Android phone or an external computer to handle its communications and processing.

Software
Because Google Glass is a tethered device, it relies on an external mobile device, such as an Android phone, to run applications. Google Glass applications are free applications built by third-party developers, and Glass also runs native Google applications such as Google Maps, Google+, and Gmail. Third-party applications announced at South by Southwest (SXSW) include Evernote, Skitch, and The New York Times.4 In 2013, Google released the Mirror API, allowing developers to start making apps for Glass. Google Glass can be used with Facebook and Twitter to provide social media notifications. Its more exciting uses come when Google Glass overlays the digital world on the real one: Yelp reviews of a restaurant, for example, can be displayed in the user's view as they walk past it. Glass can also provide navigation, overlaying an arrow on the ground in front of the user and highlighting turns. Google Glass also recently gained Word Lens, an application that translates street signs from foreign languages in real time, overlaying the translated sign over the real one on the user's screen.5
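
The Mirror API worked as a web service rather than an on-device SDK: "Glassware" pushed cards into the wearer's timeline over REST. As a minimal sketch (assuming OAuth2 credentials for a Glass user have already been obtained, and using Google's Python client library), inserting a text card looked roughly like this:

```python
# Minimal Mirror API sketch: push a timeline card to a user's Glass.
# Assumes `creds` holds OAuth2 credentials already authorized for the
# Glass timeline scope; error handling is omitted for brevity.
from googleapiclient.discovery import build

def send_card(creds, message):
    # Build a client for the (now-retired) Mirror API, version 1.
    service = build("mirror", "v1", credentials=creds)
    card = {
        "text": message,                       # plain-text card body
        "notification": {"level": "DEFAULT"},  # chime/notify the wearer
    }
    # timeline().insert() POSTs the card; Glass syncs it over the tether.
    return service.timeline().insert(body=card).execute()
```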

Controversies

Many people considered Google Glass an invasion of privacy, and concerns about recording private conversations without consent fueled fears that Google Glass users were unintentionally (or sometimes intentionally) violating people's right to privacy. Some became quite angry at Glass users; one was even assaulted for using Google Glass. Others called for designating certain areas, such as exam halls, classrooms, and doctors' offices, as "Glass-free zones." Many groups, from privacy activists to governments, criticized Google Glass as a step too far in the debate over surveillance and privacy. In June 2013, the Canadian privacy commissioner and 36 other data protection authorities raised privacy concerns about Google Glass in an open letter to CEO Larry Page. "Fears of ubiquitous surveillance of individuals by other individuals, whether through such recordings or through other applications currently being developed, have been raised. Questions about Google's collection of such data and what it means in terms of Google's revamped privacy policy have also started to appear," they wrote.6 The controversies surrounding Google Glass closely echo the ethical dilemmas raised by other augmented reality devices.


Blippar

[Image: Adrian Peterson in a Blippar advertisement]7

Development History

Blippar, founded in 2011, is a visual browsing application that uses image recognition and augmented reality technologies to overlay the physical world with digital content via the device's camera. The application connects brands with consumers through augmented-reality-enhanced, targeted advertising.

The Blippar app is available on smartphones, tablets, and wearable devices (including Google Glass). Using the app is simple: the user scans ("blipps") images to unlock interactive digital content. Companies that partner with Blippar can advertise through "Blippable" images ("markers"), which can be found on packaging, printed pages, adverts, outdoor marketing, and screens. The act of "blipping" can launch anything the phone or device is already capable of running, including audio and visual media, web page links, mobile games, and image galleries.8

In 2014, Blippar acquired the Dutch AR application Layar, in doing so creating the world's largest AR user base. Layar, a mobile application developed in 2009 and available on iOS and Android, quickly gained international attention as one of the first mobile augmented reality browsers to hit the market. Blippar, together with Layar, has a current market valuation of $1 billion.9

How it Works

In the most basic sense, the Blippar application turns the user's mobile phone, tablet, or other device into an AR device. A "blipp", defined on the Blippar website as "the action of instantaneously converting anything in the real world into an interactive wow experience", occurs when a customer interacts with a brand's advertisements. Blipps rely on object recognition and come in many shapes and sizes, such as mobile coupons, 2D or 3D overlays, and location-based services.

In order to recognize images and graphics, the Blippar app uses "markerless image recognition"; in other words, it recognizes and categorizes an image without being given information about the environment beforehand. It is this recognition that triggers an immediate response from the mobile device. Sara Angeles, a writer for BusinessNewsDaily, provides the following example: a user opens the Blippar app and points the mobile device at a Heinz ketchup bottle. The app uses invisible markers assigned to Heinz's ketchup bottle art to identify that it is indeed a Heinz ketchup bottle. The app then triggers a pre-programmed augmented reality on the device: a Heinz recipe book appears on top of the bottle as seen through the device, allowing users to flip through meal ideas that use Heinz ketchup as an ingredient, as though the recipe book were on the actual bottle itself.
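
Blippar's recognition pipeline is proprietary, but the general idea behind markerless image recognition can be sketched with off-the-shelf tools: extract local features from a reference image (such as the label artwork) and match them against each camera frame. Below is a minimal sketch using OpenCV's ORB features; the file names and the match threshold are illustrative assumptions:

```python
# Generic markerless recognition sketch (not Blippar's actual code):
# match local ORB features between a known reference image and a camera frame.
import cv2

MIN_MATCHES = 25  # assumed threshold; tune per reference artwork

reference = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)  # e.g. label art
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)          # camera frame

orb = cv2.ORB_create()
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frame, des_frame = orb.detectAndCompute(frame, None)

# Hamming distance suits ORB's binary descriptors; crossCheck filters noise.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_ref, des_frame)

if len(matches) >= MIN_MATCHES:
    print("Reference recognized: trigger the AR overlay")  # i.e. a 'blipp'
else:
    print("No match in this frame")
```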

Other examples include triggering a game from a candy bar; beauty tutorials from Cover Girl; a Middle Earth adventure from "The Hobbit;" interactive, in-action videos from Range Rover; and a photo op with Justin Bieber from his album cover.10

Strengths, Weaknesses, and Future Applications

In the long term, Blippar hopes to catalogue everything (a bench in a park, a dog walking down the street, the Statue of Liberty) in order to offer the best AR-enhanced advertising and information to its users. Currently, web-based search engines dwarf image-based search engines in daily usage; however, web-based searching is limited by vocabulary and literacy. Blippar's image recognition software could take search engines beyond the limitations of language, empowering consumers by instantly giving them relevant information drawn from the environment around them.

Blippar's search engine is net neutral, drawing on the most accurate sources of information available within the application. The user interface is fluid, taking design and color cues from the blipped item. The speed and accuracy of the Blippar platform let users access information faster than customary web searches, with minimal latency. Its location-based predictive computing technology uses deep learning and artificial intelligence to improve and personalize visual search results for each user.11
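
Blippar has not published how this personalization works internally. As a purely hypothetical illustration of the idea, a recognizer's raw confidence scores could be re-weighted by a prior over what a user is likely to be looking at in their current location:

```python
# Toy sketch of location-aware re-ranking (hypothetical, not Blippar's method):
# combine a recognizer's confidence with a prior based on the user's location.
def rerank(candidates, location_prior):
    """candidates: {label: confidence}; location_prior: {label: probability}."""
    default = 0.01  # small prior for labels unexpected at this location
    scored = {label: conf * location_prior.get(label, default)
              for label, conf in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Example: inside a grocery store, packaged goods outrank landmarks.
print(rerank({"ketchup_bottle": 0.6, "statue_of_liberty": 0.7},
             {"ketchup_bottle": 0.3, "statue_of_liberty": 0.001}))
```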

Like so many other mobile applications, Blippar depends for its near-term success on its ability to continually attract new users while retaining current ones. The more people use the Blippar app, the greater the incentive for businesses to advertise through Blippar-enabled content.

However, the mobile application must first overcome internal weaknesses and external threats inherent to smartphone use. Foremost among these, increased data use and battery drain are leading causes of users abandoning apps.12


Boeing AR Manufacturing Tablets

[Image: Boeing AR manufacturing tablet]13

Development History

Boeing began developing AR technology for its aircraft manufacturing in 1990. The program was funded by a Technology Reinvestment Grant from the Advanced Research Projects Agency. A team led by Thomas Caudell began experimenting with a head-mounted device that could guide engineers as they built wiring routes in the wings of planes. Traditionally, engineers were guided by large physical templates for the wires, which take up warehouse space and are burdensome to move around. The early headsets used LED displays reflected off mirrors within the headset to overlay virtual images on real ones, instructing workers step by step on which wire to use and how to place it.14

How it Works

The tablet now involves a far more complex tracking system. Rather than building an app for an off-the-shelf tablet, Boeing designed both the hardware and the software for its AR system. The tablets carry at least a dozen ball bearings as well as six infrared cameras to track the tablet's position. The accuracy of that AR system allowed it to be rolled out for testing in a Washington Boeing plant this spring. When the tablet is aimed at the part a worker is assembling, it can superimpose the tools they should use or display written or animated instructions. In a study comparing workers with AR tablets to those without, the tablets performed remarkably well: workers with AR tablets were 30% faster and 90% more accurate than their counterparts following instructions from a PDF.15
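
Boeing has not published its tracking algorithm, but marker-based tracking of this kind usually reduces to a standard problem: given the tablet's known marker layout and the 3D marker positions the infrared cameras triangulate, recover the tablet's rotation and translation. A minimal sketch of that rigid-body fit (the Kabsch algorithm), with all names and data hypothetical:

```python
# Rigid-body pose estimation sketch (Kabsch algorithm): fit the rotation R and
# translation t mapping the tablet's known marker layout onto the positions the
# infrared cameras observe. Illustrative only; Boeing's system is proprietary.
import numpy as np

def fit_pose(model_pts, observed_pts):
    """model_pts, observed_pts: (N, 3) arrays of corresponding marker positions."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t  # observed ≈ R @ model + t
```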

Impact on the Future

As Boeing rolls out its tablet and its success is proven, AR could spread throughout manufacturing. The company has moved the tablet into a larger pilot phase and looks forward to building the technology into safety glasses that can be worn on the assembly line. Paul Davies, a technical fellow at the Boeing Research and Technology group, said that they are still working through issues in scaling the effectiveness of the AR tablets: tracking in a large 3D area like a plane fuselage is not yet accurate enough. Once it is scaled, though, the impact for Boeing (and potentially the entire complex manufacturing industry) could be enormous if the efficiency and accuracy numbers from the Iowa State University (ISU) study hold true. Thomas Caudell explained that AR could benefit sectors of manufacturing that require skills too complex for automation with modern robotics technology.16


Microsoft HoloLens

[Image: Microsoft HoloLens]17

Development History

Codenamed "Project Baraboo," HoloLens had been in development for five years before its announcement, and was conceived earlier in a pitch made in late 2007 for what would become the Kinect technology platform for the Xbox.18 Announced at the E3 conference and at Microsoft keynotes with impressive videos and demos, Microsoft has targeted HoloLens for release "in the Windows 10 timeframe," with the Microsoft HoloLens Development Edition to begin shipping in the first quarter of 2016. Microsoft will be sending out developer kits (for $3000 apiece) for those who meet the following criteria:

  • They must be a developer in the United States or Canada, where the Development Edition will first be available.
  • They must be a Windows Insider. By participating in the Windows Insider program, they agree to provide feedback and work with Microsoft to improve the product.

How it Works

As opposed to a tethered device like Google Glass, Microsoft HoloLens is a headset that is a cordless, self-contained Windows 10 computer: all computing and processing is done on the device itself. Alongside a CPU and GPU, it contains a third, proprietary processor: a holographic processing unit (HPU). The HPU gives HoloLens the real-time ability to understand where the user is looking, to understand gestures, and to spatially map the user's immediate surroundings. Conceived, designed, and engineered by Microsoft, the HPU was built specifically to support HoloLens. An accelerometer, gyroscope, and magnetometer, coupled with head-tracking cameras, enable HoloLens to understand where the user's head is and how it is moving. HoloLens can also generate binaural audio to simulate spatial effects, so that the user perceives a sound as coming from a specific location.19
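
Microsoft has not detailed the HPU's fusion algorithms, but the role of the inertial sensors can be illustrated with a classic complementary filter: the gyroscope provides fast but drifting orientation updates, while the accelerometer's gravity reading slowly corrects that drift. A minimal one-axis sketch (the real system also fuses camera and magnetometer data):

```python
# One-axis complementary filter sketch: fuse gyroscope and accelerometer
# readings into a stable pitch estimate. Illustrative only; HoloLens's actual
# head-tracking fusion is proprietary.
import math

ALPHA = 0.98  # assumed weighting: trust the gyro short-term, accel long-term

def update_pitch(pitch, gyro_rate, accel_y, accel_z, dt):
    """gyro_rate in rad/s; accel_* in m/s^2; dt in seconds."""
    gyro_pitch = pitch + gyro_rate * dt         # integrate angular velocity
    accel_pitch = math.atan2(accel_y, accel_z)  # gravity direction estimate
    # Blend: the gyro term tracks fast head motion, the accel term
    # pulls the estimate back toward the true gravity reference.
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch
```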

Possible Applications

  • Virtual Overlay: HoloLens can project holograms onto any solid surface. Consumer applications include projecting recipes onto the kitchen counter and walking the user through the cooking process, overlaying repair instructions on the environment of a technician working remotely, turning a coffee table into a gaming environment, and projecting a virtual TV onto any surface that follows the user from room to room.
  • Immersive Gaming: Microsoft released an impressive demonstration turning a coffee table into a virtual game of Minecraft. It also developed a HoloLens demo using the new Halo 5: Guardians game to give a glimpse of the future of immersive, mixed reality gaming.
  • Medicine: 3D medical training uses detailed medical models rendered in three dimensions to allow students to get a better look at what goes on in the human body without having to perform dissections.
  • OnSight: Developed by the NASA Jet Propulsion Laboratory, OnSight integrates data from the Curiosity rover into a 3D simulation of the Martian environment, which scientists interact with using HoloLens devices. OnSight can be used in mission planning, with users able to program rover activities by looking at a target within the simulation, and using gestures to pull up and select menu commands. There are tentative plans to deploy OnSight in Curiosity mission operations to control rover activities by July 2015.20
  • Sidekick: Sidekick is a virtual aid tool for astronauts with two modes of operation. Remote Expert Mode uses the functionality of the Holographic Skype application—voice and video chat, real-time virtual annotation—to allow a ground operator and space crew member to collaborate directly over what the astronaut sees, with the ground operator able to see the crew member's view in 3D, provide interactive guidance, and draw annotations into the crew member's environment. In Procedure Mode, animated virtual illustrations are displayed on top of objects with which a crew member is interacting. This mode can be used for guidance and instructional purposes in standalone scenarios. Sidekick is being deployed for use on the International Space Station. The Cygnus CRS Orb-4 commercial resupply mission on December 3, 2015 will reportedly bring the HoloLens hardware to the crew of the ISS.
  • Construction: Blueprints can be turned into 3D immersive environments on build sites, allowing engineers to physically see their designs all around them. This allows engineers to spot possible design flaws.
  • Rapid Prototyping: The HoloLens can be connected to CAD software to allow for instant 3D renderings of computer-designed products. This allows multiple prototypes of a product to be examined and tweaked without having to physically construct each one for testing.
