Rendering only the elements within the user's gaze may help improve Apple Glass's video-processing performance.

An augmented reality (AR) system such as Apple Glass depends on gathering data about the surrounding environment to present the user with convincing mixed-reality imagery. The process involves capturing a live camera feed of the objects within the user's field of vision, digitally manipulating those images, and displaying them to the user in an altered state. Because most VR and AR headsets are tethered to a host computer, gathering all that data can become a bottleneck: only a limited amount of data can travel through the tether at any given time. By rendering only the elements within the user's gaze, Apple's approach may improve the performance of a VR or AR headset, particularly Apple Glass.

In a patent recently granted by the US Patent and Trademark Office, Apple seeks to address this bandwidth problem by narrowing down the video data that a VR or AR headset such as Apple Glass must handle before the main processing occurs. The document, entitled "Gaze direction-based adaptive pre-filtering of video data," suggests that the headset could capture an image of the environment and then apply filters to the scene, each covering a different area of the video frame.
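A minimal sketch of that idea might look like the following. The type names, the two quality tiers, and the `fovealRadius` parameter are illustrative assumptions of mine, not terms taken from the patent: a small full-quality filter is centered on the gaze point, and a reduced-quality filter covers the rest of the frame.

```swift
import CoreGraphics

// Hypothetical quality tiers for the pre-filtering stage; the patent does not
// name these levels, so they are illustrative only.
enum FilterQuality {
    case full      // region under the gaze, kept at native resolution
    case reduced   // surrounding region, downsampled before transmission
}

// A filter covering one area of the video frame, positioned relative to the gaze.
struct RegionFilter {
    let region: CGRect
    let quality: FilterQuality
}

// Builds two illustrative filters: a small high-quality window centered on the
// gaze point and a reduced-quality filter covering the whole frame.
func makeFilters(gaze: CGPoint, frame: CGRect, fovealRadius: CGFloat) -> [RegionFilter] {
    let foveal = CGRect(x: gaze.x - fovealRadius,
                        y: gaze.y - fovealRadius,
                        width: fovealRadius * 2,
                        height: fovealRadius * 2).intersection(frame)
    return [RegionFilter(region: foveal, quality: .full),
            RegionFilter(region: frame, quality: .reduced)]
}
```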

The filters, which the document says determine which data to transmit to the host for processing, are positioned according to the user's gaze. The filtered data layers are then sent to the host over a tether or a wireless link; the host processes the images and returns them to the VR or AR headset, such as Apple Glass, for display. The logic behind this process is that there is no need to gather and process everything the user could see. As the document explains, the user's face may point in one direction while their eyes look somewhere else, so fully rendering the data in the direction the face is pointing is largely wasted effort.
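The round trip could be sketched as below, under the assumption of a generic transport. The `HostLink` protocol, `FilteredLayer` type, and `processFrame` function are placeholders of my own devising, not Apple API; the point is simply that only the filtered layers cross the link in either direction.

```swift
import Foundation

// One filtered layer of the camera frame, ready for transmission.
struct FilteredLayer {
    let regionData: Data   // pixels the filter kept for this area of the frame
    let isPrimary: Bool    // true for the gaze-centered, full-quality layer
}

// Placeholder for the tethered or wireless connection to the host.
protocol HostLink {
    func send(_ layers: [FilteredLayer])   // transmit the reduced data set
    func receiveRenderedFrame() -> Data    // processed image coming back
}

// Headset side: filter, transmit, then display what the host returns.
func processFrame(cameraFrame: Data, primaryRegion: Data, link: HostLink) -> Data {
    let layers = [FilteredLayer(regionData: primaryRegion, isPrimary: true),
                  FilteredLayer(regionData: cameraFrame, isPrimary: false)]
    link.send(layers)                  // only the filtered layers cross the link
    return link.receiveRenderedFrame() // host-rendered frame, ready to display
}
```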

Apple's new system first detects the user's gaze, then applies that data point within the bounds of the image data. Multiple data subsets of different sizes and shapes can be defined over the image gathered by the VR or AR headset, such as Apple Glass, and assigned different priorities. For instance, the device can place the section the user is actively focusing on in the primary subset, while a larger secondary subset covering the wider area comes second. The system would then blend the boundary between the high-quality primary subset and the lower-quality secondary subset, with the rest of the camera feed overlaid in the AR app.
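One common way to achieve that kind of seamless blend, sketched below purely as an illustration, is to let quality fall off smoothly with distance from the gaze point rather than switching abruptly between subsets. The function name and the two radii are my assumptions; the patent does not specify a falloff curve.

```swift
import Foundation

// Illustrative blend between the high-quality primary subset and the
// lower-quality secondary subset: quality falls off smoothly with distance
// from the gaze point, so no hard seam is visible between the two regions.
func qualityWeight(distanceFromGaze d: Double,
                   innerRadius: Double = 100,   // full quality inside this radius
                   outerRadius: Double = 300) -> Double {
    if d <= innerRadius { return 1.0 }   // primary subset: full quality
    if d >= outerRadius { return 0.0 }   // secondary subset: reduced quality
    // Smoothstep falloff across the transition band between the two radii.
    let t = (d - innerRadius) / (outerRadius - innerRadius)
    return 1.0 - t * t * (3.0 - 2.0 * t)
}
```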