Other capabilities that might be enabled by view tracking include rotating an advertisement when a person keeps looking at the same spot, auto-focusing to show additional detail, and identifying focus to determine what someone is looking at from a single vantage point. View tracking may also enable attention-based rewards and tokens. Also, experiences such as advertisements may remain static until someone looks at them, and then become interactive. As another example, if a person looks at a billboard long enough, it may be entered into a “cookie list” (as is common with a typical web browser).
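The dwell-based behaviors above can be sketched as a simple accumulator: gaze time on a target is accumulated per frame, crossing one threshold makes the experience interactive, and crossing a longer threshold records interest in a cookie-like list. This is a minimal illustrative sketch; the class names, thresholds, and reset-on-look-away policy are assumptions, not a disclosed implementation.

```python
# Hypothetical sketch of dwell-based view tracking: an advertisement stays
# static until looked at, and sustained gaze adds it to a "cookie list".
from dataclasses import dataclass

@dataclass
class GazeTarget:
    name: str
    dwell_s: float = 0.0       # accumulated gaze time, in seconds
    interactive: bool = False  # static until someone looks at it

class DwellTracker:
    def __init__(self, activate_after=0.5, cookie_after=2.0):
        self.activate_after = activate_after  # seconds before becoming interactive
        self.cookie_after = cookie_after      # seconds before recording interest
        self.cookie_list = []                 # analogous to a browser's cookies

    def update(self, target, gazed, dt):
        """Advance the tracker by one frame of duration dt seconds."""
        if not gazed:
            target.dwell_s = 0.0              # assumed policy: reset on look-away
            return
        target.dwell_s += dt
        if target.dwell_s >= self.activate_after:
            target.interactive = True
        if target.dwell_s >= self.cookie_after and target.name not in self.cookie_list:
            self.cookie_list.append(target.name)

# Usage: simulate 2.5 seconds of gaze at a billboard in 0.1-second frames.
tracker = DwellTracker()
billboard = GazeTarget("billboard_42")
for _ in range(25):
    tracker.update(billboard, gazed=True, dt=0.1)
print(billboard.interactive, tracker.cookie_list)
```

The two thresholds are independent, so an experience can become interactive quickly while interest is recorded only after a longer, more deliberate gaze.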
Thus, knowing the location where a rendering experience occurs may not be sufficient to deliver a complete experience. In addition to the direction a user is looking, a truly seamless xR experience may also be based on how long the user looks at a specific object, the user's movements, and other tracking mechanisms. Together, these tracking mechanisms may be known as consumer input mechanisms.
That is, consumer input mechanisms are the ways in which the consumer can directly impact the xR experience. Typically, the rendering device or other aspects of the xR ecosystem can automatically ‘track’ or ‘follow’ the consumer to capture such consumer input mechanisms (although there may be other ways to capture such information).
In some examples, consumer input in the xR experience may be based on a three-step process: look at something, point at it to select it, then click to input the selection. When the eyes become the selection tool, the first two steps become one. There is also significant power in non-verbal cues in multi-user xR. For example, with eye tracking, an xR representation of a person (a.k.a. their “avatar”) can flick a sideways glance, blink, or exhibit other eye-based social actions, which may form an important part of a natural social dynamic.
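The collapse of "look" and "point" into a single step can be illustrated with a small function: the eye tracker is assumed to have already resolved which object the user is looking at (e.g., by ray-casting from the gaze direction, upstream of this sketch), so a single confirm action completes the selection. The function name and parameters are hypothetical.

```python
# Hypothetical sketch: with eye tracking, gaze is the pointer, so the
# three-step look/point/click process reduces to gaze plus one confirm.

def select_with_gaze(gaze_target, confirm_pressed):
    """Return the selected object, or None if nothing is selected yet.

    gaze_target: the object the eye tracker reports the user is looking at
                 (resolution via gaze ray-casting is assumed to happen upstream),
                 or None if the user is not looking at any selectable object.
    confirm_pressed: True when the user issues the confirm input (e.g., a click).
    """
    if gaze_target is not None and confirm_pressed:
        return gaze_target  # looking already "points", so one confirm suffices
    return None

# Usage: looking at a menu item and confirming selects it in a single step.
print(select_with_gaze("menu_item_3", confirm_pressed=True))   # menu_item_3
print(select_with_gaze("menu_item_3", confirm_pressed=False))  # None
```

Because the gaze target stands in for the pointer position, no separate pointing device or pointing gesture is needed between looking and confirming.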