
Virtual Proximity Sensing (VPS)

Introduction

People are using computers in a growing number of situations, many of which go beyond the traditional desktop, so there is a growing need for new input devices and techniques that better suit the task at hand. A key attribute of an appropriate technique is its ability to provide context-dependent feedback and guidance based on where it is used, but this is difficult because it requires knowing a person’s position in space relative to the target surface. In this post we introduce virtual proximity sensing (VPS) as a means of obtaining such information. VPS uses sensors that measure parameters indicative of distance from a surface to estimate the actual distance, and thus provides proximity information that an interaction technique or device can use as context for operation. We describe two implementations: one using accelerometer data from an off-the-shelf smartphone, and another using capacitive sensing with a custom circuit board attached to a laptop’s touchpad. We then show how VPS can be incorporated into existing applications such as focus+context displays, digital document management systems, interactive video editing systems, and games to provide functionality that depends on proximity. Finally, we present insights obtained through interviews about how people interact with computers in their homes, what they might use virtual proximity sensing for if it were available today, and what they would like to see done with it in the future.
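To make the core idea concrete, here is a minimal sketch of how a raw reading that merely correlates with distance, such as a capacitance count, might be calibrated into an actual distance estimate. The inverse-law model, constants, and function name are illustrative assumptions, not details taken from the implementations described above.

```python
# Minimal sketch: turning a raw proximity-correlated reading into a distance
# estimate. The calibration model and constants are illustrative assumptions,
# not taken from the VPS implementations described in this post.

def estimate_distance_mm(raw_capacitance: float,
                         baseline: float = 120.0,
                         scale: float = 5000.0) -> float:
    """Estimate distance (mm) from a raw capacitance count.

    Assumes an approximately inverse relationship between the capacitance
    increase over a no-hand baseline and the hand's distance from the
    sensed surface (d ~ scale / (C - baseline)).
    """
    delta = raw_capacitance - baseline
    if delta <= 0:
        return float("inf")  # nothing detected near the surface
    return scale / delta


if __name__ == "__main__":
    for reading in (125.0, 170.0, 620.0):
        print(f"raw={reading:6.1f} -> ~{estimate_distance_mm(reading):6.1f} mm")
```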

What is a virtual proximity sensor (VPS)?

A virtual proximity sensor (VPS) is a software component that uses sensor data to estimate the distance to an object or surface. It draws on various types of input, such as touch, hover, and vision, to determine how close a user is to an object. The VPS output is then used to select an appropriate mode of operation for a human-computer interaction device, such as a robotic arm or hand that picks up objects, or to decide what information should be displayed on screen in augmented reality applications.

VPS is based on the principle that there is a direct relationship between the distance from which one views or touches something and how much detail can be perceived. As we move closer to something, our eyes adjust their focus (accommodation) until they are focused directly on the object of interest. Artists and photographers have long exploited this phenomenon, bringing subjects into focus by placing them at different distances from lenses with different focal lengths; however, that approach does not work well when modes need to change quickly, because a camera needs time to reposition and refocus.

What is the use of a virtual proximity sensor?

The VPS output is used to select an appropriate mode of operation for a human-computer interaction device. The VPS provides information on the distance and orientation of an object with respect to the sensor. This data can be used by software that implements a virtual pointing interface (VPI). A VPI allows users to control their computer by moving their finger over a surface, without touching it.
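As an illustration of how a VPS output might drive such mode selection, the sketch below maps an estimated distance to one of three hypothetical interaction modes. The thresholds and mode names are assumptions chosen for illustration, not values from a particular VPS or VPI implementation.

```python
from enum import Enum, auto


class InteractionMode(Enum):
    """Hypothetical modes a device might switch between based on proximity."""
    AMBIENT = auto()   # user far away: show coarse, glanceable information
    HOVER = auto()     # finger near the surface: virtual pointing (VPI)
    TOUCH = auto()     # finger on the surface: direct manipulation


def select_mode(distance_mm: float,
                touch_threshold_mm: float = 2.0,
                hover_threshold_mm: float = 60.0) -> InteractionMode:
    """Map a VPS distance estimate to an interaction mode.

    The thresholds are illustrative; a real system would calibrate them
    per sensor and add hysteresis to avoid flickering between modes.
    """
    if distance_mm <= touch_threshold_mm:
        return InteractionMode.TOUCH
    if distance_mm <= hover_threshold_mm:
        return InteractionMode.HOVER
    return InteractionMode.AMBIENT


if __name__ == "__main__":
    for d in (0.5, 25.0, 300.0):
        print(f"{d:6.1f} mm -> {select_mode(d).name}")
```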



The VPS uses a number of different sensors, including touch and hover.

By combining touch and hover measurements, the VPS can determine the distance between the user and the interaction surface more accurately than either method would allow alone. To make use of this information, the VPS must be connected to the computer, where its output signals control how the computer responds when you interact with it. This can mean selecting an appropriate mode of operation for a human-computer interaction device (HCD), or changing what the HCD displays based on where you are touching or hovering over its screen.
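The sketch below shows one plausible way (an assumption for illustration, not the implementation described in this post) to combine a binary touch signal with a noisy hover-based distance estimate: contact from the touch sensor pins the distance at zero, while the hover estimate is exponentially smoothed to track the finger above the surface.

```python
from dataclasses import dataclass


@dataclass
class FusedProximity:
    """Simple fusion of a binary touch sensor with a noisy hover estimate.

    This is a hedged sketch: the smoothing factor and the fusion rule are
    illustrative assumptions, not this post's actual implementation.
    """
    alpha: float = 0.3              # exponential smoothing factor for hover data
    _smoothed_mm: float = float("inf")

    def update(self, touch_contact: bool, hover_distance_mm: float) -> float:
        """Return a fused distance estimate in millimetres."""
        if touch_contact:
            # A definite contact overrides the hover estimate entirely.
            self._smoothed_mm = 0.0
            return 0.0
        if self._smoothed_mm == float("inf"):
            self._smoothed_mm = hover_distance_mm
        else:
            self._smoothed_mm = (self.alpha * hover_distance_mm
                                 + (1.0 - self.alpha) * self._smoothed_mm)
        return self._smoothed_mm


if __name__ == "__main__":
    fusion = FusedProximity()
    samples = [(False, 80.0), (False, 55.0), (False, 20.0), (True, 5.0)]
    for contact, hover in samples:
        print(f"touch={contact!s:5} hover={hover:5.1f} mm "
              f"-> fused={fusion.update(contact, hover):5.1f} mm")
```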

Conclusion: virtual proximity sensing

In summary, we have presented the architecture of a virtual proximity sensor (VPS) that uses a set of sensors to estimate the distance to a surface. We have also described its implementation in an embedded system and its use in selecting an appropriate mode of operation for a human-computer interaction device.
