EchoTag: Accurate Infrastructure-Free Indoor Location Tagging with Smartphones

Abstract – We propose a novel mobile system, called EchoTag, that enables phones to tag and remember indoor locations without requiring any additional sensors or pre-installed infrastructure. The main idea behind EchoTag is to actively generate acoustic signatures by transmitting a sound signal with a phone's speakers and sensing its reflections with the phone's microphones. This active sensing provides finer-grained control of the collected signatures than the widely-used passive sensing. For example, because the sensing signal is controlled by EchoTag, it can be intentionally chosen to enrich the sensed signatures and remove noise from useless reflections. Extensive experiments show that EchoTag distinguishes 11 tags at 1cm resolution with 98% accuracy and maintains 90% accuracy even a week after its training. With this accurate location tagging, one can realize many interesting applications, such as automatically turning on the silent mode of a phone when it is placed at a pre-defined location/area near the bed, or streaming favorite songs to speakers when it is placed near a home entertainment system. Most participants in our usability study agree on the usefulness of EchoTag's potential applications and the adequacy of its sensing accuracy for supporting these applications.

People

Faculty: Kang G. Shin
Current Students: Yu-Chih Tung

System Overview

Locations are sensed based on acoustic reflections while the tilt/WiFi readings are used to determine the time to trigger acoustic sensing, thus reducing the energy consumption of the sensing process.

The above figure gives an overview of EchoTag, which consists of a recording phase and a recognition phase. In the recording phase, multiple short sequences of sound signals are emitted from the phone's speakers. Each sequence is repeated several times with different delays between the left and right channels to enrich the received signatures, as discussed in the following sections. The readings of the built-in inertial sensors are also recorded for further optimization. After recording the signature, the selected target application/function and the collected signatures are processed and saved in the device's storage. In the recognition phase, the phone continuously checks whether the WiFi SSID and the tilt of the phone match the collected signatures. If the tilt and WiFi readings are similar to those of one of the recorded target locations, the same acoustic sensing process is executed again to collect a new signature. This newly collected signature is compared with the previous records in the database using a support vector machine (SVM). If they match, the target application/function is automatically activated.
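The two-stage recognition flow above (a cheap tilt/WiFi gate, then acoustic matching) can be sketched as follows. This is a minimal illustration, not EchoTag's actual code: a nearest-centroid cosine match stands in for the paper's SVM classifier, and all names and values (`gate_check`, `tilt_tol`, the feature vectors) are assumptions made for the example.

```python
import numpy as np

def gate_check(wifi_ssid, tilt, records, tilt_tol=5.0):
    """Cheap pre-filter: only trigger acoustic sensing when the current
    WiFi SSID and tilt roughly match a recorded target location."""
    return [r for r in records
            if r["ssid"] == wifi_ssid and abs(r["tilt"] - tilt) < tilt_tol]

def match_signature(candidates, new_sig):
    """Compare a freshly sensed acoustic signature against stored ones.
    A nearest-centroid cosine match stands in for the paper's SVM."""
    best, best_score = None, -1.0
    for r in candidates:
        centroid = np.mean(r["signatures"], axis=0)
        score = np.dot(centroid, new_sig) / (
            np.linalg.norm(centroid) * np.linalg.norm(new_sig))
        if score > best_score:
            best, best_score = r, score
    return best

# Illustrative database of recorded tags and their target actions.
records = [
    {"ssid": "home", "tilt": 0.0, "action": "silent_mode",
     "signatures": [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])]},
    {"ssid": "home", "tilt": 0.0, "action": "stream_music",
     "signatures": [np.array([0.0, 0.1, 1.0])]},
]

hit = match_signature(gate_check("home", 1.0, records),
                      np.array([1.0, 0.15, 0.05]))
print(hit["action"])  # → silent_mode
```

In the real system the gate avoids running the (more energy-hungry) acoustic sensing continuously; only candidates that pass the tilt/WiFi check are compared acoustically.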

Acoustic Signature

Responses vary with location (i.e., the distribution of light and dark vertical lines), and this variation is used as a feature for accurate location tagging.

EchoTag differentiates locations based on their acoustic signatures, characterized by the uneven attenuation that occurs at different frequencies, as shown in the above figure. Note that EchoTag does not examine the uneven attenuation in the background noise but that in the sound emitted from the phone itself. For example, as shown in this figure, the recorded responses of a frequency sweep from 11kHz to 22kHz are not flat but exhibit several significant degradations at certain frequencies.
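The 11–22 kHz frequency sweep described above can be generated and inspected with a few lines of numpy. The sampling rate and sweep duration below are illustrative assumptions, not EchoTag's exact parameters; only the sweep band comes from the figure.

```python
import numpy as np

fs = 48000                  # assumed sampling rate (Hz)
T = 0.1                     # assumed sweep duration (s)
f0, f1 = 11000.0, 22000.0   # sweep band from the figure

t = np.arange(int(fs * T)) / fs
# Linear chirp: instantaneous frequency rises from f0 to f1 over T seconds.
phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t ** 2)
sweep = np.sin(phase)

# Magnitude spectrum of the emitted sweep (before any room reflections);
# the recorded response is this spectrum shaped by the attenuation pattern.
spectrum = np.abs(np.fft.rfft(sweep))
freqs = np.fft.rfftfreq(len(sweep), 1 / fs)

in_band = (freqs >= f0) & (freqs <= f1)
energy_fraction = np.sum(spectrum[in_band] ** 2) / np.sum(spectrum ** 2)
print(f"fraction of energy in 11-22 kHz band: {energy_fraction:.3f}")
```

Dividing the recorded spectrum by the emitted one (per frequency bin) would then expose the location-dependent dips seen as dark vertical lines in the figure.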

During the recording of the emitted sound, hardware imperfections of microphones/speakers, absorption by the touched surface material, and multipath reflections from nearby objects incur different degradations at different frequencies. Only the degradation caused by multipath reflections is a valid signature for sensing locations even on the same surface.

There are three main causes of this uneven attenuation: (a) hardware imperfection, (b) the surface's absorption of the signal, and (c) multipath fading caused by reflection. As shown in the above figure, when sound is emitted from the speakers, hardware imperfections make the signal louder at some frequencies and weaker at others. After the emitted sound reaches the surface touched by the phone, the surface material absorbs the signal at some frequencies. Different materials have different absorption properties, thus differentiating the surface on which the phone is placed. Then, when the sound is reflected by the touched surface and the surrounding objects, the combination of multiple reflections makes the received signal constructive at some frequencies and destructive at others. This phenomenon is akin to multipath (frequency-selective) fading in wireless transmissions. For example, if the reflection from an object arrives at the microphones t milliseconds later than the reflection from the touched surface, then the frequency components of the two reflections at 10^3/(2t) Hz will have opposite phases, thus weakening their combined signal. When the reflections reach the phone's microphone, they are further degraded by imperfect microphone hardware design.

For the purpose of accurate location tagging, among the properties mentioned above, EchoTag relies on the multipath fading of sound, as this is the only signature that varies with location even on the same surface.

Publications

  • Yu-Chih Tung and Kang G. Shin, "EchoTag: Accurate Infrastructure-Free Indoor Location Tagging with Smartphones," Proceedings of the 21st ACM Annual International Conference on Mobile Computing and Networking (MobiCom '15), September 7-11, 2015, Paris, France. PDF