\documentclass[letterpaper,12pt]{article}
\usepackage{ambarkutuk-paper}
\usepackage[letterpaper, margin=1in]{geometry}
\usepackage{layout}
\usepackage{appendix}
\usepackage{times}
\usepackage{ragged2e}
% \title{Information Theoretic Approach to Multi-sensor Perception Problems}
% \author[1]{Creed Jones}
% \author[1]{Paul Plassmann}
% \affil[1]{The Bradley Department of Electrical and Computer Engineering}
\input{etc/definitions}
\input{etc/glossaries}
\begin{document}
\justifying
\pagenumbering{gobble}
\begin{center}\large{\textbf{Occupant Localization Based on Uncertain Vibro-measurements}}\end{center}
% \begin{center}\large{\textbf{Information Theoretic Approach to Solve Gait Analysis Problem with Multi-Sensor Perception}}\end{center}
Existing methods for solving the indoor occupant localization problem with passive perception schemes (determining, for example, the location of the footsteps of a person walking across a room without an active emitter) suffer from high uncertainty.
In my work, I present a novel probabilistic localization framework that tackles both model imperfections and measurement uncertainty to improve the precision of computed localization estimates.
The key ideas are (1) the relaxation of the normality assumption for each individual sensor's estimate, and (2) the employment of a maximum-likelihood cost function to combine these individual sensor distributions.
While the proposed technique requires the solution of a non-linear equation, it provides significantly more precise location estimates.
Specifically, this work studies the structural vibration signal captured by floor-mounted accelerometers\footnote{In this document, the words ``sensor'' and ``accelerometer'' are used interchangeably.} due to an occupant's footfall patterns to determine the whereabouts of an occupant in a room.
% The proposed solution does not require occupants to carry special devices, markers, or beacons; hence, it is a passive approach while the sensible area (field-of-view) cannot be occluded; however, the measurements in-hand are uncertain and governing phenomenon, i.e., wave propagation, is signicantly complex.
% The proposed solution is unconventional (with relaxed assumptions), unintrusive (from privacy perspective), and unconstrained (from mode of deployment aspect) approach.
\noindent\textbf{\underline{Contributions}}: The original contributions of this study are given below:
\begin{itemize}
% \item A measurement model that models different sensing uncertainties, \vspace{-2ex}
\item A framework that derives the uncertainty bounds of vibro-localization techniques, \vspace{-2ex}
\item Employment of multi-sensing principles to achieve minimal localization uncertainty, \vspace{-2ex}
\item Multiple validation studies based on simulation and experimental data.
\end{itemize}
\vspace{-4ex}\subsubsection*{Method}\vspace{-1ex}
An overview of the proposed framework is shown in \Cref{fig:overview}.
When an occupant takes a step, the footfall applies a force, i.e., the ground reaction force, on the floor, which generates a structural vibration wave in it.
This wave is then sensed by $m$ accelerometers, yielding a vibro-measurement vector $\vect{z}_i = (z_i[1], \ldots, z_i[n])^\top \in \mathbb{R}^n$ for each sensor $i \in \{1,\ldots, m\}$, obtained at discrete time steps $k \in \{1,\ldots, n\}$.
% The measurement of $i^{th}$ sensor for discrete time steps $k=\{1,\ldots, n\}$ constitute the elements of the vibro-measurement vector $\vect{z}_i$, i.e., $\vect{z}_i = \left(z_i[1], \ldots, z_i[n]\right)^\top \in \mathbb{R}^n$.
It is well known in the literature that vibro-measurements are often corrupted by random measurement errors and drift due to sensor bias.
To tackle such disturbances, we employ a probabilistic approach that evaluates how likely it is to obtain the measurement vector $\vect{z}_i$ when its \gls{pdf} is given as $\f{\vect{Z}_i}$.
\begin{equation*}
\boxed{
\vect{x}_i = \vect{x}_{true} + \vect{\chi}_i = \vect{h}_i(\vect{z}_i; \vect{\beta})
}
\end{equation*}
The equation above forms the backbone of the localization framework $\vect{h}_i(\cdot)$, where the calibration vector $\vect{\beta}$ collects parameters describing the characteristics of the wave propagation phenomenon occurring in the floor and is assumed to be known when the technique is deployed.
With this representation, the location estimate $\vect{x}_i$ and its corresponding localization error $\vect{\chi}_i$ are defined with respect to the occupant's true heel-strike location $\vect{x}_{true}$.
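As a concrete illustration, the propagation of measurement uncertainty through $\vect{h}_i(\cdot)$ can be sketched with a Monte Carlo simulation. Everything below is a hypothetical stand-in: the toy \texttt{h\_i}, the single-entry measurement vector, the wave-speed constant standing in for $\vect{\beta}$, and the noise levels are illustrative assumptions, not the framework's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration: wave speed (m/s) in the floor -- a toy
# stand-in for the calibration vector beta, assumed known.
beta_wave_speed = 500.0

def h_i(z, sensor_xy, beta):
    """Toy stand-in for the localization framework h_i(.):
    estimate range from the arrival time encoded in z, then place the
    estimate at a uniformly random bearing, reflecting the complete
    lack of directional information."""
    onset_time = z[0]                      # first element: arrival time (s)
    r = beta * onset_time                  # range estimate (m)
    theta = rng.uniform(0.0, 2.0 * np.pi)  # uniform bearing in (0, 2*pi)
    return sensor_xy + r * np.array([np.cos(theta), np.sin(theta)])

# Propagate measurement uncertainty by Monte Carlo: perturb the
# measurement, push each sample through h_i; the spread of the
# resulting x_i samples approximates the sensor's belief pdf.
sensor_xy = np.array([0.0, 0.0])
true_onset = 0.004                         # i.e., 2 m away at 500 m/s
samples = np.array([
    h_i(np.array([true_onset]) + rng.normal(0.0, 2e-4, size=1),
        sensor_xy, beta_wave_speed)
    for _ in range(5000)
])
print(samples.mean(axis=0), samples.std(axis=0))
```

The empirical spread of these samples plays the role of the sensor's belief $\f{\vect{X}_i}$; note how the uniform bearing smears the probability mass into a ring around the sensor.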
In short, the localization framework embraces the measurement error in the vector $\vect{z}_i$ and the imperfections in $\vect{\beta}$ and $\vect{h}_i(\cdot)$ by considering the \glspl{pdf} $\f{\vect{Z}_i}$.
Therefore, it yields another set of \glspl{pdf}, $\f{\vect{X}_i}$, denoting each sensor's belief about the occupant location.
In essence, each \gls{pdf} $f_{\vect{X}_i}\left(\vect{x}\right)$ assigns a probability density to any arbitrary location vector $\vect{x}$ in the localization space, forming the sensor's belief about the occupant location.
Subsequently, these \glspl{pdf} are combined within a Sensor Fusion algorithm to obtain the consensus among the sensors' beliefs, $\f{\vect{X}_1, \ldots, \vect{X}_m}$.
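The fusion step can be sketched numerically under illustrative assumptions: Gaussian per-sensor beliefs whose spread grows with sensor-to-impact distance, conditional independence across sensors, and an arbitrary grid over the localization space. None of these choices are the framework's actual model (which relaxes normality); the sketch only shows the product-and-mode mechanics.

```python
import numpy as np

# Grid over the localization space (m); sizes are arbitrary.
xs = np.linspace(-5.0, 5.0, 201)
ys = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(xs, ys)

def sensor_belief(sensor_xy, impact_xy, spread):
    """Illustrative per-sensor belief: a Gaussian centered on the impact,
    with spread growing with the sensor-to-impact distance."""
    d2 = (X - impact_xy[0])**2 + (Y - impact_xy[1])**2
    sigma = spread * (1.0 + np.hypot(*(impact_xy - sensor_xy)))
    return np.exp(-0.5 * d2 / sigma**2) / (2.0 * np.pi * sigma**2)

impact = np.array([1.0, -0.5])
sensors = [np.array([-4.0, 0.0]), np.array([4.0, 2.0]), np.array([0.0, -4.0])]

# Sensor Fusion: assuming conditionally independent sensors, the joint
# belief is the (normalized) product of the individual pdfs; the mode of
# the joint pdf is taken as the location estimate.
joint = np.ones_like(X)
for s in sensors:
    joint *= sensor_belief(s, impact, spread=0.3)
joint /= joint.sum() * (xs[1] - xs[0]) * (ys[1] - ys[0])

i, j = np.unravel_index(np.argmax(joint), joint.shape)
print("fused estimate:", xs[j], ys[i])
```

Because the product weights each sensor by how peaked its belief is, distant (diffuse) sensors influence the consensus less than nearby (certain) ones.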
% where the vector of imperfect time-domain vibro-measurements of a single-axis accelerometer is given by $\vect{z}_i = \left(z_i[1], \ldots, z_i[n]\right)^\top \in \mathbb{R}^n$ between time steps $k = \{1, \ldots, n\}$ for all sensors $i = \{1, \ldots, m\}$.
% % are modeled as the combination of the true vibro-measurement that the sensor is supposed to register, $z_t[k]$, and random effects of sensor imperfections, $\zeta[k]$.
% % \begin{equation*}
% % \vect{z} = \{z[k]:k = \{1,\ldots n\}\}, \qquad \text{where } z[k] = z_t[k] + \zeta[k], \text{ and }\zeta[k] \sim \N{\delta}{\sigma_{\zeta}}.
% % \end{equation*}
% % \vspace{-5ex}
% The proposed localization technique yields a \gls{pdf} of location estimate $f_{\vect{X}_i}\left(\vect{x}_i \right)$ by using the localization framework $\vect{h}_i\left(\cdot\right)$ and the \gls{pdf} of the vibro-measurements $\f{\vect{Z}_i}$, which are straightforward to obtain and are unique to each individual sensor.
% % The proposed method is able to derive the theoretical \gls{pdf} of location estimates
% % $(\vect{x}_1, \ldots, \vect{x}_m)$
% % Specifically, each sensor's likelihood function represents where the sensor ``thinks'' the occupant's foot landed.
% By combining the \gls{pdf} of each sensor, the joint-\gls{pdf} is obtained where the peak (mode) of the joint-\gls{pdf} is finally determined as the location estimation.
% In short, the generation of this joint-\gls{pdf} is called ``Sensor Fusion'' in the literature where the importance of each sensor's \gls{pdf} is scaled according the information that the \gls{pdf} carries.
% % Due to stochastic nature of the problem, we employ a probabilistic approach such that the localization framework $\vect{h}(\cdot)$ yields to a likelihood function defined over the localization space.
% % Specifically, each sensor's likelihood function represents where the sensor ``thinks'' the occupant's foot landed.
% % By combining the likelihood function of each sensor, the joint likelihood function is obtained where the peak (mode) of the joint likelihood function is finally determined as the location estimation.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{k_50.png}
\caption{Results of the localization framework $\vect{h}_i(\cdot)$ and the Sensor Fusion algorithm: the \gls{pdf} of each sensor's belief about the occupant location}
\end{figure}
\vspace{-2ex}\subsubsection*{Results}\vspace{-1ex}
The efficacy and validity of the proposed technique were assessed with a series of controlled experiments.
These experiments were held in Goodwin Hall at Virginia Tech, which is equipped with over 200 accelerometers embedded in its superstructure~\cite{alajlouni2020passive}.
In the experiments, the occupants were asked to walk along a 40-meter stretch of the south hallway.
We limited our sensory scope to the eleven accelerometers closest to this hallway, placed underneath its floor.
\Cref{fig:result} demonstrates two types of results we obtain from the proposed framework: \Cref{fig:pdf} depicts a step location and the corresponding \gls{pdf} of each sensor's belief about the occupant location, overlaid in the same figure, while \Cref{fig:joint_pdf} shows the result of the Sensor Fusion algorithm.
As can be seen in \Cref{fig:pdf}, each sensor independently generates a \gls{pdf} about the impact location.
A significant inverse relationship between the impact distance and the certainty of the \gls{pdf} was observed in the results.
In other words, the localization framework $\vect{h}_i(\cdot)$ becomes less certain as the occupant moves away from the sensor, and vice versa.
It is important to note that the directionality component in the localization framework $\vect{h}_i(\cdot)$ is modeled with a complete lack of information, i.e., a uniform distribution over $(0, 2\pi)$.
Therefore, the sensors yielded radially symmetric \glspl{pdf} around their locations.
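This radial symmetry can be made explicit with a short derivation (the range density $f_{R_i}$ below is a symbol introduced here for illustration, not notation from the framework): writing the belief in polar coordinates $(r, \theta)$ centered at sensor $i$'s location, the uniform bearing model makes the joint density of range and bearing factor as
\begin{equation*}
f_{R_i, \Theta_i}(r, \theta) \,=\, f_{R_i}(r) \cdot \frac{1}{2\pi},
\qquad \theta \sim \mathcal{U}(0, 2\pi),
\end{equation*}
which does not depend on $\theta$; every level set of the belief is therefore a circle centered at the sensor.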
On the other hand, \Cref{fig:joint_pdf} depicts the joint-\gls{pdf} that results from the Sensor Fusion algorithm.
As the figure suggests, the joint-\gls{pdf} represents the consensus among all the sensors while their contribution to the consensus is scaled according to how certain each \gls{pdf} is.
Consequently, the joint-\gls{pdf} shows a very small spread over the localization space.
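The shrinking spread of the joint-\gls{pdf} can be illustrated with a Gaussian special case (a simplification for intuition only; the actual per-sensor \glspl{pdf} need not be normal, as the framework explicitly relaxes that assumption): if the $m$ beliefs were independent Gaussians with variances $\sigma_i^2$, their normalized product would again be Gaussian, with
\begin{equation*}
\frac{1}{\sigma_{\text{fused}}^2} \,=\, \sum_{i=1}^{m} \frac{1}{\sigma_i^2}
\qquad \Longrightarrow \qquad
\sigma_{\text{fused}} \,\le\, \min_i \sigma_i,
\end{equation*}
so adding sensors can only tighten the consensus, with the most certain (small-$\sigma_i$) sensors dominating.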
% \begin{figure}[!h]
% \centering
% \begin{subfigure}[b]{0.3\textwidth}
% \centering
% \includegraphics[width=\textwidth]{example-image-a}
% \caption{}
% % \label{fig:three sin x}
% \end{subfigure}
% \hfill
% \begin{subfigure}[b]{0.3\textwidth}
% \centering
% \includegraphics[width=\textwidth]{example-image-b}
% \caption{}
% % \label{fig:five over x}
% \end{subfigure}
% \caption{
% \textbf{(a)}
% \textbf{(b)}
% }
% \label{fig:xy_mle3}
% \end{figure}
% \Cref{fig:xy_mle3} demonstrates the error statistics of the estimated heel-strike locations as a function of the location the heel-strike locations.
% The left and center plot shows these error by only using the structural vibration and visual signal only.
% As can been seen from the left and center plots, the error characteristics of these sensing modalities are significantly different.
% When the result of these sensors are fused, the right plot is obtained where the SF algorithm greatly benefits from structural vibration and visual signals.
% \vspace{-2ex}\subsubsection*{Broader Impact}\vspace{-1ex}
% \lipsum[1]
\vspace{-2ex}\subsubsection*{Conclusions and Future Work}\vspace{-1ex}
This document presented a passive indoor occupant localization technique that relies on the vibro-measurements of a floor.
In the controlled experiments, the proposed localization framework showed promising results and capabilities in the localization problem.
From the results, we observed that the proposed framework yielded \glspl{pdf} that accurately represent the occupant location.
To handle the uncertainty of the estimations, a multi-sensor perception scheme was employed.
This resulted in a significant increase in the information about the occupant location.
We can summarize this work with a counter-intuitive statement: ``Engineering principle suggests replacing sub-standard sensors with better ones; we suggest employing many of them instead.''