\documentclass[letterpaper,12pt]{article}
\usepackage{ambarkutuk-paper}
\usepackage[letterpaper, margin=1in]{geometry}
\usepackage{layout}
\usepackage{appendix}
\usepackage{times}
\usepackage{ragged2e}
% \title{Information Theoretic Approach to Multi-sensor Perception Problems}  

% \author[1]{Creed Jones}
% \author[1]{Paul Plassmann}
% \affil[1]{The Bradley Department of Electrical and Computer Engineering}
\author{}
\date{\vspace{-10ex}}

% \linenumbers
\input{etc/definitions}
\input{etc/glossaries}
\begin{document}
\justifying
% \maketitle
\begin{center}\large{\textbf{Information Theoretic Approach to Solve Gait Analysis Problem with Multi-Sensor Perception}}\end{center}

% \begin{center}\large{\textbf{Information Theoretic Approach to Solve Gait Analysis Problem with Multi-Sensor Perception}}\end{center}
\vspace{-5ex}\subsubsection*{Introduction}\vspace{-1ex}
This document briefly presents a solution to the indoor occupant localization problem with an approach that is unconventional (from a perception viewpoint), unintrusive (from a privacy perspective), and unconstrained (in its mode of deployment)~\cite{alajlouni2019new}.
The proposed solution does not require occupants to carry special devices, markers, or beacons; hence, it is a passive approach whose sensing area (field of view) cannot be occluded. However, it suffers from the uncertainty and complexity of the governing phenomenon.
In this work, the structural vibration signals captured by imperfect floor-mounted accelerometers in response to an occupant's footfall patterns are used to determine the occupant's whereabouts in a room.

\noindent\textbf{\underline{Contributions}}: The original contributions of this study are as follows:
\vspace{-1ex}
\begin{itemize}
    \item A measurement model that captures different sensing uncertainties, \vspace{-2ex}
    \item A framework that derives the uncertainty bounds of vibro-localization techniques, \vspace{-2ex}
    \item The employment of multi-sensor principles to achieve minimal localization uncertainty, \vspace{-2ex}
    \item Multiple validation studies based on simulation and experimental data.
\end{itemize}

\vspace{-4ex}\subsubsection*{Method}\vspace{-1ex}
\input{fig_system}
The overall schematic of the proposed technique is shown in \Cref{fig:overview}.
Briefly, the proposed technique processes the vibro-measurements of each of the $m$ accelerometers separately and then combines the results with a Sensor Fusion algorithm to obtain the consensus of all sensors' beliefs about the occupant location.

The main contribution of this study is a step localization framework $\vect{h}_i(\cdot)$ that provides a \gls{pdf} representing where the heel-strike event $\vect{x}_i$ occurred, given the measurement of an imperfect sensor, i.e., a sensor that yields noisy and likely bias-drifted measurements $\vect{z}_i$.
% The proposed framework decomposes step localization problem into two smaller problems: estimation of the distance $d_i = g_d\left(\vect{z}_i; \vect{\beta}_d\right)$, and directionality $\theta_i = g_\theta\left(\vect{z}_i; \vect{\beta}_\theta\right)$ of the impact location to the sensor.
With this representation, when the occupant's true heel-strike location is $\vect{x}_t$, the location estimate $\vect{x}_i$ and its corresponding localization error $\vect{\chi}_i$ are given by:
\begin{equation*}
    \vect{x}_i = \vect{x}_t + \vect{\chi}_i = \vect{h}_i(\vect{z}_i; \vect{\beta})
\end{equation*}
where the vector of imperfect time-domain vibro-measurements of a single-axis accelerometer is given by $\vect{z}_i = \left(z_i[1], \ldots, z_i[n]\right)^\top \in \mathbb{R}^n$ for time steps $k \in \{1, \ldots, n\}$ and sensors $i \in \{1, \ldots, m\}$.
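To make the role of sensor imperfections concrete, a minimal measurement model (a sketch assuming additive, bias-drifted Gaussian noise; the noise model may be richer in practice) can be written as
\begin{equation*}
    z_i[k] = z_t[k] + \zeta_i[k], \qquad \zeta_i[k] \sim \N{\delta_i}{\sigma_{\zeta,i}},
\end{equation*}
where $z_t[k]$ denotes the true vibration the sensor is supposed to register, $\delta_i$ the bias drift of sensor $i$, and $\sigma_{\zeta,i}$ its noise scale.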
% are modeled as the combination of the true vibro-measurement that the sensor is supposed to register, $z_t[k]$, and random effects of sensor imperfections, $\zeta[k]$.
% \begin{equation*}
%     \vect{z} = \{z[k]:k = \{1,\ldots n\}\}, \qquad \text{where } z[k] = z_t[k] + \zeta[k], \text{ and }\zeta[k] \sim \N{\delta}{\sigma_{\zeta}}.
% \end{equation*}
% \vspace{-5ex}
The proposed localization technique yields a \gls{pdf} of the location estimate, $f_{\vect{X}_i}\left(\vect{x}_i \right)$, by propagating the \gls{pdf} of the vibro-measurements $\vect{z}_i$, which is straightforward to obtain and unique to each individual sensor, through the localization framework $\vect{h}_i\left(\cdot\right)$.
The \gls{pdf} $f_{\vect{X}_i}\left(\vect{x} \right)$ assigns a probability to any arbitrary location vector $\vect{x}$ in the localization space, i.e., it represents the sensor's belief about the occupant location.
% The proposed method is able to derive the theoretical \gls{pdf} of location estimates 
% $(\vect{x}_1, \ldots, \vect{x}_m)$ 
% Specifically, each sensor's likelihood function represents where the sensor ``thinks'' the occupant's foot landed.
By combining the \gls{pdf} of each sensor, the joint \gls{pdf} is obtained; its peak (mode) is finally taken as the location estimate.
In short, the generation of this joint \gls{pdf} is called ``Sensor Fusion'' in the literature, where the weight of each sensor's \gls{pdf} in the fusion algorithm is scaled according to the information that the \gls{pdf} carries.
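As a concrete sketch of this fusion step (assuming the sensors' measurement noises are conditionally independent given the true heel-strike location; the weighting actually employed by the algorithm may differ), the joint \gls{pdf} and the fused estimate can be written as
\begin{equation*}
    f_{\vect{X}}\left(\vect{x}\right) \propto \prod_{i=1}^{m} f_{\vect{X}_i}\left(\vect{x}\right), \qquad \hat{\vect{x}} = \arg\max_{\vect{x}} f_{\vect{X}}\left(\vect{x}\right),
\end{equation*}
so that sharply peaked, i.e., more informative, sensor \gls{pdf}s dominate the location of the mode.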

% Due to stochastic nature of the problem, we employ a probabilistic approach such that the localization framework $\vect{h}(\cdot)$ yields to a likelihood function defined over the localization space.
% Specifically, each sensor's likelihood function represents where the sensor ``thinks'' the occupant's foot landed.
% By combining the likelihood function of each sensor, the joint likelihood function is obtained where the peak (mode) of the joint likelihood function is finally determined as the location estimation.
\begin{figure}[!t]
    \centering
    \begin{subfigure}[b]{0.45\textwidth}
        \centering
        \includegraphics[width=\textwidth]{k_50.png}
        \caption{}
        \label{fig:pdf}
    \end{subfigure}
    \hfill
    \begin{subfigure}[b]{0.45\textwidth}
        \centering
        \includegraphics[width=\textwidth]{example-image-a}
        \caption{}
        \label{fig:joint_pdf}
    \end{subfigure}

    \caption{\textbf{(a)} A heel-strike location and the \gls{pdf} of each sensor's belief about the occupant location, overlaid in the same figure. \textbf{(b)} The joint \gls{pdf} produced by the Sensor Fusion algorithm.}
    \label{fig:xy_mle2}
\end{figure}

\vspace{-2ex}\subsubsection*{Results}\vspace{-1ex}
The efficacy and validity of the proposed technique were assessed with a series of controlled experiments.
These experiments were conducted in Virginia Tech's Goodwin Hall, which is equipped with over 200 accelerometers embedded in its superstructure~\cite{alajlouni2020passive}.
In the experiments, the occupants were asked to walk along a 40-meter stretch of the south hallway.
We limited our sensing scope to the eleven closest accelerometers placed under this hallway.
\Cref{fig:xy_mle2} demonstrates the two types of results we obtain from the proposed framework: \Cref{fig:pdf} depicts a step location and, overlaid in the same figure, the corresponding \gls{pdf} of each sensor's belief about the occupant location, while \Cref{fig:joint_pdf} shows the result of the Sensor Fusion algorithm.
As can be seen in \Cref{fig:pdf}, each sensor independently generates a \gls{pdf} of the impact location. The figure also reveals a significant inverse relationship between the sensor-to-impact distance and the certainty of the \gls{pdf}.
In other words, the localization framework $\vect{h}_i(\cdot)$ becomes more uncertain as the occupant moves away from the sensor.
It is important to note that the directionality component of the localization framework $\vect{h}_i(\cdot)$ is modeled with a lack of information, i.e., a Uniform Distribution over the range $(0, 2\pi)$.
Therefore, the sensors yield radially symmetric \gls{pdf}s around their locations.
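Formally, under this uniform-direction assumption each sensor's belief takes the form (a sketch; $\vect{s}_i$ denotes the known location of sensor $i$ and $f_{D_i}$ the \gls{pdf} of its estimated sensor-to-impact distance, symbols introduced here only for illustration)
\begin{equation*}
    f_{\vect{X}_i}\left(\vect{x}\right) = \frac{f_{D_i}\left(r_i\right)}{2\pi r_i}, \qquad r_i = \left\| \vect{x} - \vect{s}_i \right\|,
\end{equation*}
which depends on $\vect{x}$ only through the distance $r_i$ and therefore produces the radially symmetric \gls{pdf}s observed in \Cref{fig:pdf}.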

% \begin{figure}[!h]
%     \centering
%     \begin{subfigure}[b]{0.3\textwidth}
%         \centering
%         \includegraphics[width=\textwidth]{example-image-a}
%         \caption{}
%         % \label{fig:three sin x}
%     \end{subfigure}
%     \hfill
%     \begin{subfigure}[b]{0.3\textwidth}
%         \centering
%         \includegraphics[width=\textwidth]{example-image-b}
%         \caption{}
%         % \label{fig:five over x}
%     \end{subfigure}
%     \caption{
%         \textbf{(a)} 
%         \textbf{(b)} 
%     }
%     \label{fig:xy_mle3}
% \end{figure}

% \Cref{fig:xy_mle3} demonstrates the error statistics of the estimated heel-strike locations as a function of the location the heel-strike locations.
% The left and center plot shows these error by only using the structural vibration and visual signal only.
% As can been seen from the left and center plots, the error characteristics of these sensing modalities are significantly different.
% When the result of these sensors are fused, the right plot is obtained where the SF algorithm greatly benefits from structural vibration and visual signals. 

\vspace{-2ex}\subsubsection*{Conclusions and Future Work}\vspace{-1ex}
In this work, a probabilistic multi-sensor framework for localizing building occupants from floor-vibration measurements was presented, together with validation experiments conducted in Goodwin Hall.
Overall, the proposed Sensor Fusion approach is robust to many erroneous factors while capturing the sensor uncertainties to adaptively tune its internal parameters.
We believe that Sensor Fusion has the potential to provide crucial information for complex problems such as structural health monitoring of complex structures, vibration control, and non-field-of-view perception.

\clearpage
\bibliographystyle{elsarticle-num-names} 
\bibliography{etc/cas-refs}

% \section*{Appendix}
% \subsection*{Page Layout}
% \centering
% \layout

\end{document}