What if you could track movement and actions indoors using just Wi-Fi, without any privacy concerns? Channel state information-powered AI has the answer, making smart sensing faster, lighter, and deployable on everyday edge devices.
The Internet of Things (IoT) is revolutionising how we interact with our surroundings, from adjusting lights in a smart environment to monitoring patients in healthcare settings. For such interaction, two things are essential: a person's activity and their location. Identifying the activity is known as activity recognition, while locating the exact position is called localisation; tracking combines both.
One way to achieve this is by installing indoor cameras, but that raises privacy concerns and increases costs. Instead, we can leverage signals from existing wireless fidelity (Wi-Fi) devices, which eliminates camera installation costs and sidesteps the privacy issue.
Traditionally, the global positioning system (GPS) is used for localisation, but it performs poorly indoors due to signal blockage from walls and furniture, resulting in inaccurate positioning.
To address this, we use channel state information (CSI) from Wi-Fi. As Wi-Fi signals move through space, they encounter obstacles like walls, furniture, and human bodies, altering their amplitude and phase. CSI captures these changes, offering fine-grained insights into the wireless channel. By analysing these signal variations, CSI enables motion detection, activity recognition, and even localisation—without the need for extra sensors or cameras.
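As a small illustration of what CSI looks like in practice (the shapes and values below are hypothetical, not from the article): each reading is a complex channel gain per antenna and subcarrier, from which amplitude and phase are derived.

```python
import numpy as np

# Hypothetical CSI capture: 3 antennas x 30 OFDM subcarriers of complex gains
rng = np.random.default_rng(0)
csi = rng.normal(size=(3, 30)) + 1j * rng.normal(size=(3, 30))

amplitude = np.abs(csi)            # how strongly each subcarrier was attenuated
phase = np.unwrap(np.angle(csi))   # phase shift, unwrapped along subcarriers

print(amplitude.shape, phase.shape)  # (3, 30) (3, 30)
```

Motion in the room perturbs exactly these amplitude and phase values over time, which is the signal a recognition model learns from.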
How it is applied
In the context of human activity recognition and localisation, CSI is particularly useful because it enables non-intrusive monitoring. For instance, in geriatric care, CSI can detect falls or unusual movements without needing wearable devices, ensuring continuous monitoring while maintaining comfort and dignity.

Fig. 1: Overview of CSI-based joint activity recognition and localisation on edge
Similarly, in smart homes, CSI can recognise activities like walking, sitting, and sleeping, allowing automation systems to adjust lighting, temperature, or security settings accordingly. In hospitals or assisted living facilities, CSI-based localisation can track the movement of elderly individuals or patients with cognitive impairments (such as Alzheimer's). Alerts can be sent to caregivers if a patient wanders into restricted areas or exits the premises. If a person collapses in a home or hospital room, the system can pinpoint their exact location and trigger an emergency alert, reducing response time.
Meanwhile, within a smart environment, a particular activity can signify different things; for example, an upward gesture in the living room indicates increasing the TV volume, while the same gesture in the bedroom means changing the AC temperature. Thus, activity and location must be identified jointly from CSI; this is known as CSI-based joint activity recognition and localisation.
Deep learning for CSI-based recognition
Once we have the CSI data, we need to train a model that can later identify activity and location in real time. We train a deep learning model, specifically a convolutional neural network (CNN), for this task. Training a model means mapping labelled input data to its output; in the process, the model learns parameters called weights and biases, which are then used for real-time inference.
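A minimal sketch of such a joint model, assuming CSI frames are preprocessed into 2D arrays of subcarriers by time samples. The layer sizes, the number of activity classes (6), and the number of location cells (4) are illustrative assumptions, not the article's architecture; the key idea is that one shared feature extractor feeds two task-specific heads.

```python
import torch
import torch.nn as nn

class JointCSINet(nn.Module):
    def __init__(self, n_activities=6, n_locations=4):
        super().__init__()
        # Shared convolutional feature extractor over CSI "images"
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Two heads: one classifies the activity, one the location cell
        self.activity_head = nn.Linear(32 * 4 * 4, n_activities)
        self.location_head = nn.Linear(32 * 4 * 4, n_locations)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.activity_head(z), self.location_head(z)

model = JointCSINet()
dummy = torch.randn(8, 1, 30, 100)  # batch of 8 frames: 30 subcarriers x 100 time samples
act_logits, loc_logits = model(dummy)
print(act_logits.shape, loc_logits.shape)  # torch.Size([8, 6]) torch.Size([8, 4])
```

During training, a cross-entropy loss on each head would be summed, so the shared filters learn features useful for both tasks at once.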

CNNs use filters to learn the information in the data that is relevant to the task at hand. However, these models require significant memory and processing power, making real-time deployment on resource-constrained edge devices difficult: deep neural networks used for CSI analysis often have millions of parameters, making them unsuitable for embedded systems, and the limited processing power of edge devices slows inference.
To overcome these challenges while deploying this CNN-based model on edge devices, we need to shrink the model and reduce its computational complexity. The size of the model depends on the number of parameters (weights and biases) it has, while the computational complexity depends on the number of floating-point operations (FLOPs) the model must perform. Both can be reduced via model compression. There are two major model compression methodologies:
- Quantisation
- Pruning
Optimising CSI models with quantisation and pruning
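As a hedged illustration of both techniques (a toy sketch, not the article's own implementation), the snippet below applies PyTorch's built-in magnitude pruning and dynamic int8 quantisation to a small fully connected model; the layer sizes are illustrative, and a real CSI model would be compressed the same way.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 10))

# Pruning: zero out the 50% smallest-magnitude weights in each linear layer
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

sparsity = (model[0].weight == 0).float().mean().item()
print(f"Layer-0 sparsity after pruning: {sparsity:.0%}")

# Quantisation: store weights as int8 instead of float32 (dynamic, CPU-only)
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
out = quantised(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 10])
```

Pruning cuts the effective parameter count (and, with sparse kernels, the FLOPs), while quantisation cuts memory roughly four-fold by replacing 32-bit floats with 8-bit integers, both at a small accuracy cost.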