Polluted air over a city

Earlier this year I moved from Amsterdam to Barcelona with my family. As soon as we settled in, there was something we noticed right away: we could smell the pollution. Locals don't notice it and, sure enough, after several months here I can't smell it anymore myself. Out of sight, out of mind. Scary.

This got me interested in day-to-day pollution and how it impacts our lives. It turns out we breathe more than 20,000 liters of air every day. If that sounds like a lot, it's because it is! And yet we are seldom concerned about the quality of the air that passes through our lungs. We should be, though, because air pollution has vast implications for our health.

The research paints quite a grim reality, and the stream of shocking findings is endless. In short, air pollution is killing us slowly, in ways we may not even know yet.

What can we do about it?

That's the million-dollar question. It might seem that the forces involved are so huge that we can't do much as individuals.

But I believe that the first step is to be aware and informed. Do you know how polluted it is where you are standing right now? Or how that pollution is affecting your life?

Imagine a service that gives you real-time information about the pollution around you, along with advice on how to deal with it. It would let people control how much they expose themselves to pollution, and give them a tool to take action on reducing pollution levels city-wide.

For some time now I've been thinking of building such a service. It turns out I'm not the only one who likes the idea!

The project

At Strategic Engineering, we set out to explore the possibility of making such a service. We are proposing it as a project called WAQI (Wearable Air Quality Indicator).

WAQI is still just a proposal, but we think it can already influence other projects at Telenor Digital, because it touches on many interesting areas:

IoT

We will measure air quality using a wearable device equipped with a few sensors. The first iterations of the device will be simple, using Bluetooth LE for communications. Telenor Digital has already made some forays into IoT, and we'll be tapping that in-house knowledge and those networks.
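To make that a bit more concrete, here is a minimal sketch of how a phone web app could pull one reading from the wearable over Bluetooth LE using the Web Bluetooth API. The characteristic UUID and the byte encoding are hypothetical placeholders, not a WAQI spec, and the snippet assumes the web-bluetooth type definitions are available.

```typescript
// Hypothetical sketch: reading one PM2.5 value from the wearable over
// Bluetooth LE. Must be triggered by a user gesture in the browser.
const ENV_SENSING_SERVICE = 0x181a; // standard Environmental Sensing service
const PM25_CHARACTERISTIC = "5f2a0001-0000-4000-8000-000000000000"; // placeholder

async function readPm25(): Promise<number> {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [ENV_SENSING_SERVICE] }],
  });
  const gatt = await device.gatt!.connect();
  const service = await gatt.getPrimaryService(ENV_SENSING_SERVICE);
  const pm25 = await service.getCharacteristic(PM25_CHARACTERISTIC);
  const value = await pm25.readValue(); // DataView over the raw bytes
  return value.getUint16(0, true);      // assumed little-endian µg/m³
}
```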

Front-end

We want the front-end to be the star of the show. Users will see creative and useful visualizations of the current air quality, along with suggestions on what to do about it. The user will interact with the wearable exclusively through their phone.

Back-end

The back-end will potentially be dealing with an enormous amount of anonymized data, along with its geolocation coordinates. We'll need real-time data acquisition and processing, along with a flexible model that allows us to do deep analysis and train models on this data. With enough data, we should gain interesting insights into how pollution behaves and evolves in our cities.
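As a rough illustration, a single anonymized reading might look something like the sketch below. All field names and units are assumptions for the sake of the example; nothing about the schema is settled.

```typescript
// Sketch of one anonymized, geolocated data point (illustrative only).
interface AirQualityReading {
  deviceId: string;   // rotating anonymous identifier, not tied to a person
  timestamp: number;  // Unix epoch, milliseconds
  latitude: number;
  longitude: number;
  pm25: number;       // fine particulate matter, µg/m³
  pm10: number;       // coarse particulate matter, µg/m³
  no2?: number;       // optional, for later hardware iterations
}
```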

These three areas are interesting for many future projects. In-house IoT knowledge, real-time databases, machine learning on geolocated datasets… the department can benefit from any experience and knowledge acquired while exploring this project.

University of Oslo joins the effort!

One of the people who immediately liked the idea was Hakeem, creative extraordinaire at Telenor. It turns out that Haq has connections with the University of Oslo (UiO), and he proposed WAQI as a potential research project for Interaction Design students. And it got accepted!

This is very important, because it means we have a whole extra team of highly motivated interaction designers (they are betting their semester results on it!) helping us find ways to engage users that we might not have come up with ourselves. That's a luxury most projects can't count on.

Besides the interaction design of the device and the phone application, they will research target users and scenarios where measuring air quality can be useful, from personal to professional uses. That will push us to think outside the box about how to implement the project so that it fits those different scenarios. At the same time, we'll be helping them shape their university project and showing them how a project gets implemented outside the walls of the university, in a real-world company. We'll be meeting every week to share progress and integrate each team's results.

We have already met the design students who will be looking at ways to make WAQI compelling to potential users. They are as excited and eager as we are, and we are extremely grateful for such great help!

A videoconference selfie of the UiO students who will help us with WAQI.

The future

We are very excited about the possibilities for this project. We are still at a very early stage, trying to pin down its main challenges and uses, but the more we think about it, the clearer the need for something like WAQI becomes. We are completely open to suggestions, ideas and constructive criticism, so go ahead and drop us a line!

This article explains how MediaStreams work in Firefox and the changes I made to them to accommodate cloning.

First of all: what is a MediaStreamTrack, and how can you clone it?

A MediaStreamTrack represents a realtime stream of audio or video data.

It provides a common API to the multiple producers (getUserMedia, WebAudio, Canvas, etc.) and consumers (WebRTC, WebAudio, MediaRecorder, etc.) of MediaStreamTracks.
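A small sketch of that producer/consumer model: getUserMedia produces a track, and two unrelated consumers use it through the same standard API.

```typescript
// One producer (getUserMedia) feeding two consumers (an <audio> element
// and a MediaRecorder) through the same MediaStreamTrack API.
async function producerConsumerDemo(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const [track] = stream.getAudioTracks();

  // Consumer 1: play the track through a media element.
  const audio = document.createElement("audio");
  audio.srcObject = new MediaStream([track]);
  await audio.play();

  // Consumer 2: record the very same track.
  const recorder = new MediaRecorder(new MediaStream([track]));
  recorder.start();
}
```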

A MediaStream is, simply put, a grouping of MediaStreamTracks. A MediaStream also ensures that all the tracks it contains stay synchronized with each other, for instance when it gets played out in a media element.

Cloning a track gives you a new MediaStreamTrack instance that represents the same data as the original, but has a unique identifier (consumers can't tell it's a clone), and disabling and stopping work independently across the original and all its clones.
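In code, that contract looks like the sketch below; it uses the standard API and should behave this way in any browser that implements track cloning.

```typescript
// Clone semantics: a new id, and independent enabled/stop state.
async function cloneDemo(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const original = stream.getVideoTracks()[0];
  const clone = original.clone();

  console.log(original.id === clone.id); // false: the clone gets a unique id
  clone.enabled = false;                 // disables only the clone
  console.log(original.enabled);         // true: the original is unaffected
  original.stop();                       // ends only the original
  console.log(clone.readyState);         // "live": the clone keeps running
}
```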

Now, how does all this come together in Firefox?

Continue reading »


I'm in the middle of writing the software for our very first LoRa device, and the module we're using as a basis is built around an Atmel SAM D20 MCU. That means writing code against the Atmel Software Framework (ASF). I figured that porting the code that reads a sensor over I2C to ASF would be very straightforward, but it took me two days. So, for my future self: here's how to read data from the MPU-6050 over I2C in ASF.

Continue reading »

Twice a year Telenor Digital organises an internal hackathon, a two-day offsite where we get the chance to mingle with other teams and work on things we'd normally never touch. Given Jan's fascination with phone sensors, he wondered whether we could feed the data from the gyroscope and the accelerometer into a machine learning algorithm and use it to classify what a person is doing. Could we create a model that would watch the stream of data coming off these sensors and tell whether the person is sitting, walking or dancing?
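As a toy illustration of the idea (not the features or model we actually used at the hackathon), one could summarize each window of accelerometer samples into a couple of numbers for a classifier to consume:

```typescript
// Toy feature extraction: reduce a window of accelerometer samples to the
// mean and variance of their magnitudes. The feature choice is illustrative.
interface Sample { x: number; y: number; z: number; }

function extractFeatures(samples: Sample[]): [number, number] {
  const mags = samples.map(s => Math.hypot(s.x, s.y, s.z));
  const mean = mags.reduce((a, b) => a + b, 0) / mags.length;
  const variance = mags.reduce((a, m) => a + (m - mean) ** 2, 0) / mags.length;
  return [mean, variance]; // e.g. near-zero variance suggests sitting still
}
```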

Continue reading »