Artificial Intelligence is eating the world, but how can we use this new power for good (and profit)? And should we be worried about the killer robots?


Artificial Intelligence (AI) is a major topic these days, and promises to help businesses optimize their operations. Many of today’s big players, like Google and Facebook, have AI as a core competency. Telenor has great potential to leverage this technology in our digitization, but what does this actually mean? For many people AI is a black box, and this article will hopefully make it a bit more approachable.

When thinking about AI, humanoid robots are among the first things that come to mind. Are we close to a society where we will not know whether a person is human or not? Will we have machines that act like humans and are sentient? These thoughts describe a machine that can operate as a “general AI”: an AI that can do general things without being specifically told what it should do – it can learn on its own. Essentially, something that acts and thinks like a human. It is easy to expect something like this, especially if people have heard about “training an AI” to complete a task. However, the reality is not exactly as one might think.

The AIs we have today can do complex tasks, but they are not general AIs. They can only do the single, specific task that they have been taught how to do. When you hear people talk about “training” an AI, what they really mean is that the AI program tests many different configurations until it finds one that solves the specified task at hand. This task has to be extremely specific. Most of today’s problems solved by AI have to be explicitly transformed into a form that a computer can understand, and the result transformed back, so we’re not on the verge of a robot uprising just yet.
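To make the “testing many configurations” idea concrete, here is a toy sketch. Everything in it (the task, the scoring function, random search as the “training” method) is a deliberately simplified illustration of the general idea, not how real AI systems are trained:

```javascript
// Toy illustration: "training" as searching over configurations.
// Task: find a weight w so that f(x) = w * x approximates y = 3 * x.
function score(w) {
  // Lower is better: squared error over a few sample points.
  const samples = [1, 2, 3, 4];
  return samples.reduce((err, x) => err + (w * x - 3 * x) ** 2, 0);
}

function randomSearch(iterations) {
  let best = { w: 0, err: score(0) };
  for (let i = 0; i < iterations; i++) {
    const w = Math.random() * 10;          // try a random configuration
    const err = score(w);
    if (err < best.err) best = { w, err }; // keep the best one found so far
  }
  return best;
}

// After enough tries, best.w should end up close to 3.
const result = randomSearch(10000);
```

Real systems use far smarter search strategies than this, but the shape is the same: try configurations, measure how well each one solves the one narrow task, keep the best.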


Virtual Reality is all the rage nowadays, but this isn’t the first time the Virtual Reality hype wave has spread across the world. People have been trying to create the ultimate VR experience since the 1950s, and philosophers and authors described VR experiences hundreds of years earlier still. So is the hype real this time? Is it finally the year of the Virtual Reality Headset?

Early devices were often stationary, requiring large amounts of equipment to use. Pictured above (left: Sensorama, right: Nintendo Virtual Boy) are two stationary attempts from the 1950s and 1990s respectively. The Virtual Boy, with its stereoscopic 3D and glorious 384x224 pixel display, had a tendency to give people headaches 😵 and make them puke 😱 everywhere.

Today the puking and headaches have been replaced with bruises, as people fall over virtual furniture and pool tables. Neither children nor adults are safe, as these poor fellows discovered 😂.

But what made VR go from something contained within huge arcade machines to something within the reach of so many people? After many years of less-than-satisfactory VR experiences, hardware, display technology and software finally caught up to our imaginations. The modern VR revolution basically required the following:

Mobile screen technology has gotten so good, with such high resolutions, that it could be reused for Virtual Reality equipment. The founder and creator of Oculus Rift metaphorically duct-taped a phone to his head to verify the product idea 😍. Further development of screen technology allowed for higher refresh rates than the first prototype (95 frames per second being the breakpoint for a smooth VR experience). It was also discovered that tricks that had worked well for regular computer monitors made the experience worse inside a VR headset. For instance, each pixel would keep emitting its last color until a new color was ready. This made for a terrible experience when moving your head, since it felt like the world was moving with you, making you feel seasick. Instead, VR headsets now always switch via the color black, and only flash the pixels in the correct color for short amounts of time. Finally, the ever-increasing power of GPUs has made more realistic and faster graphics possible. This means you still need a powerful computer with the latest tech to drive these systems at optimal resolution 💻 and speed 🏎.

In the picture above to the left, you can see someone using the HTC Vive, whose controllers allow the game to track you while you walk through virtual reality. In the right picture, you can see Job Simulator, a game that lets you experience what future humans think office work looked like back in the early 2000s. Being able to move around in your virtual cubicle is incredibly immersive 😂!

With multiple top-of-the-line VR headsets on the market right now (Oculus Rift, HTC Vive, Sony PlayStation VR), and low-end offerings like Google Cardboard and Gear VR for Samsung Galaxy phones, VR is available to everyone 💰.

If you’d like to find out more about VR, check out our Facebook Live stream 🎥 on the 22nd of February 📱. See our Facebook page for more information 👍.

Software security is an important part of the overall security in Telenor Digital. Our starting point for building secure software is to embrace the opportunities that DevOps practices like agile, continuous integration and continuous delivery give us to improve security. Since our organisation doesn’t have a massive history of legacy systems and legacy ways of working, we can draw on some of the most recent practices for securing our code. It is the ideal starting point for a journey to improve our software security.


UiO Telenor Digital Research Competition

Kickstart your career!

Join SAI and Telenor Digital’s research competition! SAI & Telenor Digital invite all anthropology and SV students to participate.

How will this work, and how can I participate?

  1. Send an email with your research proposal (1-2 pages) to cecilie dot perez at telenordigital dot com by 20.02.2017. Your proposal will be evaluated by SAI and Telenor staff.
  2. February 2017: the finalist proposals will be announced. All finalists get a small prize, and will have about one month to complete their proposed research.
  3. All finalists will receive individual guidance from SAI and Telenor.
  4. The finalists will present their findings to Telenor and SAI staff in late March.
  5. At this event the winner will be announced. The first-place winner gets a prize valued at approximately 10 000 NOK, as well as a shadow day of their dreams!

Can we participate in groups?

Absolutely! If your team wins, the prize will be split amongst the team members.

What should my research proposal include?

  • Clearly defined research question(s) anchored in anthropological thinking and methodology
  • Outline of methodology
  • Clearly defined timeline (no longer than 3 weeks)
  • Study population: who are you interested in, and how will you recruit them for your study?
  • Ethical considerations

What topic should my study be on?

The topic for this competition is communication within Norwegian families: How do families organise and communicate amongst themselves? How do they navigate, choose, understand and feel about the disparate landscape of communication tools? Cork boards, oral messages, SMS, chat groups – use ethnography to capture the lived experience of families.

What should my research question be?

You are free to choose your own research question(s) as long as it relates to the topic: ‘Communication within Norwegian Families’. Below are some examples for inspiration:

  • Private life and public spaces: How do families, and their different members, experience and understand public and private in relation to communication?
  • Communication in everyday life: How do families communicate when they plan and organise? What tools do they use, and what needs do the different family members have?
  • How do family dynamics and communication practices within families influence each other?

When is the deadline?

Submit your proposal by 20.02.2017

What can I win?

  • First prize valued at approximately 10 000 NOK
  • Runners-up prizes valued at 1000–2000 NOK
  • The opportunity to present your findings to Telenor Digital
  • A shadow day tailored especially for you! Get first-hand experience of how your education can be put to work in an organisation like Telenor.

I’m still confused, how can I contact you?

Send an email to cecilie dot perez at telenordigital dot com

This blog post is also available on Medium

Recently, HTMLCanvasElement.captureStream() was implemented in browsers. It lets you expose the contents of an HTML5 canvas as a MediaStream to be consumed by applications. This is the same base MediaStream type that getUserMedia returns, which is what websites use to get access to your webcam.

The first question that comes to mind is, of course: “Is it possible to intercept calls to getUserMedia, get hold of the webcam MediaStream, enhance it by rendering it into a canvas and doing some post-processing, and then transparently return the canvas’ MediaStream?”

As it turns out, the answer is yes.

We built a cross-platform WebExtension called Zombocam that does exactly this. Zombocam injects itself on every webpage and monkey-patches getUserMedia. If a webpage then calls getUserMedia, we transparently enhance the camera and spawn a floating UI in the DOM that lets you control your different filters and settings. This means that any website that uses your webcam will now get your enhanced webcam instead!

This blog post is a technical walk-through of the different challenges we ran into while developing Zombocam.

Monkey-patching 101

Monkey-patching getUserMedia essentially means replacing the browser’s implementation with our own. We supply our own getUserMedia function that wraps the browser’s implementation and adds an intermediary canvas processing step (and fires up a UI). Of course, since getUserMedia is a web JS API, there are one million different versions that need to be supported. There’s Navigator.getUserMedia and MediaDevices.getUserMedia, and then vendor prefixes on top of that (e.g. Navigator.webkitGetUserMedia and Navigator.mozGetUserMedia), and then there are different signatures (e.g. callbacks vs promises), and then on top of that again they historically support different syntaxes for specifying constraints. Oh, and they have different errors too. To be fair, MediaDevices.getUserMedia, the one true getUserMedia, solves all of these problems, but the web needs to wait for everyone to stop using the old versions first.

All of this boils down to writing a lot of code to iron out the inconsistencies between the different implementations, but in the happy case we end up with something like:
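A minimal sketch of the happy path, covering only the modern promise-based API (the function names here, like `applyCanvasProcessing`, are our own placeholders, not the actual Zombocam code):

```javascript
// Wrap the native getUserMedia with an intermediary processing step.
// `applyCanvasProcessing` stands in for the canvas/WebGL pipeline that
// renders the raw stream into a canvas and returns the canvas' stream.
function monkeyPatchGetUserMedia(mediaDevices, applyCanvasProcessing) {
  const native = mediaDevices.getUserMedia.bind(mediaDevices);
  mediaDevices.getUserMedia = function (constraints) {
    return native(constraints).then(stream => {
      // Only intercept requests that actually ask for video.
      if (!constraints || !constraints.video) return stream;
      return applyCanvasProcessing(stream);
    });
  };
}

// In the extension's injected script, this would be called as:
// monkeyPatchGetUserMedia(navigator.mediaDevices, myPipeline);
```

The real thing additionally has to handle the legacy callback-based variants, vendor prefixes and old constraint syntaxes described above, so the production version is considerably longer.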

The rendering pipeline

Most of the effects and filters in Zombocam are implemented as WebGL fullscreen quad shader passes. This is a WebGL rendering technique that essentially lets us generate images on the fly on a per-pixel basis by using a fragment shader. This is elaborated upon in thorough detail in this excellent article by Alexander Oldemeier. Using this technique means that the image processing can be done on the GPU, which is essential to achieve smooth real-time performance. For each video frame, the frame is uploaded to the GPU and made available to an effect’s fragment shader, which is responsible for implementing the specific transformation for that effect.
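In spirit, a fragment shader is just a function that runs once per output pixel. As a made-up example of such a pass (the uniform and varying names below are ours), a color-inversion effect could look like this, shown both as GLSL source and as the equivalent per-pixel logic in plain JS:

```javascript
// GLSL sketch of a fullscreen-quad pass that inverts the frame's colors.
const invertShaderSource = `
  precision mediump float;
  uniform sampler2D uFrame;   // the current video frame, uploaded as a texture
  varying vec2 vTexCoord;     // interpolated texture coordinate for this pixel
  void main() {
    vec4 c = texture2D(uFrame, vTexCoord);
    gl_FragColor = vec4(1.0 - c.rgb, c.a); // invert each color channel
  }
`;

// The same per-pixel logic expressed in plain JS (0-255 channels):
function invertPixel([r, g, b, a]) {
  return [255 - r, 255 - g, 255 - b, a];
}
```

The GPU runs this little program for every pixel of every frame in parallel, which is why the whole pipeline stays real-time.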

Effects in Zombocam are split into three main categories: color filters, distortion effects and overlays. Filters in the first category are implemented as non-linear per-channel functions with hard-coded mappings from input to output values in each frame. The idea is that a color grading expert creates a nice-looking preset using his or her favorite color grading tool. That grading is then applied to three 0–255 gradients, one for each color channel. The graded outputs serve as lookup tables for the pixel values, producing a color graded frame. This is a simplified version of the technique elaborated upon in this excellent article by Slick Entertainment.
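The lookup-table idea can be shown in plain JS (a real implementation does this per pixel in the fragment shader; the toy "boost red" LUT here is just an illustration, not one of Zombocam's actual presets):

```javascript
// Apply a per-channel lookup table to an RGBA pixel buffer.
// lut.r / lut.g / lut.b are 256-entry arrays mapping input value -> graded value.
function applyColorLut(pixels, lut) {
  const out = new Uint8ClampedArray(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    out[i]     = lut.r[pixels[i]];     // red
    out[i + 1] = lut.g[pixels[i + 1]]; // green
    out[i + 2] = lut.b[pixels[i + 2]]; // blue
    out[i + 3] = pixels[i + 3];        // alpha passes through unchanged
  }
  return out;
}

// Example LUT: brighten the red channel, leave green and blue alone.
const identity = Array.from({ length: 256 }, (_, v) => v);
const boostRed = identity.map(v => Math.min(255, Math.round(v * 1.2)));
const lut = { r: boostRed, g: identity, b: identity };
```

Because the mapping is a plain table, any curve the color grading expert dreams up can be captured and replayed cheaply.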

Distortion effects are implemented as non-linear pixel coordinate transformation functions on the input image. That is, the pixel at coordinate (x, y) in the transformed image is copied from the pixel at coordinate f(x, y) in the original image. As long as you define f correctly, you can implement swirls, pinches, magnifications, hazes and all sorts of other distortions.
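As a concrete (made-up) example of such an f, here is a pinch that pulls sample coordinates toward the image center; the falloff constant and the function itself are our own illustration, not Zombocam's actual shader code:

```javascript
// Distortion: the output pixel at (x, y) samples the input at f(x, y).
// This f pinches coordinates toward the center (cx, cy).
// strength in [0, 1): 0 is the identity, larger values pinch harder.
function pinch(x, y, cx, cy, strength) {
  const dx = x - cx;
  const dy = y - cy;
  const r = Math.sqrt(dx * dx + dy * dy);
  // The effect is strongest near the center and fades out with distance.
  const scale = 1 - strength * Math.exp(-r / 100);
  return [cx + dx * scale, cy + dy * scale];
}
```

Swapping in a different f — one that rotates coordinates by an angle that decays with r, say — turns the same machinery into a swirl instead of a pinch.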

Finally, overlay effects simply overlay new pixels on parts or all of the frame. These new pixels can be sourced from anywhere, including other video sources. This effectively lets us overlay Giphy videos directly in the camera stream! Productivity will never be the same.

Since effects can be chained in Zombocam, the output from one effect’s rendering pass is fed directly as input to the next effect’s rendering pass. This opens up a wide array of possible effect combinations.

Zombocam can turn you into a cyclops if you’re not careful when chaining effects!

Works everywhere! (*)

In theory, this approach works everywhere out of the box, so you can use it when snapping a profile picture on Facebook or hanging out in video meetings on services like Google Hangouts. In practice, however, the story is a little more nuanced. Reliably monkey-patching getUserMedia in time, in a cross-browser fashion, via injection from a WebExtension, without going overboard with permissions, turns out to be hard in some cases. This means that if an application is really adamant about calling getUserMedia reeeally early in the page’s lifetime, getUserMedia might not be monkey-patched yet. In that case, Zombocam will simply never trigger, and it will be as if it were never installed.

When attempting to transparently monkey-patch APIs, one has to take extreme care to make sure that the monkey-patching actually is transparent. That means properly forwarding all sorts of properties on the Streams and Tracks returned from getUserMedia that applications might expect and depend on.
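One way to stay transparent is to copy the relevant properties from the original tracks onto their replacements. A simplified sketch (the property list here is illustrative, not exhaustive, and real MediaStreamTrack objects make some of these read-only):

```javascript
// Copy identifying properties from an original track onto its replacement,
// so that applications inspecting the returned tracks still behave correctly.
function forwardTrackProperties(originalTrack, canvasTrack) {
  const propsToForward = ['label', 'enabled', 'contentHint'];
  for (const prop of propsToForward) {
    if (prop in originalTrack) {
      try {
        canvasTrack[prop] = originalTrack[prop];
      } catch (e) {
        // Properties that are read-only on real MediaStreamTracks need
        // Object.defineProperty on a wrapper, or a Proxy, instead.
      }
    }
  }
  return canvasTrack;
}
```

The read-only cases are exactly where this gets fiddly in practice: a plain assignment silently fails, and the only symptom is some application somewhere misbehaving.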

One specific example of this that we ran into was with a video-meeting service’s new premium offering, where you can screen-share and show your webcam stream in your meeting room at the same time. The application relied on the name of one of its Tracks being “Screen”, which we didn’t properly forward to the Tracks we got from our canvas. Because of this, the application didn’t know which of the tracks was the screen-sharing track, and things stopped working. Properly forwarding the name property solved the issue, and we learned an important lesson in the virtues of actually being transparent when trying to transparently intercept APIs.

What’s next: audio filters

With the new release of Zombocam we’ve taken it one step further and enhanced getUserMedia audio tracks as well using the Web Audio API. More on that in a later blog post!