
Frontend News

HOW TO BUILD
A SIMPLE CAMERA COMPONENT

Frontend News #4

David East — August 15th, 2018

There are three main APIs needed to build a camera component

To build a camera component, let's first understand the browser APIs involved.

Let's build a custom camera element so you don't have to worry about hooking this code up ever again.

Custom Elements make components reusable across frameworks

This tutorial is not framework specific. Leaf node components should be reusable. Custom Elements are a new(ish) browser standard that lets you build reusable elements that are portable across most JavaScript frameworks. If you're not familiar with Custom Elements, that's okay. They're not too hard to use up front. Things can get complex in advanced situations, but we'll steer clear of those paths. Here's a simple example:

class HelloElement extends HTMLElement {
  constructor() {
    // defining a constructor is not required,
    // but if you do, make sure to call super()
    super();
  }

  // this is called when the element is connected to the DOM
  connectedCallback() {
    // attach a shadow root so nobody can mess with your styles
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.textContent = 'Hello world!';
  }
}

// define the tag name, it must have a dash
customElements.define('hello-element', HelloElement);
<hello-element></hello-element>

That's the general idea. Like I said, it gets more complicated, but in the case of the camera component we can keep things simple.

A camera needs a video element and a hidden canvas

Let's start with the simple camera component.

camera.js
class SimpleCamera extends HTMLElement {
  connectedCallback() {
    const shadow = this.attachShadow({ mode: 'open' });
    this.videoElement = document.createElement('video');
    this.canvasElement = document.createElement('canvas');
    this.videoElement.setAttribute('playsinline', true);
    this.canvasElement.style.display = 'none';
    shadow.appendChild(this.videoElement);
    shadow.appendChild(this.canvasElement);
  }
}

customElements.define('simple-camera', SimpleCamera);

This component simply adds two elements: a video and a hidden canvas element. The playsinline attribute helps prevent janky video. These elements set the stage for streaming video and taking photos.

index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Simple Camera Component</title>
  <script src="camera.js"></script>
</head>
<body>
  <simple-camera></simple-camera>
</body>
</html>

This HTML document imports the component from the camera.js file and creates an element for the camera. Let's start streaming some video.

Access a camera through the MediaDevices API (with permission)

Use the navigator.mediaDevices.getUserMedia() method to request access to a user's camera (with their permission).

navigator.mediaDevices.getUserMedia(constraints)
  .then((mediaStream) => {
        
  });

Notice that getUserMedia() returns a Promise. The Promise resolves with a MediaStream if successful. This stream is used on a video element. If the Promise rejects, you know the user has not granted permission. However! The Promise may never resolve or reject, because the user can decide to never take action on the permission popup. Isn't that fun?
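Since the prompt can sit unanswered forever, you may want to put a deadline on the Promise yourself. Here's a minimal sketch; the withTimeout helper and the 10-second deadline are my own additions, not part of the tutorial's code:

```javascript
// Race a promise against a timer so an unanswered permission
// prompt doesn't leave your app logic hanging forever.
function withTimeout(promise, ms) {
  const timeout = new Promise((_, reject) => {
    setTimeout(() => reject(new Error('Timed out after ' + ms + 'ms')), ms);
  });
  return Promise.race([promise, timeout]);
}

// Usage (browser only):
// withTimeout(navigator.mediaDevices.getUserMedia(constraints), 10000)
//   .then((mediaStream) => { /* stream granted */ })
//   .catch((err) => { /* denied, unavailable, or no answer in time */ });
```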

Browser support for MediaDevices is strong, but strange

The MediaDevices API is well supported: it's available in all modern browsers. However, there's no support in Internet Explorer, so you'll need a feature check.

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia(constraints)
    .then((mediaStream) => {

    });
}

However, some browser versions have only partial support for MediaDevices, and some have vendor-specific implementations. The MDN article has a great section on setting up polyfills. Fortunately, these polyfills can be applied outside of our element, so we won't need to account for them in our element.
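For reference, the legacy fallback looks roughly like this. This is a sketch in the spirit of the MDN polyfill, written as a function over a navigator-like object so it's easy to exercise; the polyfillGetUserMedia name is my own:

```javascript
// Sketch of an MDN-style getUserMedia polyfill. In a real page you
// would call polyfillGetUserMedia(window.navigator) once at startup.
function polyfillGetUserMedia(nav) {
  if (nav.mediaDevices === undefined) {
    nav.mediaDevices = {};
  }
  if (nav.mediaDevices.getUserMedia === undefined) {
    nav.mediaDevices.getUserMedia = function (constraints) {
      // Older browsers exposed callback-based, vendor-prefixed versions.
      const legacy = nav.webkitGetUserMedia || nav.mozGetUserMedia;
      if (!legacy) {
        return Promise.reject(
          new Error('getUserMedia is not supported in this browser')
        );
      }
      // Wrap the callback API in a Promise.
      return new Promise((resolve, reject) => {
        legacy.call(nav, constraints, resolve, reject);
      });
    };
  }
  return nav;
}
```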

Set audio and video constraints for the media stream

The getUserMedia() method takes in a set of constraints. These constraints configure the stream after the user accepts permission. They have the type MediaStreamConstraints. You can specify two main properties: audio and video.

navigator.mediaDevices.getUserMedia({ audio: false, video: { facingMode: 'user' }})
  .then((mediaStream) => {
        
  });

The audio property is a simple boolean: you request the user's audio or you don't. The video property is much more complex. The video constraints, also known as MediaTrackConstraints, specify everything you could possibly need for a video stream: echoCancellation, latency, sampleRate, sampleSize, volume, noiseSuppression, frameRate, aspectRatio, facingMode, and of course height and width.

That's a lot of constraints. However, unless you're building one heck of a camera app, you'll only need a few: namely height, width, and facingMode.
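For example, a constraints object for a typical selfie-style camera might look like this (the specific resolution values are just illustrative):

```javascript
// "ideal" asks the browser to get as close as it can without failing;
// "exact" would instead reject if the device can't match the value.
const constraints = {
  audio: false,
  video: {
    facingMode: 'user',       // front-facing camera on phones
    width: { ideal: 1280 },
    height: { ideal: 720 }
  }
};

// navigator.mediaDevices.getUserMedia(constraints).then(/* ... */);
```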


Assign the MediaStream to the Video element

Now that the MediaStream is configured, you can assign it to a video element.

camera.js
open(constraints) {
  return navigator.mediaDevices.getUserMedia(constraints)
    .then((mediaStream) => {
      // Assign the MediaStream!
      this.videoElement.srcObject = mediaStream;
      // Play the stream when loaded!
      this.videoElement.onloadedmetadata = (e) => {
        this.videoElement.play();
      };
    });
}

The video element has a srcObject property. When assigned a MediaStream, it streams from the device's camera. The snippet above adds an open() method to the element. Custom Elements can have callable methods; when a user calls open(), it starts the video stream.

index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Simple Camera Component</title>
  <script src="camera.js"></script>
</head>
<body>
  <simple-camera></simple-camera>

  <script>
  (async function() {
    const camera = document.querySelector('simple-camera');
    await camera.open({ video: { facingMode: 'user' }})
  }())
  </script>
</body>
</html>

Now that we can stream video, let's take photos.

Use the Canvas to take photos as Blobs

The canvas element has the ability to draw a frame from a video element. Using this functionality you can draw on an invisible canvas and then export the image as a blob.

camera.js
_drawImage() {
  const imageWidth = this.videoElement.videoWidth;
  const imageHeight = this.videoElement.videoHeight;

  const context = this.canvasElement.getContext('2d');
  this.canvasElement.width = imageWidth;
  this.canvasElement.height = imageHeight;

  context.drawImage(this.videoElement, 0, 0, imageWidth, imageHeight);

  return { imageHeight, imageWidth };
}

This private _drawImage() method sets the height and width of the invisible canvas to the video's dimensions. Then it calls the drawImage() method on the 2D context, supplying the video element, x position, y position, width, and height. This draws the current video frame on the invisible canvas and sets us up to create a blob.

camera.js
takeBlobPhoto() {
  const { imageHeight, imageWidth } = this._drawImage();
  return new Promise((resolve, reject) => {
    this.canvasElement.toBlob((blob) => {
      resolve({ blob, imageHeight, imageWidth });
    });
  });
}

The canvas element has a toBlob() method. Since it's callback-based and asynchronous, you can wrap it in a Promise so it's easier to consume.

Now you can start to control this camera:

index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Simple Camera Component</title>
  <script src="camera.js"></script>
</head>
<body>
  <simple-camera></simple-camera>
  <button id="btnPhoto">Take Photo</button>
  <script>
  (async function() {
    const camera = document.querySelector('simple-camera');
    const btnPhoto = document.querySelector('#btnPhoto');
    await camera.open({ video: { facingMode: 'user' }})
    btnPhoto.addEventListener('click', async event => {
      const photo = await camera.takeBlobPhoto();
    });
  }())
  </script>
</body>
</html>

Blobs are great when you need to upload a file. But sometimes it's nice to just stick a base64-encoded string into an image tag. The canvas element has a solution just for this.
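For instance, a blob photo can be dropped straight into a FormData for upload. This is a sketch; the /api/photos endpoint, the field name, and the photoFormData helper are placeholders of my own, not part of the tutorial:

```javascript
// Package a blob photo as multipart form data; the browser (or fetch)
// sets the multipart encoding and boundary for you.
function photoFormData(blob) {
  const formData = new FormData();
  formData.append('photo', blob, 'photo.png');
  return formData;
}

// Usage (browser):
// const { blob } = await camera.takeBlobPhoto();
// fetch('/api/photos', { method: 'POST', body: photoFormData(blob) });
```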

Use the Canvas to take photos as base64

The canvas element has a toDataURL() method. This method takes the current contents of the canvas and returns them as a base64-encoded image.

camera.js
takeBase64Photo({ type = 'png', quality = 1 } = {}) {
  const { imageHeight, imageWidth } = this._drawImage();
  const base64 = this.canvasElement.toDataURL('image/' + type, quality);
  return { base64, imageHeight, imageWidth };
}

The takeBase64Photo() method calls toDataURL() and returns its base64 value. Notice that you can specify the image type and the quality of the image.
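The returned value is a full data URL, so it can go straight into an img tag's src. As a hypothetical extra (not part of the tutorial's camera.js), you can also pull the MIME type and byte size back out of it, e.g. to check the photo's size before storing it:

```javascript
// Extract the MIME type and raw base64 payload from a data URL like
// the one canvas.toDataURL() returns, and estimate its decoded size.
function parseDataURL(dataURL) {
  const match = dataURL.match(/^data:([^;]+);base64,(.*)$/);
  if (!match) {
    throw new Error('Not a base64 data URL');
  }
  const [, mimeType, base64] = match;
  // Every 4 base64 characters encode 3 bytes; '=' padding trims the tail.
  const padding = (base64.match(/=+$/) || [''])[0].length;
  const byteLength = (base64.length / 4) * 3 - padding;
  return { mimeType, base64, byteLength };
}

// Usage (browser):
// const photo = camera.takeBase64Photo({ type: 'jpeg', quality: 0.8 });
// document.querySelector('img').src = photo.base64;
```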

index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Simple Camera Component</title>
  <script src="camera.js"></script>
</head>
<body>
  <simple-camera></simple-camera>
  <button id="btnBlobPhoto">Take Blob</button>
  <button id="btnBase64Photo">Take Base64</button>
  <script>
  (async function() {
    const camera = document.querySelector('simple-camera');
    const btnBlobPhoto = document.querySelector('#btnBlobPhoto');
    const btnBase64Photo = document.querySelector('#btnBase64Photo');
    await camera.open({ video: { facingMode: 'user' }})
    btnBlobPhoto.addEventListener('click', async event => {
      const photo = await camera.takeBlobPhoto();
    });
    btnBase64Photo.addEventListener('click', async event => {
      const photo = camera.takeBase64Photo({ type: 'jpeg', quality: 0.8 });
    });
  }())
  </script>
</body>
</html>

Port to your favorite framework

Modern JavaScript frameworks have the ability to use custom elements. This makes custom elements an attractive choice for building common components. You can easily port this component if your company manages multiple apps built with multiple frameworks. The Custom Elements Everywhere project shows how compatible each framework is with custom elements.

See each framework's docs for registering custom elements.

THE HEAP

Links to articles I actually read

Web performance

Custom site performance reports with the CrUX Dashboard

developers.google.com/web
The Chrome UX Report (CrUX) is a treasure trove of web performance data. It contains real user metrics on millions of domains. They recently added the ability to hook up your own domain and see a detailed chart about your site. The CrUX report tells you the distribution of users across connection types (2G, 3G, 4G), the first contentful paint, and device distribution.

The Chrome Developer Summit returns

developer.chrome.com/devsummit
This isn't really web performance news directly. However, place a bookmark, make a calendar reminder, or do whatever helps to remember this event. It's usually two days of top tier web development content.

Reduce JavaScript Payloads with Code Splitting

developers.google.com/web
There are three different code-splitting techniques: vendor, entry point, and dynamic splitting. Vendor splitting is crucial for every app. Entry point splitting works best for apps that don't use client-side routing. Dynamic splitting works great for single page apps or other lazy loading situations.

Demos

HTML5 Terminal

htmlfivewow.com
A pure HTML / CSS / JS terminal. Drag files into the terminal. Enter commands. Watch the magic happen.

side-by-side-pageload.js

twitter.com/ebidel
Use puppeteer to load two or more pages side-by-side to visually see the difference in page load. You can control the viewport and the network throttling.

A pure CSS 3D yoyo

codepen.io/uzcho_
A pure CSS yoyo in just 272 lines of code. Pretty impressive.

PWA

NOTE! This section is curated by Maxim Salnikov! He's one of the most knowledgeable and passionate PWA developers out there. Give him a follow on Twitter.

Intent to Implement: Writable Files

groups.google.com/a/chromium.org
The Chrome team has started work on a standard web platform feature (proposed by the Web Incubator CG) that makes it possible to build things like document editors as PWAs. This includes opening files and folders for read and write access, saving files to a user-selected location, and persisting references to files/folders for later access.

Google rolls out Windows 10 Action Center support for Chrome

windowsreport.com
Peter Beverloo from the Google Chrome team announced native (via Action Center) support for Web Push notifications from PWAs installed on Windows 10 via Chrome 68+. This feature is hidden behind a flag, though.

Vue CLI 3.0 is here!

medium.com/the-vue-point
Evan You introduces the next major version of the CLI, which now includes a PWA plugin based on Workbox. This plugin generates the web app manifest and registers a service worker that precaches the core app files, so you have an offline-ready app shell.

Angular

Angular Console — The UI for the Angular CLI

blog.nrwl.io
Nrwl is an Angular consulting company founded by two former members of the Angular team: Jeff Cross and Victor Savkin. They just did something amazing: they created a desktop UI to complement, or even replace, the Angular CLI. Create projects, generate components, and build applications, all from the UI. Even if you are a strict CLI user, you'll find some serious productivity boosts with this app.

Getting started with the Angular Console

youtube.com
AngularFirebase posted a 6 minute video detailing all that you can do with the Angular Console. Worth a watch if you're an Angular developer.

(P)React

React in Battlefield 1

youtube.com
A great 5-minute lightning talk at React Europe by Markus Thurlin. The menu UI in Battlefield 1 is built in React. Why? React makes the UI portable across different game titles, and you can update the bundle over the network instead of shipping a new patch. How? EA uses the Frostbite game engine, which doesn't have a browser, so they had to create C++ bindings to do a myriad of crazy things. The talk doesn't go into detail about how they implemented the bindings, but I wish it did.

Fonts

Pizza Press

monotype.com
Ever wondered what font is used on Domino's pizza boxes? Wonder no more. This is a great case study on the creation of Domino's custom font.

Trade Gothic

linotype.com
This font is so clean. I love it. I'm dying to use it in a new project.

Machine Learning

Getting started with TensorFlow codelab

codelabs.developers.google.com
Learn how to set up TensorFlow on your machine and use TensorFlow Hub to retrain a model to identify flower photos. You don't need any Python or ML experience to complete this codelab. The hardest part is setting up pip on your machine. The best part is that once you learn how to train this model to identify flower photos, you can train it to identify any kind of photo. It's surprisingly easy. I switched it to use this Pokemon dataset and it was able to accurately identify all generation one Pokemon.

FEEDBACK

I need to hear from you. It's super important

Every week I ask you what was good and what might have sucked. I read each piece of feedback and work to incorporate it into future editions of the newsletter. It's just a brief Google Form.

