
Using the Webcam in React

Just recently I started a fun little side project and found myself using the WebRTC API for the first time. To my delight I found that it is a remarkably simple and straightforward tool, and there are plenty of examples out there to get you off the ground. For this project I needed to use the webcam to allow the user to take and save a picture, and this video was a fantastic overview.

There was really just one thing that needed doing: figuring out how to break the example up into a functioning React component. Let's take a look.

import React, { Component } from 'react';

class Capture extends Component {

First of all, I knew I was going to need to access the video parameters like width and height throughout the component, so I could either store them on state or in my Redux store. In this case I went ahead and used state: the component already needed methods and couldn't be pure, the data wouldn't be useful outside of it, and it wouldn't change.

constructor(props) {
  super(props);

  this.state = {
    constraints: { audio: false, video: { width: 400, height: 300 } }
  };

  this.handleStartClick = this.handleStartClick.bind(this);
  this.takePicture = this.takePicture.bind(this);
  this.clearPhoto = this.clearPhoto.bind(this);
}

Now, most of the work around getting the video feed up and functional happens on page load, and where does any on-load logic go? In componentDidMount, of course! When the component mounts, getUserMedia should run to ask permission for the video feed and direct it onto the <video> element. Here I am just using the Chrome webkit implementation for simplicity, and I promisified it first because, well, who wants to deal with those callbacks?

componentDidMount() {
  const constraints = this.state.constraints;
  const getUserMedia = (params) => (
    new Promise((successCallback, errorCallback) => {
      navigator.webkitGetUserMedia.call(navigator, params, successCallback, errorCallback);
    })
  );

  getUserMedia(constraints)
    .then((stream) => {
      const video = document.querySelector('video');
      const vendorURL = window.URL || window.webkitURL;

      video.src = vendorURL.createObjectURL(stream);
      video.play();
    })
    .catch((err) => {
      console.log(err);
    });

  this.clearPhoto();
}
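
Worth noting: the webkit-prefixed call above is the older, callback-style API. The standardized navigator.mediaDevices.getUserMedia returns a Promise on its own, and current browsers want the stream assigned to the video element's srcObject rather than run through createObjectURL. A rough sketch of the same mount logic against the standard API:

componentDidMount() {
  // The standard API is already promise-based, so no wrapping needed.
  navigator.mediaDevices.getUserMedia(this.state.constraints)
    .then((stream) => {
      const video = document.querySelector('video');

      // Attach the stream directly; createObjectURL(stream) is deprecated.
      video.srcObject = stream;
      video.play();
    })
    .catch((err) => {
      console.log(err);
    });

  this.clearPhoto();
}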

The clearPhoto() call at the end takes the next step and sets up the <img> tag that will receive the captured still.

clearPhoto() {
  const canvas = document.querySelector('canvas');
  const photo = document.getElementById('photo');
  const context = canvas.getContext('2d');
  const { width, height } = this.state.constraints.video;

  context.fillStyle = '#FFF';
  context.fillRect(0, 0, width, height);

  const data = canvas.toDataURL('image/png');
  photo.setAttribute('src', data);
}

Last but certainly not least, we have the handleStartClick and takePicture functions which, you guessed it, capture the photo on the button click.

handleStartClick(event) {
  event.preventDefault();
  this.takePicture();
}

The takePicture function grabs the offscreen canvas, which will capture the image initially. It sets the canvas to the appropriate size and then calls drawImage, which takes its first parameter as the source and draws it with the given dimensions. Finally, the toDataURL function converts the canvas to the specified data type, in this case a PNG, which is then placed as the src of our waiting <img> tag.

takePicture() {
  const canvas = document.querySelector('canvas');
  const context = canvas.getContext('2d');
  const video = document.querySelector('video');
  const photo = document.getElementById('photo');
  const { width, height } = this.state.constraints.video;

  // Size the canvas to match the video feed and draw the current frame onto it.
  canvas.width = width;
  canvas.height = height;
  context.drawImage(video, 0, 0, width, height);

  // Convert the canvas contents to a PNG data URL and hand it to the <img> tag.
  const data = canvas.toDataURL('image/png');
  photo.setAttribute('src', data);
}

All of this is rendered onto a very simple page.

render() {
  return (
    <div className="capture"
      style={ styles.capture }
    >
      <Camera
        handleStartClick={ this.handleStartClick }
      />
      <canvas id="canvas"
        style={ styles.picSize }
        hidden
      ></canvas>
      { /* The save handler is wired up elsewhere in the app and passed down. */ }
      <Photo
        handleSaveClick={ this.props.handleSaveClick }
      />
    </div>
  );
}
}

The <Camera /> and <Photo /> elements here are simple pure functional components which house the video stream and the resulting still, respectively.

const Camera = (props) => (
  <div className="camera"
    style={ styles.box }
  >
    <video id="video"
      style={ styles.picSize }
    ></video>
    <a id="startButton"
      onClick={ props.handleStartClick }
      style={ styles.button }
    >Take photo</a>
  </div>
);

const Photo = (props) => (
  <div className="output"
    style={ styles.box }
  >
    <img id="photo" alt="Your photo"
      style={ styles.picSize }
    />
    <a id="saveButton"
      onClick={ props.handleSaveClick }
      style={ styles.button }
    >Save Photo</a>
  </div>
);
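
One housekeeping note: the styles object these components reference lives elsewhere in the app and isn't part of this walkthrough. A minimal stand-in, assuming the 400x300 dimensions from the constraints, could be as simple as:

// Hypothetical placeholder styles; the real values live elsewhere in the app.
const styles = {
  capture: { display: 'flex' },
  box: { margin: '10px' },
  picSize: { width: 400, height: 300 },
  button: { cursor: 'pointer' }
};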

And that is really all there is to it. With this simple component tree you get a live video stream and the ability to capture stills from the stream. In this example I am rendering the image back onto the page for the user to review. After all, no one likes that one-chance photo at the DMV; why would they want that on a web page?! Elsewhere in the app I am actually enabling the user to save the data back to the database to be used throughout the application.
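
The save itself is nothing exotic: the data URL sitting in the <img> tag is just a string, so a handleSaveClick handler can ship it to the server like any other payload. A rough sketch, with a made-up /api/photos endpoint standing in for whatever persistence layer you use:

handleSaveClick(event) {
  event.preventDefault();

  // The captured still already lives on the <img> tag as a data URL.
  const data = document.getElementById('photo').getAttribute('src');

  // POST to a hypothetical endpoint; swap in your own API.
  fetch('/api/photos', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ photo: data })
  })
    .then((res) => res.json())
    .then((saved) => console.log('saved photo', saved))
    .catch((err) => console.log(err));
}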

All in all, getUserMedia is an awesome and powerful tool, and remarkably simple to use! If you haven't played around with it yet, I recommend you do.