Tutorial: creating a FindKeanu app using the new Amplify Predictions and Angular

Gerard Sans
10 min read · Oct 29, 2019

Use face recognition to uncover fake selfies and more

Is this a fake selfie? Hold my beer. Image by vertecchi*.

How many times do you go about your day and happen to cross paths with a celebrity? Now imagine how embarrassing it would be to post a selfie with someone who resembles a celebrity but isn't one. I think I may have built the perfect app for you: FindKeanu! 🙄😂

In this tutorial, we will see how easy it is to build a full-stack serverless app that finds Keanu Reeves in selfies like a pro, while learning about face recognition with Angular and Amplify. We will cover setting up Amplify, the new Predictions category and Amazon Rekognition's celebrity detection API.

Why not run this app using single-click deploy before you continue reading?

Introduction to Amplify

Amplify makes developing, releasing and operating modern full-stack serverless apps easy and delightful. The open-source Amplify Framework (consisting of the Amplify libraries and the Amplify CLI), seamless integrations with AWS cloud services, and the AWS Amplify Console support mobile and frontend web developers throughout the app life cycle.

By using Amplify, teams can focus on development while the Amplify team enforces best patterns and practices throughout the Amplify stack.

Introduction to Predictions

Predictions is a new category for Amplify CLI that integrates seamlessly with Amazon Translate, Amazon Polly, Amazon Transcribe, Amazon Rekognition and Amazon Comprehend.

Predictions is based on highly scalable deep learning technology and requires no machine learning expertise to use.

Predictions is available for Web, React Native and native apps (iOS and Android). Using Predictions, we can easily add and configure AI/ML use cases with just a few lines of code, as the sketch below shows.
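As a taste of how few lines a use case takes, here is a hedged sketch that translates text with Amazon Translate through the Predictions category. The option names follow the Amplify documentation at the time of writing, so treat the exact shape as an assumption:

import Predictions from '@aws-amplify/predictions';

// A minimal sketch: translate English text to Spanish via Amazon Translate.
// Option names ('translate', 'source', 'targetLanguage') are taken from the
// Amplify docs; verify them against your installed version.
Predictions.convert({
  translate: {
    source: { text: 'Find Keanu in your selfies', language: 'en' },
    targetLanguage: 'es'
  }
})
  .then(result => console.log(result.text))
  .catch(err => console.error(err));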

AWS services provided by Predictions

To implement the FindKeanu application we are going to focus on Amazon Rekognition.

Introduction to Amazon Rekognition

Amazon Rekognition offers a state-of-the-art image and video analysis service that can identify objects, people, text, scenes and activities via a simple, easy-to-use API. Features include:

  • Detecting objects and scenes
  • Detecting and analysing faces, including custom collections
  • People path tracking in videos
  • Detecting celebrities
  • Detecting inappropriate content
  • Detecting text

For our solution, we are going to leverage the celebrity recognition API, part of Amazon Rekognition's facial analysis and facial recognition capabilities. This API can recognise thousands of celebrities in a wide range of categories, such as media, sports, business, politics and entertainment. For actors, this may include a series of additional reference images for their most popular characters to improve accuracy, e.g. "John Wick" and "Neo" for Keanu Reeves.

Images used for facial recognition are never stored. Only face metadata, also known as facial feature vectors, is stored by Amazon Rekognition.

Setting up a new project with the Angular CLI

To get started, create a new project using the Angular CLI. If you already have the CLI installed, skip the first command; otherwise, install it and then create the app:

npm install -g @angular/cli
ng new amplify-app

Navigate to the new directory and make sure everything works before continuing.

cd amplify-app
npm install
ng serve

Changes to Angular CLI project

The Angular CLI requires some changes in order to use AWS Amplify. Come back to this section to troubleshoot any issues.

Add type definitions for Node.js by changing tsconfig.app.json. This is a requirement for aws-sdk-js.

{
  "compilerOptions": {
    "types": ["node"]
  }
}

Add the following code, to the top of src/polyfills.ts. This is a requirement for projects using Angular 6 or later.

(window as any).global = window;
(window as any).process = {
  env: { DEBUG: undefined }
};

Installing the AWS Amplify dependencies

Install the required dependencies for AWS Amplify and Angular using:

npm install --save aws-amplify 
npm install --save @aws-amplify/auth @aws-amplify/predictions

Installing the Amplify CLI

In case you don’t have it already, install the Amplify CLI:

npm install -g @aws-amplify/cli

Now, we need to configure the Amplify CLI with your credentials:

amplify configure

Once you’ve signed in to the AWS Console, continue:

  • Specify the AWS Region: pick-your-region
  • Specify the username of the new IAM user: amplify-app

In the AWS Console, click Next: Permissions, Next: Tags, Next: Review, and Create User to create your new IAM user. Then, return to the command line and press Enter.

  • Enter the access key of the newly created user:
    accessKeyId: YOUR_ACCESS_KEY_ID
    secretAccessKey: YOUR_SECRET_ACCESS_KEY
  • Profile Name: default

Setting up your Amplify environment

AWS Amplify allows you to create different environments to define your preferences and settings. For any new project, you need to run the command below and answer as follows:

amplify init
  • Enter a name for the project: amplify-app
  • Enter a name for the environment: dev
  • Choose your default editor: Visual Studio Code
  • Please choose the type of app that you're building: javascript
  • What javascript framework are you using: angular
  • Source Directory Path: src
  • Distribution Directory Path: dist/amplify-app
  • Build Command: npm run-script build
  • Start Command: ng serve
  • Do you want to use an AWS profile? Yes
  • Please choose the profile you want to use: default

At this point, the Amplify CLI has initialised a new project and created a new folder: amplify. The files in this folder hold your project configuration.

<amplify-app>
|_ amplify
   |_ .config
   |_ #current-cloud-backend
   |_ backend
   |_ team-provider-info.json

Adding Predictions to identify celebrities

AWS Amplify provides celebrity recognition via the predictions category which gives us access to Amazon Rekognition. To add predictions use the following command:

amplify add predictions

When prompted choose:

  • Please select from one of the categories below (Use arrow keys): Identify
  • You need to add auth (Amazon Cognito) to your project in order to add storage for user files. Do you want to add auth now? Yes
  • Do you want to use the default authentication and security configuration? Default configuration
  • How do you want users to be able to sign in? Username
  • Do you want to configure advanced settings? No, I am done
  • What would you like to identify? Identify Entities
  • Provide a friendly name for your resource: (identifyEntities0212d2f3) <Enter>
  • Would you like to use the default configuration? Default Configuration
  • Who should have access? Auth and Guest users

Pushing changes to the cloud

Running the push command provisions and creates the cloud resources in your AWS account.

amplify push

Configuring the Angular application

The Amplify CLI has generated an aws-exports.js file in your src folder. To configure the app, open main.ts and add the following code below the last import:

import Auth from '@aws-amplify/auth';
import Predictions, { AmazonAIPredictionsProvider } from '@aws-amplify/predictions';
import amplify from './aws-exports';

Auth.configure(amplify);
Predictions.configure(amplify);
Predictions.addPluggable(new AmazonAIPredictionsProvider());

Adding Styling

AWS Amplify provides UI components that you can use in your app. Let's add these components to the project:

npm i --save @aws-amplify/ui

Also include these imports at the top of styles.css:

@import "~@aws-amplify/ui/src/Theme.css";
@import "~@aws-amplify/ui/src/Angular.css";

Creating FindKeanu UI

Our interface will show some pre-selected images and a button hiding a file input. This allows users to quickly pick images and send them to the identify celebrities API for analysis. See the app flow below.

Our main logic sits behind the input's change event, which calls findKeanu. A minimal component sketch follows.
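Here is a minimal sketch of what such a component could look like. The selector, class and property names (app-root, AppComponent, celebrities, found) are illustrative, not taken from the original repo:

import { Component } from '@angular/core';

// Illustrative skeleton: a button hiding a file input triggers findKeanu.
@Component({
  selector: 'app-root',
  template: `
    <button class="pick">
      Find Keanu
      <input type="file" (change)="findKeanu($event)" accept="image/*" />
    </button>
    <p *ngIf="found !== null">{{ found ? 'Keanu found! 🎉' : 'No Keanu here 😢' }}</p>
  `
})
export class AppComponent {
  celebrities: any[] = [];       // other celebrities found in the image
  found: boolean | null = null;  // null until the first analysis completes

  findKeanu(event: any) { /* covered in the sections below */ }
}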

Selecting the image file to analyse

We are using CSS to hide a file input behind our button (a possible styling approach is sketched after the snippet below). After the user selects a file, the change event is triggered, passing the original DOM event as $event. This gives us access to the image file information. Note how we only accept images in the file-system dialog.

<input type="file" (change)="findKeanu($event)" accept="image/*" />
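The article doesn't show the styles themselves; as an assumption, a common pattern for hiding a file input behind a button looks like this sketch (the .pick class name is mine):

/* Illustrative: stretch a transparent file input over the button
   so clicks land on the input while the button provides the visuals. */
.pick {
  position: relative;
  overflow: hidden;
}
.pick input[type="file"] {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  opacity: 0;
  cursor: pointer;
}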

This is the code snippet to prepare the selected image file before passing it over to the Predictions API.

findKeanu(event) {
  const { target: { files } } = event;
  const file = files[0];
}

See below the call to Predictions.identify(config), which returns a promise. Besides passing the selected file, our configuration requires a flag to activate celebrity detection.

Predictions.identify({
  entities: {
    source: { file },
    celebrityDetection: true
  }
}).then(response => { /* ... */ });
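Putting the two snippets together, the complete handler could look like the following sketch. The async/await style and error handling are my additions; in the app this is a method on the component:

import Predictions from '@aws-amplify/predictions';

// Sketch of the full handler: read the selected file and send it to
// Amazon Rekognition's celebrity detection via Predictions.identify.
async function findKeanu(event: any) {
  const { target: { files } } = event;
  const file = files[0];
  if (!file) return; // dialog was cancelled

  try {
    const response = await Predictions.identify({
      entities: {
        source: { file },
        celebrityDetection: true
      }
    });
    console.log(response.entities);
  } catch (err) {
    console.error('Celebrity detection failed', err);
  }
}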

The response object we get back contains an array of entities, with up to 100 celebrities, providing the following information (sketched as a TypeScript type after the list):

  • Bounding box — The coordinates of the bounding box that surrounds the face. Properties: top, left, width and height.
  • Facial landmarks — An array of facial landmarks. For each landmark (eyes, nose and mouth), the response provides its coordinates. Properties: type, x and y. Type values: eyeLeft, eyeRight, nose, mouthLeft, mouthRight.
  • Metadata — Additional information. This provides a unique identifier, a display name, a list of links, if available, and a pose. Properties: id, name, urls and pose.

Bounding box and landmark types. 3D model by Caleb_Rolph*.
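Mapped onto TypeScript, one entity roughly has the following shape. This is a sketch inferred from the properties listed above, not an official type; treat field names and optionality as assumptions:

// Rough shape of one item in response.entities, inferred from the list above.
interface IdentifiedEntity {
  boundingBox: { top: number; left: number; width: number; height: number };
  landmarks: Array<{ type: string; x: number; y: number }>;
  metadata?: {
    id: string;     // unique Rekognition identifier
    name: string;   // display name
    urls: string[]; // reference links, if available
    pose: { pitch: number; roll: number; yaw: number };
  };
}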

Using coordinates

The origin of coordinates (0,0), for both the bounding box and landmarks, is the upper-left corner. Values are expressed as ratios of the full width or height (x/w, y/h). For example, the centre (100, 50) of a 200x100 image becomes (0.5, 0.5).

Note that coordinates can sometimes come back below 0 or above 1, as faces or landmarks near the edges can fall slightly outside the image limits.
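To draw a box back onto the original image, multiply the ratios by the image dimensions and clamp out-of-range values. A small sketch (the toPixels helper is mine):

// Convert a ratio-based bounding box into pixel coordinates for an image.
// Values can fall slightly outside [0, 1] near the edges, so clamp them.
function toPixels(
  box: { top: number; left: number; width: number; height: number },
  imageWidth: number,
  imageHeight: number
) {
  const clamp = (v: number) => Math.min(Math.max(v, 0), 1);
  return {
    x: clamp(box.left) * imageWidth,
    y: clamp(box.top) * imageHeight,
    width: clamp(box.width) * imageWidth,
    height: clamp(box.height) * imageHeight
  };
}

// A box starting at the centre of a 200x100 image:
console.log(toPixels({ top: 0.5, left: 0.5, width: 0.25, height: 0.3 }, 200, 100));
// -> { x: 100, y: 50, width: 50, height: 30 }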

Understanding face pose values

Pose describes the rotation of the head around the pitch, roll and yaw axes, using a frontal picture as a reference.

If you want to quickly test this yourself: pitch is the movement you make when turning your head up (+) or down (-); roll is peeking out from behind a wall to the left (-) or right (+); and yaw is turning your head left (-) or right (+). See a demonstration below.

Note that left and right are taken from the observer's perspective in the picture or video, not the subject's.

Top to bottom: pitch, roll and yaw axis movement. 3D model by Caleb_Rolph*.

Properties: pitch, roll and yaw. Values: -180 to 180.
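Since frontal pictures perform best (see the accuracy notes later), pose values can be used to flag heavily rotated faces. A minimal sketch, with an arbitrary 30-degree threshold of my choosing:

// Treat a face as roughly frontal when all three rotations are small.
// The 30-degree threshold is an illustration, not a documented limit.
function isRoughlyFrontal(
  pose: { pitch: number; roll: number; yaw: number },
  maxDegrees = 30
): boolean {
  return (
    Math.abs(pose.pitch) <= maxDegrees &&
    Math.abs(pose.roll) <= maxDegrees &&
    Math.abs(pose.yaw) <= maxDegrees
  );
}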

Finding Keanu in the results

Once we get the results back, we filter the entities looking for the ones that match Keanu Reeves' Amazon Rekognition id, which is 32wO2f3.

Predictions.identify(config).then(result => {
  this.celebrities = [];
  let keanuFound = result.entities.filter((entity) => {
    const { metadata: { id } = {} } = entity;
    if (id) {
      this.celebrities.push(entity);
    }
    return id === "32wO2f3"; // Keanu Reeves
  });
});

Depending on the results, we show the corresponding message and image, as sketched below.
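A minimal sketch of that branch, inside the same then callback as the filter above (found and message are illustrative property names, not from the original repo):

// Illustrative: derive the UI state from the filtered results.
this.found = keanuFound.length > 0;
this.message = this.found
  ? 'Keanu spotted! Safe to post. 🎉'
  : 'No Keanu in this one. 😢';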

Bonus feature: identify other celebrities

Depending on the image, the results may sometimes include other celebrities who are not Keanu. To take advantage of this information, we add links for those celebrities using this code:

<div *ngIf="celebrities.length > 0">
  <div *ngFor="let e of celebrities">
    <a href="http://{{e.metadata.urls[0]}}" target="_blank">
      {{e.metadata.name || 'Not found'}}
    </a>
  </div>
</div>

All is not lost if Keanu is not in there. Give it a try!

Understanding results and improving accuracy

After testing the application for a while, you will have a good sense of the potential of the celebrity detection API. For certain images, you will wonder why the results come out one way or another. Things to take into account:

  • Massive catalog — Prediction APIs can operate with up to 20 million indexed facial feature vectors.
  • Intended usage — This API is designed to filter massive volumes of images/videos and identify a small set that is likely to contain a celebrity, e.g. press-release image and video catalogs. For critical use cases, add a human verification step.
  • Image size and overall quality — The larger the better. To be detected, a face must be larger than 40x40 pixels in a 1920x1080 image, or the same proportion for larger images.
  • Face size and pose — Frontal pictures perform better. The minimum face size is 80x80 pixels.

During my tests, I was able to improve accuracy by creating a custom collection of images using just a single reference image. You can improve results further by providing multiple images targeting different face-detection scenarios. Find more details here.

Predictions.identify({
  entities: {
    source: { file },
    collection: true
  }
}).then(response => { /* ... */ });
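As I read the Amplify docs, collection: true switches the search from the built-in celebrity catalog to the custom collection you configured through the CLI; double-check this flag against the version you have installed.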

Ready to code?

If you have an AWS Account, you can deploy FindKeanu with a single click (below) or test it locally following the instructions at gsans/find-keanu-angular.

You don't have one? Use the next few minutes to create one and activate the free tier for a whole year! Follow the steps at the AWS Knowledge Center.

Free tier for a new AWS account. For the latest pricing, check here.

Thanks for reading!

Have you got any questions regarding this article or AWS Amplify? Feel free to ping me anytime at @gerardsans.

My name is Gerard Sans. I am a Developer Advocate at AWS working with the AWS Amplify and AWS AppSync teams.


*I have requested permission from the author to use the image for the purpose of this blogpost.
