Face detection is an optional feature that improves the automatic cropping of Extra Images when a photo contains people. Instead of always cropping to a fixed anchor point (such as the centre of the image), SocialMagick can detect where faces appear in the photo and crop around them, keeping the people in the frame even when your image dimensions differ considerably from the source photo's aspect ratio.
Face detection is only triggered when all of the following conditions are met:
The template's Crop Anchor Point is set to Detect.
An Extra Image is available for the current page (its Extra Image source is not set to None, or it is set to a valid static image).
If face detection is enabled but no faces are found in the image, SocialMagick falls back to cropping using the Centre Point anchor. Similarly, if the configured cloud service returns an error or is unreachable, SocialMagick falls back gracefully rather than failing to generate the image.
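The fallback behaviour described above can be sketched as follows. This is illustrative Python, not SocialMagick's actual PHP code; face boxes are assumed to be (x, y, width, height) tuples in pixels:

```python
def crop_anchor(faces, img_w, img_h):
    """Pick a crop anchor point: the centroid of the detected face boxes,
    or the image centre when no faces were found (the documented fallback)."""
    if not faces:
        # Fallback: behave exactly like the Centre Point anchor.
        return (img_w / 2, img_h / 2)
    cx = sum(x + w / 2 for x, y, w, h in faces) / len(faces)
    cy = sum(y + h / 2 for x, y, w, h in faces) / len(faces)
    return (cx, cy)
```

The same anchor point is used whether zero faces were found or the detection service failed outright, so image generation never aborts.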
The result of face detection is cached along with the generated OpenGraph image. Detection only runs again when the cached image is invalidated (for example, because the Extra Image, template, or text has changed). This means that even when using a cloud provider, you will not be charged for every page view — only for the page loads that actually regenerate the image.
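The caching rule amounts to keying detection on its inputs. A minimal sketch, assuming (hypothetically) that a hash over the Extra Image, template, and text serves as the cache key:

```python
import hashlib

def cache_key(extra_image, template, text):
    """Hypothetical cache key over the inputs that invalidate detection."""
    h = hashlib.sha256()
    for part in (extra_image, template, text):
        h.update(part.encode("utf-8"))
        h.update(b"\x00")  # separator so ("ab", "c") != ("a", "bc")
    return h.hexdigest()

def needs_detection(stored_key, extra_image, template, text):
    """Face detection (and image generation) re-runs only on a key mismatch."""
    return stored_key != cache_key(extra_image, template, text)
```

Under this scheme a billable cloud call happens only when `needs_detection` returns true, i.e. when one of the inputs has changed since the image was last generated.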
The three available detection methods, and their trade-offs, are as follows.
This is the default method. SocialMagick ships with a pure PHP implementation of the Viola-Jones object detection framework, trained on frontal faces. It runs entirely on your server, requires no external accounts, credentials, or network requests, and costs nothing beyond the CPU time it consumes.
The trade-off is accuracy. Viola-Jones is one of the older face detection algorithms. It works reliably on well-lit, roughly frontal portraits, but struggles with faces at a significant angle, partly occluded faces, faces that are very small relative to the image, and lower-resolution images. For many typical use cases — staff photos, product portraits, event photography — Pure PHP is entirely sufficient.
No additional configuration is required for this method.
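For readers curious why a pure PHP detector is feasible at all: Viola-Jones evaluates thousands of Haar-like rectangle features cheaply by precomputing an integral image (summed-area table), after which any rectangle sum costs only four lookups. A minimal sketch of that core trick (illustrative Python, not the shipped PHP code):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0,y0)-(x1,y1) in four lookups."""
    s = ii[y1][x1]
    if x0:
        s -= ii[y1][x0 - 1]
    if y0:
        s -= ii[y0 - 1][x1]
    if x0 and y0:
        s += ii[y0 - 1][x0 - 1]
    return s
```

Because every Haar feature is a difference of such rectangle sums, the cascade can scan an image quickly even in an interpreted language.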
When Google Cloud Vision API is selected, SocialMagick sends the Extra Image to the Google Cloud Vision service and uses the returned face bounding boxes to determine the crop anchor point. Google's model is considerably more accurate than the Viola-Jones classifier; it handles angled faces, partial occlusion, and a wide range of photographic conditions robustly.
To use this method you need:
A Google Cloud account with billing enabled.
The Cloud Vision API enabled for your project.
An API key (or a service account key) with access to the Cloud Vision API. Create and manage keys in the Google Cloud Console.
Enter the API key in the Google API Key field in the component Options under the Face Detection tab.
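For illustration, here is how the crop anchor can be derived from the face bounding boxes Cloud Vision returns. The sketch parses the JSON shape of an `images:annotate` FACE_DETECTION response; it is illustrative Python, not SocialMagick's internal code:

```python
# A FACE_DETECTION request is POSTed to
#   https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY
# with the image base64-encoded in requests[0].image.content; detected
# faces come back under responses[0].faceAnnotations.

def face_centres(vision_response):
    """Extract (x, y) face-box centres from an images:annotate response.
    Cloud Vision omits vertex coordinates that are zero, hence .get()."""
    centres = []
    for resp in vision_response.get("responses", []):
        for face in resp.get("faceAnnotations", []):
            verts = face["boundingPoly"]["vertices"]
            xs = [v.get("x", 0) for v in verts]
            ys = [v.get("y", 0) for v in verts]
            centres.append(((min(xs) + max(xs)) / 2,
                            (min(ys) + max(ys)) / 2))
    return centres
```

An empty list from this helper corresponds to the Centre Point fallback described earlier.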
Google Cloud Vision charges per image analysed. Because face detection results are cached, charges accrue only when images are regenerated, not on every page view. Consult the Google Cloud Vision pricing page for current rates before enabling this method.
When AWS Rekognition is selected, SocialMagick sends the Extra Image to Amazon's Rekognition service. Like Google Cloud Vision, Rekognition is a highly accurate deep-learning based face detector that handles challenging photographic conditions well.
To use this method you need:
An Amazon Web Services account with billing enabled.
An IAM user (or role) with the rekognition:DetectFaces permission. The user does not need any other AWS permissions.
The IAM user's Access Key ID and Secret Access Key.
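As an illustration, a minimal IAM policy granting only this permission might look like the following (attach it to the dedicated IAM user; the wildcard resource is required because DetectFaces does not act on a named resource):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rekognition:DetectFaces",
            "Resource": "*"
        }
    ]
}
```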
Enter the Access Key ID, the Secret Access Key, and the AWS Region closest to your web server in the component Options under the Face Detection tab.
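For illustration, Rekognition's DetectFaces operation returns each face's BoundingBox as ratios of the image dimensions rather than pixels. The sketch below (illustrative Python, not SocialMagick's internal code) converts them to pixel-space centres suitable for a crop anchor:

```python
# With boto3 the call itself would look roughly like (hypothetical names):
#   response = boto3.client("rekognition").detect_faces(
#       Image={"Bytes": image_bytes}, Attributes=["DEFAULT"])

def rekognition_face_centres(response, img_w, img_h):
    """DetectFaces reports BoundingBox Left/Top/Width/Height as ratios of
    the image dimensions; convert them to pixel-space face centres."""
    centres = []
    for face in response.get("FaceDetails", []):
        bb = face["BoundingBox"]
        centres.append(((bb["Left"] + bb["Width"] / 2) * img_w,
                        (bb["Top"] + bb["Height"] / 2) * img_h))
    return centres
```

An empty `FaceDetails` list again maps onto the Centre Point fallback.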
AWS Rekognition charges per image analysed. Check the AWS Rekognition pricing page for current rates before enabling this method.