Google has a habit of breathlessly announcing a new feature exclusively for a new Pixel model before eventually bringing it to older Pixel models. When Google released the Pixel 8 lineup last year, it included the Magic Editor, which lets the user change the color of the sky, move the subject of the photo to a different location, and rearrange elements in the photo, all using AI.
Google hoped that the feature would help sell many units and encourage owners of older Pixel models to upgrade to the Pixel 8 or Pixel 8 Pro. But earlier this year, Google surprised everyone by allowing all Pixel models running Android 8 or higher, with at least 4GB of RAM and a 64-bit chipset, to apply the Magic Editor to images in the Google Photos app. So my Pixel 6 Pro got not only the Magic Editor but also the Unblur feature, which cleans up blurry photos.
Non-Pixel Android phones and certain iPhone models can also use the Magic Editor
What's even more interesting is that all Android phones running Android 8 and above, and even iPhone models running iOS 15 or above, can use Magic Editor (though not Unblur) with the Google Photos app installed. On those phones, Magic Editor is limited to 10 images per month unless the user subscribes to the 2TB tier of Google One, Google's cloud storage service.
Google is now pushing out some updated AI-based accessibility features to older Pixel models. One such feature is Guided Frame, which is designed for Pixel users with low vision. The feature gives audio instructions to help you get your face in the frame of a selfie or a regular photo and find the right camera angle. With Guided Frame, the phone prompts the user to tilt their head up or down or move the camera left or right before the camera automatically captures the picture. Guided Frame also alerts you when the light is too dim so you can find a brighter place to take a photo. The feature is now available in the camera settings.
With Magnifier, visually impaired people can enlarge the world around them. | Image credit: Google
Also available on older Pixel models is Magnifier, a Pixel-exclusive feature that uses the camera to help users with low vision "zoom in on the world around you." With a little help from AI, Magnifier can now help people with low vision search for specific words. Google's examples include looking for vegetarian dishes on a menu or finding the departure time of a specific flight on the big display board at the airport.
The feature uses a picture-in-picture format, so if you're standing at the deli counter and want to take a closer look at the menu, you can take a picture and use picture-in-picture to browse the options without losing your place. You can also switch to the front-facing camera and turn on the selfie light to use the phone as a mirror, handy for tasks like applying makeup.
Live Caption and Live Transcribe now support more languages
Live Caption for Android captions the sounds coming from your phone's speakers in real time across all your apps. Live Caption support is now being added for seven more languages: Korean, Polish, Portuguese, Russian, Chinese, Turkish, and Vietnamese. The same seven languages are being added to Live Transcribe, which now works with 15 languages without a cellular or Wi-Fi connection. With an internet connection, Live Transcribe supports 120 languages, giving over a billion Android users real-time transcriptions of the sounds around them.
Dual mode for live transcription. | Image credit: Google
If you have a foldable phone with dual screens, like the new Pixel 9 Pro Fold, there is now a version of Live Transcribe designed specifically for those models. Live Transcribe's dual-screen mode lets everyone in a conversation follow what's being said: place the phone on a table and both screens show the live transcription, so the feature can transcribe a conversation over dinner or at a business meeting.
Google says the company will continue to work with the disability community and use new AI features to give people with disabilities greater access to the world.