Ahead of its WWDC event in June, Apple on Tuesday previewed a set of accessibility features coming “later this year” in the next major iPhone update.
The new “Personal Voice” feature, expected as part of iOS 17, will allow iPhones and iPads to generate a digital reproduction of the user’s voice for face-to-face conversations, phone calls and FaceTime.
Apple said Personal Voice creates a synthesized voice that sounds like the user and can be used to connect with family and friends. The feature is aimed at people with conditions that may affect their ability to speak over time.
Users can create a Personal Voice by recording 15 minutes of audio on their device. Apple said the feature relies on on-device machine learning to keep that audio private.
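Apple has not detailed how apps will tap into these voices, but a rough sketch of what that could look like for developers appears below. It assumes the iOS 17 additions to AVFoundation’s speech-synthesis API (a Personal Voice authorization request and a voice trait flag); treat it as illustrative rather than anything confirmed in Tuesday’s announcement.

```swift
import AVFoundation

/// Illustrative sketch only: speaks text with the user's Personal Voice if one
/// exists and the app is authorized, falling back to a system voice otherwise.
@available(iOS 17.0, *)
final class PersonalVoiceSpeaker {
    // Held as a property so speech isn't cut off when the method returns.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        // Ask permission to use voices the user created on-device.
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { [weak self] status in
            guard status == .authorized else { return }

            // Pick the first installed voice flagged as a Personal Voice, if any.
            let personalVoice = AVSpeechSynthesisVoice.speechVoices()
                .first { $0.voiceTraits.contains(.isPersonalVoice) }

            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = personalVoice ?? AVSpeechSynthesisVoice(language: "en-US")
            self?.synthesizer.speak(utterance)
        }
    }
}
```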
Personal Voice is part of a larger suite of accessibility improvements coming to iOS devices, including Assistive Access, a feature that makes iPhones and iPads easier to use for people with cognitive disabilities and their caregivers.
Apple also announced another machine learning-powered feature, Point and Speak, which builds on the Magnifier app’s existing Detection Mode. It combines camera input, LiDAR and on-device machine learning to read aloud the text users point at, such as labels on household appliances.
Apple usually announces software at WWDC and releases it in beta, first to developers and then to members of the public who opt in. These features typically remain in beta through the summer and become generally available in the fall, when new iPhones hit the market.
Apple’s 2023 WWDC conference starts on June 5th. The company is expected to unveil its first virtual reality headset, among other software and hardware announcements.