
Apple iOS 17 update to bring personal voice feature and more

Assistive access on the cards too


Apple has released a preview of the iOS 17 update, and its accessibility features are already impressive. One of them is designed to help people who are at risk of losing their ability to speak create a synthesized voice that sounds like their own. A few more features are also set to arrive with the iOS 17 update. In this article, we look at each of them briefly.

iOS 17 update features we can expect to see

Personal Voice: This is arguably the most anticipated feature of the lot. It is designed for people with conditions such as ALS (amyotrophic lateral sclerosis) that can gradually rob them of their ability to speak. Personal Voice lets them create a personalized synthesized voice that sounds like their natural one: users set it up by reading a randomized set of text prompts aloud, recording about 15 minutes of audio on their iPhone or iPad. Apple says the feature relies on on-device machine learning, which keeps the user's information secure and private.

Assistive Access: The Assistive Access feature of iOS 17 gives users a streamlined, customized experience of their core apps. FaceTime and Phone are combined into a single Calls application, which sits alongside simplified versions of Camera, Messages, Music, and Photos. The feature provides a pared-down interface with high-contrast buttons and large text labels, and its tools allow the experience to be customized to a considerable extent for the person using it.

Magnifier’s Detection Mode: Point and Speak, part of the Magnifier app on iPhone, helps users interact with physical objects that carry text labels. For example, while using a household appliance, Point and Speak combines input from the camera, the LiDAR scanner, and on-device machine learning to read aloud the text on each button as the user moves their finger across the keypad.
