Our speech recognition system is able to directly accept natural user voice instructions for image editing either locally through on-device computing or through a cloud-based Natural Language understanding service. This is a first step towards a robust multimodal voice-based interface which allows our creative customers to search and edit images in an easy and engaging way using Adobe mobile applications. – Adobe
Although the video above is a proof-of-concept rather than a new product or service reveal, it continues to demonstrate Adobe’s commitment to incorporating machine learning and AI into its suite of products. At last year’s Adobe MAX Conference, Adobe announced Adobe Sensei, a framework that helps automate tasks and powers Adobe services like Stock Visual Search and Match Font.
What truly sets this demonstration apart is how Adobe begins to paint a future where we speak natural commands to our workspace and get the desired effect. Sure, the concept sticks to the basics – simple crops, canvas flips, and sharing – but if Adobe can build out its machine learning and AI framework at even half the speed of other software giants, we may see software that not only edits based on verbal cues but can identify a user’s style and vision.
The real question remains: will photographers be willing to embrace a future where their software begins to make artistic suggestions on their behalf?