At a recent press event, Facebook Inc. announced an improved version of the artificial intelligence technology that generates descriptions of photos posted on its site for visually impaired users. The Automatic Alternative Text (AAT) AI was first introduced in 2016 as a way of improving the experience for visually impaired users. Before that, when a visually impaired user came across a photo in their News Feed, Facebook would simply read out the word “Photo” followed by the name of the person who posted it.
The latest iteration of AAT is much more accurate and capable. With the update, Facebook has expanded the number of concepts the system can detect and identify in an image, and it can also provide more detailed descriptions covering activities, landmarks, food types, and types of animals. Facebook said the updated AAT can now recognize about 1,200 concepts instead of just 100. It achieved that by training the AI on samples that Facebook said were more accurate and demographically inclusive.
The company added that it trained the models to predict both the locations and the semantic labels of objects within an image. Multi-label and multi-dataset training techniques also helped make the model more reliable. Users additionally have the option to click on an image and get an even more detailed description of what it contains.
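To make the idea of multi-label output concrete, the sketch below shows one way detected concepts might be turned into an alt-text string. The concept names, confidence scores, and threshold are illustrative assumptions, not Facebook's actual model output; only the hedged "May be an image of ..." phrasing mirrors the wording Facebook's descriptions use.

```python
# Hypothetical sketch of composing alt text from multi-label
# classifier output. All values here are made-up examples.

def compose_alt_text(predictions, threshold=0.8, max_concepts=5):
    """Build an alt-text string from (concept, confidence) pairs.

    Concepts below the confidence threshold are dropped, the rest are
    ordered by score, and the phrasing stays tentative, since the
    description is machine-generated rather than human-written.
    """
    confident = [concept
                 for concept, score in sorted(predictions, key=lambda p: -p[1])
                 if score >= threshold][:max_concepts]
    if not confident:
        # Fall back to the pre-2016 behavior: just announce a photo.
        return "Photo"
    return "May be an image of " + ", ".join(confident)

detections = [("2 people", 0.97), ("outdoors", 0.91),
              ("the Eiffel Tower", 0.88), ("pizza", 0.42)]
print(compose_alt_text(detections))
# Low-confidence "pizza" is dropped; the rest appear in score order.
```

In a real system the thresholds would be tuned per concept so the model stays silent rather than guessing wrong, which is why automatic descriptions are deliberately conservative.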