FIG. 7 shows that in another example embodiment, text 700 such as an email presented on a display 702 may be imaged by the imager on the smart glasses 704. FIG. 8 illustrates the imaging step of the display 702 at block 800, showing logic that may be performed by a processor on the smart glasses 704 or by another processor. Moving to block 802, text recognition is executed on the text to determine the nature and subject of the text. Decision diamond 804 indicates that based at least in part on the text recognition, it is determined whether the text implicates an application, such as a calendar application. If so, the application may be automatically accessed and executed at block 806 by, for example, using the recognized text to identify at least one application to process the text. Alternatively, or in addition, at block 808 an alert may be presented on the smart glasses that the text being imaged on the display 702 may pertain to an application.
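The flow of FIG. 8 may be sketched as follows. This is a minimal illustration only, not the claimed implementation: the function names, the keyword map, and the use of keyword matching in place of full text recognition are assumptions for purposes of exposition.

```python
# Illustrative sketch of the FIG. 8 logic (blocks 802-808).
# Hypothetical keyword set standing in for the text-recognition step;
# a real system would use OCR output and richer classification.
CALENDAR_TERMS = {"meeting", "appointment", "schedule", "calendar"}

def classify_text(text):
    """Decision diamond 804: return the implicated application, or None."""
    words = {w.strip(".,").lower() for w in text.split()}
    if words & CALENDAR_TERMS:
        return "calendar"
    return None

def process_imaged_text(text):
    """Blocks 802-808: route recognized text to an app or present an alert."""
    app = classify_text(text)
    if app is not None:
        # Block 806: automatically access and execute the application
        return f"launching {app} with recognized text"
    # Block 808: advise the wearer that the text may pertain to an application
    return "alert: imaged text may pertain to an application"
```

In this sketch, returning a string stands in for launching the application or rendering the alert on the glasses display.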
Machine learning may be used in the logic of FIG. 8. A training set of terms, text layouts, etc. may be input to, for example, a neural network (NN) to train the NN on which applications pertain to particular terms, layouts, etc. in text.
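The training idea above can be illustrated with a toy learner. The training pairs, labels, and perceptron formulation below are invented for illustration; an actual embodiment would use a full NN framework and a realistic training set of terms and text layouts.

```python
# Toy sketch: a bag-of-words perceptron learns which application label
# pertains to which terms. All data here is hypothetical.
TRAIN = [
    ("meeting friday at noon", "calendar"),
    ("schedule a call tuesday", "calendar"),
    ("your invoice is attached", "email"),
    ("please reply to this message", "email"),
]

def featurize(text, vocab):
    # One binary feature per vocabulary term
    return [1.0 if w in text.split() else 0.0 for w in vocab]

def train(samples, label, epochs=20, lr=0.5):
    """Train a one-vs-rest perceptron for the given application label."""
    vocab = sorted({w for t, _ in samples for w in t.split()})
    weights, bias = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, y in samples:
            target = 1.0 if y == label else 0.0
            x = featurize(text, vocab)
            pred = 1.0 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0.0
            err = target - pred
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return vocab, weights, bias

def pertains_to(text, vocab, weights, bias):
    x = featurize(text, vocab)
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0
```

Once trained, the model can score newly imaged text against the "calendar" label before the dispatch step of FIG. 8.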
FIG. 9 illustrates further. All or part of the text imaged on the display 702 may be reproduced on a display 900 of the smart glasses, along with an advisory 902 that the text has been added or otherwise used in an application, in the example shown, added as an entry in the user's calendar. A selector 904 may be presented and may be selected to delete the action indicated.
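The presentation of FIG. 9 may be sketched as below. The class, its method names, and the list-based calendar are hypothetical stand-ins for the glasses' actual UI and calendar application.

```python
# Hypothetical sketch of FIG. 9: the recognized text is reproduced on the
# glasses display 900 with an advisory 902, and a selector 904 may be
# selected to delete the automatically taken action.
class CalendarEntryAdvisory:
    def __init__(self, calendar, entry_text):
        self.calendar = calendar
        self.entry = entry_text
        # The action was already taken automatically (FIG. 8, block 806)
        calendar.append(entry_text)

    def render(self):
        # Display 900: the imaged text, the advisory 902, and the selector 904
        return [self.entry, "Added to your calendar", "[Delete]"]

    def on_delete_selected(self):
        # Selector 904: undo the indicated action
        self.calendar.remove(self.entry)
```

Selecting the delete control simply reverses the entry that the logic of FIG. 8 created.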