Many smart mobile devices are equipped with highly configurable cameras that allow users to capture photographs without difficulty. Available settings include zoom level, aperture, filters, brightness, and shading. However, despite the available settings, not every captured photograph may be of high quality. For example, after capture, a photographer may review a photograph, whether self-captured or captured by a peer, and orally critique the image (e.g., “The brightness should be higher”). As such, it may be advantageous to, among other things, capture a user's feedback and sentiment toward a captured photograph and adjust device settings accordingly.
According to at least one embodiment, capture of a user's sentiments, emotions, or comments on a photograph may be enabled on a photographic capture device through a sensor, such as a microphone. Through machine learning, a knowledge corpus of historical user sentiments toward specific photographs may be created to identify the user's photographic needs in specific situations. When the user focuses a capture device toward a surrounding, the device may identify the contextual surrounding and, using the knowledge corpus, identify appropriate settings under which a photograph should be captured. In at least one embodiment, if any emotion, sentiment, or comment relates to non-device parameters (e.g., an unwanted object is present or a subject is obstructed), then the capture device may provide voice-based guidance to the user while capturing the photograph so the issue may be corrected. In at least one other embodiment, user behavior while capturing the photograph, such as capturing multiple photographs or previewing a photograph and recapturing, may be tracked and included within the knowledge corpus so such behavior may be considered by a machine learning module.
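The flow described above can be illustrated with a minimal sketch: a spoken critique is parsed, routed either to a device-parameter adjustment (recorded in the knowledge corpus keyed by context) or to voice-based guidance when the issue concerns a non-device parameter. All names, the keyword-matching heuristic, and the corpus structure are illustrative assumptions, not part of the disclosed embodiments; a real implementation would use a trained sentiment model rather than keyword matching.

```python
# Illustrative sketch only: parse a spoken critique, decide whether it maps
# to a device parameter, and either record an adjustment in the knowledge
# corpus or emit voice guidance. Names and heuristics are hypothetical.

from dataclasses import dataclass, field

# Device parameters the capture device can adjust directly (assumed set).
DEVICE_PARAMS = {"brightness", "zoom", "aperture", "shading"}

@dataclass
class KnowledgeCorpus:
    # Maps context -> {parameter: preferred adjustment}, built from history.
    history: dict = field(default_factory=dict)

    def record(self, context: str, param: str, delta: int) -> None:
        self.history.setdefault(context, {})[param] = delta

    def suggest(self, context: str) -> dict:
        # Settings to apply when the device recognizes this context again.
        return self.history.get(context, {})

def handle_feedback(corpus: KnowledgeCorpus, context: str, comment: str):
    """Return ('adjust', param, delta) for a device parameter,
    or ('guide', message) for a non-device issue."""
    words = comment.lower().split()
    for param in DEVICE_PARAMS:
        if param in words:
            # Crude direction heuristic standing in for sentiment analysis.
            delta = 1 if ("higher" in words or "more" in words) else -1
            corpus.record(context, param, delta)
            return ("adjust", param, delta)
    # No adjustable parameter mentioned: fall back to voice-based guidance.
    return ("guide", "Non-device issue noted: " + comment)

corpus = KnowledgeCorpus()
print(handle_feedback(corpus, "sunset", "The brightness should be higher"))
# -> ('adjust', 'brightness', 1)
print(handle_feedback(corpus, "sunset", "An unwanted object is in the frame"))
# -> ('guide', 'Non-device issue noted: An unwanted object is in the frame')
print(corpus.suggest("sunset"))
# -> {'brightness': 1}
```

In this sketch the corpus simply stores the most recent adjustment per context; the tracked recapture behavior mentioned above could likewise be logged as additional corpus entries for the machine learning module to weigh.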