One prior art solution, described for example by Tiedemann in “Synchronizing Translated Movie Subtitles,” suggests synchronization performed on the basis of aligned anchor points, a dictionary-based approach using automatic word alignment. Other approaches suggest various kinds of filters to identify such potential anchor points. Guimarães et al., in “A Lightweight and Efficient Mechanism for Fixing the Synchronization of Misaligned Subtitle Documents,” propose a two-phase subtitle synchronization framework. This approach uses both anchors and audio fingerprint annotation. It requires enriching the subtitle file with additional audio fingerprints and analysis at the first level, and then adding the anchors, such as shown in other prior art approaches, so as to fix the presentation of misaligned subtitle entries. In such cases it is necessary to prepare the files, automatically or manually as the case may be, and to perform complex operations in order to reach the required alignment between the subtitles and the actual audio content. Yet another prior art solution performs time-alignment speech recognition, such as suggested by Kirby et al. in U.S. Pat. No. 7,191,117.
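As a minimal, hypothetical sketch (not taken from any of the cited works) of how anchor-based synchronization of the kind described above can operate: given a small set of anchor points pairing a timestamp in the subtitle file with the corresponding timestamp in the audio track, each subtitle cue time can be remapped by piecewise-linear interpolation between the surrounding anchors. The function and variable names here are illustrative only.

```python
from bisect import bisect_right

def remap_time(t, anchors):
    """Remap a subtitle cue time onto the audio timeline.

    anchors: sorted list of (subtitle_time, audio_time) pairs,
    e.g. obtained from word alignment or manual annotation.
    """
    if len(anchors) < 2:
        raise ValueError("need at least two anchor points")
    xs = [a for a, _ in anchors]
    # Select the anchor segment containing t, clamping to the
    # outermost segments so times beyond the anchors extrapolate.
    i = min(max(bisect_right(xs, t) - 1, 0), len(anchors) - 2)
    (x0, y0), (x1, y1) = anchors[i], anchors[i + 1]
    return y0 + (t - x0) * (y1 - y0) / (x1 - x0)

# Example: anchors align subtitle 0 s with audio 1 s, and
# subtitle 100 s with audio 103 s (a gradually growing drift).
anchors = [(0.0, 1.0), (100.0, 103.0)]
print(remap_time(50.0, anchors))  # → 52.0
```

Note that even this simple scheme presupposes that reliable anchor points have already been produced, which is exactly the costly preparation step the prior art approaches above require.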
It is therefore desirable to provide a solution that allows for affordable, simple synchronization between the audio content of an audiovisual content item and its respective subtitles, so as to support the ever-increasing demand to resolve the subtitle misalignment problem.