What is claimed is:

1. A computer-implemented method comprising:
accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content;
comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object;
identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object; and
executing a workflow relating to at least one of the first media data object and the second media data object based on the set of edits.

2. The computer-implemented method of claim 1, wherein comparing the first temporally sequenced content with the second temporally sequenced content comprises:
dividing the first temporally sequenced content into a first sequence of segments;
dividing the second temporally sequenced content into a second sequence of segments;
calculating a pairwise distance between each segment within the first sequence of segments and each segment within the second sequence of segments to identify one or more common segments between the first sequence and the second sequence, whose pairwise distance falls within a predetermined threshold, and one or more different segments between the first sequence and the second sequence, whose pairwise distance exceeds the predetermined threshold;
identifying the longest common subsequence of segments between the first sequence of segments and the second sequence of segments; and
identifying the set of common temporal subsequences from the longest common subsequence of segments by identifying a set of contiguous portions of the longest common subsequence of segments.

3. The computer-implemented method of claim 2, wherein:
the first and second media data objects comprise audio data objects;
dividing the first temporally sequenced content into the first sequence of segments comprises dividing the first temporally sequenced content into segments of a predetermined length of time; and
dividing the second temporally sequenced content into the second sequence of segments comprises dividing the second temporally sequenced content into segments of the predetermined length of time.

4. The computer-implemented method of claim 2, wherein:
the first and second media data objects comprise video data objects;
dividing the first temporally sequenced content into the first sequence of segments comprises dividing the first temporally sequenced content into separate video frames; and
dividing the second temporally sequenced content into the second sequence of segments comprises dividing the second temporally sequenced content into separate video frames.

5. The computer-implemented method of claim 2, wherein identifying the longest common subsequence of segments between the first sequence of segments and the second sequence of segments comprises identifying the longest common subsequence of segments with a same temporal ordering in both the first sequence of segments and the second sequence of segments.
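For illustration only (not part of the claims), the following is a minimal sketch of the comparison recited in claims 1-2: dividing each media object's content into segments, treating segment pairs whose distance falls within a threshold as common, recovering the longest common subsequence, and grouping it into contiguous common temporal subsequences. The fixed-length numeric segments, the Euclidean distance, and the parameter names are assumptions made for the example, not features recited in the claims.

```python
def segment(samples, segment_length):
    """Divide temporally sequenced content into fixed-length segments."""
    return [samples[i:i + segment_length]
            for i in range(0, len(samples), segment_length)]

def distance(a, b):
    """Pairwise distance between two segments (Euclidean, padded to equal length)."""
    n = max(len(a), len(b))
    a = list(a) + [0.0] * (n - len(a))
    b = list(b) + [0.0] * (n - len(b))
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def longest_common_subsequence(first, second, threshold):
    """Classic LCS over segments, treating segment pairs whose pairwise distance
    falls within the threshold as common segments."""
    m, n = len(first), len(second)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if distance(first[i], second[j]) <= threshold:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack to recover the matched (i, j) index pairs in temporal order.
    pairs, i, j = [], m, n
    while i > 0 and j > 0:
        if distance(first[i - 1], second[j - 1]) <= threshold:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))

def common_temporal_subsequences(pairs):
    """Group the LCS into contiguous runs: the set of common temporal subsequences."""
    runs, run = [], []
    for i, j in pairs:
        if run and (i - run[-1][0] != 1 or j - run[-1][1] != 1):
            runs.append(run)
            run = []
        run.append((i, j))
    if run:
        runs.append(run)
    return runs
```

For example, calling `common_temporal_subsequences(longest_common_subsequence(segment(a, 4), segment(b, 4), 0.1))` on two audio sample lists would yield index runs marking the portions that the two objects have in common, with everything outside those runs being candidate edits.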
6. The computer-implemented method of claim 1, wherein executing the workflow comprises:
identifying a user account associated with performing a task that is based at least in part on the first media data object; and
sending a notification to the user account that indicates at least one of the set of edits to the first media data object.

7. The computer-implemented method of claim 1, wherein executing the workflow comprises:
identifying a user account associated with performing a task that relates to the first media data object; and
generating and assigning a new task to the user account based at least in part on at least one of the set of edits to the first media data object.

8. The computer-implemented method of claim 1, further comprising:
identifying a project that relates to the first media data object;
identifying a set of dependencies within the project; and
designating a task as incomplete based at least in part on the set of edits interfering with at least one dependency upon which the task relies.

9. The computer-implemented method of claim 1, wherein the workflow comprises a post-production workflow for a content item that has been changed as represented by a difference between the first media data object and the second media data object.

10. The computer-implemented method of claim 9, wherein the post-production workflow comprises a localization workflow to update a localization of video content, the localization comprising at least one of:
dubbing the video content in a selected language;
subtitling the video content in a selected language; or
applying visual description to the video content.

11. The computer-implemented method of claim 1, wherein the workflow comprises a quality control workflow for inspecting changed content of the second media data object as indicated by the set of edits.

12. The computer-implemented method of claim 1, wherein the workflow comprises transforming stored data corresponding to temporally sequenced content of the second media data object that falls outside the set of common temporal subsequences between the first media data object and the second media data object based at least in part on the set of edits.

13. The computer-implemented method of claim 1, wherein the set of edits comprises at least one of:
an insertion of content adjacent to a subsequence within the set of common temporal subsequences;
a deletion of content adjacent to a subsequence within the set of common temporal subsequences; or
a substitution of content adjacent to a subsequence within the set of common temporal subsequences.

14. The computer-implemented method of claim 1, wherein the set of edits indicates a change in relative temporal position between a first subsequence within the set of common temporal subsequences and a second subsequence within the set of common temporal subsequences.

15. The computer-implemented method of claim 1, wherein comparing the first temporally sequenced content represented by the first media data object with the second temporally sequenced content represented by the second media data object comprises:
rendering the first temporally sequenced content from the first media data object; and
rendering the second temporally sequenced content from the second media data object.
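For illustration only, the sketch below shows one way the gaps between contiguous common runs could be classified into the insertions, deletions, and substitutions of claim 13, and how a claim 6 style notification workflow might consume them. The `EditType` names, the `Edit` record, the `task_lookup` and `send` callables, and the message format are all hypothetical, introduced only for the example.

```python
from dataclasses import dataclass
from enum import Enum

class EditType(Enum):
    INSERTION = "insertion"
    DELETION = "deletion"
    SUBSTITUTION = "substitution"

@dataclass
class Edit:
    kind: EditType
    first_range: tuple   # (start, end) segment indices in the first sequence
    second_range: tuple  # (start, end) segment indices in the second sequence

def edits_between_runs(runs, first_len, second_len):
    """Classify the gaps between common runs as insertions, deletions, or substitutions."""
    edits, prev_i, prev_j = [], 0, 0
    boundaries = [(r[0][0], r[0][1], r[-1][0] + 1, r[-1][1] + 1) for r in runs]
    boundaries.append((first_len, second_len, first_len, second_len))
    for start_i, start_j, end_i, end_j in boundaries:
        gap_first, gap_second = start_i - prev_i, start_j - prev_j
        if gap_first and gap_second:
            edits.append(Edit(EditType.SUBSTITUTION, (prev_i, start_i), (prev_j, start_j)))
        elif gap_second:
            edits.append(Edit(EditType.INSERTION, (prev_i, prev_i), (prev_j, start_j)))
        elif gap_first:
            edits.append(Edit(EditType.DELETION, (prev_i, start_i), (prev_j, prev_j)))
        prev_i, prev_j = end_i, end_j
    return edits

def notify_assigned_user(edits, task_lookup, send):
    """Claim 6 style workflow: look up the user account whose task depends on the
    first media data object and send a notification describing each edit."""
    user = task_lookup("first_media_data_object")  # hypothetical lookup
    for edit in edits:
        send(user, f"{edit.kind.value} at segments {edit.second_range} of the new version")
```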
16. The computer-implemented method of claim 1, wherein:
the first media data object and the second media data object each comprise simultaneous video content and audio content;
identifying the set of common temporal subsequences between the first media data object and the second media data object comprises identifying a set of common temporal video subsequences and a set of common temporal audio subsequences; and
executing the workflow based on the set of edits comprises determining the workflow based at least in part on determining a difference between the set of common temporal video subsequences and the set of common temporal audio subsequences.

17. The computer-implemented method of claim 1, wherein identifying the set of edits comprises generating metadata that indicates a start time and an end time for each subsequence within the set of common temporal subsequences.

18. A system comprising:
at least one physical processor; and
physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to:
access a first media data object and a different, second media data object that, when played back, each render temporally sequenced content;
compare first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object;
identify a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object; and
execute a workflow relating to at least one of the first media data object and the second media data object based on the set of edits.

19. The system of claim 18, wherein comparing the first temporally sequenced content with the second temporally sequenced content comprises:
dividing the first temporally sequenced content into a first sequence of segments;
dividing the second temporally sequenced content into a second sequence of segments;
calculating a pairwise distance between each segment within the first sequence of segments and each segment within the second sequence of segments to identify one or more common segments between the first sequence and the second sequence, whose pairwise distance falls within a predetermined threshold, and one or more different segments between the first sequence and the second sequence, whose pairwise distance exceeds the predetermined threshold;
identifying the longest common subsequence of segments between the first sequence of segments and the second sequence of segments; and
identifying the set of common temporal subsequences from the longest common subsequence of segments by identifying a set of contiguous portions of the longest common subsequence of segments.
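For illustration only, a minimal sketch of the metadata described in claim 17, mapping each contiguous common run of segment indices to start and end times in both media objects. The fixed one-second segment duration and the dictionary field names are assumptions made for the example.

```python
def subsequence_metadata(runs, segment_duration=1.0):
    """Map contiguous common runs to start/end timestamps in both media objects."""
    metadata = []
    for run in runs:
        (first_start, second_start), (first_end, second_end) = run[0], run[-1]
        metadata.append({
            "first_start_time": first_start * segment_duration,
            "first_end_time": (first_end + 1) * segment_duration,
            "second_start_time": second_start * segment_duration,
            "second_end_time": (second_end + 1) * segment_duration,
        })
    return metadata
```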
20. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to:
access a first media data object and a different, second media data object that, when played back, each render temporally sequenced content;
compare first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object;
identify a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object; and
execute a workflow relating to at least one of the first media data object and the second media data object based on the set of edits.