What is claimed is:

1. A system configured to generate correspondences between portions of recorded audio content and records of a collaboration environment, the system comprising:

one or more physical processors configured by machine-readable instructions to:

manage environment state information maintaining a collaboration environment, the collaboration environment being configured to facilitate interaction by users with the collaboration environment, the environment state information including work unit records, the work unit records including work information characterizing units of work created within the collaboration environment and assigned within the collaboration environment to the users who are expected to accomplish one or more actions to complete the units of work, the work unit records including a first work unit record for a first unit of work previously assigned to a first user to complete, and a second work unit record for a second unit of work;

effectuate presentation of instances of a user interface on client computing platforms associated with the users, wherein the users access recorded audio content and provide user input through the instances of the user interface to generate user-provided correspondences between temporal content of the recorded audio content and one or more of the work unit records, the recorded audio content including utterances by one or more of the users, the temporal content corresponding to periods of time within the recorded audio content, the user input including identification of the temporal content within the recorded audio content, such that a first instance of the user interface is presented on a first client computing platform associated with the first user through which the first user accesses first recorded audio content and provides a first set of user inputs, the first set of user inputs including an identification of first temporal content within the first recorded audio content and an identification of second temporal content within the first recorded audio content, wherein the user interface includes a temporal selection portion, and wherein the user input further includes user interaction with the temporal selection portion to direct a playback of the recorded audio content to individual ones of the periods of time to identify the temporal content;

obtain user input information conveying the user input into the instances of the user interface, such that first user input information conveys the first set of user inputs into the first instance of the user interface by the first user;

generate, based on the user input information, correspondence information conveying the user-provided correspondences between the temporal content of the recorded audio content and the one or more of the work unit records so that the users accessing the one or more of the work unit records are also provided access to corresponding ones of the recorded audio content and/or the temporal content, such that, based on the first set of user inputs by the first user previously assigned to the first unit of work, first correspondence information and second correspondence information are generated, the first correspondence information conveying a first correspondence between the first temporal content of the first recorded audio content and the first work unit record, and the second correspondence information conveying a second correspondence between the second temporal content of the first recorded audio content and the second work unit record, such that access to the first temporal content of the first recorded audio content is provided while accessing the first work unit record, and access to the second temporal content of the first recorded audio content is provided while accessing the second work unit record;

monitor the first set of user inputs with the temporal selection portion displayed in the first instance of the user interface that leads to updated indications of position and duration of a first period of time that identifies the first temporal content within the first recorded audio content;

identify and store the updated indications within the first correspondence information; and

effectuate presentation of work unit pages of the collaboration environment through which the users access the work unit records, the work unit pages displaying work descriptions of respective work or user descriptions of respective users to whom the respective work is assigned, the work unit pages providing access, in response to further user input, to the instances of the user interface through which the users access the recorded audio content and/or the temporal content to provide the user input, such that a first work unit page is presented on the first client computing platform of the first user through which the first user accesses the first work unit record, the first work unit page providing the access to the first instance of the user interface through which the first user accesses the first recorded audio content and/or the first temporal content to provide the first set of user inputs.

2. The system of claim 1, wherein the user input further includes identification of individual work unit records that correspond to identified ones of the temporal content within the recorded audio content, such that the first set of user inputs further includes identification of the first work unit record as corresponding to the first temporal content, and identification of the second work unit record as corresponding to the second temporal content.

3. The system of claim 1, wherein the presentation of the instances of the user interface through which the users access the recorded audio content is limited to the users that are linked to the recorded audio content.

4. The system of claim 3, wherein the users that are linked to the recorded audio content include one or more of creators of the recorded audio content, assignees of individual ones of the work unit records, or one or more of the users who participated in the recorded audio content.

5. The system of claim 1, wherein the one or more physical processors are further configured by the machine-readable instructions to:

store the correspondence information in the work unit records, such that the first correspondence information is stored in the first work unit record, and the second correspondence information is stored in the second work unit record.

6. The system of claim 1, wherein the access to the recorded audio content and/or the temporal content is facilitated by resource identifiers appearing on the work unit pages, such that selection of a resource identifier identifies a work unit record and accesses a digital asset stored in the work unit record.

7. The system of claim 1, wherein the one or more physical processors are further configured by the machine-readable instructions to:

compile the correspondence information and the work information of the one or more of the work unit records into input/output pairs, the input/output pairs including training input information and training output information, the training input information for an individual input/output pair including the correspondence information for an individual one of the recorded audio content, the training output information for the individual input/output pair including the work information for an individual one of the work unit records, such that the first correspondence information and the work information for the first work unit record are compiled into a first input/output pair, and the second correspondence information and the work information for the second work unit record are compiled into a second input/output pair;

train a machine learning model based on the input/output pairs to generate a trained machine learning model, the trained machine learning model being configured to generate the correspondences between the temporal content of the recorded audio content and the work unit records, such that the machine learning model is trained using the first input/output pair and the second input/output pair to generate the trained machine learning model; and

store the trained machine learning model.

8. A method to generate correspondences between portions of recorded audio content and records of a collaboration environment, the method comprising:

managing environment state information maintaining a collaboration environment, the collaboration environment being configured to facilitate interaction by users with the collaboration environment, the environment state information including work unit records, the work unit records including work information characterizing units of work created within the collaboration environment and assigned within the collaboration environment to the users who are expected to accomplish one or more actions to complete the units of work, the work unit records including a first work unit record for a first unit of work previously assigned to a first user to complete, and a second work unit record for a second unit of work;

effectuating presentation of instances of a user interface on client computing platforms associated with the users, wherein the users access recorded audio content and provide user input through the instances of the user interface to generate user-provided correspondences between temporal content of the recorded audio content and one or more of the work unit records, the recorded audio content including utterances by one or more of the users, the temporal content corresponding to periods of time within the recorded audio content, the user input including identification of the temporal content within the recorded audio content, including presenting a first instance of the user interface on a first client computing platform associated with the first user through which the first user accesses first recorded audio content and provides a first set of user inputs, the first set of user inputs including an identification of first temporal content within the first recorded audio content and an identification of second temporal content within the first recorded audio content, wherein the user interface includes a temporal selection portion, and wherein the user input further includes user interaction with the temporal selection portion to direct a playback of the recorded audio content to individual ones of the periods of time to identify the temporal content;

obtaining user input information conveying the user input into the instances of the user interface, including obtaining first user input information conveying the first set of user inputs into the first instance of the user interface by the first user;

generating, based on the user input information, correspondence information conveying the user-provided correspondences between the temporal content of the recorded audio content and the one or more of the work unit records so that the users accessing the one or more of the work unit records are also provided access to corresponding ones of the recorded audio content and/or the temporal content, including, based on the first set of user inputs by the first user previously assigned to the first unit of work, generating first correspondence information and second correspondence information, the first correspondence information conveying a first correspondence between the first temporal content of the first recorded audio content and the first work unit record, and the second correspondence information conveying a second correspondence between the second temporal content of the first recorded audio content and the second work unit record, such that access to the first temporal content of the first recorded audio content is provided while accessing the first work unit record, and access to the second temporal content of the first recorded audio content is provided while accessing the second work unit record;

monitoring the first set of user inputs with the temporal selection portion displayed in the first instance of the user interface that leads to updated indications of position and duration of a first period of time that identifies the first temporal content within the first recorded audio content;

identifying and storing the updated indications within the first correspondence information; and

effectuating presentation of work unit pages of the collaboration environment through which the users access the work unit records, the work unit pages displaying work descriptions of respective work or user descriptions of respective users to whom the respective work is assigned, the work unit pages providing access, in response to further user input, to the instances of the user interface through which the users access the recorded audio content and/or the temporal content to provide the user input, such that a first work unit page is presented on the first client computing platform of the first user through which the first user accesses the first work unit record, the first work unit page providing the access to the first instance of the user interface through which the first user accesses the first recorded audio content and/or the first temporal content to provide the first set of user inputs.

9. The method of claim 8, wherein the user input further includes identification of individual work unit records that correspond to identified ones of the temporal content within the recorded audio content, such that the first set of user inputs further includes identification of the first work unit record as corresponding to the first temporal content, and identification of the second work unit record as corresponding to the second temporal content.

10. The method of claim 8, wherein the presentation of the instances of the user interface through which the users access the recorded audio content is limited to the users that are linked to the recorded audio content.

11. The method of claim 10, wherein the users that are linked to the recorded audio content include one or more of creators of the recorded audio content, assignees of individual ones of the work unit records, or one or more of the users who participated in the recorded audio content.

12. The method of claim 8, further comprising:

storing the correspondence information in the work unit records, such that the first correspondence information is stored in the first work unit record, and the second correspondence information is stored in the second work unit record.

13. The method of claim 8, wherein the access to the recorded audio content and/or the temporal content is facilitated by resource identifiers appearing on the work unit pages, such that selection of a resource identifier identifies a work unit record and accesses a digital asset stored in the work unit record.

14. The method of claim 8, further comprising:

compiling the correspondence information and the work information of the one or more of the work unit records into input/output pairs, the input/output pairs including training input information and training output information, the training input information for an individual input/output pair including the correspondence information for an individual one of the recorded audio content, the training output information for the individual input/output pair including the work information for an individual one of the work unit records, including compiling the first correspondence information and the work information for the first work unit record into a first input/output pair, and compiling the second correspondence information and the work information for the second work unit record into a second input/output pair;

training a machine learning model based on the input/output pairs to generate a trained machine learning model, the trained machine learning model being configured to generate the correspondences between the temporal content of the recorded audio content and the work unit records, including training the machine learning model based on the first input/output pair and the second input/output pair to generate the trained machine learning model; and

storing the trained machine learning model.