What is claimed is:

1. A method, comprising:
    selecting a subset of a plurality of encoding profiles, each encoding profile including a unique set of values for a set of encoding parameters, the selected encoding profiles representing optimal coverage of a first bit rate-quality data space that encompasses all of the sets of values of encoding parameters represented by the plurality of encoding profiles;
    selecting previously-live video presentations, the previously-live video presentations comprising videos of a given category of live content, wherein the previously-live video presentations are a subset of less than all the video presentations in a collection of previously-live video content;
    selecting, from each of the previously-live video presentations, representative segments;
    encoding each of the representative segments using each of the plurality of encoding profiles, thereby generating a plurality of encoded video segments;
    determining values for a plurality of attributes for each of the plurality of encoded video segments, the plurality of attributes including a video quality attribute, a bit rate attribute, and an encoding runtime attribute;
    identifying a first subset of the plurality of encoded video segments based on a first plurality of constraints, the first plurality of constraints including a minimum video quality constraint, a maximum bit rate constraint, and a maximum encoding runtime constraint;
    identifying a plurality of segment-optimized encoding ladders, each segment-optimized encoding ladder being optimized for a respective one of the representative segments, wherein identifying the plurality of segment-optimized encoding ladders comprises identifying a second subset of the first subset of the plurality of encoded video segments based on a second constraint, the second constraint corresponding to a combination of the video quality attribute, the bit rate attribute, and the encoding runtime attribute;
    selecting, from the plurality of segment-optimized encoding ladders, a first segment-optimized encoding ladder; and
    encoding, in real-time, future live events of the given category of live content using the first segment-optimized encoding ladder.

2. The method of claim 1, further comprising, for each of the representative segments:
    generating a boundary curve by curve fitting in the first bit rate-quality data space to the encoded video segments generated by encoding the representative segment using the plurality of encoding profiles, the boundary curve bounding a peak quality for the encoded video segments,
    wherein identifying the segment-optimized encoding ladder for the representative segment comprises identifying an encoding ladder optimized for the representative segment by minimizing an area or volume between the boundary curve and performance curves associated with the encoded video segments.

3. The method of claim 2, wherein selecting the first segment-optimized encoding ladder comprises determining that the first segment-optimized encoding ladder has a performance curve with the smallest sum of areas or volumes between the performance curve and the boundary curve bounding the peak quality for the encoded video segments, relative to at least some of the performance curves associated with the other encoded video segments.
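Claims 2 and 3 recite fitting a boundary curve that bounds peak quality in the bit rate-quality space and scoring candidate ladders by the area between their performance curves and that boundary. The following is a minimal illustrative sketch of that idea, not the claimed implementation; the saturating fit family, the upper-envelope preprocessing, and all names (`boundary_model`, `fit_boundary`, `area_to_boundary`) are assumptions introduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def boundary_model(bitrate, a, b, c):
    # Assumed saturating quality-vs-bitrate shape; the actual fit family is a design choice.
    return a - b * np.exp(-c * bitrate)

def fit_boundary(bitrates, qualities):
    """Fit a curve bounding peak quality for one segment's encodings."""
    br = np.asarray(bitrates, dtype=float)
    q = np.asarray(qualities, dtype=float)
    order = np.argsort(br)
    br, q = br[order], q[order]
    env_q = np.maximum.accumulate(q)  # crude upper envelope: best quality seen up to each bitrate
    params, _ = curve_fit(boundary_model, br, env_q, p0=[100.0, 50.0, 1e-3], maxfev=10000)
    return params

def area_to_boundary(params, ladder_bitrates, ladder_qualities):
    """Area between the boundary curve and a ladder's performance curve (smaller is better)."""
    br = np.asarray(ladder_bitrates, dtype=float)
    q = np.asarray(ladder_qualities, dtype=float)
    order = np.argsort(br)
    grid = np.linspace(br.min(), br.max(), 256)
    gap = boundary_model(grid, *params) - np.interp(grid, br[order], q[order])
    return np.trapz(np.clip(gap, 0.0, None), grid)
```

Selecting the first segment-optimized encoding ladder, as in claim 3, then amounts to summing such areas across the representative segments and keeping the ladder with the smallest total.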
4. The method of claim 1, wherein selecting, from each of the previously-live video presentations, the representative segments comprises:
    for the previously-live video presentation, determining an encoded bitrate distribution of a constant-quality-constrained encoding of the presentation, wherein determining the encoded bitrate distribution comprises determining that 5% of the constant-quality-constrained encoding has an encoded bitrate above a given encoded bitrate; and
    for the previously-live video presentation, selecting the representative segments such that each of the representative segments selected from the previously-live video presentation has an average encoded bitrate that is at least as high as the given encoded bitrate.

5. A method, comprising:
    selecting previously-live video presentations characterized by a set of characteristics;
    encoding each of the previously-live video presentations using each of a plurality of encoding profiles, thereby generating a set of encoded video presentations for each of the previously-live video presentations;
    determining values for each of a plurality of attributes for each of the encoded video presentations, the plurality of attributes including a quality attribute, a bit rate attribute, and an encoding runtime attribute, the plurality of attributes defining a multi-dimensional space;
    identifying an encoding ladder for each set of encoded video presentations by identifying a subset of the set of encoded video presentations based on a constraint within the multi-dimensional space, the constraint being based on the quality attribute, the bit rate attribute, and the encoding runtime attribute;
    selecting a first encoding ladder from among the encoding ladders, the first encoding ladder best meeting the constraint for all of the sets of encoded video presentations; and
    encoding live content characterized by the set of characteristics using the first encoding ladder.

6. The method of claim 5, wherein the set of characteristics comprises content-based characteristics.

7. The method of claim 5, wherein identifying the encoding ladder for each set of encoded video presentations comprises:
    generating a boundary curve by curve fitting in the multi-dimensional space to the set of encoded video presentations, the boundary curve bounding a peak quality for the set of encoded video presentations; and
    identifying, as the encoding ladder for the set, a set of encoding profiles that minimize an area or volume below the boundary curve.

8. The method of claim 7, wherein selecting the first encoding ladder from among the encoding ladders comprises:
    for each of the encoding ladders, summing areas or volumes between the boundary curves of the other sets of encoded video presentations and respective performance results of encoding the previously-live video presentations with that encoding ladder; and
    selecting the encoding ladder having the smallest sum of areas or volumes as the first encoding ladder.

9. The method of claim 5, wherein selecting the first encoding ladder from among the encoding ladders comprises:
    for each of the encoding ladders, determining how well the encoding ladder performs when encoding each of the previously-live video presentations; and
    determining that the first encoding ladder, out of all the encoding ladders, has the best average performance when encoding each of the previously-live video presentations.
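Claim 4 (and the 5% and 20% variants in claims 19 and 20) selects representative segments by thresholding on the bitrate distribution of a constant-quality-constrained encode. A minimal sketch of that selection follows, assuming the distribution is approximated by per-segment average bitrates; the function name and the `fraction` parameter are illustrative.

```python
import numpy as np

def select_representative_segments(segment_avg_bitrates, fraction=0.05):
    """Return (indices, threshold): indices of segments whose average encoded bitrate
    is at least the bitrate above which `fraction` of the constant-quality-constrained
    encode lies (e.g. fraction=0.05 per claim 4, fraction=0.20 per claim 20)."""
    rates = np.asarray(segment_avg_bitrates, dtype=float)
    threshold = np.quantile(rates, 1.0 - fraction)  # bitrate exceeded by the top `fraction`
    return np.flatnonzero(rates >= threshold), threshold
```

For example, `select_representative_segments(avg_kbps, fraction=0.20)` keeps only segments at or above the bitrate that the highest-bitrate 20% of the encode exceeds.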
10. The method of claim 5, wherein the encoding ladder for each set of encoded video presentations comprises a presentation-optimized encoding ladder optimized for the previously-live video presentation from which the set of encoded video presentations was generated.

11. The method of claim 5, wherein the first encoding ladder comprises an encoding ladder optimized for encoding live content characterized by the set of characteristics.

12. The method of claim 5, further comprising:
    selecting additional previously-live video presentations characterized by another set of characteristics;
    encoding each of the additional previously-live video presentations using each of the plurality of encoding profiles, thereby generating a set of additional encoded video presentations for each of the additional previously-live video presentations;
    determining values for each of the plurality of attributes for each of the additional encoded video presentations;
    identifying an encoding ladder for each set of additional encoded video presentations by identifying a subset of the set of additional encoded video presentations based on the constraint within the multi-dimensional space;
    selecting a second encoding ladder from among the encoding ladders associated with each set of additional encoded video presentations, the second encoding ladder best meeting the constraint for all of the sets of additional encoded video presentations; and
    encoding live content characterized by the another set of characteristics using the second encoding ladder.

13. A system, comprising:
    one or more processors and memory configured to:
        select a subset of a plurality of encoding profiles, each encoding profile including a unique set of values for a set of encoding parameters, the selected encoding profiles representing a first bit rate-quality data space that encompasses all of the sets of values of encoding parameters represented by the plurality of encoding profiles;
        select previously-live video presentations of a shared content category, wherein the previously-live video presentations are a subset of less than all the video presentations in a collection of previously-live video content;
        encode at least a portion of each of the previously-live video presentations using each of the plurality of encoding profiles, thereby generating a plurality of encoded video segments;
        determine values for a plurality of attributes for each of the plurality of encoded video segments, the plurality of attributes including a video quality attribute and a bit rate attribute;
        identify a first subset of the plurality of encoded video segments based on a first plurality of constraints, the first plurality of constraints including a minimum video quality constraint and a maximum bit rate constraint;
        identify a plurality of encoding ladders, wherein identifying the plurality of encoding ladders comprises identifying a second subset of the first subset of the plurality of encoded video segments based on a second constraint, the second constraint corresponding to a combination of the video quality attribute and the bit rate attribute;
        select, from the plurality of encoding ladders, a first encoding ladder; and
        encode, in real-time, future live events of the shared content category of live content using the first encoding ladder.

14. The system of claim 13, wherein the first plurality of constraints further comprises a maximum encoding runtime constraint.
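Claims 1, 13, and 14 filter the encoded segments against hard constraints (minimum quality, maximum bit rate, and optionally maximum encoding runtime) before candidate ladders are formed. A minimal sketch of that filtering step follows; the `EncodedSegment` record and field names are assumptions for illustration, not the claimed data model.

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional

@dataclass
class EncodedSegment:
    profile_id: str          # which encoding profile produced this segment
    quality: float           # e.g. a VMAF-like score (assumption)
    bitrate_kbps: float
    encode_runtime_s: float

def filter_by_constraints(segments: Iterable[EncodedSegment],
                          min_quality: float,
                          max_bitrate_kbps: float,
                          max_runtime_s: Optional[float] = None) -> List[EncodedSegment]:
    """Return the subset of encoded segments meeting all hard constraints
    (runtime constraint applied only when given, as in claims 13 vs. 14)."""
    kept = []
    for seg in segments:
        if seg.quality < min_quality:
            continue
        if seg.bitrate_kbps > max_bitrate_kbps:
            continue
        if max_runtime_s is not None and seg.encode_runtime_s > max_runtime_s:
            continue
        kept.append(seg)
    return kept
```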
15. The system of claim 13, wherein identifying the plurality of encoding ladders comprises:
    for each previously-live video presentation, generating a boundary curve by curve fitting in the first bit rate-quality data space to the encoded video segments encoded from the previously-live video presentation, the boundary curve representing a peak encoding quality as a function of bitrate for encodings of the previously-live video presentation; and
    for each previously-live video presentation, identifying a set of encoding profiles, which together form the encoding ladder, that minimizes an area or volume under the boundary curve for encodings of the previously-live video presentation.

16. The system of claim 15, wherein identifying the first encoding ladder comprises:
    for each of the encoding ladders, measuring performance results of that encoding ladder in encoding all of the previously-live video presentations by summing the areas or volumes between the boundary curves and the performance results of that encoding ladder; and
    identifying, as the first encoding ladder, whichever encoding ladder has the smallest sum of areas or volumes between the boundary curves and the performance results of that encoding ladder.

17. The system of claim 13, wherein identifying the first encoding ladder comprises:
    for each of the encoding ladders, measuring performance results of that encoding ladder in encoding all of the previously-live video presentations; and
    identifying, as the first encoding ladder, whichever encoding ladder has the best performance results when averaged across all of the previously-live video presentations.

18. The system of claim 13, wherein the shared content category comprises a category selected from the group consisting of: sporting events held by a given sports league, events in which a given team is playing, sporting events broadcast by a given media network, regular season sporting events of a given sports league, playoff season sporting events of a given sports league, live motorsports events, live outdoor concerts, live indoor concerts, newscasts, live variety shows, online auction broadcasts, online gaming broadcasts, live broadcasts using fixed or stationary cameras, and live broadcasts using mobile cameras.

19. The system of claim 13, wherein, for each previously-live video presentation, encoding at least the portion of the previously-live video presentation comprises:
    determining an encoded bitrate distribution of an encoding of the previously-live video presentation made with a given constant-quality constraint;
    determining that 5% of the constant-quality-constrained encoding has an encoded bitrate above a given encoded bitrate; and
    selecting, as the portion to encode using each of the plurality of encoding profiles, first segments of the previously-live video presentation such that each of the first segments, when encoded with the given constant-quality constraint, has an average encoded bitrate that is at least as high as the given encoded bitrate.
20. The system of claim 13, wherein, for each previously-live video presentation, encoding at least the portion of the previously-live video presentation comprises:
    determining an encoded bitrate distribution of an encoding of the previously-live video presentation made with a given constant-quality constraint;
    determining that 20% of the constant-quality-constrained encoding has an encoded bitrate above a given encoded bitrate; and
    selecting, as the portion to encode using each of the plurality of encoding profiles, first segments of the previously-live video presentation such that each of the first segments, when encoded with the given constant-quality constraint, has an average encoded bitrate that is at least as high as the given encoded bitrate.
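Claims 16 and 17 (mirroring claims 8 and 9) select the final encoding ladder by evaluating every candidate ladder against every previously-live presentation and keeping the one with the smallest summed gap to the boundary curves, equivalently the best average performance. A minimal sketch, assuming an `area_between(ladder, presentation)` callable in the spirit of `area_to_boundary` above; all names here are illustrative, not part of the claims.

```python
def select_best_ladder(ladders, presentations, area_between):
    """Return (best_ladder, best_total): the candidate ladder with the smallest total
    area (or volume) to the per-presentation boundary curves, i.e. the ladder that
    best meets the constraint across all previously-live presentations."""
    best_ladder, best_total = None, float("inf")
    for ladder in ladders:
        total = sum(area_between(ladder, p) for p in presentations)
        if total < best_total:
            best_ladder, best_total = ladder, total
    return best_ladder, best_total
```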