The invention claimed is:

1. A method using a processor, the method comprising:
receiving an input 3D image, the input 3D image comprising a left eye (LE) input image frame and a right eye (RE) input image frame;
generating, based on the LE input image frame and the RE input image frame, a first multiplexed image frame comprising first spatial frequency content unfiltered in a vertical direction including first high spatial frequency content in the vertical direction and first reduced resolution content in a horizontal direction;
wherein the first spatial frequency content unfiltered in the vertical direction comprises spatial frequency content of the LE input image frame and the RE input image frame, unfiltered in the vertical direction;
wherein the first high spatial frequency content in the vertical direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the vertical direction;
wherein the first reduced resolution content in the horizontal direction comprises reduced resolution content of the LE input image frame and the RE input image frame, in the horizontal direction;
generating, based on the LE input image frame and the RE input image frame, a second multiplexed image frame comprising second spatial frequency content unfiltered in the horizontal direction including second high spatial frequency content in the horizontal direction and second reduced resolution content in the vertical direction;
wherein one of the first multiplexed image frame or the second multiplexed image frame comprises residual image data in combination with carrier image data, wherein the residual image data is generated by subtracting reference image data generated based on the other of the first multiplexed image frame or the second multiplexed image frame from input image data derived from the LE input image frame and the RE input image frame;
wherein all of the carrier image data comprises pixel values of the same fixed value;
wherein the second spatial frequency content unfiltered in the horizontal direction comprises spatial frequency content of the LE input image frame and the RE input image frame, unfiltered in the horizontal direction;
wherein the second high spatial frequency content in the horizontal direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the horizontal direction;
wherein the second reduced resolution content in the vertical direction comprises reduced resolution content of the LE input image frame and the RE input image frame, in the vertical direction; and
encoding and outputting the first multiplexed image frame and the second multiplexed image frame to represent the input 3D image.

2. The method as recited in claim 1, wherein the input 3D image is a first input 3D image in a sequence of input 3D images comprising a second, different input 3D image having a second LE input image frame and a second RE input image frame; the method further comprising:
generating, based on the second LE input image frame and the second RE input image frame, a third multiplexed image frame comprising third high spatial frequency content in the vertical direction and third reduced resolution content in the horizontal direction;
generating, based on the second LE input image frame and the second RE input image frame, a fourth multiplexed image frame comprising fourth high spatial frequency content in the horizontal direction and fourth reduced resolution content in the vertical direction; and
encoding and outputting the third multiplexed image frame and the fourth multiplexed image frame to represent the second input 3D image.

3. 
The method as recited in claim 1, wherein the first multiplexed image frame comprises a first LE image data portion and a first RE image data portion; wherein the first LE image data portion and the first RE image data portion are of a same spatial resolution along both horizontal and vertical directions; wherein the second multiplexed image frame comprises a second LE image data portion and a second RE image data portion; and wherein the second LE image data portion and the second RE image data portion are of a same spatial resolution along both horizontal and vertical directions.

4. The method as recited in claim 3, wherein each of the first LE image data portion and the first RE image data portion represents a subsampled version of a full resolution image frame; wherein the first multiplexed image frame adopts a side-by-side (SbS) format to carry the first LE image data portion and the first RE image data portion; wherein each of the second LE image data portion and the second RE image data portion represents a subsampled version of a full resolution image frame; and wherein the second multiplexed image frame adopts a top-and-bottom (TaB) format to carry the second LE image data portion and the second RE image data portion.

5. The method as recited in claim 1, wherein the first multiplexed image frame adopts a first multiplexing format that preserves the first high spatial frequency content in the vertical direction, and wherein the second multiplexed image frame adopts a second multiplexing format that preserves the second high spatial frequency content in the horizontal direction.

6. The method as recited in claim 1, wherein one of the first multiplexed image frame or the second multiplexed image frame is outputted in a base layer bitstream in a plurality of bitstreams, while the other of the first multiplexed image frame or the second multiplexed image frame is outputted in an enhancement layer bitstream in the plurality of bitstreams.

7. 
The method as recited in claim 1, further comprising:
generating, based at least in part on the first multiplexed image frame, prediction reference image data; and
encoding an enhancement layer video signal based on differences between the prediction reference image data and the input 3D image.

8. The method as recited in claim 1, further comprising:
applying one or more first operations comprising at least one of (a) spatial frequency filtering operations or (b) spatial subsampling operations in the horizontal direction to the LE input image frame and the RE input image frame in generating the first multiplexed image frame, wherein the one or more first operations remove high spatial frequency content in the horizontal direction and preserve high spatial frequency content in the vertical direction; and
applying one or more second operations comprising at least one of (a) spatial frequency filtering operations or (b) spatial subsampling operations in the vertical direction to the LE input image frame and the RE input image frame in generating the second multiplexed image frame, wherein the one or more second operations remove high spatial frequency content in the vertical direction and preserve high spatial frequency content in the horizontal direction.

9. The method as recited in claim 1, further comprising converting one or more 3D input images represented, received, transmitted, or stored with one or more input video signals into one or more 3D output images represented, received, transmitted, or stored with one or more output video signals.

10. 
The method as recited in claim 1, wherein the input 3D image comprises image data encoded in one of a high dynamic range (HDR) image format, an RGB color space associated with the Academy Color Encoding Specification (ACES) standard of the Academy of Motion Picture Arts and Sciences (AMPAS), a P3 color space standard of the Digital Cinema Initiative, a Reference Input Medium Metric/Reference Output Medium Metric (RIMM/ROMM) standard, an sRGB color space, or an RGB color space associated with the BT.709 Recommendation standard of the International Telecommunication Union (ITU).

11. A method using a processor, the method comprising:
receiving a 3D image represented by a first multiplexed image frame and a second multiplexed image frame, the first multiplexed image frame comprising first spatial frequency content unfiltered in a vertical direction including first high spatial frequency content in the vertical direction and first reduced resolution content in a horizontal direction, and the second multiplexed image frame comprising second spatial frequency content unfiltered in the horizontal direction including second high spatial frequency content in the horizontal direction and second reduced resolution content in the vertical direction;
wherein one of the first multiplexed image frame or the second multiplexed image frame comprises residual image data, wherein the residual image data has been generated by subtracting reference image data generated based on the other of the first multiplexed image frame or the second multiplexed image frame from input image data derived from an LE input image frame and an RE input image frame;
wherein the first spatial frequency content unfiltered in the vertical direction comprises spatial frequency content of the LE input image frame and the RE input image frame, unfiltered in the vertical direction;
wherein the first high spatial frequency content in the vertical direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the vertical direction;
wherein the first reduced resolution content in the horizontal direction comprises reduced resolution content of the LE input image frame and the RE input image frame, in the horizontal direction;
wherein the second spatial frequency content unfiltered in the horizontal direction comprises spatial frequency content of the LE input image frame and the RE input image frame, unfiltered in the horizontal direction;
wherein the second high spatial frequency content in the horizontal direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the horizontal direction;
wherein the second reduced resolution content in the vertical direction comprises reduced resolution content of the LE input image frame and the RE input image frame, in the vertical direction;
wherein one of the first multiplexed image frame and the second multiplexed image frame comprises a residual image frame and a carrier image frame;
wherein the entire carrier image frame comprises pixel values of the same fixed value;
generating, based on the first multiplexed image frame and the second multiplexed image frame, a left eye (LE) image frame and a right eye (RE) image frame, the LE image frame comprising LE high spatial frequency content in both horizontal and vertical directions, and the RE image frame comprising RE high spatial frequency content in both horizontal and vertical directions; and
rendering the 3D image by rendering the LE image frame and the RE image frame.

12. 
The method as recited in claim 11, wherein the 3D image is a first 3D image in a sequence of 3D images comprising a second, different 3D image having a third multiplexed image frame and a fourth multiplexed image frame, the third multiplexed image frame comprising third high spatial frequency content in the vertical direction and third reduced resolution content in the horizontal direction, and the fourth multiplexed image frame comprising fourth high spatial frequency content in the horizontal direction and fourth reduced resolution content in the vertical direction; the method further comprising:
generating a second LE image frame and a second RE image frame, the second LE image frame comprising high spatial frequency content in both horizontal and vertical directions, and the second RE image frame comprising high spatial frequency content in both horizontal and vertical directions; and
rendering the second 3D image by rendering the second LE image frame and the second RE image frame.

13. The method as recited in claim 11, wherein at least one of the first multiplexed image frame or the second multiplexed image frame comprises an LE image data portion and an RE image data portion; and wherein the LE image data portion and the RE image data portion are of a same spatial resolution.

14. The method as recited in claim 13, wherein each of the LE image data portion and the RE image data portion represents a subsampled version of a full resolution image frame; and wherein the LE image data portion and the RE image data portion form a single image frame in one of a side-by-side format or a top-and-bottom format.

15. 
The method as recited in claim 11, wherein one of the first multiplexed image frame or the second multiplexed image frame is decoded from a base layer bitstream in a plurality of bitstreams, while the other of the first multiplexed image frame or the second multiplexed image frame is decoded from an enhancement layer bitstream in the plurality of bitstreams.

16. The method as recited in claim 11, further comprising:
generating, based at least in part on one of the first multiplexed image frame or the second multiplexed image frame, prediction reference image data; and
generating, based on enhancement layer (EL) data decoded from an EL video signal and the prediction reference image data, one of the LE image frame or the RE image frame.

17. The method as recited in claim 11, further comprising:
applying one or more first operations comprising at least one of (a) spatial frequency filtering operations or (b) demultiplexing operations in generating the LE image frame, wherein the one or more first operations combine LE high spatial frequency content, as derived from the first multiplexed image frame and the second multiplexed image frame, of both horizontal and vertical directions into the LE image frame; and
applying one or more second operations comprising at least one of (a) spatial frequency filtering operations or (b) demultiplexing operations in generating the RE image frame, wherein the one or more second operations combine RE high spatial frequency content, as derived from the first multiplexed image frame and the second multiplexed image frame, of both horizontal and vertical directions into the RE image frame.

18. The method as recited in claim 17, wherein the one or more first operations and the one or more second operations comprise at least a high-pass filtering operation.

19. 
The method as recited in claim 17, wherein the one or more first operations and the one or more second operations comprise a processing sub-path that replaces at least one high-pass filtering operation; and wherein the processing sub-path comprises at least one subtraction operation and no high-pass filtering operation.

20. The method as recited in claim 11, further comprising:
decoding and processing enhancement layer image data without generating prediction reference data from the other of the first multiplexed image frame or the second multiplexed image frame.

21. The method as recited in claim 11, further comprising processing one or more 3D images represented, received, transmitted, or stored with one or more input video signals.

22. The method as recited in claim 11, wherein the 3D image comprises image data encoded in one of a high dynamic range (HDR) image format, an RGB color space associated with the Academy Color Encoding Specification (ACES) standard of the Academy of Motion Picture Arts and Sciences (AMPAS), a P3 color space standard of the Digital Cinema Initiative, a Reference Input Medium Metric/Reference Output Medium Metric (RIMM/ROMM) standard, an sRGB color space, or an RGB color space associated with the BT.709 Recommendation standard of the International Telecommunication Union (ITU).

23. 
A system, comprising:
an encoder configured to:
receive an input 3D image, the input 3D image comprising a left eye (LE) input image frame and a right eye (RE) input image frame;
generate, based on the LE input image frame and the RE input image frame, a first multiplexed image frame comprising first spatial frequency content unfiltered in a vertical direction including first high spatial frequency content in the vertical direction and first reduced resolution content in a horizontal direction;
wherein the first spatial frequency content unfiltered in the vertical direction comprises spatial frequency content of the LE input image frame and the RE input image frame, unfiltered in the vertical direction;
wherein the first high spatial frequency content in the vertical direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the vertical direction;
wherein the first reduced resolution content in the horizontal direction comprises reduced resolution content of the LE input image frame and the RE input image frame, in the horizontal direction;
generate, based on the LE input image frame and the RE input image frame, a second multiplexed image frame comprising second spatial frequency content unfiltered in the horizontal direction including second high spatial frequency content in the horizontal direction and second reduced resolution content in the vertical direction;
wherein one of the first multiplexed image frame or the second multiplexed image frame comprises residual image data in combination with carrier image data, wherein the residual image data is generated by subtracting reference image data generated based on the other of the first multiplexed image frame or the second multiplexed image frame from input image data derived from the LE input image frame and the RE input image frame;
wherein all of the carrier image data comprises pixel values of the same fixed value;
wherein the second spatial frequency content unfiltered in the horizontal direction comprises spatial frequency content of the LE input image frame and the RE input image frame, unfiltered in the horizontal direction;
wherein the second high spatial frequency content in the horizontal direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the horizontal direction;
wherein the second reduced resolution content in the vertical direction comprises reduced resolution content of the LE input image frame and the RE input image frame, in the vertical direction; and
encode and output the first multiplexed image frame and the second multiplexed image frame to represent the input 3D image; and
a decoder configured to:
receive the first multiplexed image frame and the second multiplexed image frame in a plurality of video bitstreams;
wherein one of the first multiplexed image frame and the second multiplexed image frame comprises the residual image data in combination with the carrier image data;
wherein all of the carrier image data comprises pixel values of the same fixed value;
generate, based on the first multiplexed image frame and the second multiplexed image frame, a left eye (LE) image frame and a right eye (RE) image frame, the LE image frame comprising LE high spatial frequency content in both horizontal and vertical directions, and the RE image frame comprising RE high spatial frequency content in both horizontal and vertical directions; and
render the LE image frame and the RE image frame.

24. 
A method for encoding 3D frame compatible full resolution (FCFR) images using a processor, the method comprising:
receiving an input 3D image, the input 3D image comprising a left eye (LE) input image frame and a right eye (RE) input image frame;
generating, based on the LE input image frame and the RE input image frame, a first multiplexed image frame comprising first spatial frequency content unfiltered in a first spatial direction including first high spatial frequency content in the first spatial direction and first reduced resolution content in a second spatial direction, the second spatial direction being orthogonal to the first spatial direction;
wherein the first spatial frequency content unfiltered in the first spatial direction comprises spatial frequency content of the LE input image frame and the RE input image frame, unfiltered in the first spatial direction;
wherein the first high spatial frequency content in the first spatial direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the first spatial direction;
wherein the first reduced resolution content in the second spatial direction comprises reduced resolution content of the LE input image frame and the RE input image frame, in the second spatial direction;
generating, based on the first multiplexed image frame, reference image data and carrier image data;
subtracting the reference image data from image data of the input 3D image to generate residual image data;
generating, based on the residual image data and the carrier image data, a second multiplexed image frame comprising second spatial frequency content unfiltered in the second spatial direction including second high spatial frequency content in the second spatial direction and second reduced resolution content in the first spatial direction;
wherein the second spatial frequency content unfiltered in the second spatial direction comprises spatial frequency content of the LE input image frame and the RE input image frame, unfiltered in the second spatial direction;
wherein the second high spatial frequency content in the second spatial direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the second spatial direction; and
encoding and outputting the first multiplexed image frame and the second multiplexed image frame to represent the input 3D image,
wherein generating the carrier image data comprises:
applying a first spatial filtering in a first direction to a base layer image frame to down-sample the base layer image frame and generate an intermediate image, the base layer image frame being based at least in part on the first multiplexed image frame; and
applying a second spatial filtering in a second direction to the intermediate image to up-sample the intermediate image and generate the carrier image data, wherein the second direction is orthogonal to the first direction;
wherein all of the carrier image data comprises pixel values of the same fixed value.

25. A method using a processor for decoding 3D signals coded in a frame compatible full resolution (FCFR) format, the method comprising:
receiving a 3D image represented by a first multiplexed image frame and a second multiplexed image frame, the first multiplexed image frame comprising first spatial frequency content unfiltered in a first spatial direction including first high spatial frequency content in the first spatial direction and first reduced resolution content in a second spatial direction, and the second multiplexed image frame comprising second spatial frequency content unfiltered in the second spatial direction including second high spatial frequency content in the second spatial direction and second reduced resolution content in the first spatial direction, wherein the second spatial direction is orthogonal to the first spatial direction;
wherein the first spatial frequency content unfiltered in the first spatial direction comprises spatial frequency content of an LE input image frame and an RE input image frame, unfiltered in the first spatial direction;
wherein the first high spatial frequency content in the first spatial direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the first spatial direction;
wherein the first reduced resolution content in the second spatial direction comprises reduced resolution content of the LE input image frame and the RE input image frame, in the second spatial direction;
wherein the second spatial frequency content unfiltered in the second spatial direction comprises spatial frequency content of the LE input image frame and the RE input image frame, unfiltered in the second spatial direction;
wherein the second high spatial frequency content in the second spatial direction comprises high spatial frequency content of the LE input image frame and the RE input image frame, in the second spatial direction;
wherein the second reduced resolution content in the first spatial direction comprises reduced resolution content of the LE input image frame and the RE input image frame, in the first spatial direction;
generating, based on the first multiplexed image frame, a frame-compatible pair of image frames (FC-L, FC-R) comprising the first spatial frequency content unfiltered in the first spatial direction including the first high spatial frequency content in the first spatial direction and the first reduced resolution content in the second spatial direction; and
generating, based on the first multiplexed image frame and the second multiplexed image frame, a full-resolution pair of frames (FR-LE, FR-RE) comprising high spatial frequency content in both the first spatial direction and the second spatial direction, wherein the second multiplexed image frame comprises a residual image frame combined with a carrier image frame, and wherein the carrier image frame is generated based on the first multiplexed image frame and then subtracted from the second multiplexed image frame to generate the residual image frame;
wherein the entire carrier image frame comprises pixel values of the same fixed value.

26. The method of claim 25, further comprising applying a high-pass filter in the second spatial direction to the second multiplexed image frame to generate the residual image frame.
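As a non-normative illustration of the directional filtering and multiplexing recited in claims 1, 4, and 8 above, the sketch below builds a side-by-side (SbS) frame that keeps vertical frequency content unfiltered and a top-and-bottom (TaB) frame that keeps horizontal frequency content unfiltered. The function names, the 2-tap averaging filter, and the NumPy framing are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def lowpass_1d(img, axis):
    # 2-tap averaging filter along one axis (an illustrative stand-in for
    # the anti-alias filter; the claims do not mandate a specific kernel).
    return (img + np.roll(img, -1, axis=axis)) / 2.0

def side_by_side(le, re):
    # First multiplexed frame: filter and subsample horizontally (axis=1),
    # leaving vertical frequency content unfiltered, then pack LE | RE.
    le_h = lowpass_1d(le, axis=1)[:, ::2]
    re_h = lowpass_1d(re, axis=1)[:, ::2]
    return np.concatenate([le_h, re_h], axis=1)

def top_and_bottom(le, re):
    # Second multiplexed frame: filter and subsample vertically (axis=0),
    # leaving horizontal frequency content unfiltered, then stack LE / RE.
    le_v = lowpass_1d(le, axis=0)[::2, :]
    re_v = lowpass_1d(re, axis=0)[::2, :]
    return np.concatenate([le_v, re_v], axis=0)

le = np.random.rand(8, 8)
re = np.random.rand(8, 8)
sbs = side_by_side(le, re)    # 8 x 8 frame holding two 8 x 4 half-width views
tab = top_and_bottom(le, re)  # 8 x 8 frame holding two 4 x 8 half-height views
```

Each multiplexed frame occupies the footprint of a single full-resolution frame, which is what makes the pair frame-compatible.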
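The residual-plus-carrier construction of claims 1, 24, and 25 can likewise be sketched: a carrier whose pixels all share one fixed value, a residual formed by subtracting reference data from input data, and decoder-side recovery by regenerating and subtracting the carrier. The specific carrier value (0.5) and function names are assumptions for illustration only.

```python
import numpy as np

# Illustrative fixed carrier value; the claims require only that every
# carrier pixel share one fixed value, not what that value is.
FIXED_CARRIER_VALUE = 0.5

def make_carrier(shape, value=FIXED_CARRIER_VALUE):
    # Carrier image data: every pixel holds the same fixed value.
    return np.full(shape, value)

def make_residual(input_data, reference_data):
    # Residual image data: reference image data (derived from the other
    # multiplexed frame) subtracted from the input image data.
    return input_data - reference_data

def combine(residual, carrier):
    # The multiplexed frame carries the residual in combination with the
    # carrier, which biases the residual toward a codec-friendly range.
    return residual + carrier

input_data = np.random.rand(4, 4)
reference = np.random.rand(4, 4)
carrier = make_carrier(input_data.shape)
second_mux = combine(make_residual(input_data, reference), carrier)

# Decoder side: regenerate the carrier, subtract it from the second
# multiplexed frame, and recover the residual exactly.
recovered = second_mux - make_carrier(input_data.shape)
```

Because the carrier is constant and reproducible from the base layer alone, the decoder needs no side information to strip it off.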
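Claims 18 and 19 distinguish an explicit high-pass filtering operation from a processing sub-path in which a subtraction operation replaces it. The equivalence is easy to see in a sketch, assuming complementary 2-tap kernels (the kernels themselves are illustrative, not claimed):

```python
import numpy as np

def lowpass_1d(img, axis):
    # 2-tap averaging low-pass filter (illustrative kernel [1/2, 1/2]).
    return (img + np.roll(img, -1, axis=axis)) / 2.0

def highpass_1d(img, axis):
    # Dedicated high-pass filtering operation (claim 18), using the
    # complementary 2-tap kernel [1/2, -1/2].
    return (img - np.roll(img, -1, axis=axis)) / 2.0

def highpass_via_subtraction(img, axis):
    # Sub-path of claim 19: no high-pass filter, only a subtraction.
    # Subtracting the low-pass output from the input leaves exactly the
    # high-frequency content, since img - (img + shifted)/2 = (img - shifted)/2.
    return img - lowpass_1d(img, axis)

img = np.random.rand(8, 8)
hp_filter = highpass_1d(img, axis=1)
hp_subtract = highpass_via_subtraction(img, axis=1)
```

With complementary kernels the two paths produce identical output, which is why the claimed sub-path can omit the high-pass filter entirely.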