
Systems and methods for 3D image distification

Patent Number
US11176414B1
Publication Date
2021-11-16
Applicant
STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY (Bloomington, IL, US)
Inventors
Elizabeth Flowers; Puneit Dua; Eric Balota; Shanna L. Phillips
IPC Classification
G06K9/62; G06K9/42; G06K9/00

Abstract

Systems and methods are described for Distification of 3D imagery. A computing device may obtain a three dimensional (3D) image that includes rules defining a 3D point cloud used to generate a two dimensional (2D) image matrix. The 2D image matrix may include 2D matrix point(s) mapped to the 3D image, where each 2D matrix point can be associated with a horizontal coordinate and a vertical coordinate. The computing device can generate an output feature vector that includes, for at least one of the 2D matrix points, the horizontal coordinate and the vertical coordinate of the 2D matrix point, and a depth coordinate of a 3D point in the 3D point cloud of the 3D image. The 3D point can have a nearest horizontal and vertical coordinate pair that corresponds to the horizontal and vertical coordinates of the at least one 2D matrix point.
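The mapping the abstract describes can be sketched in a few lines: lay a 2D matrix of (horizontal, vertical) points over the cloud, and for each matrix point emit its coordinates together with the depth of the 3D point whose (x, y) pair lies nearest to it. The function name `distify`, the uniform-grid layout, and the NumPy representation below are illustrative assumptions, not details taken from the patent text.

```python
import numpy as np

def distify(points_3d, grid_w=4, grid_h=4):
    """Sketch of 'distification': map a 3D point cloud onto a 2D image
    matrix of grid_h x grid_w points and, for each 2D matrix point,
    emit (horizontal, vertical, depth-of-nearest-3D-point).
    The uniform grid spanning the cloud's extent is an assumption."""
    pts = np.asarray(points_3d, dtype=float)  # shape (N, 3): x, y, z
    xs = np.linspace(pts[:, 0].min(), pts[:, 0].max(), grid_w)
    ys = np.linspace(pts[:, 1].min(), pts[:, 1].max(), grid_h)
    features = []
    for y in ys:
        for x in xs:
            # nearest 3D point, measured in the horizontal/vertical plane only
            d2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - y) ** 2
            nearest = pts[np.argmin(d2)]
            features.append((x, y, nearest[2]))  # (horizontal, vertical, depth)
    return np.array(features)  # shape (grid_h * grid_w, 3)
```

Note that when `grid_w * grid_h` is smaller than the number of 3D points, the output is a fixed-size downsampling of the cloud, which is what makes the feature vector usable as input to a predictive model.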

Description

RELATED APPLICATION(S)

The present application is a continuation of U.S. patent application Ser. No. 15/596,171, entitled “SYSTEMS AND METHODS FOR 3D IMAGE DISTIFICATION,” filed on May 16, 2017, the disclosure of which is hereby incorporated herein by reference.

FIELD OF THE DISCLOSURE

The present disclosure generally relates to systems and methods for providing 2D and 3D imagery interpolation, and more particularly to predictive modeling and classifications using 2D and 3D imagery.

BACKGROUND

Images and video taken from modern digital camera and video recording devices can be generated and stored in a variety of different formats and types. For example, digital cameras may capture two dimensional (2D) images and store them in a vast array of data formats, including, for example, JPEG (Joint Photographic Experts Group), TIFF (Tagged Image File Format), PNG (Portable Network Graphics), BMP (Windows Bitmap), or GIF (Graphics Interchange Format). Digital videos typically have their own formats and types, including, for example, FLV (Flash Video), AVI (Audio Video Interleave), MOV (QuickTime Format), WMV (Windows Media Video), and MPEG (Moving Picture Experts Group).

Claims

What is claimed is:

1. A computing device configured to Distify 3D imagery, the computing device comprising one or more processors configured to: obtain a three dimensional (3D) image, wherein the 3D image includes rules defining a 3D point cloud; generate a two dimensional (2D) image matrix based upon the 3D image, wherein the 2D image matrix includes one or more 2D matrix points mapped to the 3D image, and wherein each 2D matrix point has a horizontal coordinate and a vertical coordinate; and generate an output feature vector as a data structure that includes a horizontal coordinate and a vertical coordinate of at least one 2D matrix point of the 2D image matrix, and a 3D point in the 3D point cloud of the 3D image, wherein the 3D point in the point cloud of the 3D image has a nearest distance with respect to a coordinate pair comprised of the horizontal coordinate and the vertical coordinate of the 2D image matrix compared with any other coordinate pair of the 2D image matrix, and wherein the output feature vector is input into a predictive model for determining a user behavior.

2. The computing device of claim 1, wherein the output feature vector indicates one or more image feature values associated with the 3D point, wherein each image feature value defines one or more items of interest in the 3D image.

3. The computing device of claim 2, wherein the one or more items of interest in the 3D image include one or more of the following: a person's head, a person's facial features, a person's hand, or a person's leg.

4. The computing device of claim 1, wherein the output feature vector further includes a distance value generated based on the distance from the at least one 2D matrix point to the 3D point.

5. The computing device of claim 1, wherein the 3D image and rules defining the 3D point cloud are obtained from one or more respective PLY files or PCD files.

6. The computing device of claim 1, wherein the 3D image is a frame from a 3D movie.

7. The computing device of claim 1, wherein the 3D image is obtained from one or more of the following: a camera computing device, a sensor computing device, a scanner computing device, a smart phone computing device, or a tablet computing device.

8. The computing device of claim 1, wherein a total quantity of the one or more 2D matrix points mapped to the 3D image is less than a total quantity of horizontal and vertical coordinate pairs for all 3D points in the 3D point cloud of the 3D image.

9. The computing device of claim 1, wherein the computing device is further configured to Distify a second 3D image in parallel with the 3D image.

10. A computer-implemented method for Distification of 3D imagery using one or more processors, the method comprising: obtaining a three dimensional (3D) image, wherein the 3D image includes rules defining a 3D point cloud; generating a two dimensional (2D) image matrix based upon the 3D image, wherein the 2D image matrix includes one or more 2D matrix points mapped to the 3D image, and wherein each 2D matrix point has a horizontal coordinate and a vertical coordinate; and generating an output feature vector as a data structure that includes a horizontal coordinate and a vertical coordinate of at least one 2D matrix point of the 2D image matrix, and a 3D point in the 3D point cloud of the 3D image, wherein the 3D point in the point cloud of the 3D image has a nearest distance with respect to a coordinate pair comprised of the horizontal coordinate and the vertical coordinate of the 2D image matrix compared with any other coordinate pair of the 2D image matrix, and wherein the output feature vector is input into a predictive model for determining a user behavior.

11. The computer-implemented method of claim 10, wherein the output feature vector indicates one or more image feature values associated with the 3D point, wherein each image feature value defines one or more items of interest in the 3D image.

12. The computer-implemented method of claim 11, wherein the one or more items of interest in the 3D image include one or more of the following: a person's head, a person's facial features, a person's hand, or a person's leg.

13. The computer-implemented method of claim 10, wherein the output feature vector further includes a distance value generated based on the distance from the at least one 2D matrix point to the 3D point.

14. The computer-implemented method of claim 10, wherein the 3D image and rules defining the 3D point cloud are obtained from one or more respective PLY files or PCD files.

15. The computer-implemented method of claim 10, wherein the 3D image is a frame from a 3D movie.

16. The computer-implemented method of claim 10, wherein the 3D image is obtained from one or more of the following: a camera computing device, a sensor computing device, a scanner computing device, a smart phone computing device, or a tablet computing device.

17. The computer-implemented method of claim 10, wherein a total quantity of the one or more 2D matrix points mapped to the 3D image is less than a total quantity of horizontal and vertical coordinate pairs for all 3D points in the 3D point cloud of the 3D image.

18. The computer-implemented method of claim 10, wherein the computing device is further configured to Distify a second 3D image in parallel with the 3D image.
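Claims 5 and 14 state that the 3D image and its point-cloud rules may be obtained from PLY or PCD files. For the ASCII variant of PLY, the vertex data can be pulled out with a very small reader; the sketch below is a minimal illustration under the assumption of a well-formed ASCII file whose first three vertex properties are x, y, z, and is not the patent's own loader (function name included).

```python
def read_ply_points(text):
    """Minimal reader for ASCII PLY vertex data (x, y, z).
    Assumes a well-formed header and ignores any properties
    beyond the first three per vertex line."""
    lines = text.splitlines()
    n_vertices = 0
    i = 0
    # scan the header for the vertex count, stopping at end_header
    while lines[i].strip() != "end_header":
        parts = lines[i].split()
        if parts[:2] == ["element", "vertex"]:
            n_vertices = int(parts[2])
        i += 1
    body = lines[i + 1 : i + 1 + n_vertices]
    return [tuple(float(v) for v in ln.split()[:3]) for ln in body]
```

A real implementation would also handle binary PLY encodings and PCD files, but the claims leave the file handling unspecified beyond naming the two formats.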