
Systems and methods for 3D image distification

Patent Number
US11176414B1
Publication Date
2021-11-16
Applicant
STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY (Bloomington, IL, US)
Inventors
Elizabeth Flowers; Puneit Dua; Eric Balota; Shanna L. Phillips
IPC Classification
G06K9/62; G06K9/42; G06K9/00
Technical Field
3D images; 2D images; image matrix; 2D3D model; computing
Region: Bloomington, IL

Abstract

Systems and methods are described for Distification of 3D imagery. A computing device may obtain a three dimensional (3D) image that includes rules defining a 3D point cloud used to generate a two dimensional (2D) image matrix. The 2D image matrix may include 2D matrix point(s) mapped to the 3D image, where each 2D matrix point can be associated with a horizontal coordinate and a vertical coordinate. The computing device can generate an output feature vector that includes, for at least one of the 2D matrix points, the horizontal coordinate and the vertical coordinate of the 2D matrix point, and a depth coordinate of a 3D point in the 3D point cloud of the 3D image. The 3D point can have a nearest horizontal and vertical coordinate pair that corresponds to the horizontal and vertical coordinates of the at least one 2D matrix point.
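The abstract describes mapping a 3D point cloud onto a regular 2D image matrix and emitting, for each 2D matrix point, its horizontal and vertical coordinates together with the depth of the nearest 3D point. The Python snippet below is a minimal sketch of that idea only; the grid resolution, the nearest-neighbor rule, and the function name `distify` are assumptions for illustration, not the patent's exact method.

    import numpy as np


    def distify(point_cloud, grid_shape=(64, 64)):
        """Map a 3D point cloud onto a regular 2D matrix and build a feature
        vector of (horizontal, vertical, depth-of-nearest-point) triples.

        point_cloud : (N, 3) array of (x, y, z) points.
        grid_shape  : resolution of the 2D image matrix (assumed parameter).
        """
        xs, ys, zs = point_cloud[:, 0], point_cloud[:, 1], point_cloud[:, 2]

        # 2D matrix points: a regular grid spanning the cloud's x/y extent.
        grid_x = np.linspace(xs.min(), xs.max(), grid_shape[0])
        grid_y = np.linspace(ys.min(), ys.max(), grid_shape[1])

        features = []
        for gx in grid_x:
            for gy in grid_y:
                # 3D point whose (x, y) pair is nearest to this 2D matrix point.
                nearest = np.argmin((xs - gx) ** 2 + (ys - gy) ** 2)
                # Feature: horizontal coord, vertical coord, depth of nearest point.
                features.append((gx, gy, zs[nearest]))

        return np.asarray(features)


    # Example: a random cloud of 1,000 points mapped to a 64x64 matrix.
    cloud = np.random.rand(1000, 3)
    vector = distify(cloud)
    print(vector.shape)  # (4096, 3)

Flattening the cloud onto a fixed grid in this way yields a feature vector of constant length regardless of how many points the original 3D image contains, which is what makes the standardized images comparable.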

Description

For the foregoing reasons, systems and methods are disclosed herein for generating an enhanced prediction from a 2D and 3D image-based ensemble model. As described herein, a computing device may be configured to obtain one or more sets of 2D and 3D images. Each of the 2D and 3D images may be standardized to allow for comparison and interoperability between the images. In one embodiment, the 3D images are standardized using Distification. In addition, corresponding 2D and 3D image pairs (i.e., "2D3D image pairs") may be determined from the standardized 2D and 3D images, where, for example, the 2D and 3D images correspond based on a common attribute, such as a similar timestamp or time value. The enhanced prediction may utilize separate underlying 2D and 3D prediction models, where, for example, the corresponding 2D and 3D images of a 2D3D pair are each input to the respective 2D and 3D prediction models to generate respective 2D and 3D predictions, as sketched below.
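As a rough illustration of the pairing and ensemble steps, the sketch below matches 2D and 3D images by closest timestamp and runs each image of a pair through its own model before combining the outputs. The data classes, the timestamp tolerance, and the simple averaging of the two model outputs are assumptions made for this example and are not taken from the patent text.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple


    @dataclass
    class Image2D:
        timestamp: float
        data: object  # standardized 2D pixel data


    @dataclass
    class Image3D:
        timestamp: float
        data: object  # standardized (e.g., Distified) 3D data


    def pair_2d3d(images_2d: List[Image2D], images_3d: List[Image3D],
                  tolerance: float = 0.5) -> List[Tuple[Image2D, Image3D]]:
        """Pair each 2D image with the 3D image whose timestamp is closest,
        keeping only pairs within the given tolerance (assumed rule)."""
        pairs = []
        for img2d in images_2d:
            best = min(images_3d, key=lambda i3d: abs(i3d.timestamp - img2d.timestamp))
            if abs(best.timestamp - img2d.timestamp) <= tolerance:
                pairs.append((img2d, best))
        return pairs


    def ensemble_predict(pairs, model_2d: Callable, model_3d: Callable):
        """Run each image of a 2D3D pair through its own prediction model and
        average the two outputs into an enhanced prediction (illustrative only)."""
        results = []
        for img2d, img3d in pairs:
            p2d = model_2d(img2d.data)
            p3d = model_3d(img3d.data)
            results.append(0.5 * (p2d + p3d))
        return results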

Claims

1