
Method and system for color representation generation

Patent number
US11176715B2
Publication date
2021-11-16
申請(qǐng)人
THE GOVERNING COUNCIL OF THE UNIVERSITY OF TORONTO (CA Toronto)
發(fā)明人
Maria Shugrina; Amlan Kar; Sanja Fidler; Karan Singh
IPC classification
G06T11/00; G06T15/50; G06T11/40
技術(shù)領(lǐng)域
color, sail, colors, in, image, neural, sails, alpha, patchwork, masks
Region: Toronto

Abstract

There is provided a system and method for color representation generation. In an aspect, the method includes: receiving three base colors; receiving a patchwork parameter; and generating a color representation having each of the three base colors at a vertex of a triangular face, the triangular face having a color distribution therein, the color distribution discretized into discrete portions, the amount of discretization based on the patchwork parameter, each discrete portion having an interpolated color determined to be a combination of the base colors at respective coordinates of such discrete portion. In further aspects, one or more color representations are generated based on an input image and can be used to modify colors of a reconstructed image.
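As a rough illustration of the abstract, the following sketch assumes that the "combination of the base colors at respective coordinates" is barycentric interpolation over the triangular face, and that the patchwork parameter is the number of subdivisions along each edge. The function name and subdivision scheme are illustrative assumptions, not taken from the patent.

```python
# Sketch: discretize a triangular color face into patches and assign each
# patch a barycentric mix of the three base colors (assumed scheme).
import numpy as np

def color_sail_patches(base_colors, patchwork):
    """Return interpolated RGB colors for each discrete triangular patch.

    base_colors : (3, 3) array, one RGB color per vertex of the triangular face.
    patchwork   : int, number of subdivisions along each edge (discretization level).
    """
    base_colors = np.asarray(base_colors, dtype=float)
    patches = []
    n = patchwork
    for i in range(n):
        for j in range(n - i):
            # Barycentric coordinates of the centroid of an "upright" sub-triangle.
            w = np.array([i + 1/3, j + 1/3, n - i - j - 2/3]) / n
            patches.append(w @ base_colors)
            # Interior cells also contain an inverted sub-triangle.
            if i + j < n - 1:
                w = np.array([i + 2/3, j + 2/3, n - i - j - 4/3]) / n
                patches.append(w @ base_colors)
    return np.array(patches)

# Example: red, green, blue base colors with a patchwork parameter of 4
# yields 16 discrete patches, each an interpolated mix of the three bases.
colors = color_sail_patches([[1, 0, 0], [0, 1, 0], [0, 0, 1]], patchwork=4)
print(colors.shape)  # (16, 3)
```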

Description

The third neural network can also be trained in an unsupervised fashion using, for example, adversarial loss. For each unlabeled image, simulated user input can be generated by sampling a point p in the image such that it is not near any edge (for example, an edge map could be obtained using any number of existing algorithms). The third neural network can then be trained using gradient descent on batches of this data to minimize a loss comprised of three terms, E_L2, E_A, and E_GAN. The first term, E_L2, is the pixel-wise least-squares (L2) loss between the original image I and the image reconstructed by substituting the colors of pixels in the mask with the best matching colors in the predicted color sail (for example, similar to Eq. 9). The second term, E_A, is the area loss. Starting from the sampled point p in a training image, a flood fill algorithm could be run to define a region R of similar colors around the point. Because color sails can definitely model a single color well, R can serve as an approximate lower bound for the area of the predicted mask. R can be represented as a one-channel image with everything in R marked as 1. This allows the area-based, unsupervised loss component to be formulated as E_A = Σ_{x,y} R(x, y)·(R(x, y) − M(x, y)), where M is the predicted mask. Thus, all pixels that are in R but not in M are penalized. However, if M extends beyond R (which is a lower bound), then no error is incurred.
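A minimal sketch of the area loss term E_A described above, assuming R and M are single-channel arrays in [0, 1] with R binary inside the flood-filled region; the function name and toy example are illustrative assumptions only.

```python
# Sketch: area loss E_A = sum_{x,y} R(x, y) * (R(x, y) - M(x, y)).
import numpy as np

def area_loss(R, M):
    """Penalize pixels inside the flood-fill region R that the predicted
    mask M fails to cover; pixels where M extends beyond R contribute
    nothing, since R is only a lower bound on the mask area."""
    return float(np.sum(R * (R - M)))

# Toy example: a 4x4 flood-fill region and a predicted mask covering only
# half of it -- the two uncovered pixels inside R are what gets penalized.
R = np.zeros((4, 4)); R[1:3, 1:3] = 1.0
M = np.zeros((4, 4)); M[1:3, 1:2] = 1.0
print(area_loss(R, M))  # 2.0
```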

權(quán)利要求

1