
Techniques for determining settings for a content capture device

Patent number
US11968456B2
Publication date
2024-04-23
Applicant
Magic Leap, Inc. (Plantation, FL, US)
Inventors
Brian Keith Smith; Ilya Tsunaev
IPC classification
H04N23/72; H04N5/222; H04N5/265; H04N23/63; H04N23/71; H04N23/73; H04N23/741; H04N23/743; H04N23/76
Keywords
luma, pixel, image, AEC, object, weight
Region: Plantation, FL, US

Abstract

A method for computing a total weight array includes receiving an image frame captured by a content capture device and identifying a plurality of objects in the image frame. Each object of the plurality of objects corresponds to one of a plurality of pixel groups. The method also includes providing a plurality of neural networks and calculating, for each object of the plurality of objects, an object weight using a corresponding neural network of the plurality of neural networks. The method further includes computing the total weight array by summing the object weight for each of the plurality of objects.
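As a rough illustration of the weighting scheme described in the abstract, the sketch below builds a per-pixel total weight array by summing a single per-object weight over each object's pixel mask. The function names and the idea of representing each per-object neural network as a simple callable are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def total_weight_array(frame_shape, object_masks, weight_models):
    """Illustrative sketch: sum per-object weights into one weight array.

    object_masks  : list of boolean arrays (one per detected object),
                    each marking that object's pixel group in the frame
    weight_models : list of callables (stand-ins for the per-object
                    neural networks), each returning a scalar weight
    """
    w_total = np.zeros(frame_shape, dtype=np.float32)
    for mask, model in zip(object_masks, weight_models):
        w_i = model(mask)                          # single weight for this object
        w_total += w_i * mask.astype(np.float32)   # apply it to the object's pixels
    return w_total
```

The single scalar weight applied uniformly to every pixel of an object's pixel group mirrors the arrangement recited later in claims 5 and 6.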

Description

CROSS-REFERENCES TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 16/879,468, filed May 20, 2020, now U.S. Pat. No. 11,303,818, issued Apr. 12, 2022, entitled “TECHNIQUES FOR DETERMINING SETTINGS FOR A CONTENT CAPTURE DEVICE,” which is a divisional of U.S. patent application Ser. No. 15/841,043, filed Dec. 13, 2017, U.S. Pat. No. 10,701,276, issued Jun. 30, 2020, entitled “TECHNIQUES FOR DETERMINING SETTINGS FOR A CONTENT CAPTURE DEVICE,” which is a non-provisional of and claims the benefit of and priority to U.S. Provisional Patent Application No. 62/438,926, filed Dec. 23, 2016, entitled “METHOD AND SYSTEM FOR DETERMINING EXPOSURE LEVELS,” the disclosures of which are hereby incorporated by reference in their entirety for all purposes.

BACKGROUND

This disclosure generally relates to determining settings (such as an exposure setting) for a content capture device. The exposure setting may relate to an amount of light a sensor of a content capture device receives when content (e.g., an image or a video) is captured. Examples of exposure settings include a shutter speed, an aperture setting, or an International Standards Organization (ISO) speed.

Traditionally, exposure settings were adjusted manually by the user, who would tune them to their liking. However, this approach proved unreliable and often produced suboptimal results.

Claims

What is claimed is:

1. A method for computing a total weight array, the method comprising: receiving an image frame captured by a content capture device; identifying a plurality of objects in the image frame, wherein each object of the plurality of objects is represented by one of a plurality of pixel groups corresponding to a shape of each object of the plurality of objects; providing a plurality of neural networks; calculating, for each object of the plurality of objects, an object weight using a corresponding neural network of the plurality of neural networks; and computing the total weight array by summing the object weight for each of the plurality of objects.

2. The method of claim 1 wherein each of the plurality of neural networks comprises a different neural network.

3. The method of claim 1 wherein each object of the plurality of objects is associated with a row r and a column c of the image frame.

4. The method of claim 3 wherein the total weight array is $w_T[r,c] = \sum_{i=0}^{N_o} w_i[r,c]$, where $N_o$ is the number of objects and $w_i[r,c]$ is the object weight for each of the plurality of objects.

5. The method of claim 1 wherein each object weight comprises a single value.

6. The method of claim 5 wherein the single value is applied to all pixels in each of the pixel groups of the plurality of pixel groups.

7. The method of claim 1 further comprising: identifying a target luma value for the image frame; calculating an image luma value using the total weight array; computing a difference between the image luma value and the target luma value; and updating a setting of the content capture device based upon the computed difference.

8. The method of claim 1 further comprising: receiving, by each neural network of the plurality of neural networks, a plurality of inputs corresponding to the object associated with the corresponding neural network; and outputting, by each neural network of the plurality of neural networks, a single weight for the object associated with the corresponding neural network.

9. The method of claim 1 further comprising receiving, by each neural network of the plurality of neural networks, a plurality of attributes as inputs to each neural network of the plurality of neural networks.

10. A method comprising: receiving an image captured by a content capture device; identifying a target luma value for the image; providing a plurality of neural networks; identifying a plurality of objects in the image, wherein each of the plurality of objects is represented by a pixel group corresponding to a shape of each object of the plurality of objects; calculating, for each object of the plurality of objects, an object weight using a corresponding neural network of the plurality of neural networks; defining a first set of pixel groups associated with the plurality of objects; defining a second set of pixel groups not associated with the plurality of objects; calculating a pixel group luma value for each pixel group of the first set of pixel groups; multiplying the pixel group luma value by the object weight to provide a weighted pixel group luma value for each pixel group of the first set of pixel groups; and calculating a total luma value for the image.

11. The method of claim 10 further comprising: computing a difference between the total luma value and the target luma value; and updating a setting of the content capture device based upon the computed difference.

12. The method of claim 10 wherein the image comprises one image of a stream of images.

13. The method of claim 10 wherein the image comprises pixels, each having a pixel luma value, and the target luma value corresponds to an average of the pixel luma values.

14. The method of claim 10 wherein the image comprises pixels, each having a pixel luma value and a weight, and the target luma value corresponds to a weighted average of the pixel luma values.

15. The method of claim 10 further comprising identifying one or more attributes for each of the plurality of objects in the image.

16. The method of claim 15 wherein the one or more attributes include at least one of a priority weight array for object priority, a size weight array for object size, a distance weight array for object distance, or a gaze weight array for eye gaze.

17. The method of claim 15 wherein each corresponding neural network uses the one or more attributes as input.

18. The method of claim 10 wherein the pixel group luma value comprises an average of luma values for each pixel of the pixel group.

19. The method of claim 10 wherein the total luma value equals a summation of the weighted pixel group luma value for each pixel group of the first set of pixel groups times the pixel group luma value for each pixel group of the first set of pixel groups.

20. The method of claim 10 wherein each of the plurality of neural networks comprises a different neural network.
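To make the luma-based exposure update of claims 10 and 11 more concrete, here is a minimal sketch of one way to compute a weighted image luma and nudge an exposure setting toward a target. The helper names and the proportional control step are assumptions for illustration; the patent does not specify this particular update rule.

```python
import numpy as np

def weighted_image_luma(luma, object_masks, object_weights):
    """Weighted average of per-pixel-group luma values (illustrative only).

    luma           : 2D array of per-pixel luma values for the image
    object_masks   : list of boolean masks, one per object pixel group
    object_weights : list of scalar weights, one per object
    """
    total, weight_sum = 0.0, 0.0
    for mask, w in zip(object_masks, object_weights):
        group_luma = luma[mask].mean()   # average luma of this pixel group
        total += w * group_luma          # weighted pixel-group luma value
        weight_sum += w
    return total / max(weight_sum, 1e-6)

def update_exposure(current_exposure, image_luma, target_luma, gain=0.5):
    """Assumed proportional adjustment toward the target luma,
    standing in for 'updating a setting of the content capture device'."""
    error = target_luma - image_luma
    return current_exposure * (1.0 + gain * error / max(target_luma, 1e-6))
```

In an automatic exposure control (AEC) loop, the computed image luma would be compared against the target luma for each frame in the stream, and the resulting difference used to adjust shutter speed, aperture, or ISO for the next capture.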