FIG. 3 includes an example of predicted alignments vs. ground truth 360, which illustrates an accuracy of the mesh vertices 350 compared to the original point cloud (e.g., input point cloud 310). For example, the predicted alignments vs. ground truth 360 illustrates portions of the input point cloud 310 (e.g., represented as specific data points) interleaved with the mesh vertices 350 (e.g., represented as the outline), indicating that there are only minor differences between the two. Thus, the mesh vertices 350 accurately approximate the input point cloud 310 such that only small portions of the input point cloud 310 are distinguishable from the mesh vertices 350.
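The following is a minimal sketch, not part of the disclosure, of one way such an accuracy comparison could be quantified: measuring, for each point of the input point cloud, the distance to its nearest predicted mesh vertex. The function name, the nearest-neighbor metric, and the stand-in data are assumptions for illustration only.

```python
# Hypothetical sketch: quantifying how closely predicted mesh vertices
# approximate the original input point cloud (cf. predicted alignments
# vs. ground truth 360). The metric and names are assumptions, not the
# disclosed method.
import numpy as np
from scipy.spatial import cKDTree


def alignment_error(mesh_vertices: np.ndarray, point_cloud: np.ndarray) -> float:
    """Mean distance from each input point to its nearest mesh vertex."""
    tree = cKDTree(mesh_vertices)
    distances, _ = tree.query(point_cloud)  # nearest-vertex distance per point
    return float(distances.mean())


# Example usage with random stand-in data
rng = np.random.default_rng(0)
cloud = rng.uniform(size=(1000, 3))                            # stand-in for input point cloud 310
vertices = cloud + rng.normal(scale=0.01, size=cloud.shape)    # stand-in for mesh vertices 350
print(f"mean alignment error: {alignment_error(vertices, cloud):.4f}")
```

A small mean alignment error corresponds to the interleaving shown in the predicted alignments vs. ground truth 360, where the mesh outline closely follows the scanned points.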
FIG. 3 illustrates a simplified example of processing an input point cloud to generate output mesh data. However, to conceptually illustrate a specific implementation, FIG. 4A illustrates a more detailed example of processing an input point cloud to generate output mesh data.
FIGS. 4A-4B illustrate examples of mesh reconstruction according to embodiments of the present disclosure. As illustrated in FIG. 4A, after receiving authorization (e.g., from the user 5 or the person being scanned), the system 100 may perform a raw scan 410 to generate a point cloud 210 (e.g., X={x1, . . . , xn}), may process the point cloud 210 using fixed basis points 220 (e.g., B={b1, . . . , bk}T) to generate distance values 420, and may process the distance values 420 using a dense convolution network (DenseNet) 430 to generate a high resolution mesh 440.
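The following is a minimal sketch, under stated assumptions, of the pipeline of FIG. 4A: for each fixed basis point b, a distance to the nearest point of the scanned point cloud X is stored, producing a fixed-length vector of distance values, from which mesh vertex coordinates are regressed. The small fully-connected network below is a simplified stand-in for the dense convolution network (DenseNet) 430, and the point counts, vertex count, and layer sizes are illustrative assumptions rather than the disclosed architecture.

```python
# Hedged sketch of the FIG. 4A pipeline: point cloud 210 -> distance values 420
# (via fixed basis points 220) -> regressed mesh vertices (high resolution mesh 440).
# The network here is a placeholder, not the DenseNet 430 of the disclosure.
import numpy as np
import torch
import torch.nn as nn
from scipy.spatial import cKDTree


def distance_values(point_cloud: np.ndarray, basis_points: np.ndarray) -> np.ndarray:
    """For each fixed basis point b_j, the distance to its nearest point x_i."""
    tree = cKDTree(point_cloud)
    distances, _ = tree.query(basis_points)
    return distances.astype(np.float32)


class MeshRegressor(nn.Module):
    """Maps k distance values to num_vertices x 3 mesh vertex coordinates."""

    def __init__(self, num_basis_points: int, num_vertices: int):
        super().__init__()
        self.num_vertices = num_vertices
        self.net = nn.Sequential(
            nn.Linear(num_basis_points, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )

    def forward(self, d: torch.Tensor) -> torch.Tensor:
        return self.net(d).view(-1, self.num_vertices, 3)


# Example usage with random stand-in data (sizes are assumptions)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2048, 3))   # stand-in for point cloud 210 from raw scan 410
B = rng.uniform(-1.0, 1.0, size=(1024, 3))   # stand-in for fixed basis points 220
d = torch.from_numpy(distance_values(X, B)).unsqueeze(0)       # distance values 420
mesh = MeshRegressor(num_basis_points=1024, num_vertices=5000)(d)  # stand-in for mesh 440
print(mesh.shape)  # torch.Size([1, 5000, 3])
```

Encoding the point cloud as a fixed-length vector of distances to fixed basis points allows a network with fixed input dimensionality to process scans containing varying numbers of points.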