Title
Constant-Expansion Suffices for Compressed Sensing with Generative Priors
Authors
Abstract
Generative neural networks have empirically proven very promising for providing effective structural priors in compressed sensing, since they can be trained to span low-dimensional data manifolds in high-dimensional signal spaces. Despite the non-convexity of the resulting optimization problem, it has also been shown theoretically that, for neural networks with random Gaussian weights, a signal in the range of the network can be efficiently and approximately recovered from a few noisy measurements. However, a major bottleneck of these theoretical guarantees is a network expansivity condition: each layer of the neural network must be larger than the previous one by a logarithmic factor. Our main contribution is to break this strong expansivity assumption, showing that constant expansivity suffices for efficient recovery algorithms and is, moreover, information-theoretically necessary. To overcome the theoretical bottleneck in existing approaches, we prove a novel uniform concentration theorem for random functions that might not be Lipschitz but satisfy a relaxed notion we call "pseudo-Lipschitzness." Using this theorem, we show that a matrix concentration inequality known as the Weight Distribution Condition (WDC), previously known to hold only for Gaussian matrices with logarithmic aspect ratio, in fact holds for constant aspect ratios too. Since the WDC is the fundamental matrix concentration inequality at the heart of all existing theoretical guarantees for this problem, our tighter bound immediately improves all known results in the literature on compressed sensing with deep generative priors, including one-bit recovery, phase retrieval, low-rank matrix recovery, and more.
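
To make the setting concrete, the following is a minimal sketch (in Python with JAX; it is not code from the paper) of the recovery problem the abstract describes: a two-layer ReLU network G with random Gaussian weights serves as the structural prior, and a signal in its range is recovered from a few noisy linear measurements by gradient descent over the latent code. All dimensions, the expansion factor of 2 between layers (the constant-expansion regime this paper shows is sufficient), the step size, and the iteration count are illustrative assumptions.

import jax
import jax.numpy as jnp

# Illustrative dimensions (not from the paper): latent dim k, hidden width 2k,
# signal dim 4k (constant expansion factor 2), and m noisy linear measurements.
k, n1, n2, m = 10, 20, 40, 30

keys = jax.random.split(jax.random.PRNGKey(0), 6)

# Random Gaussian weights, as in the theoretical setting (entries ~ N(0, 1/n_i)).
W1 = jax.random.normal(keys[0], (n1, k)) / jnp.sqrt(n1)
W2 = jax.random.normal(keys[1], (n2, n1)) / jnp.sqrt(n2)
A = jax.random.normal(keys[2], (m, n2)) / jnp.sqrt(m)  # measurement matrix

def G(z):
    # Two-layer ReLU generator with random weights; its range is the prior.
    return jax.nn.relu(W2 @ jax.nn.relu(W1 @ z))

# Ground-truth signal in the range of G, observed through noisy measurements.
z_star = jax.random.normal(keys[3], (k,))
y = A @ G(z_star) + 0.01 * jax.random.normal(keys[4], (m,))

# Empirical risk over the latent code, minimized by plain gradient descent
# despite its non-convexity.
loss = lambda z: 0.5 * jnp.sum((A @ G(z) - y) ** 2)
step = jax.jit(lambda z: z - 0.1 * jax.grad(loss)(z))

z = jax.random.normal(keys[5], (k,))  # random initialization
for _ in range(2000):
    z = step(z)

print("relative recovery error:",
      jnp.linalg.norm(G(z) - G(z_star)) / jnp.linalg.norm(G(z_star)))

Gradient descent on this nonconvex empirical risk is exactly the kind of algorithm whose success the WDC-based analyses guarantee.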
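
For reference, the WDC is a concentration property of random weight matrices like W1 and W2 above. Roughly, in the formulation standard in this literature (restated here from memory, so the exact normalization may differ from the paper's), a matrix $W \in \mathbb{R}^{n \times k}$ with rows $w_1, \dots, w_n$ satisfies the WDC with constant $\varepsilon$ if for all nonzero $x, y \in \mathbb{R}^k$:

\[
\Bigl\| \sum_{i=1}^{n} \mathbf{1}\{\langle w_i, x\rangle > 0\}\,
        \mathbf{1}\{\langle w_i, y\rangle > 0\}\, w_i w_i^{\top} - Q_{x,y} \Bigr\|
\le \varepsilon,
\qquad
Q_{x,y} = \frac{\pi - \theta}{2\pi}\, I_k
        + \frac{\sin\theta}{2\pi}\, M_{\hat{x} \leftrightarrow \hat{y}},
\]

where $\theta = \angle(x, y)$, $\hat{x} = x/\|x\|$, and $M_{\hat{x} \leftrightarrow \hat{y}}$ is the matrix that swaps $\hat{x}$ and $\hat{y}$ and annihilates the orthogonal complement of their span; $Q_{x,y}$ is the expectation of the sum when $w_i \sim \mathcal{N}(0, I_k/n)$. In the abstract's terms, prior work could verify this condition only when $n$ exceeds $k$ by a logarithmic factor, whereas this paper establishes it when $n$ is a sufficiently large constant multiple of $k$.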