Paper Title

Neural Estimation and Optimization of Directed Information over Continuous Spaces

Authors

Dor Tsur, Ziv Aharoni, Ziv Goldfeld, Haim Permuter

Abstract

This work develops a new method for estimating and optimizing the directed information rate between two jointly stationary and ergodic stochastic processes. Building upon recent advances in machine learning, we propose a recurrent neural network (RNN)-based estimator which is optimized via gradient ascent over the RNN parameters. The estimator does not require prior knowledge of the underlying joint and marginal distributions. The estimator is also readily optimized over continuous input processes realized by a deep generative model. We prove consistency of the proposed estimation and optimization methods and combine them to obtain end-to-end performance guarantees. Applications to capacity estimation of continuous channels with memory are explored, and empirical results demonstrating the scalability and accuracy of our method are provided. When the channel is memoryless, we investigate the mapping learned by the optimized input generator.
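
To make the "RNN critic optimized by gradient ascent, without knowledge of the underlying distributions" recipe concrete, the sketch below maximizes a Donsker-Varadhan (DV) lower bound on the information shared by two paired sequences using an LSTM critic, assuming a PyTorch setup. It is only an illustrative simplification, not the paper's estimator of the directed information rate; the class names, dimensions, and toy data are assumptions.

```python
# Illustrative sketch (not the paper's DINE estimator): an LSTM critic trained by
# gradient ascent to maximize a Donsker-Varadhan lower bound between paired sequences.
import math
import torch
import torch.nn as nn

class LSTMCritic(nn.Module):
    """Maps a pair of sequences (x, y) to a scalar score T_theta(x, y)."""
    def __init__(self, x_dim=1, y_dim=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=x_dim + y_dim, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, y):
        _, (h, _) = self.lstm(torch.cat([x, y], dim=-1))  # final hidden state summarizes the pair
        return self.head(h[-1]).squeeze(-1)               # shape: (batch,)

def dv_bound(critic, x, y):
    """Donsker-Varadhan bound E_P[T] - log E_Q[e^T], where the product-of-marginals
    reference Q is approximated by shuffling y across the batch."""
    t_joint = critic(x, y)
    t_indep = critic(x, y[torch.randperm(y.size(0))])
    return t_joint.mean() - (torch.logsumexp(t_indep, dim=0) - math.log(y.size(0)))

# Toy data: y is a noisy one-step-delayed copy of x, so the sequences share information.
torch.manual_seed(0)
critic = LSTMCritic()
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.randn(256, 20, 1)
    y = torch.roll(x, shifts=1, dims=1) + 0.5 * torch.randn_like(x)
    loss = -dv_bound(critic, x, y)  # ascend the bound by descending its negative
    opt.zero_grad()
    loss.backward()
    opt.step()

print("estimated DV bound (nats):", dv_bound(critic, x, y).item())
```

The paper's method instead targets the directed information rate between jointly stationary and ergodic processes and, for capacity estimation, pairs the estimator with a deep generative model that is optimized over continuous input processes; both aspects are omitted from this toy sketch.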
