Proceedings Paper

Information Leakage in Zero-Error Source Coding: A Graph-Theoretic Perspective

Publisher

IEEE
DOI: 10.1109/ISIT45174.2021.9517778


Funding

  1. Australian Research Council (ARC) [DP190100770, FT190100429]
  2. US National Science Foundation [CNS-1815322]

Abstract

We study the information leakage to a guessing adversary in zero-error source coding, where the source coding problem is defined by a confusion graph that captures the distinguishability between source symbols. The information leakage is measured by the ratio of the adversary's successful guessing probability after and before eavesdropping on the codeword, maximized over all possible source distributions. Under the basic adversarial model, where the adversary makes a single guess and the guess is regarded as successful if and only if the estimated sequence equals the true source sequence, this measure is known in the literature as the maximum min-entropy leakage or the maximal leakage. We develop a single-letter characterization of the optimal normalized leakage under the basic adversarial model, together with an optimum-achieving memoryless stochastic mapping scheme. An interesting observation is that the optimal normalized leakage equals the optimal compression rate with fixed-length source codes, and both can be achieved simultaneously by some deterministic coding schemes. We then extend the leakage measurement to generalized adversarial models, in which the adversary makes multiple guesses and a certain level of distortion is allowed, and derive single-letter lower and upper bounds.
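The leakage measure described above, for a fixed encoding viewed as a channel P(y|x), reduces to the standard maximal-leakage formula L(X→Y) = log₂ Σ_y max_x P(y|x). The following is a minimal illustrative sketch (not the paper's construction) that evaluates this formula for a toy deterministic code; the matrix `P_det` is a hypothetical example, not taken from the paper.

```python
import numpy as np

def maximal_leakage(P):
    """Maximal leakage L(X -> Y) in bits for a channel P(y|x).

    P is a row-stochastic matrix with P[x, y] = P(y|x).
    L(X -> Y) = log2( sum over y of max over x of P(y|x) ),
    the log-ratio of the adversary's best single-guess success
    probability after vs. before observing Y, maximized over
    all source distributions.
    """
    P = np.asarray(P, dtype=float)
    assert np.allclose(P.sum(axis=1), 1.0), "rows must sum to 1"
    # For each output y, take the best-matching input x, then sum.
    return float(np.log2(P.max(axis=0).sum()))

# Toy deterministic code: four source symbols mapped onto two codewords.
# Observing the codeword leaks log2(2) = 1 bit, which here coincides with
# the rate of a fixed-length code with two codewords.
P_det = [[1, 0],
         [1, 0],
         [0, 1],
         [0, 1]]
print(maximal_leakage(P_det))  # 1.0
```

A fully revealing (identity) channel on four symbols would give log₂ 4 = 2 bits, while a constant mapping gives 0, matching the intuition that leakage tracks how many source symbols the adversary can distinguish from the codeword.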

