
Approximation of information divergences for statistical learning with applications

In: Mathematica Slovaca, vol. 68, no. 5
Milan Stehlík - Ján Somorčík - Luboš Střelec - Jaromír Antoch

Details:

Year, pages: 2018, 1149-1172
Keywords:
deconvolution, information divergence, likelihood, change in intensity of Poisson process
About article:
In this paper we give a partial response to one of the most important statistical questions, namely, what optimal statistical decisions are and how they relate to (statistical) information theory. We illustrate why it is necessary to understand the structure of information divergences and their approximations, which can in particular be approached through deconvolution. Deconvolution of information divergences is illustrated in the exponential family of distributions, leading to tests that are optimal in the Bahadur sense. We provide a new approximation of $I$-divergences using the Fourier transform, a saddle point approximation, and uniform convergence of Euler polygons. Uniform approximation of the deconvoluted parts of $I$-divergences is also discussed. Our approach is illustrated with a real-data example.
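The paper's new approximation of $I$-divergences (via the Fourier transform, saddle point approximation, and Euler polygons) is developed in the article itself; as a minimal, self-contained illustration of the quantity being approximated, the Python sketch below computes the closed-form $I$-divergence (Kullback-Leibler divergence) between two exponential distributions, a standard member of the exponential family treated in the paper, and checks it against numerical integration. The function names and rate values are illustrative assumptions, not taken from the article.

import numpy as np
from scipy.integrate import quad

def kl_exponential(lam0, lam1):
    # Closed form: KL(Exp(lam0) || Exp(lam1)) = ln(lam0/lam1) + lam1/lam0 - 1
    return np.log(lam0 / lam1) + lam1 / lam0 - 1.0

def kl_numeric(lam0, lam1):
    # Sanity check: integrate f0(x) * ln(f0(x)/f1(x)) over (0, infinity),
    # writing the log ratio analytically to avoid underflow at large x.
    f0 = lambda x: lam0 * np.exp(-lam0 * x)
    log_ratio = lambda x: np.log(lam0 / lam1) + (lam1 - lam0) * x
    val, _ = quad(lambda x: f0(x) * log_ratio(x), 0.0, np.inf)
    return val

lam0, lam1 = 2.0, 0.5                # illustrative rates, not values from the paper
print(kl_exponential(lam0, lam1))    # ~0.6363
print(kl_numeric(lam0, lam1))        # agrees up to integration tolerance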
How to cite:
ISO 690:
Stehlík, M., Somorčík, J., Střelec, L., Antoch, J. 2018. Approximation of information divergences for statistical learning with applications. In Mathematica Slovaca, vol. 68, no. 5, pp. 1149-1172. ISSN 0139-9918. DOI: https://doi.org/10.1515/ms-2017-0177

APA:
Stehlík, M., Somorčík, J., Střelec, L., & Antoch, J. (2018). Approximation of information divergences for statistical learning with applications. Mathematica Slovaca, 68(5), 1149-1172. ISSN 0139-9918. https://doi.org/10.1515/ms-2017-0177
About edition:
Published: 31. 10. 2018