
Kybernetika 34 no. 4, 485-494, 1998

About the maximum information and maximum likelihood principles

Igor Vajda and Jiří Grim

Abstract:

Neural networks with radial basis functions are considered, together with the Shannon information in their output concerning the input. The role of information-preserving input transformations is discussed when the network is specified by the maximum information principle and by the maximum likelihood principle. A transformation is found which simplifies the input structure in the sense that it minimizes the entropy within the class of all information-preserving transformations. Such a transformation need not be unique: under some assumptions it may be any minimal sufficient statistic.
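
For orientation, the two principles mentioned in the abstract can be sketched in standard information-theoretic notation (this notation is not taken from the paper itself and the paper's exact formulation may differ). The maximum information principle selects the network parameters $\theta$ so as to maximize the Shannon mutual information between the input $X$ and the network output $Y$,

$$\hat\theta_{\mathrm{MI}} \in \arg\max_\theta \, I_\theta(X;Y) = \arg\max_\theta \big[ H_\theta(Y) - H_\theta(Y \mid X) \big],$$

while the maximum likelihood principle selects $\theta$ to maximize the likelihood of observed inputs $x_1,\dots,x_n$ under the model density $p(x;\theta)$,

$$\hat\theta_{\mathrm{ML}} \in \arg\max_\theta \sum_{i=1}^n \log p(x_i;\theta).$$

In this setting an input transformation $T$ may be called information-preserving when $I(T(X);Y) = I(X;Y)$; the result announced in the abstract concerns the transformation with minimal entropy $H(T(X))$ within this class.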