I, AIXI

September 10, 2009

A cool new eprint has appeared:

J. Veness, et al. A Monte Carlo AIXI Approximation. eprint: 0909.0801

The question that defines the context for this article is: How should probabilities be assigned? One way, with much to recommend it, is to take them to be algorithmic probabilities, or universal priors. Suppose one has observed the first N values of a discrete time series, maybe a byte stream, and wishes to predict or make a bet about the next value. Is there a general probability measure appropriate for all cases that fit this abstract setting and, if so, which one?
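
One standard answer, due to Solomonoff, can be sketched in rough form (my notation, glossing over technicalities such as the choice of universal machine and semimeasure normalization): weight every program for a universal prefix Turing machine U by two to the minus its length, and predict by conditioning:

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad M(x_{N+1} \mid x_{1:N}) = \frac{M(x_{1:N}\, x_{N+1})}{M(x_{1:N})}

Here the sum runs over programs p whose output starts with the string x, and \ell(p) is the length of p in bits. The AIXI agent that the eprint approximates uses this kind of universal prior as its model of the environment.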


A priori bias in the Dembski-Marks representation

September 8, 2009

Dembski and Marks (2009b) recently published a minimalistic (and simplistic) representation of a search problem, consisting of a search space \Omega and a distinguished target set T \subset \Omega (blogger reactions: 1, 2, 3, 4, 5, 7). From the discussion in the paper and two other articles it is clear that the authors’ object of study is related to, though not equivalent to, the issues raised by Wolpert and Macready’s No Free Lunch theorems. Despite its minimalistic character, the Dembski-Marks representation is no less restrictive than the Wolpert-Macready representation of a search/optimization problem. The Dembski-Marks representation, i.e., a distinguished target in a search space, can easily be introduced as an extra feature in the Wolpert-Macready representation, but the full Wolpert-Macready representation cannot be recovered within the Dembski-Marks representation; indeed, the absence of a constant, distinguished target set is a prerequisite for all No Free Lunch theorems. It is therefore interesting to estimate how much of a restriction it is to make the Wolpert-Macready representation conform to the Dembski-Marks representation.
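
To make the first embedding concrete, here is one natural construction (my gloss, not a formula from their paper): a Wolpert-Macready problem comes equipped with an objective function f on \Omega (say, real-valued), and a Dembski-Marks target can then be carved out as a superlevel set,

T = f^{-1}([\theta, \infty)) = \{ x \in \Omega : f(x) \ge \theta \}

for some threshold \theta. Going the other way, a bare pair (\Omega, T) determines at most the indicator function of T, so the full generality of an arbitrary objective function is lost.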


Clusters and tree structure from genealogical data

September 6, 2009

Modern biology provides a wealth of interesting mathematical challenges in the modeling and reconstruction of evolution. A new eprint explores the theoretical prospects for defining a phylogenetic tree structure, despite complications like lateral gene transfer, hybridization, and the difference between gene trees and species trees:

A. Dress, et al. Species, Clusters and the ‘Tree of Life’: A graph-theoretic perspective. eprint: 0908.2885
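
As background for why clusters are the natural objects here (this is standard phylogenetic combinatorics, not necessarily the framework of the eprint): a collection \mathcal{C} of nonempty subsets of a taxon set X is called a hierarchy if

A \cap B \in \{ \emptyset, A, B \} \quad \text{for all } A, B \in \mathcal{C},

and hierarchies containing X and all singletons correspond one-to-one to rooted trees with leaf set X. The complications listed above tend to produce cluster systems that violate this nesting condition, which is one motivation for a more general graph-theoretic perspective.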



Entropy decrease results in memory loss

September 2, 2009

I found a fun arrow-of-time paper in PRL, arguing that dynamical decreases in the entropy of an isolated system are not at all impossible, just impossible to remember!

L. Maccone. Quantum Solution to the Arrow-of-Time Dilemma, Phys. Rev. Lett. 103:080401 (2009), arXiv:0802.0438

Abstract: The arrow-of-time dilemma states that the laws of physics are invariant for time inversion, whereas the familiar phenomena we see everyday are not (i.e., entropy increases). I show that, within a quantum mechanical framework, all phenomena which leave a trail of information behind (and hence can be studied by physics) are those where entropy necessarily increases or remains constant. All phenomena where the entropy decreases must not leave any information of their having happened. This situation is completely indistinguishable from their not having happened at all. In the light of this observation, the second law of thermodynamics is reduced to a mere tautology: physics cannot study those processes where entropy has decreased, even if they were commonplace.

While interesting, I wonder whether this approach is really as promising as it first appears. The author proves his Eq. (2), stating that the sum of the entropy changes in a system A, containing the observer, and another system C equals the entropy change in a reservoir plus the change in the total amount of correlation (i.e., mutual information) between A and C. In the most interesting case, when the entropy of the reservoir is constant, any entropy decrease in A and C must therefore come at the expense of a decrease in the correlation between A and C. According to the author, this means that an entropy decrease in C automatically results in the observer in system A losing any memories or records (a memory or record being a kind of correlation) of C’s previous, higher-entropy state.

However, very little correlation (a very small memory or record) is needed to retain, say, just the numerical values of C’s entropy at different times. For all we know, entropy decreases in C could come at the expense of other, more detailed correlations between A and C, while leaving memories of the measured entropies intact. It thus seems that much more work is needed to establish that entropy decreases really are unobservable (because impossible to remember). The Phys. Rev. Focus commentary also hints at this problem.
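
Schematically, and in my notation rather than the paper’s, the balance described above reads

\Delta S_A + \Delta S_C = \Delta S_R + \Delta I(A{:}C), \qquad I(A{:}C) = S_A + S_C - S_{AC},

so with \Delta S_R = 0 a joint entropy decrease in A and C must be matched by an equal decrease in the mutual information I(A{:}C). The objection is then quantitative: a record of k measured entropy values at b bits of precision each ties up only on the order of kb bits of correlation, which can be a vanishingly small fraction of the total, so nothing in the balance equation itself forces that particular record to be erased.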