April 23, 2010
A remarkable eprint just appeared: T. Bolognesi, Causal sets from simple models of computation, arXiv:1004.3128.
In the theory of relativity, the relation “event A happened before event B” is not a total ordering: when events have space-like separation, “before” is observer-dependent. Restricting attention to a partial ordering that applies only to pairs of events with time-like separation yields an observer-independent relation.
The theory of relativity is concerned with continuous, smooth manifolds, while computation is usually conceived of in terms of a discretized state space. To connect aspects of the two, the new eprint defines a partial ordering on computational events (e.g., state transitions in a Turing machine) that represents a “before” relation. The resulting visualizations of these partial orderings as graphs are intriguing. Whether or not one believes this sort of mathematical structure will find useful applications in physics, the eprint is worth a read for the visualizations alone.
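As a minimal sketch of the idea (not Bolognesi’s exact construction), one can run a tiny Turing machine and record two kinds of causal edges per step: one from the immediately preceding step (control flow) and one from the event that last wrote the tape cell now being read (data dependence). The transition table below is hypothetical, chosen only to keep the machine busy; the transitive closure of the edges is a partial “before” ordering.

```python
from collections import defaultdict

# Hypothetical 2-state, 2-symbol machine: (state, symbol) -> (write, move, new_state)
RULES = {
    (0, 0): (1, +1, 1),
    (0, 1): (1, -1, 1),
    (1, 0): (1, -1, 0),
    (1, 1): (0, +1, 0),
}

def causal_edges(steps=20):
    tape = defaultdict(int)   # blank tape of 0s
    last_writer = {}          # tape cell -> event that last wrote it
    edges = set()
    state, pos = 0, 0
    for event in range(steps):
        if event > 0:
            edges.add((event - 1, event))          # control-flow edge
        if pos in last_writer:
            edges.add((last_writer[pos], event))   # data-dependence edge
        write, move, state = RULES[(state, tape[pos])]
        tape[pos] = write
        last_writer[pos] = event
        pos += move
    return edges

edges = causal_edges()
```

Because every edge points from an earlier event to a later one, the relation is automatically acyclic; antisymmetry and transitivity of the induced order come for free, just as for time-like separation in relativity.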
January 2, 2010
In 1995 and the following few years, Wolpert and Macready published technical reports and a journal article on what has come to be called the No Free Lunch theorems. These theorems say that, averaged over all possible objective functions, every optimization algorithm yields the same performance; equivalently, all algorithms perform alike on a completely random function. Read the rest of this entry »
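The “averaged over all functions” claim can be checked exhaustively in a toy setting. This is an illustration of the NFL flavor under simplifying assumptions (fixed, non-revisiting search orders; the full theorems cover adaptive algorithms too): over all 81 functions from a 4-point domain to {0, 1, 2}, any two search orders achieve the same average best-so-far value after k evaluations, for every k.

```python
from itertools import product

orders = [(0, 1, 2, 3), (3, 1, 0, 2)]   # two arbitrary non-revisiting strategies

def avg_best_after(order, k):
    """Average, over all functions f: {0,..,3} -> {0,1,2}, of the best value
    found after evaluating f at the first k points of the search order."""
    total = count = 0
    for f in product(range(3), repeat=4):   # enumerate all 3**4 = 81 functions
        total += max(f[x] for x in order[:k])
        count += 1
    return total / count

for k in (1, 2, 3, 4):
    assert avg_best_after(orders[0], k) == avg_best_after(orders[1], k)
```

The equality follows from symmetry: averaged over all functions, any k distinct query points see the same joint distribution of values, so where the algorithm looks cannot matter.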
September 10, 2009
A cool new eprint has appeared:
J. Veness et al., A Monte Carlo AIXI Approximation, arXiv:0909.0801.
The question that defines the context for this article is: how should probabilities be assigned? One way, with much to recommend it, is to take them to be algorithmic probabilities, or universal priors. Suppose one has observed the first N values of a discrete time series, perhaps a byte stream, and wishes to predict or make a bet about the next value. Is there a general probability measure appropriate for all cases that fit this abstract setting and, if so, which one? Read the rest of this entry »
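The universal prior itself is incomputable, so practical approximations (such as the context-tree model used in MC-AIXI) replace it with a computable Bayesian mixture over a restricted model class. The following is a minimal sketch of that mixture idea, not the paper’s algorithm: two hypothetical model classes, an order-0 and an order-1 binary Markov model, each with a Krichevsky–Trofimov estimator, combined under a uniform prior.

```python
def kt_prob(seq, order):
    """Sequential KT probability of a binary sequence under a fixed-order Markov model."""
    counts = {}
    p = 1.0
    for i, bit in enumerate(seq):
        ctx = tuple(seq[max(0, i - order):i]) if order else ()
        zeros, ones = counts.get(ctx, (0, 0))
        p_one = (ones + 0.5) / (zeros + ones + 1.0)   # KT estimator
        p *= p_one if bit else (1.0 - p_one)
        counts[ctx] = (zeros + (bit == 0), ones + (bit == 1))
    return p

def predict_next(seq):
    """Posterior-predictive probability that the next bit is 1,
    mixing the order-0 and order-1 models with equal prior weight."""
    num = den = 0.0
    for order in (0, 1):
        num += 0.5 * kt_prob(seq + [1], order)
        den += 0.5 * kt_prob(seq, order)
    return num / den

# On an alternating sequence the order-1 model earns nearly all the
# posterior weight; since the last bit was 1, it expects a 0 next.
p = predict_next([0, 1] * 8)
```

The mixture automatically shifts its weight toward whichever model compresses the data best, a finite shadow of what the universal prior does over all computable models.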
July 9, 2009
A new eprint (arXiv:0907.0455) reports a computational study of the Peter Principle. Many who, like me, had never heard of the Peter Principle before will be amused to learn that it asserts that employees climb the career ladder until they reach a maximal level of incompetence with respect to their work tasks. I’m sure many workers will find support for this in their own experiences.
The principle’s underlying assumption is that an employee’s competence at one level of the career ladder is statistically independent of his or her competence at the next level. Thus, promoting the best does no good, as their competence at the next rung of the ladder is random. Read the rest of this entry »
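A minimal sketch of the contrast (with illustrative parameters, not those of the eprint): under a “common sense” hypothesis, competence carries over to the new level, so promoting the best of a team yields a highly competent promotee; under the Peter hypothesis, competence at the new level is an independent redraw, and the strategy buys nothing.

```python
import random

def avg_promoted_competence(transfers, trials=20000, team=10, seed=1):
    """Average competence, at the NEW level, of the best-of-team promotee."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        best = max(rng.random() for _ in range(team))  # promote the most competent
        # "common sense": competence transfers; Peter hypothesis: independent redraw
        total += best if transfers else rng.random()
    return total / trials

common_sense = avg_promoted_competence(transfers=True)   # near E[max of 10] = 10/11
peter = avg_promoted_competence(transfers=False)         # near 0.5, strategy irrelevant
```

Under the transfer hypothesis the promotee’s expected competence is the expected maximum of 10 uniform draws, 10/11 ≈ 0.91; under the independence hypothesis it collapses to 0.5, the same as promoting anyone at random.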