Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World
How does life prosper in a complex and erratic world? While we know that nature follows patterns—such as the law of gravity—our everyday lives are beyond what known science can predict. We nevertheless muddle through even in the absence of theories of how to act. But how do we do it?
In Probably Approximately Correct, computer scientist Leslie Valiant presents a masterful synthesis of learning and evolution to show how both individually and collectively we not only survive, but prosper in a world as complex as our own. The key is “probably approximately correct” algorithms, a concept Valiant developed to explain how effective behavior can be learned. The model shows that pragmatically coping with a problem can provide a satisfactory solution in the absence of any theory of the problem. After all, finding a mate does not require a theory of mating. Valiant’s theory reveals the shared computational nature of evolution and learning, and sheds light on perennial questions such as nature versus nurture and the limits of artificial intelligence.
Offering a powerful and elegant model that encompasses life’s complexity, Probably Approximately Correct has profound implications for how we think about behavior, cognition, biological evolution, and the possibilities and limits of human and machine intelligence.
Newton's place in physics is without parallel, not because he described gravity or made any other particular discovery, but because it was through his work that it became accepted that the physical world obeys laws that can be described by mathematical equations, and that solving these equations can yield accurate predictions of what will happen in the future. Newton's theories not only had immediate generality, in that they applied very broadly to mechanical systems; they also had a higher-level supergenerality in…
As pointed out earlier, learning is based on a deep interplay of computational and statistical phenomena. If there are limits to learning, these are the directions in which they will be found. First consider statistical limits, which, though weak, are significant. These impose a condition on the minimum number of training examples needed in order to learn reliably. This number does depend on the distribution: for easy distributions, high accuracy can be reached with few examples. An extreme case of an…
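The statistical limit alluded to here has a standard worst-case form in PAC theory: for a finite class of candidate hypotheses, the number of examples sufficient to learn reliably grows with the desired accuracy and confidence. A minimal sketch of that classical distribution-free bound (the formula is standard PAC theory rather than quoted from this excerpt, and the numbers below are purely illustrative; as the text notes, easy distributions can get by with fewer examples than this worst case):

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Worst-case PAC bound for a finite hypothesis class of size |H|:
    with at least m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples,
    a learner that outputs any hypothesis consistent with the data is
    "probably" (confidence >= 1 - delta) "approximately" (error <= epsilon)
    correct, regardless of the example distribution.
    """
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Illustration: a million candidate hypotheses, tolerating 5% error
# with 99% confidence.
m = pac_sample_bound(10**6, epsilon=0.05, delta=0.01)
print(m)  # -> 369
```

Note how weakly the bound depends on the size of the hypothesis class: squaring the number of hypotheses only doubles the first term, since |H| enters through its logarithm.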
Can any mechanism account for this remarkable unfolding drama?

Chapter Seven
The Deducible

How can one reason with imprecise concepts?

True genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information.
WINSTON CHURCHILL

7.1 Reasoning

The tension between reasoning and learning has a long history, reaching back at least as far as Aristotle, who, as already mentioned, contrasted the “syllogistic and inductive” in his Posterior Analytics…
The relationship between the function this circuit computes and the outside reality is one of PAC semantics.

7.6 The Challenge of Grounding

Finally we arrive at the fourth challenge, which I call grounding. It is intimately related to both semantics and brittleness, and it deals, to put it simply, with two primary issues: the scope of the knowledge that is claimed to be represented, and the constraints of time, space, or other limitations within which the PAC semantics are to be accurate.
The categories of words a person is thinking about can be recovered from functional MRI images of blood flow in the brain.6 This recovery is achieved by standard learning algorithms applied to the images as examples and the categories of words as the labels. These images constitute a theoryless arena, since we understand so little about how knowledge is represented in the brain. Yet these images apparently abound in regularities that can be learned. This is an excellent illustration of the fact that learnable regularities may be found.
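The "standard learning algorithms" the passage describes are supervised classifiers: each image is an example, each word category a label. A minimal sketch of that setup, using a simple nearest-centroid rule on synthetic feature vectors (the vectors, categories, and values here are hypothetical stand-ins, not real fMRI data, and nearest-centroid is just one of many classifiers that fit the description):

```python
def train_centroids(examples):
    """examples: list of (feature_vector, label) pairs.
    Returns a dict mapping each label to the mean of its vectors."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, vec):
    """Assign vec the label of the nearest centroid (squared Euclidean)."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], vec))
    return min(centroids, key=dist2)

# Toy "activation patterns" for two hypothetical word categories:
training = [([0.9, 0.1], "tools"), ([0.8, 0.2], "tools"),
            ([0.1, 0.9], "animals"), ([0.2, 0.8], "animals")]
model = train_centroids(training)
print(predict(model, [0.85, 0.15]))  # -> tools
```

The point of the sketch is that nothing in it models the brain: the algorithm exploits whatever regularities separate the labeled examples, which is exactly why such methods work in a theoryless arena.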