Abstract:
This study examines the history of machine learning in the second half of the twentieth century. From the 1950s until the 1990s, the disunified forms of machine learning expanded what constituted “legitimate” and “efficacious” descriptions of society and physical reality by using computer learning to accommodate the variability of data and to spur creative and original insights. By the early 1950s, researchers saw “machine learning” as a solution for practical classification tasks involving uncertainty and variability; a strategy for producing original, creative insights in both science and society; and a way of making decisions in new contexts and situations when no causal explanation or model was available. Building on this earlier learning tradition, pattern recognition researchers from the mid-1950s to the late 1980s, who focused heavily on image classification and recognition tasks, equated the idea of “learning” in machine learning with a program’s capacity to identify what was “significant” and to redefine its objectives given new data in “ill-defined” systems. For these researchers, classification encompassed individual pattern recognition problems, the process of scientific inquiry, and, ultimately, all subjective human experience: they viewed each of these activities as a specific instance of generalized statistical induction. In treating classification as generalized induction, they cast pattern recognition as a method for acting in the world when one does not understand it. Seeing subjectivity and sensitivity to “contexts” as a virtue, pattern recognition researchers distinguished themselves from the better-known artificial intelligence community by emphasizing the values and assumptions they necessarily “smuggled in” to their learning programs. Rather than a bias to be removed, the explicit contextual subjectivity of machine learning, including its sensitivity to the idiosyncrasies of its training data, justified its use from the 1960s to the 1990s.