If Wu-Tang had come up in 2023 instead of 1993, they might have called it D.R.E.A.M., because data rules everything around me. Where our society once mediated power through strength of arms and purse strings, today’s world is driven by data, which feeds the algorithms that classify, silo, and sell us. These unaccountable and imperceptible black-box oracles decide who gets a mortgage, who gets bail, who finds love, and who has their children taken from them by the state.
In their new book, How Data Happened: A History From the Age of Reason to the Age of Algorithms, Columbia University professors Chris Wiggins and Matthew L. Jones explore how data is curated into actionable information and used to shape everything from our political views and social conventions to our military responses and economic activity. In the excerpt below, Wiggins and Jones look at the work of mathematician John McCarthy, then a junior professor at Dartmouth College, who single-handedly coined the term “artificial intelligence” as part of his pitch to secure summer research funding.
W. W. Norton
Excerpted from How Data Happened: A History From the Age of Reason to the Age of Algorithms by Chris Wiggins and Matthew L. Jones. Published by W. W. Norton. Copyright © 2023 by Chris Wiggins and Matthew L. Jones. All rights reserved.
The coining of “artificial intelligence”
The mathematician John McCarthy, an avid proponent of the symbolic approach, is often credited with inventing the term “artificial intelligence,” which he used in a 1955 proposal for a summer study aimed at “the long-term goal of achieving human level intelligence.” The “summer study” in question was titled the Dartmouth Summer Research Project on Artificial Intelligence, and the funding requested was from the Rockefeller Foundation. McCarthy, then a junior professor of mathematics at Dartmouth College, was aided in his pitch to Rockefeller by his former mentor Claude Shannon. As McCarthy explained the choice of term, Shannon thought “artificial intelligence” was too flashy a term that could attract unfavorable attention. But rather than keep working under the banner of existing fields such as automata studies (which encompassed “nerve nets” and Turing machines), McCarthy took a stand by declaring a new field. “So I decided not to fly any false flags anymore.” His ambitions were huge: the 1955 proposal claimed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
At the 1956 conference, which came to be known as the Dartmouth workshop, McCarthy ended up with more brain modelers than the kind of axiomatic mathematicians he had hoped for. The event brought together diverse, often contradictory efforts to make digital computers perform tasks considered intelligent, but, as the historian of artificial intelligence Jonnie Penn argues, the absence of psychological expertise at the workshop meant that its account of intelligence was informed “primarily by a group of specialists working outside the human sciences.” Each participant saw the roots of the enterprise differently. McCarthy recalled, “anybody who was there was pretty stubborn about pursuing the ideas that he had before he came, nor was there, as far as I could see, any real exchange of ideas.”
Like Turing’s 1950 paper, the 1955 proposal for the summer workshop on artificial intelligence seems, in retrospect, incredibly prescient. The seven problems that McCarthy, Shannon, and their collaborators proposed to study became major pillars of computer science and the field of artificial intelligence:
- “Automatic computers” (programming languages)
- “How can a computer be programmed to use a language” (natural language processing)
- “Neuron nets” (neural nets and deep learning)
- “Theory of the size of a calculation” (computational complexity)
- “Self-improvement” (machine learning)
- “Abstractions” (feature engineering)
- “Randomness and creativity” (Monte Carlo methods including stochastic learning)
In 1955, the term “artificial intelligence” named an aspiration rather than a commitment to any one method. AI, in this capacious sense, encompassed both attempts to discover what comprises human intelligence by trying to create machine intelligence and less philosophically fraught efforts simply to get computers to perform difficult activities that humans might attempt.
Only some of these aspirations fueled the effort that, in current usage, has become synonymous with artificial intelligence: the idea that machines can learn from data. Among computer scientists, learning from data would be deemphasized for generations.
Most of the first half century of artificial intelligence focused on combining logic with knowledge hard-coded into machines. Data collected from everyday activities was hardly the concern; it paled in prestige next to logic. In the last five years or so, artificial intelligence and machine learning have become nearly synonymous. Remembering that it didn’t have to be this way is a powerful thought exercise: for the first few decades of artificial intelligence, learning from data seemed like the wrong approach, an unscientific approach used by those unwilling to “just program” the knowledge into the computer. Before data reigned, rules did.
Despite their enthusiasm, most of the participants at the Dartmouth workshop produced few tangible results. One group was different. A team at the RAND Corporation, led by Herbert Simon and Allen Newell, had delivered the goods in the form of an automated theorem prover, an algorithm that could generate proofs of basic theorems of arithmetic and logic. But mathematics was only a test case for them. As the historian Hunter Heyck has stressed, the group started less from computing or mathematics than from the study of how to understand large bureaucratic organizations and the psychology of the people who solve problems within them. For Simon and Newell, human brains and computers were problem solvers of the same genus.
Our position is that the appropriate way to describe a piece of problem-solving behavior is in terms of a program: a specification of what the organism will do under varying environmental circumstances in terms of certain elementary information processes it is capable of performing. Digital computers come into the picture only because they can, by appropriate programming, be induced to execute the same sequences of information processes that humans execute when they are solving problems. Hence, as we shall see, these programs describe both human and machine problem solving at the level of information processes.
Simon and Newell provided many of the first major successes of early artificial intelligence, yet their focus remained the practical investigation of human organizations. They were concerned with human problem solving that blended what Jonnie Penn has called “a composite of early twentieth-century British symbolic logic and the American managerial logic of a hyper-rationalized organization.” Before adopting the moniker of AI, they positioned their work as the study of “information processing systems” comprising humans and machines alike, drawing on the best understanding of human reasoning of the time.
Simon and his collaborators were deeply involved in debates about the nature of human beings as reasoning creatures. Simon later received the Nobel Prize in Economics for his work on the limits of human rationality. Along with a bevy of postwar intellectuals, he was concerned with rebutting the notion that human psychology should be understood as animal-like reaction to positive and negative stimuli. Like others, he rejected the behaviorist vision of humans as driven almost automatically by reflexes, and of learning as primarily the accumulation of facts acquired through such experience. Great human capacities, like speaking a natural language or doing advanced mathematics, could never emerge from experience alone; to focus only on data was to misunderstand human spontaneity and intelligence. This generation of intellectuals, central to the development of cognitive science, stressed abstraction and creativity over the analysis of data, sensory or otherwise. As the historian Jamie Cohen-Cole explains, this emphasis was central to Simon and Newell’s Logic Theorist program, which did not merely grind through logical processes but deployed human-like “heuristics” to accelerate the search for the means to achieve ends. Scholars such as George Pólya, who had investigated how mathematicians solve problems, stressed the creativity involved in using heuristics to work through math problems. Mathematics, in this view, wasn’t drudgery; it wasn’t like doing long division over and over, or like crunching large volumes of data. It was a creative activity and, in the eyes of its makers, a bulwark against totalitarian visions of humanity, whether from the left or the right. (Nor, in this picture, did bureaucratic management have to be monotonous; it too could be a place for creativity, though don’t tell that to the employees.)
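It is worth pausing on what a “heuristic” buys a search program. The toy sketch below is our illustration, not the authors’ and not the Logic Theorist: a hypothetical puzzle (turn 1 into 1024 using only “add one” and “double”) solved first by blind, exhaustive search and then by a search guided by a crude guess of distance to the goal. The moves, the distance estimate, and the function names are all invented for the example; the point is only that an informed guess can sharply cut the number of possibilities examined.

```python
import heapq
from collections import deque

# Hypothetical toy problem (not the Logic Theorist): reach a target
# number from 1 using the moves "add 1" and "double".
MOVES = (lambda n: n + 1, lambda n: n * 2)

def blind_search(start, goal):
    """Breadth-first search: explores states strictly by depth."""
    frontier, seen, expanded = deque([start]), {start}, 0
    while frontier:
        state = frontier.popleft()
        expanded += 1
        if state == goal:
            return expanded
        for move in MOVES:
            nxt = move(state)
            if nxt <= goal and nxt not in seen:  # bound keeps the search finite
                seen.add(nxt)
                frontier.append(nxt)

def heuristic_search(start, goal):
    """Greedy best-first search: always expands the state whose crude
    distance estimate (goal - state) looks smallest."""
    frontier, seen, expanded = [(goal - start, start)], {start}, 0
    while frontier:
        _, state = heapq.heappop(frontier)
        expanded += 1
        if state == goal:
            return expanded
        for move in MOVES:
            nxt = move(state)
            if nxt <= goal and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (goal - nxt, nxt))

print("blind search expanded:", blind_search(1, 1024), "states")
print("heuristic search expanded:", heuristic_search(1, 1024), "states")
```

Running the sketch shows the guided search expanding a small fraction of the states the blind search does; Simon and Newell’s heuristics played the analogous role in the vastly larger search spaces of logical proof, which is the sense in which heuristics “accelerate the search for the means to achieve ends.”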