[ Music ] >> Humanity is at the beginning of the most profound technological and economic changes the world has ever seen. In From Computers to Artificial Intelligence: A History, we will trace the story of this remarkable transformation. In this first program, 1936 to 1951, we shall see how the conceptual framework and architecture of the modern computer came into existence, learn how the transistor became the breakthrough component of the digital computer, a breakthrough that allowed for the commercialization of computers, and see the emergence of the nascent idea that led to the possibility of artificial intelligence. [ Music ]

Alan Turing, born on the 23rd of June 1912 in London, England, was one of the great geniuses of the 20th century and is considered to be one of the fathers of the modern computer, a man who is best known to the public as a result of the 2014 biopic The Imitation Game. It stars Benedict Cumberbatch as Alan Turing, who used his computer skills to decrypt German intelligence messages for the British government during the Second World War. [ Music ]

However, Turing's great idea was the invention of what has become known as the universal Turing machine. Today, when a person opens their computer or cell phone, they know that there are many programs stored in memory waiting to be activated and perform a variety of designated tasks, such as showing the weather forecast or viewing a YouTube video. But in the early days of computing, such as with the American ENIAC computer, which helped solve the riddle of the atomic bomb during World War II, [ Music ] each task the computer was asked to perform required rewiring the machine by hand. Indeed, early on, it was thought that each task would require a different computer. [ Music ]

That all changed when Alan Turing came up with the concept of controlling a computing machine's operations by means of a program, a program made up of coded instructions stored in the computer's memory. By inserting different programs into the memory, the computing machine is made to carry out different computations. In other words, a single machine of fixed structure, by making use of coded instructions stored in memory, could change itself, chameleon-like, from a machine dedicated to one task into another machine dedicated to quite different tasks, [ Music ] a machine capable, in principle, of an endless number of different tasks. This seems so obvious now, but Turing's universal computing machine totally changed the world of computing. Indeed, Turing proved that his theoretical universal computing machine could, given enough time, carry out any computation that any other computing machine could. Turing would go on to make many contributions to the world of computing, mathematics, and artificial intelligence, including what is known as the Turing test, a test that could be used to decide whether or not a machine is intelligent. Most unfortunately for the scientific community and the world at large, as a gay man, Turing was harassed throughout his life, a life that ended on June 7, 1954, when Alan Turing took his own life. [ Music ]

Nearly every American white-collar worker since the 1960s has stood at a machine like this, placed a document on the glass surface, pushed a button, and waited for a miracle to happen, a miracle that could not have been imagined 60 years ago. Then it appears: a perfect photocopy of the original document. The process behind this miracle was the brainchild of a singular American inventor and entrepreneur.
This entrepreneur would go on to help establish the company whose name would come to stand for this process. The company is Xerox, and the inventor is Chester Floyd Carlson. Born in Seattle, Washington, in 1906, Carlson grew up interested in all manner of graphic arts, printing, and chemistry. After graduating from Caltech in 1930, he took various jobs in East Coast research labs, including Bell Labs. Eventually, he got a law degree and became a patent attorney. He noticed that the growth of business brought about by the Industrial Revolution had created a desperate need for a more efficient and inexpensive means of quickly duplicating documents than carbon paper and the mimeograph machine. Carlson reasoned that the answer lay in the new field of photoconductivity, a field investigated by Hungarian physicist Paul Selényi, who had discovered that when light strikes the surface of certain chemicals or metals, their electrical conductivity increases. Carlson realized that if the image of a document or photograph were projected onto a photoconductive surface, current would flow only through the areas where the light fell. He worked on his invention in the kitchen of his house in Queens, New York, and arrived at something he termed "xerography." This is the first image Carlson created with xerography, on October 22, 1938. However, because of the Depression, Carlson had a hard time finding investors for his new invention. He was turned down by IBM and the U.S. Army Signal Corps. It took him eight years to find an investor, the Haloid Photographic Company, which would later become the Xerox Corporation. Xerography was made commercially available in 1950 by the company that would become the Xerox Corporation. But it wasn't until 1958, when the first commercial push-button plain-paper photocopier, the Xerox 914, was introduced, that Carlson's copy machine changed the American workplace forever. Considered to be the most successful single product of all time, the Xerox 914's core technology has remained the same since Carlson invented it in the 1930s, and it would later be transferred to printers and scanners.

The first, and at the time highly secret, digital computer looked like this. Housed at the University of Pennsylvania, the Electronic Numerical Integrator and Computer, known as ENIAC, was the first programmable, electronic, general-purpose digital computer. It was Turing-complete and able to solve a large class of numerical problems through reprogramming. ENIAC's basic architecture, the same architecture used by every computer today, was designed by the brilliant mathematician John von Neumann in 1945.

Hungarian-American polymath and Manhattan Project mathematician John von Neumann is one of the most revered scientists in the history of computing. People who met him and interacted with him realized that his brain was more powerful than that of anyone else they had ever encountered. I remember Hans Bethe said, only half in jest, that von Neumann's brain was a new development of the human brain. During the war, von Neumann worked at Los Alamos National Lab on the mathematics of explosive shockwaves for the implosion-type atomic bomb. There, he worked with IBM mechanical tabulating machines, computing machines tailored for this one and only purpose. As von Neumann grew familiar with the tabulators, he began to imagine a more general machine, a machine that could handle far more general mathematical challenges, a machine that today we call a computer.
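To make the stored-program idea concrete, here is a minimal, hypothetical sketch in Python. The instruction format, the opcodes ADD, MUL, and HALT, and the memory layout are invented for illustration and do not correspond to any historical machine; the point is only that a single memory holds both the coded instructions and the data they operate on, so loading different instructions turns one fixed machine into quite different machines.

    def run(memory):
        # Execute coded instructions stored in the same memory as the data,
        # starting at address 0, until a HALT instruction is reached.
        pc = 0                                   # program counter
        while True:
            op, a, b, dest = memory[pc]          # fetch and decode the instruction
            if op == "HALT":
                return memory
            if op == "ADD":
                memory[dest] = memory[a] + memory[b]
            elif op == "MUL":
                memory[dest] = memory[a] * memory[b]
            pc += 1                              # move on to the next instruction

    # The same fixed machine, two different stored programs.
    adder      = {0: ("ADD", 3, 4, 5), 1: ("HALT", 0, 0, 0), 3: 6, 4: 7, 5: 0}
    multiplier = {0: ("MUL", 3, 4, 5), 1: ("HALT", 0, 0, 0), 3: 6, 4: 7, 5: 0}

    print(run(adder)[5])        # prints 13
    print(run(multiplier)[5])   # prints 42

Feeding the adder program and the multiplier program to the same run function yields 13 and 42: one piece of fixed "hardware," two quite different machines, depending only on what is stored in memory.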
Von Neumann's idea of what a working computer should be was set out in a report on the architecture of such a machine. Today, that architecture is still called the von Neumann architecture. This is what von Neumann said was necessary for a machine to be a computer: there must be a processing unit that contains an arithmetic logic unit and processor registers, a control unit that contains an instruction register and a program counter, a memory that stores data and instructions, external mass storage, and input and output mechanisms. John von Neumann's dream of a general machine became a reality with the building of ENIAC. Towards the end of his short life, von Neumann envisioned independent, self-replicating analog robots that would improve themselves with each new generation, adding greatly to the future possibilities of artificial intelligence and space exploration.

Early computers used tubes like this for processing information. Called vacuum tubes, they were turned on or off to represent zeros and ones. As a result, computers were huge, required massive amounts of energy, and filled giant air-conditioned rooms. Then, in 1947, the modern transistor was invented at Bell Labs, an invention that changed the world. Without the transistor, what would the digital economy look like? Would it exist at all? Would Microsoft, Google, and Facebook have become business giants? Would computer geeks have become cool rich guys, driving BMWs and becoming the wealthiest people the world has ever seen? Not likely.

American physicists John Bardeen and Walter Brattain, working at Bell Labs under William Shockley, developed what was called a point-contact transistor. A point-contact transistor is a semiconductor device that can amplify an electrical signal or switch an electrical signal into one of two positions, on or off. Semiconductors are materials, germanium and silicon being the simplest, that fall between good electrical conductors such as copper and insulators like rubber. Semiconductors also have interesting properties in relation to light and heat. Using these principles of semiconductors, Walter Brattain describes the invention of the transistor.

We finally decided that by using an electrolyte, one should be able to make, in principle, an amplifier out of the semiconductor. One morning, after we had discussed the results of this type of experiment, John Bardeen walked into my office and suggested a geometry for producing an amplifier using the electrolyte.

Bardeen suggested that on a piece of germanium, they place a drop of electrolyte, in this case water. Then they would wax a metal wire and push it through the water to make contact with the germanium. The wax would insulate the wire from the water. Another point would make contact with the water. Potentials would be supplied between the water and the germanium, and also between the wax point and the germanium. It was thought that the potential on the electrolyte would influence the flow of current between the wax point and the germanium. This experiment worked. They had produced a semiconductor amplifier. This experiment was the key that led to the invention of the transistor. But there were several problems. One of them was that the water evaporated too quickly. So they changed the electrolyte to glycol borate, which evaporates very slowly. This worked quite well, but the device would not amplify above eight cycles per second. They were quite sure that the electrolyte was responsible for the slow response, so they tried replacing the electrolyte and its contact with a spot of gold.
Instead of the wax point, they placed a wire contact near the edge of the gold spot. It was then that they observed an entirely new phenomenon, now called the transistor effect. Four days later, on December 23, 1947, utilizing the transistor effect, the very first transistor was made.

Foreshadowing how fast digital technology would accelerate, by 1953 the first high-performance germanium transistors, capable of switching at 60 megahertz, were created. They were first used for transistor car radios, but were also suited to high-speed computing. The first silicon transistor was fabricated a year later, in 1954, again by Bell Labs. And later that year, Texas Instruments created the first commercial, mass-produced silicon transistor. By 2019, the largest transistor count in a commercially available microprocessor was put at over 39 billion. As a result of their work, Shockley, Bardeen, and Brattain shared the 1956 Nobel Prize in Physics for the invention of the transistor.

Supercomputers, the personal computer, tablets, cell phones, and smart technology inserted into thousands of items such as self-driving cars: they are all the offspring of the first digital computer, and they are the miracle of the modern-day digital economy. Indeed, the digital computer is the greatest commercial product the world has ever seen. However, in the late 1940s, it was not at all clear that digital computers would have any commercial applications. That changed with the introduction in 1950 of the UNIVAC I, the first commercially developed computer. It was a computer that attracted widespread public attention when it successfully predicted Eisenhower's presidential election landslide in 1952. Manufactured by Remington Rand, the UNIVAC I was created by J. Presper Eckert and John Mauchly, designers of the earlier ENIAC computer. UNIVAC I used 5,200 vacuum tubes and weighed 29,000 pounds. A product video demonstrates this computational behemoth.

Until tonight. We've brought you down here to Remington Rand's New York Computing Center in order that we might show you that something actually is being done about it. The hero of our story tonight is a giant electronic brain developed by Remington Rand, UNIVAC. Now, recent experiments show that future use of UNIVAC may give us faster and more accurate weather predictions than were ever possible before. You see, UNIVAC can take the past histories of thousands and thousands of storms, analyze them, compare them with developing conditions, and make predictions, all in a matter of minutes, calculations that would ordinarily take hundreds of man-hours to complete. Data from guided weather rockets, radar observation stations, weather stations, all of this can be fed into the computer through these magnetic tapes at a rate of 12,000 numbers or letters per second. The memory tanks make immediately available 12,000 additional units of information. And all of this complex weather data is analyzed in the heart of the UNIVAC, the electronic central computer, capable of making over 2,000 mathematical calculations per second.

Remington Rand eventually sold 46 UNIVAC I machines for more than $1 million each, truly the beginning of the computer-based digital economy. Marvin Minsky was an American cognitive and computer scientist whose lifelong research on artificial intelligence at the Massachusetts Institute of Technology's AI Laboratory earned him the title of "the Father of Artificial Intelligence."
As a student, Minsky had dreamed of producing machines that could learn, by providing them with memory neurons connected by synapses. The machine would also have to possess a memory of the past in order to function efficiently when faced with different situations. Artificial neural nets and deep learning, a branch of machine learning, have produced the greatest leap in artificial intelligence in the 21st century. This breakthrough started in 1951, when Marvin Minsky teamed with graduate student Dean Edmonds to build the first artificial neural network in Harvard's computer lab. They designed the first 40-neuron neural computer, SNARC, the Stochastic Neural Analog Reinforcement Computer, with synapses that adjusted their weights according to the success of performing a specified task. The machine was built of tubes, motors, and clutches, and it successfully modeled the behavior of a rat in a maze searching for food.

It must have been in 1949. I was working on this theory of making a reinforcement-based, a Skinner-based, reinforcement learning theory, except it would have a random network of neurons and a way of changing the simulated synapses with little motors and potentiometers and things like that. So I designed this machine, which would be a randomly wired neural network learning machine.

Minsky describes how this machine, consisting of a labyrinth of valves, small motors, gears, and wires, linked up the various neurons.

We designed this thing, and the machine was a rack of 40 of these. So it was about as big as a grand piano and full of racks of equipment. And here is a machine that has a memory, which is the probability that if a signal comes into one of these inputs, another signal will come out of the output. And the probability that that will happen goes from zero, if this volume control is turned down, to one, if it's turned all the way up.

This first practical example included numerous connections between its various neurons, acting like a mammalian nervous system able to overcome any interruption of information caused by one of the neurons failing. As the 20th century came to an end, neural nets became an afterthought during what is called the first artificial intelligence winter, even as the Internet exploded. However, neural nets have staged a remarkable comeback in the 21st century. Marvin Minsky would go on to make major contributions to the field of artificial intelligence and robotics before passing away in 2016 at the age of 88.

Thanks for watching From Computers to Artificial Intelligence, A History. In the next program, 1952 to 1964, the term artificial intelligence is introduced, and it becomes clear that the digital world is progressing exponentially. [ Music ]
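The reinforcement principle Minsky describes above, turning the "volume control" of a synapse up when it was active during a success, can be sketched in a few lines of Python. This is a toy illustration under assumed details, not a reconstruction of SNARC's circuitry: the 40-synapse layout, the 0.1 adjustment step, and the stand-in task that rewards any trial in which synapse 0 fires are all invented for the example.

    import random

    random.seed(1)
    NUM_SYNAPSES = 40                       # SNARC had roughly 40 such units
    weights = [0.5] * NUM_SYNAPSES          # each synapse starts at a 50/50 chance

    def trial(weights):
        # Fire the network once: each synapse passes a signal with its current probability.
        return [random.random() < w for w in weights]

    def reinforce(weights, fired, reward, step=0.1):
        # Synapses active on a rewarded trial are nudged toward 1;
        # synapses active on an unrewarded trial are nudged toward 0.
        new_weights = []
        for w, f in zip(weights, fired):
            if f and reward:
                w = min(1.0, w + step)
            elif f and not reward:
                w = max(0.0, w - step)
            new_weights.append(w)
        return new_weights

    # Stand-in task: the "maze" rewards any trial in which synapse 0 fires.
    for _ in range(50):
        fired = trial(weights)
        weights = reinforce(weights, fired, reward=fired[0])

    print(round(weights[0], 2))             # has drifted toward 1.0

Because only trials in which synapse 0 fires are rewarded, that synapse's pass-through probability climbs toward 1 over the 50 trials, which is the essence of the weight adjustment SNARC performed with its motors and potentiometers.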