URL: https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

August 28, 2017

Blog, Special Edition on Artificial Intelligence


THE HISTORY OF ARTIFICIAL INTELLIGENCE

by Rockwell Anyoha


CAN MACHINES THINK?

In the first half of the 20th century, science fiction familiarized the world
with the concept of artificially intelligent robots. It began with the
“heartless” Tin Man from The Wizard of Oz and continued with the humanoid robot
that impersonated Maria in Metropolis. By the 1950s, we had a generation of
scientists, mathematicians, and philosophers for whom the concept of artificial
intelligence (or AI) was culturally familiar. One such person was Alan Turing, a
young British polymath who explored the mathematical possibility of artificial
intelligence. Turing reasoned that humans use available information, as well as
reason, to solve problems and make decisions, so why couldn’t machines do the
same? This was the logical framework of his 1950 paper, Computing Machinery and
Intelligence, in which he discussed how to build intelligent machines and how
to test their intelligence.


MAKING THE PURSUIT POSSIBLE

Unfortunately, talk is cheap. What stopped Turing from getting to work right
then and there? First, computers needed to fundamentally change. Before 1949,
computers lacked a key prerequisite for intelligence: they could not store
commands, only execute them. In other words, computers could be told what to do
but couldn’t remember what they did. Second, computing was extremely expensive.
In the early 1950s, the cost of leasing a computer ran up to $200,000 a month.
Only prestigious universities and big technology companies could afford to
dillydally in these uncharted waters. A proof of concept, along with advocacy
from high-profile people, was needed to persuade funding sources that machine
intelligence was worth pursuing.


THE CONFERENCE THAT STARTED IT ALL

Five years later, the proof of concept arrived in the form of Allen Newell,
Cliff Shaw, and Herbert Simon’s Logic Theorist. The Logic Theorist was a
program designed to mimic the problem-solving skills of a human and was funded
by the Research and Development (RAND) Corporation. It is considered by many to
be the first artificial intelligence program, and it was presented at the
Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted
by John McCarthy and Marvin Minsky in 1956. At this historic conference,
McCarthy, imagining a great collaborative effort, brought together top
researchers from various fields for an open-ended discussion on artificial
intelligence, a term he coined at the event itself. Sadly, the conference fell
short of McCarthy’s expectations: people came and went as they pleased, and
attendees failed to agree on standard methods for the field. Despite this,
everyone wholeheartedly shared the sentiment that AI was achievable. The
significance of this event cannot be overstated, as it catalyzed the next
twenty years of AI research.


ROLLER COASTER OF SUCCESS AND SETBACKS

From 1957 to 1974, AI flourished. Computers could store more information and
became faster, cheaper, and more accessible. Machine learning algorithms also
improved, and people got better at knowing which algorithm to apply to their
problem. Early demonstrations such as Newell and Simon’s General Problem Solver
and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem
solving and the interpretation of spoken language, respectively. These
successes, as well as the advocacy of leading researchers (namely the attendees
of the DSRPAI), convinced government agencies such as the Defense Advanced
Research Projects Agency (DARPA) to fund AI research at several institutions.
The government was particularly interested in machines that could transcribe
and translate spoken language, as well as in high-throughput data processing.
Optimism was high and expectations were even higher. In 1970, Marvin Minsky
told Life magazine, “from three to eight years we will have a machine with the
general intelligence of an average human being.” However, while the basic proof
of principle was there, there was still a long way to go before the end goals
of natural language processing, abstract thinking, and self-recognition could
be achieved.
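
ELIZA worked by simple keyword pattern matching rather than any real
understanding of language. As a rough, hypothetical illustration of that idea
(Weizenbaum’s original DOCTOR script was written in MAD-SLIP; the rules below
are invented for the example, not his), a minimal sketch in Python might look
like this:

import re

# Toy ELIZA-style responder: match a keyword pattern and echo part of the
# input back as a question. Illustrative rules only.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am worried about my work"))
# -> "How long have you been worried about my work?"

A real ELIZA also reflected pronouns (turning “my” into “your”), which this
sketch omits; the point is only that a handful of canned patterns can create a
surprisingly strong illusion of conversation.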



Breaching the initial fog of AI revealed a mountain of obstacles. The biggest
was the lack of computational power to do anything substantial: computers
simply couldn’t store enough information or process it fast enough. In order to
communicate, for example, one needs to know the meanings of many words and
understand them in many combinations. Hans Moravec, a doctoral student of
McCarthy at the time, stated that “computers were still millions of times too
weak to exhibit intelligence.” As patience dwindled, so did the funding, and
research slowed to a crawl for ten years.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic
toolkit and a boost of funds. John Hopfield and David Rumelhart popularized
“deep learning” techniques, which allowed computers to learn from experience.
Edward Feigenbaum, on the other hand, introduced expert systems, which mimicked
the decision-making process of a human expert. Such a program would ask an
expert in a field how to respond in a given situation, and once this was
learned for virtually every situation, non-experts could receive advice from
the program. Expert systems were widely used in industry. The Japanese
government heavily funded expert systems and other AI-related endeavors as part
of its Fifth Generation Computer Project (FGCP). From 1982 to 1990, it invested
$400 million with the goals of revolutionizing computer processing,
implementing logic programming, and improving artificial intelligence.
Unfortunately, most of the ambitious goals were not met. However, it could be
argued that the indirect effects of the FGCP inspired a talented young
generation of engineers and scientists. Regardless, funding for the FGCP
ceased, and AI fell out of the limelight.
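
The “ask the expert, store the rule, replay the advice” loop described above
is, at its core, a collection of if-then rules checked against observed facts.
Here is a minimal sketch of that mechanism (hypothetical rules and names for
illustration only; real systems of the era, such as MYCIN or XCON, held
thousands of expert-supplied rules and a proper inference engine):

# Minimal rule-based adviser: each rule pairs a set of required facts with an
# expert conclusion. The rules are made-up examples, standing in for knowledge
# elicited from a human expert.
RULES = [
    ({"engine_cranks": False, "battery_dead": True}, "replace the battery"),
    ({"engine_cranks": True, "fuel_empty": True}, "refuel the car"),
    ({"engine_cranks": True, "fuel_empty": False}, "check the spark plugs"),
]

def advise(facts: dict) -> str:
    """Return the first conclusion whose conditions all match the observed facts."""
    for conditions, conclusion in RULES:
        if all(facts.get(key) == value for key, value in conditions.items()):
            return conclusion
    return "no applicable rule; consult a human expert"

print(advise({"engine_cranks": True, "fuel_empty": True}))  # -> "refuel the car"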

Ironically, in the absence of government funding and public hype, AI thrived.
During the 1990s and 2000s, many of the landmark goals of artificial
intelligence were achieved. In 1997, reigning world chess champion and
grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing
computer program. This highly publicized match was the first time a reigning
world chess champion lost to a computer, and it served as a huge step toward an
artificially intelligent decision-making program. In the same year, speech
recognition software developed by Dragon Systems was implemented on Windows.
This was another great step forward, this time toward the goal of spoken
language interpretation. It seemed that there wasn’t a problem machines
couldn’t handle. Even human emotion was fair game, as evidenced by Kismet, a
robot developed by Cynthia Breazeal that could recognize and display emotions.


TIME HEALS ALL WOUNDS

We haven’t gotten any smarter about how we code artificial intelligence, so
what changed? It turns out that the fundamental limit of computer storage that
was holding us back 30 years ago is no longer a problem. Moore’s Law, which
estimates that the memory and speed of computers double every year, had finally
caught up with, and in many cases surpassed, our needs. This is precisely how
Deep Blue was able to defeat Garry Kasparov in 1997, and how Google’s AlphaGo
was able to defeat Chinese Go champion Ke Jie only a few months ago. It offers
a bit of an explanation for the roller coaster of AI research: we saturate the
capabilities of AI up to the level of our current computational power (computer
storage and processing speed), and then wait for Moore’s Law to catch up again.
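
To put rough numbers on that catch-up effect, here is a back-of-the-envelope
sketch in Python. It uses the article’s doubling-every-year framing of Moore’s
Law; the more commonly quoted form of the law is a doubling roughly every two
years, shown for comparison:

# Growth factor under a "capacity doubles every N years" assumption.
def growth_factor(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

print(growth_factor(30, 1))  # 2**30: roughly a billionfold increase over 30 years
print(growth_factor(30, 2))  # 2**15 = 32768x over 30 years with two-year doubling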


ARTIFICIAL INTELLIGENCE IS EVERYWHERE

We now live in the age of “big data,” an age in which we have the capacity to
collect huge amounts of information, far too much for any person to process.
The application of artificial intelligence in this regard has already been
quite fruitful in several industries, such as technology, banking, marketing,
and entertainment. We’ve seen that even if algorithms don’t improve much, big
data and massive computing simply allow artificial intelligence to learn
through brute force. There may be evidence that Moore’s Law is slowing down a
tad, but the increase in data certainly hasn’t lost any momentum. Breakthroughs
in computer science, mathematics, or neuroscience all serve as potential ways
through the ceiling of Moore’s Law.


THE FUTURE

So what is in store for the future? In the immediate future, AI language
processing looks like the next big thing. In fact, it’s already underway. I
can’t remember the last time I called a company and directly spoke with a
human. These days, machines are even calling me! One could imagine interacting
with an expert system in a fluid conversation, or holding a conversation in two
different languages that is translated in real time. We can also expect to see
driverless cars on the road in the next twenty years (and that is
conservative). In the long term, the goal is general intelligence: a machine
that surpasses human cognitive abilities in all tasks. This is along the lines
of the sentient robot we are used to seeing in movies. To me, it seems
inconceivable that this will be accomplished in the next 50 years. Even if the
capability is there, the ethical questions would serve as a strong barrier
against fruition. When that time comes (and ideally well before it), we will
need to have a serious conversation about machine policy and ethics
(ironically, both fundamentally human subjects), but for now, we’ll allow AI to
steadily improve and run amok in society.

Rockwell Anyoha is a graduate student in the Department of Molecular Biology
with a background in physics and genetics. His current project uses machine
learning to model animal behavior. In his free time, Rockwell enjoys playing
soccer and debating mundane topics.

This article is part of a Special Edition on Artificial Intelligence.


FOR MORE INFORMATION:

Brief Timeline of AI

https://www.livescience.com/47544-history-of-a-i-artificial-intelligence-infographic.html

Complete Historical Overview

http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

Dartmouth Summer Research Project on Artificial Intelligence

https://www.aaai.org/ojs/index.php/aimagazine/article/view/1904/1802

Future of AI

https://www.technologyreview.com/s/602830/the-future-of-artificial-intelligence-and-cybernetics/

Discussion on Future Ethical Challenges Facing AI

http://www.bbc.com/future/story/20170307-the-ethical-challenge-facing-artificial-intelligence

Detailed Review of Ethics of AI

https://intelligence.org/files/EthicsofAI.pdf





312 THOUGHTS ON “THE HISTORY OF ARTIFICIAL INTELLIGENCE”

 1. ziebart baku says:
    October 29, 2024 at 8:44 am
    
    Thanks for sharing the information. ziebart baku
    
    
 2. Charlie says:
    September 19, 2024 at 4:44 am
    
    Thank you for such an informative and interesting post. I really appreciate
    it! I’m currently looking into the topic of AI for a college project, and I
    have a better understanding of AI now.
    
    
 3. Frank Greco says:
    July 6, 2024 at 10:51 am
    
    Thank you. Very informative post.
    
    Imo, you need to include the intense interest in mathematical biology in
    the 1930s, with Nicolas Rashevsky as a leading researcher. This work
    undoubtedly inspired Wiener, McCulloch, and Pitts in the 1940s to
    investigate patterns and artificial neural networks, which of course
    inspired McCarthy in the 1950s to extend the research into intelligence.
    
    
 4. Pious nkrumah says:
    June 19, 2024 at 6:33 pm
    
    It is a great work and I really appreciate it because it saves me a lot.
    
    
 5. Ikenna Akuchi says:
    June 18, 2024 at 1:05 am
    
    Weird comments, helpful piece
    
    
 6. rat says:
    June 10, 2024 at 4:45 am
    
    who ever did ai is a fucking dickhead
    
    









This work by SITNBoston is licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International License.

 
Unless otherwise indicated, attribute to the author or graphics designer and
SITNBoston, linking back to this page if possible.
