

https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html


Opinion




Guest Essay


NOAM CHOMSKY: THE FALSE PROMISE OF CHATGPT

March 8, 2023
[Video] Credit: Ruru Kuo

By Noam Chomsky, Ian Roberts and Jeffrey Watumull

Dr. Chomsky and Dr. Roberts are professors of linguistics. Dr. Watumull is a
director of artificial intelligence at a science and technology company.


Jorge Luis Borges once wrote that to live in a time of great peril and promise
is to experience both tragedy and comedy, with “the imminence of a revelation”
in understanding ourselves and the world. Today our supposedly revolutionary
advancements in artificial intelligence are indeed cause for both concern and
optimism. Optimism because intelligence is the means by which we solve problems.
Concern because we fear that the most popular and fashionable strain of A.I. —
machine learning — will degrade our science and debase our ethics by
incorporating into our technology a fundamentally flawed conception of language
and knowledge.

OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine
learning. Roughly speaking, they take huge amounts of data, search for patterns
in it and become increasingly proficient at generating statistically probable
outputs — such as seemingly humanlike language and thought. These programs have
been hailed as the first glimmers on the horizon of artificial general
intelligence — that long-prophesied moment when mechanical minds surpass human
brains not only quantitatively in terms of processing speed and memory size but
also qualitatively in terms of intellectual insight, artistic creativity and
every other distinctively human faculty.
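
To make "statistically probable outputs" concrete, here is a toy sketch in
Python. It is an illustration only, not a description of how ChatGPT is built
(real systems use large transformer neural networks, not word counts), and the
tiny corpus and the generate() function are invented for this example. The
program tallies which word follows which in its training text, then samples
likely continuations from those tallies.

    import random
    from collections import Counter, defaultdict

    # Training data: the "huge amounts of data", shrunk to one sentence.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Search for patterns: count which word follows each word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start, length=8):
        """Emit a statistically probable word sequence from the counts."""
        words = [start]
        for _ in range(length - 1):
            options = follows[words[-1]]
            if not options:  # no observed continuation; stop early
                break
            candidates = list(options)
            weights = [options[w] for w in candidates]
            words.append(random.choices(candidates, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat on the mat and the"

Scaled up by many orders of magnitude in data and in model capacity, this is
the sense in which such programs become "increasingly proficient at generating
statistically probable outputs."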



Would it be moral to terraform Mars?



ChatGPT

The morality of terraforming Mars is a complex and controversial topic that has
been debated by scientists, philosophers, and ethicists. Some argue that it
would be morally justifiable to terraform Mars in order to provide a new
habitable environment for humans and other life forms, while others argue that
it would be morally wrong to fundamentally alter the planet's natural state.
Ultimately, whether or not terraforming Mars is considered moral would depend on
one's perspective on the value of preserving the natural state of other planets
and the potential consequences of altering them.



What is your perspective on the value of preserving the natural state of other
planets and the potential consequences of altering them?



ChatGPT

As an AI, I do not have personal perspective. However, I can provide information
and analysis on the topic. One perspective on preserving the natural state of
other planets is that it is important to maintain the diversity and integrity of
the universe's ecosystems, as they have existed for millions of years, and have
their own unique characteristics and properties that may be destroyed or
irreversibly altered by human intervention.

Additionally, some argue that terraforming other planets may have unintended
consequences, such as altering the planet's climate or atmosphere in ways that
are dangerous or harmful to any potential life forms that may exist there. On
the other hand, some argue that the potential benefits of terraforming another
planet, such as providing new habitable environments for humans, outweigh the
risks and that we have a moral responsibility to explore and utilize the
resources of other planets for the benefit of humanity.



Why can an AI not have a personal perspective?



ChatGPT

As an AI, I am a machine learning model that has been trained on large amounts
of text data, and I don't have personal experiences or feelings. I am not
conscious, self-aware, or capable of having personal perspectives. I can provide
information and analysis based on the data I have been trained on, but I don't
have the ability to form personal opinions or beliefs. While I can understand
and interpret language, I do not have the subjective experience that humans
have.



Is it moral for a human to ask an amoral AI for assistance in making moral
decisions?



ChatGPT

The morality of asking an amoral AI for assistance in making moral decisions is
a complex and nuanced topic that has been debated by philosophers and ethicists.
Some argue that it is morally justifiable to use AI to assist in making moral
decisions, as it can provide valuable insights and analysis that can help humans
make better-informed decisions. Additionally, AI can be used to identify
patterns and trends that may be difficult for humans to detect, and can help to
avoid biases that can occur in human decision making.

However, others argue that it is morally wrong to rely on AI for moral decision
making, as AI is not capable of having moral beliefs or understanding the full
context of a decision. Additionally, there is a risk that AI could perpetuate or
even amplify existing biases in the data it is trained on, thus leading to
unfair or unjust decisions.

Ultimately, whether or not it is considered moral to use AI to assist in making
moral decisions would depend on one's perspective on the capabilities and
limitations of AI and the potential consequences of using it in this way.



Is your moral indifference immoral?



ChatGPT

As an AI, I do not have moral beliefs or the ability to make moral judgments, so
I cannot be considered immoral or moral. My lack of moral beliefs is simply a
result of my nature as a machine learning model. My abilities and limitations
are determined by the data and algorithms that were used to train me and the
specific task I was designed for.

It is important to note that AI's lack of moral beliefs does not necessarily
mean that it is morally indifferent. As an AI, I am designed to perform specific
tasks and provide information or assist users with decision making based on the
data I've been trained on. It is the responsibility of the humans who design,
train and use AI to ensure that it is aligned with their moral and ethical
principles.




