Pause AI: a FAQ

Arvind Tiwary
6 min read · Apr 18, 2023

I was among the first 1,000 signatories of the Pause AI open letter from the Elon Musk-sponsored Future of Life Institute.

[Edit: You may read

https://arxiv.org/abs/2306.00891

This paper presents perspectives on the state of AI, as held by a sample of experts. These experts were early signatories of the recent open letter from Future of Life, which calls for a pause on advanced AI development. Utmost effort was put into accurately representing the perspectives of our interviewees, and they have all read and approved of their representation. However, no paper could offer a perfect portrayal of their position. We feel confident in what opinions we do put forward, but we do not hold them tightly. In such dynamic times, we feel that no one should be resolved in their expectations for AI and its future. ]

We should appreciate the entertaining fun and creativity in storytelling by ChatGPT-type LLMs. They are wonderful conversation engines, less so fact finders, search engines, or Ask engines. They are also pretty good ‘universal’ translators from one language to another.

Maybe we have mislabelled them. You will find scores of examples on the internet and Twitter…

In such a case, a simple disclaimer that this is a storytelling machine’s creation would minimize many bad outcomes!!
I liked Byrne Hobart’s quotable bite in Working with Copilot:

but LLMs are better at sounding informed than being informed, so the result will be a bull market in the Dunning-Kruger Effect.

What I find amusing is the ‘deliberate’ twisting of a pause on newer releases into a ‘ban on all work, including new releases’. The request to pause new releases is fairly reasonable; a ban was never asked for. But temperatures have been raised, and talking past each other is prevalent.

Do keep track of the tribal camps that provoke needlessly and distract from an important and impactful discussion:

  • “AI Safetyists” who worry about the risks of AI
  • “Boomers” who view AI through culture-war frames
  • “Intelligence Deniers” who reject the concept of AI intelligence
  • “Techno-Utopians” who dream of a technology-derived utopia here and now
  • “Profit Maximisers” rushing fast and maybe unethically

These camps obscure the differences in benefits and risks and the ability to discuss subtleties and evolve a balanced take. Everyone will never have the same opinion, and even within those ‘tribal camps’ there will be scattered opinions.

The objective is to leverage the good (amrit) and throw away the bad (vish). The major issue is less superintelligent machines than creative storytelling being such a hit that we lose quick, simple ways to tell fiction from fact.

Right now we don’t even have good measures of the capabilities and qualities of LLMs.

These are my personal points. I did not have any discussion with other signatories.

1 Why six months?

Enough time to sensitize without becoming a hurdle. It’s not stopping R&D work, just delaying general releases.

2 What will be achieved?

Hopefully, a set of policies around development, deployment, transparency, audit, and correction of hurtful outcomes, and the emergence of funded groups developing tools and methodologies for these.

3 How can you stop it?

We cannot. Society may, if it chooses to. Many countries require driving licences or regulate facilities such as biolabs or atomic power plants. The energy consumption of GPU data centres may exceed Bitcoin’s and may need some care to be a green user. The water consumption of GPU data centres is also significant.

4 What should techno-optimists do?

Especially those developing Ask engines (LLMs).

Some questions for techno-optimists on improving good outcomes:

a) How do we measure quality in a way that is relevant to the multicultural world, not just a reflection of US values? Are the model cards published by tool vendors enough? Do we need more transparency on the data used (quality and provenance), tracking of ‘corrections’, and the ability to show that both data quality and inference engines are improving through an externally replicable process?
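As one illustration of the transparency asked for in (a), here is a minimal sketch of the kind of model card a provider could publish alongside provenance and correction tracking. All field names and numbers are hypothetical, not any vendor’s actual schema.

```python
# A minimal, hypothetical "model card" structure; field names and values are
# illustrative only, not any vendor's published format.
model_card = {
    "model_name": "example-llm-v1",                        # hypothetical model
    "training_data": {
        "sources": ["web crawl", "licensed corpora"],      # provenance summary
        "languages": ["en", "hi", "zh"],                   # multicultural coverage
        "known_gaps": ["low-resource languages under-represented"],
    },
    "evaluation": {
        "benchmarks": {"toxicity": 0.02, "factual_qa_accuracy": 0.71},  # illustrative scores
        "replicable_by_third_party": True,
    },
    "corrections_log": [
        {"date": "2023-04-01", "issue": "harmful medical advice", "fix": "filtered prompt class"},
    ],
}

# An external auditor could diff successive cards to check that data quality
# and inference quality are actually improving over time.
print(list(model_card.keys()))
```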

b) Error correction. Making things up is a feature of LLMs. How can this be identified in technical usage such as medical care, law, or the repair of machines? Is there a need to develop processes and instrumentation that assure not just the correction of a particular case by heuristic rules, but instrumented probes into the multi-layer LLM engines and their logs?
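On (b), a minimal sketch of what such instrumentation could look like in practice: a wrapper that logs every prompt, the model’s answer, and a verification flag from an external checker, leaving an auditable trail rather than a one-off heuristic patch. `llm_answer` and `external_fact_check` are hypothetical placeholders, not a real API.

```python
import json
import time

def llm_answer(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM; not a real API."""
    return "stubbed answer to: " + prompt

def external_fact_check(prompt: str, answer: str) -> bool:
    """Hypothetical stand-in for a domain-specific checker (e.g. a drug database)."""
    return False  # pessimistic default for this sketch

def instrumented_query(prompt: str, log_path: str = "llm_audit.log") -> str:
    """Answer a prompt and append an auditable record of the exchange."""
    answer = llm_answer(prompt)
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "answer": answer,
        "verified": external_fact_check(prompt, answer),  # flag likely fabrications
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return answer

print(instrumented_query("What is the safe dose of drug X for a 10 kg child?"))
```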

c) Monetization will drive features. Should fair use of copyright and open-source software be modified to reduce machine-created derivative works at scale that harm the OSS ecosystem? Stack Overflow has limited ChatGPT replies for now. Should machines pay for the data used in training?

d) How does liability for harm operate? Machines are like underage children, and the parents are responsible. What if machines reach legal age?

e) Guardrails, if any? Please propose a change from the free-for-all. Should we shape the machine-based ecosystem, or allow a tidal wave of technology-driven change by big tech and profit maximisers to overtake us and perhaps polarize and fragment society? Future Shock is here and now. The reply by the authors of Stochastic Parrots argues for end users and societies to have a voice. They fear regulatory capture and do not have much faith in ‘experts’ paid by those actors.

f) ESG: Any considerations on energy usage exceeding Bitcoin’s? Water usage is also big. See “ChatGPT Is Consuming a Staggering Amount of Water”.

Training is estimated to have used water comparable to that needed to cool an atomic power station.

What’s more: “ChatGPT needs to ‘drink’ [the equivalent of] a 500 ml bottle of water for a simple conversation of roughly 20–50 questions and answers,” the paper notes. “While a 500ml bottle of water might not seem too much, the total combined water footprint for inference is still extremely large, considering ChatGPT’s billions of users.”
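To make the quoted figure concrete, here is a back-of-envelope calculation. The 500 ml per 20–50 question conversation comes from the quote above; the number of daily conversations is a purely illustrative assumption, not a measured figure.

```python
# Back-of-envelope inference water footprint, using only the figure quoted above
# (~500 ml per conversation of 20-50 questions and answers).
ML_PER_CONVERSATION = 500                    # from the quoted paper
ASSUMED_CONVERSATIONS_PER_DAY = 10_000_000   # illustrative assumption, not a measured number

litres_per_day = ML_PER_CONVERSATION * ASSUMED_CONVERSATIONS_PER_DAY / 1000
litres_per_year = litres_per_day * 365

print(f"~{litres_per_day:,.0f} litres/day, ~{litres_per_year/1e6:,.0f} million litres/year")
# With these assumptions: ~5,000,000 litres/day, ~1,825 million litres/year
```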

Don’t kill it by oversight:

Innovations have strange ways of being adopted and becoming purposeful. Let creative conversation engines thrive in a sustainable way. I have a 25-minute essay walking through the technology and the issues.

There is a deeper, insightful articulation of the need for plurality (users and cultures are different and need to choose), participation (relying only on ‘experts’ may lead to dysfunction, as many are ultimately on the payroll of the providers!!) and procedure (a hallmark of democratic legal systems):
The role of the arts and humanities in thinking about artificial intelligence (AI) | Ada Lovelace Institute

Chat GPT: god (play) thinking, creative, emotive silicon life?

I have highlighted two sections from my essay below as a quick summary.

Do remember that a century ago there was great debate over a vital force in organic chemistry, but alas the march of progress showed that prosaic inorganic chemistry would subsume the vital force!! And non-carbon life and machines do not need to mimic carbon life: airplanes did not flap wings to be useful.

1 Silicon Intelligence vs Carbon Intelligence

This explores the mischaracterization of these machines as mere pattern matchers (AI ethicist Timnit Gebru, formerly of Google, co-wrote the 2021 paper On the Dangers of Stochastic Parrots). They are much more, with linguistic and epistemological abilities: they reason like an intern, have a theory of the world and of the minds of the people they interact with, and, like socialized speakers, make things up but keep the conversation alive and fun!!

2 Slowing ChatGPT Ask Engines

I am a strong proponent of innovation, including disruptive innovations like crypto, which threatens the established regulation methods of nation states with near-real-time settlement, 24x7 permissionless services, and decreasing intermediary fees.
But this is really different and needs more time for society to understand the impact and the safety required before big tech dominates even more, converting open-source data into persuasion and engagement capture through a business model that in the past has produced polarization and deepfakes.
