Artificial Ignorance

AI Roundup 039: The governance issue

Charlie Guo · November 3, 2023
Artwork created with Midjourney.

Biden's AI Executive Order

On Monday, President Biden signed a sweeping Executive Order on AI and launched AI.gov, a new site promoting policies and AI recruitment.

The bottom line:

  • Key themes include limiting large model training, adding more AI talent, and directing government agencies to think about AI.

  • Most AI companies will not be affected by this EO (yet). Foundation model developers will be impacted, along with infrastructure-as-a-service platforms and federal contractors.

  • We will undoubtedly see much more regulation on the back of this EO. But it's too early to say whether the government is stifling innovation and/or adequately accounting for AI risks.

Dive deeper:

What President Biden's AI executive order actually means (Charlie Guo, Nov 1)

Artwork created with Midjourney.

Sunak's AI Safety Summit

The UK hosted its AI Safety Summit on Wednesday and Thursday, with a substantial guest list spanning academics, think tanks, government officials, and AI companies.

All the announcements I could find:

  • The Bletchley Declaration, an agreement of 28 countries to collaborate on mitigating “frontier AI risk.”

  • The UK AI Safety Institute (and the announcement of a US version), led by Ian Hogarth and Yoshua Bengio.

  • A £225 million government investment into Isambard-AI, a UK supercomputer.

  • An agreement from leading AI companies to allow governments to test their latest models before public release.

  • News that the next AI Safety Summit will be held in South Korea in six months, with France hosting the third summit a year from now.

  • A "fireside chat" between the Prime Minister and Elon Musk, for some reason.

  • And while not yet an official announcement, Sunak is reportedly working on an OpenAI-powered chatbot for UK citizens to pay taxes and access pensions.

Elsewhere in AI geopolitics:

  • Voice cloning tools are allowing Indian politicians to send personalized messages ahead of elections.

  • New US rules may force Nvidia to cancel over $5 billion in GPU orders from Chinese companies.

  • The UN announced a new advisory body to address AI governance issues.

  • And the G7 is expected to announce a code of conduct for companies developing advanced AI.

Artwork created with Midjourney.

Fearmongering

Playing out alongside all of this regulatory activity has been a fierce debate among AI experts. Both Yann LeCun and Andrew Ng have accused doomsayers of having ulterior motives. Some of the accused, like Demis Hassabis, are pushing back.

Why it matters:

  • The debate is over whether AI CEOs actually believe AI poses an existential risk, or whether that narrative is an effort to get governments to clamp down on AI - and to cement their current leads.

  • To quote Ben Thompson, "if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm."

  • And with so much recent government action, many fear that sweeping regulations will stifle innovation, strangle open source, and put the future of AI in the hands of a few.

The AI apocalypse isn't what you should be worried about (Charlie Guo, May 18)

Elsewhere in AI anxiety:

  • A look at Justine Bateman's fight against generative AI in Hollywood.

  • Viral deepfakes of the Israel-Hamas war are causing people to doubt real images and video.

  • And Scarlett Johansson takes legal action against an AI ad that cloned her voice without her permission.

Artificial Ignorance is reader-supported. If you found this interesting or insightful, consider becoming a free or paid subscriber.

Things happen

A rogue AI company is putting GPUs into international waters. New Jersey high school students create deepfake porn of their classmates. LinkedIn's GPT-4-powered "job coach" available to Premium users. The first new Beatles song since 1995, made with AI. The latest AlphaFold model is more useful for drug discovery. AI Seinfeld is broken, maybe forever. Judge pares back copyright lawsuit against Midjourney and Stable Diffusion. Alibaba's Tongyi Qianwen 2 LLM, with industry-specific variants. A profile of Ilya Sutskever, Chief Scientist at OpenAI. Microsoft 365 Copilot is here (for enterprise users only). Sally Ignore Previous Instructions. Brave's new AI assistant, Leo. Copying Angry Birds with nothing but AI. Google commits up to $2B in Anthropic funding. ChatGPT can now work with your PDFs. Phind is (supposedly) better than GPT-4 at coding. AI detectors are destroying writers' livelihoods. The word of the year is AI.

Last week’s roundup

AI Roundup 038: Preparedness (Charlie Guo, Oct 27)

OpenAI formed Preparedness, a new team to assess, evaluate, and probe AI models to protect against “catastrophic risks.” The company also launched a Preparedness Challenge, where people can submit novel ideas for catastrophic misuse and win up to $25,000 in API credits.
