Pi is quite wonderful at being an insightful and enthusiastic conversation partner - not sure if you've tried it? If you did, curious to hear your impressions.
I missed the context for the "flat out wrong" quote about open-sourcing AI models in this piece. So I had to look it up. Sutskever's full quote indicates he believes that open-sourcing is a bad idea because AGI is coming and it's dangerous to have more (bad) actors in the game? Or something.
Full quote: "We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea... I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”
Not sure I buy the premise, because if it's that risky, I don't see how having a single company cook it up in a non-transparent "black box" is a comforting thought?
As always, super insightful comment. I've tried out Pi briefly, but mostly to test its reasoning and conversational skills - at this point my muscle memory is to reach for GPT-4 before anything else for day-to-day productivity tasks.
I tend to agree with your comment on whether keeping AI closed source makes sense or not. Historically, software tends to become *more* secure when it's open sourced, as the community can find and patch far more bugs than the developers alone. Of course, plenty of AI folks will say "AGI is different." Sam Altman has said in multiple interviews that if we get AGI wrong, it's game over for humanity.
Anthropic has a similar vibe - recent profiles of employees quote some of them as seeing themselves as "modern day Robert Oppenheimers," in fear of what their creations will unleash and spending their lunch hours stressing about the AI apocalypse. But we don't hear about many execs or engineers who decide to just... stop? Instead the framing is always around "we're the only ones who can be trusted to build this safely" or "Congress needs to regulate us before we/others build something terrible." False dichotomies like that tend to trigger my skepticism.
Yeah, Pi is definitely not a replacement for GPT-4 for daily productivity tasks, but it's calibrated to have a very enthusiastic and compassionate "personality," so it does succeed in being a pleasant conversation partner.
As for the open-source and regulation topic, there's also the geopolitical dimension to most of these arguments. "We can't stop working on AI because what if e.g. China wins the AI arms race and uses AI to do harm." Seen in that light, you could make an argument against open-sourcing, but then we're back to who should be the one we trust to do the right thing, etc.
I did enjoy reading Marc Andreessen's take on things in the "Why AI Will Save the World" piece (https://a16z.com/2023/06/06/ai-will-save-the-world/) - he's essentially all-in on AI, which I guess makes him the polar opposite of Eliezer Yudkowsky. In his view, we should have as little regulation as possible while allowing open source to proliferate and smaller startups to compete freely, because the positives outweigh the negatives.
The eternal optimist in me would like to buy into Marc's viewpoint.
But man... I can't claim to know the intricacies of the topic deeply enough to have a fully independent opinion. I can certainly appreciate the potential of AI as a daily user of Midjourney and Bing+ChatGPT, but the risks are much harder to get a personal sense of.
Great round-up, Charlie! While I'm familiar with all of the names on the list, it's nice to have a comparative look in a single place.
I signed up for Claude all the way back in the V1 days but sadly, much like many other AI products, it's not yet available to us Europeans. Still on the waitlist for now. According to this post by Ethan Mollick, Claude actually outperforms ChatGPT when it comes to parsing and working with PDF files (https://www.linkedin.com/posts/emollick_there-was-a-big-new-ai-release-today-claude-activity-7084733526662623232-1XDb/).
Guess we'll have to wait and see.
Nice, Charlie!
OpenAI has the zeitgeist for now, and it's hard to imagine one of these upending their dominance, but if history is any indicator, it'll happen.