18 Comments
Sean 🤓

Fantastic article, thanks for sharing. You’ve articulated really well concepts I’ve been thinking about but haven’t been able to put into words!

Charlie Guo

To be fair, it took me ages to put this into words myself - I've had the bones of this article in my mind for at least six months.

dan mantena

Nice thoughts, Charlie.

Curious about your thoughts on Google's Project Astra, which seems to take the personal AI assistant away from a chat box and into a real-world companion (via wearable glasses).

Expand full comment
Charlie Guo's avatar

I think "ambient AI" will be a significant target for AI companies soon. One big hurdle right now is that every interaction with AI chatbots involves two-person turn taking. Not particularly passive and not particularly agentic. As we move more toward agents (albeit a very, very fuzzy, overhyped term right now), a much better UX pattern is to pull/push additional context when needed.

Scott Spanier

So true! This has been on my mind for a while and actually inspired me to build something new 👀

Kate Chaparro

The UX of it all, finally being talked about. Thank you.

Richard Domurat

I’ve been thinking about this too. As much as I love the new capabilities working with LLMs, the UX is quite taxing. I often use them for researching topics the way I’d use Wikipedia. The problem with the chat UX is it’s generally myopic and linear, so it’s hard to get an overview for where things are going or to take tangents. I often go down rabbit holes and want to return to the original inquiry but have a hard time picking back up—literally like having an engaging conversation and finding yourself saying “so where were we”. It will be incredibly powerful to have some bird’s eye view of the high level things that have been or will be covered, and be able to seamlessly jump between different paths—I think this is very similar to your concept of different layers of abstraction, just on the research vs creative side. I’m sure people are working on this already so I’m excited to see where it goes.
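For what it's worth, here's a toy sketch of the non-linear history Richard is describing (my own illustration, not how any particular product implements it): each turn is a node, tangents are branches, and "so where were we" becomes a jump back to an earlier node.

```python
# Toy sketch of a branching chat history: turns form a tree, so you can wander
# down a tangent and still get a bird's-eye view of the path back to the
# original inquiry. Purely illustrative; not any product's implementation.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Turn:
    text: str
    parent: Optional["Turn"] = None
    children: List["Turn"] = field(default_factory=list)

    def branch(self, text: str) -> "Turn":
        """Start a tangent (or a new path) from this point in the conversation."""
        child = Turn(text, parent=self)
        self.children.append(child)
        return child

    def path(self) -> List[str]:
        """Bird's-eye view: the chain of turns that led here."""
        node, chain = self, []
        while node is not None:
            chain.append(node.text)
            node = node.parent
        return list(reversed(chain))


root = Turn("How do transformers work?")
tangent = root.branch("Wait, what's a word embedding?")
deeper = tangent.branch("How are embeddings actually trained?")
print(deeper.path())                                  # see which rabbit hole you're in
back = root.branch("OK, back to attention layers")    # jump back to the original inquiry
```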

Roy Vella

How we literally returned to what is effectively a DOS command-line interface after decades of GUIs is almost laughable.

Charlie Guo

This made me laugh. You're so right, Roy!

Al Williams

I knew WIMP (windows, icons, mouse, pointer) was a fad! I knew it! lol

Nic Mulvaney

Great article. Many people thinking it, not enough people saying it and attempting to do it.

Daniel Nest

Great piece, and I fully agree.

I was going to bring up ChatGPT's "Canvas" and Claude's "Artifacts" as the first examples of us moving in the direction of a better UX. (Then you mentioned "Canvas" at the end.)

Before that, we had baby steps in the form of being able to highlight a chunk of text in a ChatGPT response and use the "quote" icon that popped up to ask questions specifically about that line. It's still there, and it's quite useful, but it isn't very visual.

As for reducing the cognitive load, it's no wonder that some of the best Custom GPTs out there pre-prompt ChatGPT to use keyboard shortcuts for some specific actions, so that the end user can tap a single key to get stuff done.
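To make that shortcut trick concrete, here is a hedged sketch of how such a pre-prompt might be set up; the specific letters and instructions are invented for illustration, not taken from any real Custom GPT.

```python
# Invented illustration of the single-key shortcut trick: the system prompt
# teaches the model a few one-letter commands, so the user's whole next message
# can be a single keystroke. The shortcut set below is made up.
SHORTCUTS = {
    "S": "Summarize your previous answer in three bullet points.",
    "E": "Explain your previous answer as if I were a complete beginner.",
    "C": "Continue where you left off.",
}

SYSTEM_PROMPT = (
    "You are a writing assistant. If the user's entire message is one of these "
    "single letters, treat it as the corresponding instruction:\n"
    + "\n".join(f"{key} = {instruction}" for key, instruction in SHORTCUTS.items())
)


def expand(user_message: str) -> str:
    """Optional client-side fallback: expand a one-key message before sending it."""
    return SHORTCUTS.get(user_message.strip().upper(), user_message)


print(expand("s"))  # -> "Summarize your previous answer in three bullet points."
```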

I'm sure we're moving in the direction of much more integrated interfaces.

(By the way, one of my very first articles, even before ChatGPT came out, was about an AI-assisted text editor called Lex, which used GPT-3 under the hood at the time. It used a few intuitive additions to put the text editing in focus while having the AI assistant at your call. It's still around and thriving, now as a paid tool: lex.page)

Charlie Guo

I have been meaning to try out Lex and Type, to see if they're any good!

Al Williams

So maybe I'm not using Canvas correctly, but I hate it. My workflow is that I want the tool to just show me the very specific changes it suggests, because I'm going to cherry-pick. 99% of the time, I don't make the exact change, but I think about why it suggested the change and then do something to address the core issue. So Canvas either sucks for that, or I haven't taken the time to fully understand what to do with it.

Charlie Guo

I haven't seen anything that exactly fits your description, but Perplexity is certainly a step in that direction. It would definitely be interesting if you could visualize your queries in different ways.

Al Williams

Good read. I have had a lot of similar thoughts. Even before AI, I have often said: "big things" either fade away or fade into being normal. So years ago we were all agog over "multimedia" and "networking." Now, that's just how computers work. It isn't odd or special anymore. Then, of course, there are things that just fade away unused, but that's less interesting.

I've come to think that for creative pursuits, we probably won't care as much about "generative AI" as we will about "assistive AI". That is, using AI to help us do what we want to do creatively, not necessarily do it for us.

Think of cars. If you are running a taxi service, then a self-driving vehicle makes sense. If you have a restored 1968 Mustang, you really don't want it to drive itself. YOU want to drive it. But maybe you slip in a GPS because, you know, it isn't 1968. So, sure, AI can write my business e-mails, but not my short stories, my poetry (no, I don't write poetry, but you know what I mean), my love letters.

Some related thoughts: https://open.substack.com/pub/readplanet/p/can-you-plagiarize-ai-can-it-plagiarize?r=g3gyw&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true

Harpreet Singh

Loved this article. We at Launchable came to similar conclusions while building our application.

Ryan Callihan

At my current company, 2023/2024 was characterised by watching the integration of chatbots into competitors' products and customers' workflows, but avoiding integrating one ourselves. I think it is paying off because, imo, the rise of the chatbot is the easiest way to conceptualise GenAI for the average person. It was their first exposure, à la ChatGPT, and it has a very cause-and-effect workflow.

But the mental overhead of knowing _what_ to type into a chatbot vs being presented with options tailored to your situation is starting to play out. 2025 will be interesting.
