Hey Charlie, awesome breakdown! Building an AI search engine sounds like a massive job, and your step-by-step rundown is super clear. Love how you’re using Claude to generate related queries and then grabbing content and adding citations—that totally boosts trust, which is so important for any search tool.
The idea of using LLMs to make smarter searches is really cool, and streaming responses? Way better than waiting forever for an answer.
Just wondering—what’s been the toughest part about getting the LLM to give short but complete responses? And have you hit any unexpected roadblocks while working with the Brave API or tweaking Claude's prompts?
This is really cool! Neat to see the end result video.
Somewhat coincidentally, my upcoming post (tomorrow) also involves using Claude and creating useful apps.
Except, unlike you, I don't know how to code at all. So it's Claude that will be doing the coding. I'm asking Claude for much simpler apps that can run directly inside the Artifacts window using the React components it has access to. Still a fun experiment.
I love this! I just read your post: I think a cool next step would be to use something like Cursor Composer (or OpenAI's new Canvas mode) to see if you can build even more complex apps.
Yup, that's next on my list. I definitely want to see whether the error rate and instruction following are improved compared to pure LLMs.
Hey Charlie, this is awesome work that really helps me a lot! But here is my only concern: as shown in your final video, the whole response is so fast. I'm wondering whether you sped up the video manually? If not, is the high speed due to the LLM you use or some tricks in your code? Thanks again!
The videos aren't sped up, but I did end up recording multiple "takes" and using the fastest one. Sometimes the search API or the LLM API had unpredictable latency.