3 Comments
Daniel Nest

Oh man, I love seeing stuff like this: AI applied in a real-world context, at scale, to a well-defined purpose, and with measurable positive effects. Really cool. Thanks for sharing!

You mentioned o1-preview in a footnote, stressing how it wasn't available at the time. But my guess is that, for your specific purpose, it wouldn't have been the most appropriate model anyway, right? It would take too much time at inference to be scalable without any major improvement - since its ability to reason carefully through a specific problem doesn't yield much benefit in the context of extracting key insights from a broad dataset. Or do you think it'd have its merits?

Charlie Guo

Good question! I think we would have skipped o1 out of cost concerns alone - it's literally 6x the price of GPT-4o. But based on other work I've done with o1, I don't know that it would have provided meaningfully better analysis. I do think it would have been at least a little better at classifying the calls and reasoning through data around the company's stage and industry. But I'm not sure it would have been worth the extra cost, given how many transcripts we had to work with.

Daniel Nest

Yeah, that was my assumption here as well, which is what I was inelegantly hinting at by saying it'd "take too much time at inference to be scalable."

But yeah, a great applied-AI use case to learn from - you could definitely turn it into a few more general lessons for companies trying to work with LLMs.
