Discussion about this post

Sahar Mor

Another trend I believe was a key contributing factor: big tech's adoption and widespread distribution of LLMs. Imagine a scenario where OpenAI doesn't launch ChatGPT and Microsoft keeps its Bing chatbot under wraps. In that setting, many developers might feel obligated to fix the "hallucination" issue before releasing anything.

Sure, Microsoft, OpenAI, and later Google faced criticism when users' early encounters with LLMs went off track [0]. But thanks to that exposure, only a year after ChatGPT's launch even the average user knows that hallucinations can occur in AI responses. This widespread understanding lets LLM builders and incumbents deploy LLM-powered apps faster and with less scrutiny.

[0] https://fortune.com/2023/02/21/bing-microsoft-sydney-chatgpt-openai-controversy-toxic-a-i-risk + https://www.npr.org/2023/02/09/1155650909/google-chatbot--error-bard-shares

Sharif Islam

Thank you for this summary. For me, the crucial part is: "But as it turns out, efficiently learning the relationships between pieces of data is useful for many, many domains beyond translation." As the dust settles, it becomes increasingly evident that the quality of data and the ability to accurately establish these relationships are paramount for the next phase.

Beyond data quality, semantic mapping deserves attention in this context. It plays a pivotal role in enabling AI systems not only to understand data relationships but also to derive meaningful insights and context from diverse datasets.
