Executive Summary
Viewpoint: The latest advancements in AI are changing the way we interpret and apply large datasets because they are finally speaking our language, writes Elad Tsur, the CEO of Planck, an AI-powered data platform for commercial insurance. Among other benefits, Tsur suggests that large language models bring a new transparency: their unique ability to communicate conversationally allows them to offer coherent insights into their own logic, fostering a deeper understanding of how the models arrive at specific conclusions.
Likening generative AI to a modern-day Rosetta Stone, Tsur also describes the benefits he sees for the industry in streamlining processes, identifying emerging trends and uncovering hidden risks.
In 1799, a French soldier in the town of Rosetta discovered a large slab of dark granodiorite, long described as black basalt, inscribed with a decree from King Ptolemy V. The decree was written in three scripts: Greek, Demotic (the common Egyptian script of the day) and Egyptian hieroglyphs. The multi-script inscription was originally intended to give the king's announcement the widest possible audience, a measure that, centuries later, handed scholars the key to ancient Egyptian hieroglyphs. Because the Greek text could already be read, it supplied the known reference points from which the Egyptian scripts were eventually translated.
The discovery of the Rosetta Stone had a profound and transformative effect on our understanding of ancient Egyptian history and culture. Before then, early efforts to decipher Egyptian hieroglyphs had been unsuccessful, and some scholars doubted whether the language could be understood at all without access to a reliable primer; initial attempts often led to speculative and inaccurate interpretations.
Fast forward to the digital age, where massively complex datasets and the Internet of Things are the new frontiers for decoding and comprehension, and generative artificial intelligence (GenAI) models, specifically large language models (LLMs), serve as our modern Rosetta Stone.
Before GenAI, deciphering large amounts of information with AI was a black-box process whose inner logic was virtually inaccessible. Older AI models were sophisticated enough to process and interpret huge datasets, but they fell far short of the capability needed to explain the rationale behind their conclusions and predictions. As a result, it was difficult to place much confidence in their outputs. The development of GenAI provided a revolutionary leap forward in transparency.
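To make that contrast concrete, here is a minimal sketch, not drawn from the article, that pairs a conventional black-box classifier with a conversational prompt of the kind an LLM can answer. The toy underwriting features, the scikit-learn model choice and the ask_llm helper are all assumptions made for illustration; the helper is a placeholder for whatever chat-completion client you actually use.

```python
# Illustrative sketch only: contrasts a pre-GenAI "black box" prediction with a
# conversational prompt that asks an LLM for a conclusion plus its rationale.
# ask_llm() is a hypothetical placeholder, not a real API.

from sklearn.ensemble import GradientBoostingClassifier

# Toy underwriting features: [annual revenue ($), prior claims, flood-zone flag]
X_train = [
    [120_000, 2, 0],
    [45_000, 7, 1],
    [300_000, 1, 0],
    [60_000, 5, 1],
]
y_train = [0, 1, 0, 1]  # 1 = elevated risk (toy labels for illustration)

# Older approach: the model returns a label, but no explanation of why.
black_box = GradientBoostingClassifier().fit(X_train, y_train)
label = black_box.predict([[80_000, 6, 1]])[0]
print(f"Black-box prediction: {label}")  # just a number; the rationale stays hidden


# GenAI approach: ask the model to state a conclusion AND explain it.
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any chat-completion LLM client."""
    raise NotImplementedError("Swap in the LLM provider of your choice.")


prompt = (
    "A small business reports $80,000 in annual revenue, six prior claims, "
    "and a location in a flood zone. Classify its underwriting risk as "
    "low, medium, or high, and explain step by step which factors drove "
    "your conclusion."
)
# explanation = ask_llm(prompt)  # returns a conversational, reviewable rationale
```

The first approach yields only a label; the second produces an answer that documents, in plain language, which factors drove the conclusion, which is the kind of transparency gain described above.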