Mind Matters Reporting on Natural and Artificial Intelligence
Entrance gate to Persepolis, Persia (Iran): Gate of All Nations

How Much Can New AI Tell Us About Ancient Times?

An ambitious new project hopes to use the predictive text that cell phones use to unlock their stories

Many researchers hope that AI will lead to a “golden age” of discovery for lost languages, hard-to-decipher writings, and badly damaged Biblical scrolls. Algorithms can chug through vast numbers of possible interpretations, presenting the scholar with probabilities to choose from.

But even powerful algorithms have their work cut out for them. For example, of the hundreds of thousands of clay (cuneiform) tablets that survive from an ancient part of the Near East called Mesopotamia, many are damaged. We may know the language but we don’t know what’s missing from the text and what difference the missing part makes to what is being said.

Experts try to fill in the missing parts but guessing at all the possibilities is a very time-consuming job, prone to wild-goose chases that could put them off the track indefinitely.

Here’s an example. Suppose the text read something like this:

“The king had three sons and two [large gap]”

It is reasonable to think that the key missing piece of text is “daughters” and that the subsequent text refers to various marriage arrangements for the king’s children.

But there are other possibilities. What if the missing piece of text is:

“island kingdoms. And the king was unsure how to divide the inheritance fairly.” It might be a while before we knew enough of the story to tumble to that.

Is it worth trying to figure it out? Some of the earliest known agricultural civilizations originated in Mesopotamia. It is the backdrop to a number of incidents important to various cultures today. Abraham and Sarah set out from Ur, the children of Israel were exiled in Babylon, and a key battle in Shia Islamic history (Karbala) was fought there. So scholars who want to know more of the life and times of ordinary people who have lived in Mesopotamia have been using AI to try to fill in the gaps in the masses of records and documents of many aspects of their lives.

A recent effort involves putting recurrent neural networks to work on the Achaemenid Empire (550 BC–330 BC) founded by Cyrus the Great. Cyrus’s empire had a deep influence on the development of civilizations in the area. He appears in the Bible 23 times and is discussed by Greek historians of the era.

A recurrent neural network is the kind of predictive text program you may notice on a cell phone. It predicts what you might be intending to say and fills it in for you. For example, if you type “I am runn”, the program may complete it as “I am running late. Start without me.” If you (and/or many others) have often typed those words in the past, the program “predicts” what you mean to say and fills it in:

Recurrent Neural Networks can be thought of as a series of networks linked together. They often have a chain-like architecture, making them applicable for tasks such as speech recognition, language translation, etc. An RNN can be designed to operate across sequences of vectors in the input, output, or both. For example, a sequenced input may take a sentence as an input and output a positive or negative sentiment value. Alternatively, a sequenced output may take an image as an input, and produce a sentence as an output.

“Recurrent Neural Network” at DeepAI.org
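The core idea behind predictive text can be sketched without a neural network at all. The toy example below (an illustration, not the researchers’ method) counts which word follows which in a handful of typed phrases and suggests the most frequent successor. A real RNN learns far richer, longer-range statistics, but the underlying principle is the same: predict the next token from what came before.

```python
from collections import Counter, defaultdict

# Hypothetical history of phrases a user has typed before.
training = [
    "i am running late start without me",
    "i am running late see you soon",
    "i am running a marathon",
]

# For each word, count which words have followed it.
successors = defaultdict(Counter)
for sentence in training:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        successors[prev][nxt] += 1

def predict_next(word):
    """Suggest the most frequently observed word after `word`."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("running"))  # "late" (seen twice, vs. "a" once)
```

Note that the prediction is purely statistical: the program suggests “late” only because that word followed “running” most often in the past, not because it understands anything about lateness.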

Once the researchers have placed enough text in a format on which the algorithm can work, they will be in a position to sift through the probabilities much faster.

They face some risks, however, as experimental physicist Rob Sheldon, who has studied classical Hebrew and uses neural networks, told Mind Matters News. He tried using Google neural nets to convert an oral science debate into a readable text but found that in every third sentence, the meaning was completely garbled. He concluded, “Evidently the training set wasn’t used to scientific discussions.”

The “training set” is the large number of examples used to prompt the AI to choose one outcome as more probable than another. He concluded,

This is my problem with all neural nets—they can only regurgitate what they have been fed. They don’t understand meaning. The problem with ancient texts isn’t blurry letters, it is broken tablets and large gaps. What a neural net will do to fill the gap will look like mashed-up training texts.

Thus the missing text that the AI proposes can look sort of reasonable, but we don’t really know if it is correct. It is merely not unreasonable.

It sorta looks reasonable, but then the text loses information, since we’ve purposely inserted the lowest information content word in its place (which is precisely what neural nets are trained to do). Do that enough times, and the text turns to pablum.

I would challenge anyone out there to take a text, remove 1/10 the words, run it through a neural net reconstructor (not trained on the original text!), and see how much information is lost. My guess is that the output is worse than if one in ten of the words had been left out.
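Sheldon’s challenge can be sketched as a simple measurement procedure. In the toy version below, a deliberately naive “reconstructor” (the commonest word in the text, i.e. the lowest-information guess he describes) stands in for a neural net; the point is the experimental setup, not the model.

```python
from collections import Counter

def run_experiment(text):
    """Delete every tenth word, 'reconstruct' each gap with the
    commonest word in the text, and count correct recoveries."""
    words = text.split()
    filler = Counter(words).most_common(1)[0][0]  # lowest-information guess
    gaps = recovered = 0
    for i, original in enumerate(words):
        if i % 10 == 9:              # every tenth word is removed
            gaps += 1
            if filler == original:   # did the guess match the original?
                recovered += 1
    return gaps, recovered

text = ("the king had three sons and two daughters and the king "
        "was unsure how to divide the inheritance among the children")
print(run_experiment(text))  # → (2, 2)
```

On this toy passage the naive reconstructor recovers both deleted words, but only because both happen to be “the,” the lowest-information word in the text, which is exactly Sheldon’s point: what such a filler gets right carries almost no information, and what carries information it gets wrong.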

Dr. Sheldon makes clear that neural networks can be very useful, depending on the problem researchers are addressing:

Neural nets are adept at pattern recognition. Inasmuch as written language is a pattern, neural nets can help. If, for example, you wanted to know which 5 letters were embedded in the “Captcha” image, a neural net could help you. If you want to know, “are these scratches on a paleolithic rock natural?” perhaps a neural net could help you.

But as soon as you ask something not related to images, the neural net will fail. Google Translate, for example, does fine on low information content, commonplace texts, but fails spectacularly on more complex texts.

So if neural nets are being used to say “Hey, there’s a palimpsest under this text.” or “This is not a typical cuneiform grocery receipt” then that is what they excel at. If you want to know what it means, however, hire a human.

But, he notes, hiring a human won’t solve all the problems. The human understands meaning (for example, the difference between “daughters” and “islands” is not merely a different sequence of binary code; it completely changes what the narrative is likely to be about). But that human may also have many misconceptions about what the text could mean, due to the errors of previous translators, working with fragmentary information.

And then there’s the problem of context, he says:

Context is crucial to translation. A neural net doesn’t really understand context. What it has is statistics on which words usually appear around other words. This is not meaning, this is association. If all you have is association, what you are doing is making an unknown word repeat information already present in the sentence. That means the unknown word adds nothing to the information already known.

Humans don’t talk like that, or at least, not unless they are trained as politicians. Rather, each word should carry its own weight—or so they teach in English class. For example, let’s take an English proverb, “A stitch in time saves nine.” Suppose you didn’t know the meaning of “nine.” What would you insert instead from the context? “A stitch in time saves minutes”? “A stitch in time saves jobs”? “A stitch in time saves the footrace?”

As you look over the options, you realize that all of them might be “reasonable,” but not one of them conveys what the original proverb meant. The last one uses an analogy to running, which might even be considered “wrong game, wrong rules.”

He offers an example from translations of the Book of Proverbs in the Bible:

We don’t know what all the words mean, so we use parallelism and context to guess. By the time we are done, many proverbs sound like a lot of boring tautologies. They have lost all their information content.

Take, for example, the Biblical phrase “He who touches Israel touches the apple of my eye.” We get the gist. But what is this “apple of my eye”? For centuries, it was interpreted as “something very precious, something beautiful.” With modern linguistics it turned out to mean “the pupil of my eye.” Gives the phrase a whole new meaning, no? Nothing to do with precious and beautiful after all; much more to do with painful and personal.

Like a stick in the eye, that shock of information content is what happens in normal human discourse, and this is precisely what is missing in AI.

Seeing how AI can continue to help us with ancient texts is exciting but we are clearly only at the beginning of the journey.

Note 1: The map of Mesopotamia is courtesy: Goran tek-en/Wikimedia Commons/CC BY-SA 4.0

Note 2: The scholarly paper discussing this topic is Ethan Fetaya, Yonatan Lifshitz, Elad Aaron, and Shai Gordin, “Restoration of fragmentary Babylonian texts using recurrent neural networks,” Proceedings of the National Academy of Sciences 117 (37): 22743–22751 (September 2020); DOI: 10.1073/pnas.2003794117 (subscription required at PNAS but currently free at arXiv).


You may also enjoy:

Surprising ways AI can help recover lost languages. Researchers into lost languages hail the new technologies as a golden age for discovery.

And

Can AI help us decipher lost languages? That depends mainly on the reasons we haven’t yet deciphered ancient texts.

Does AI challenge Biblical archeology? Sadly, many surviving documents are so damaged that they cannot be read using traditional methods. AI can sometimes be used to decipher them.

