Sentient Patent Explainer: How Randomness Makes Models Smarter

Imagine your commute. If you’re like most people, you’re driving to work in the same car on the same route each and every day. More than likely, you don’t even have to think about the turns to take, which streets are most often congested, what the speed limits are, or any other substantive detail. It’s the same commute, day in, day out.

Now imagine you were told today, the moment you put the key in the ignition, that you can’t go down streets with names that begin with letters in the first half of the alphabet. How does that change your calculus? Suddenly, every rote action you take on a daily basis requires actual thought. You have to readjust, on the fly, for the entirety of the commute.

The same is true for an algorithm. Algorithms make decisions, whether they show you a search result or predict the likelihood of a disease diagnosis, and those decisions are based on data. But what happens when you insert randomness into the algorithm's environment? What decisions does it make from that altered state?

For example, let’s consider a time series of market data, like the exchange rate of the dollar versus the yen, and let’s posit that you’re trading one against the other. An algorithm will predict the future price from the data it has (daily currency prices, say) and recommend which currency you should invest in based on that prediction.

Now, say you’re training that algorithm to be successful in this space. Generally, it will be trained on historical data; in our example, the years 2000 through 2010. The algorithm will decide what to purchase or sell on July 25th, 2006 based on its training data, in hopes of making a successful (in this case lucrative) decision. And if that decision is to purchase more of the dollar because the indicators say July 25th is a smart time to do so, the algorithm will make the same choice, every time.

If you’re wondering what this has to do with driving to work, we’re getting there now. Say you tell the algorithm on July 25th, you know what, you’re not going to purchase the dollar. In fact, you’re going to purchase yen. You assign this position randomly, a shot in the dark. Why? Because you want to see what decisions the algorithm will make. You want to see how it reacts to finding itself in an unusual situation.
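To make the idea concrete, here is a minimal sketch of that kind of forced-position experiment. Everything in it, the moving-average rule, the synthetic price series, and the profit-and-loss fitness measure, is an illustrative assumption rather than the patented method itself:

```python
# Hypothetical sketch: evaluate a toy trading rule on historical prices, but on
# one randomly chosen day force it into a position it did not choose, then see
# how it fares afterward. Names and logic here are illustrative assumptions.
import random

def moving_average_signal(prices, day, window=5):
    """Toy rule: hold dollars if today's price is above its recent average, else hold yen."""
    recent = prices[max(0, day - window):day]
    return "dollar" if prices[day] > sum(recent) / len(recent) else "yen"

def evaluate(prices, forced_day=None, forced_position=None):
    """Run the rule over the series; optionally override the decision on one day."""
    pnl = 0.0
    position = "dollar"
    for day in range(1, len(prices)):
        decision = moving_average_signal(prices, day)
        if day == forced_day:
            decision = forced_position  # the induced alteration: a shot in the dark
        # Crude profit-and-loss: holding dollars gains when the price rises,
        # holding yen gains when it falls.
        change = prices[day] - prices[day - 1]
        pnl += change if position == "dollar" else -change
        position = decision
    return pnl

# Synthetic "exchange rate" series standing in for daily 2000-2010 data.
random.seed(0)
prices = [100.0]
for _ in range(2500):
    prices.append(prices[-1] + random.uniform(-1, 1))

baseline = evaluate(prices)
# Force a contrarian yen position on one randomly chosen day and compare.
day = random.randrange(1, len(prices))
perturbed = evaluate(prices, forced_day=day, forced_position="yen")
print(f"baseline P&L: {baseline:.2f}, with forced yen position on day {day}: {perturbed:.2f}")
```

Comparing the baseline run with the perturbed run is one simple way to score how well a strategy copes with a circumstance it never chose.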

In layman’s terms, this is the substance behind our most recent patent, titled “Data mining technique with induced environmental alteration.” Awarded to Babak Hodjat, Hormoz Shahrzad, and Gilles Demaneuf, it’s an advancement uniquely tailored to time-series datasets that allows AI practitioners to get more testing (and, hopefully, better algorithms) out of a single dataset. Models can be judged not just on what they’ve learned from the training data and how they act on the unseen test set, but also on how they handle novel circumstances they’re forced into. Much as you could judge a driver’s ability to adapt and think on the fly when thrown into bizarre circumstances, so too can you judge an algorithm’s fitness. It’s an interesting and, we hope, important addition to our patent suite, and we think it’s something worth celebrating.

This is Sentient’s tenth awarded patent, a number we expect to grow in the coming months. To read about some of our other research and patents (including one we announced just yesterday), see below.