Cutting-edge AI techniques require tremendous compute power. At Sentient, we’ve created a unique network of distributed compute that allows us to solve complex problems, create breakthrough products, and patent innovative AI techniques across disciplines.

Our compute infrastructure connects over two million CPU cores and 5,000 GPU cards in thousands of locations across the world. But it’s not simply the size that makes it work. It’s the proprietary middleware that allows us to leverage our compute resources differently for every problem we’re working on.

For example, in evolutionary computation, our compute infrastructure gives us the ability to distribute a dataset across myriad resources and let each one evolve and mutate its own population of candidate solutions. That middleware also lets us collect the most successful generations of solutions, redistribute them to more resources, and continue evolving smarter and smarter algorithms.
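The pattern described above resembles what the evolutionary-computation literature calls an island model: independent populations evolve in parallel, and the fittest individuals are periodically collected and redistributed. The sketch below is a minimal, illustrative version of that idea, not Sentient's actual middleware; the toy OneMax fitness function and all parameter names are assumptions chosen for clarity.

```python
import random

def fitness(bits):
    # Toy objective (OneMax): count of 1-bits; stands in for real solution quality.
    return sum(bits)

def evolve_island(pop, generations, mutation_rate=0.02):
    # Evolve one island's population with two-way tournament selection
    # and per-bit mutation. Each island runs independently on its own node.
    for _ in range(generations):
        new_pop = []
        for _ in range(len(pop)):
            a, b = random.sample(pop, 2)
            parent = a if fitness(a) >= fitness(b) else b
            child = [bit ^ (random.random() < mutation_rate) for bit in parent]
            new_pop.append(child)
        pop = new_pop
    return pop

def migrate(islands, n_migrants=2):
    # Collect each island's best individuals and redistribute them to all
    # islands, mirroring the "collect and redistribute" step in the text.
    best = []
    for pop in islands:
        best.extend(sorted(pop, key=fitness, reverse=True)[:n_migrants])
    for pop in islands:
        pop.sort(key=fitness)                       # weakest first
        pop[:len(best)] = [list(b) for b in best]   # replace weakest with migrants
    return islands

def run(n_islands=4, pop_size=20, genome_len=32, rounds=5, gens_per_round=10):
    random.seed(0)
    islands = [[[random.randint(0, 1) for _ in range(genome_len)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(rounds):
        islands = [evolve_island(pop, gens_per_round) for pop in islands]
        islands = migrate(islands)
    return max(fitness(ind) for pop in islands for ind in pop)
```

In a real distributed setting, each `evolve_island` call would run on a separate machine and `migrate` would move only compact genomes over the network, which is what makes the approach scale so well across thousands of locations.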

And that’s just a single example. Scale allows us to parcel out massive data sets. It allows us to research new avenues in our preferred disciplines. It gives our AI the freedom to make more observations, to try more novel ideas, and, ultimately, to make better decisions. Scale is the backbone of our AI platform.