Sentient’s Traveling Roadshow: Come Say Hi at GECCO

The Genetic and Evolutionary Computation Conference (GECCO) has long been one of our favorite yearly events. It’s a phenomenal opportunity to hear from the brightest minds in the field, rub shoulders with other innovative AI practitioners, and present some of our most recent papers. This year is no different, and we’re thrilled to be both sponsoring GECCO and delivering a slew of tutorials and talks.

If you’re in Denver for this year’s GECCO, please do come by and say hello. We’ll be manning our table all day and we’re always looking for new talent in the field. We’d of course also love it if you stopped by and checked out any of our talks over the long weekend. You can find that schedule (and the related abstracts) below:

Introductory Tutorial on Evolving Neural Networks

Speaker: Risto Miikkulainen (Sentient Technologies Inc. and the University of Texas at Austin)

Time & location: Thursday, July 21, 2:00–3:50 p.m. (Chasm Creek B)

Abstract: In the only talk not expressly associated with a recent paper, Risto Miikkulainen will give a concise introduction to evolving neural networks. If you’re interested in the field, it’s a great primer on the current state of the art and the ideas behind ENNs.

Student Workshop 2016: Evolutionary Computation Research at Sentient Technologies, Inc.

Speaker: Risto Miikkulainen (Sentient Technologies Inc. and the University of Texas at Austin)

Time & location: Thursday, July 21, 4:10–5:50 p.m. (Wind River A)

Estimating the Advantage of Age-Layering in Evolutionary Algorithms

Authors: Hormoz Shahrzad (Sentient Technologies), Babak Hodjat (Sentient Technologies) and Risto Miikkulainen (Sentient Technologies and the University of Texas at Austin)

Time & location: Friday, July 22, 2:25–2:50 p.m. (Wind Star A)

Abstract: In an age-layered evolutionary algorithm, candidates are evaluated on a small number of samples first; if they seem promising, they are evaluated with more samples, up to the entire set. In this manner, weak candidates can be eliminated quickly, and evolution can proceed faster. In this paper, the fitness-level method is used to derive a theoretical upper bound for the runtime of a (k + 1) age-layered evolutionary strategy, showing a significant potential speedup compared to a non-layered counterpart. The parameters of the upper bound are estimated experimentally in the 11-Multiplexer problem, verifying that the theory can be useful in configuring age layering for maximum advantage. The predictions are validated in a practical implementation of age layering, confirming that 60-fold speedups are possible with this technique.
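The evaluation scheme the abstract describes can be sketched as a simple successive filter. This is a toy illustration, not the paper's implementation: the fitness function, the layer sizes, and the fixed cutoff rule are all hypothetical stand-ins for whatever a real application would use.

```python
import random

def evaluate(candidate, samples):
    """Toy fitness: fraction of samples the candidate 'solves'.
    (A stand-in for a real, expensive per-sample evaluation.)"""
    return sum(1 for s in samples if candidate > s) / len(samples)

def age_layered_filter(candidates, samples, layer_sizes, cutoff=0.5):
    """Evaluate candidates on successively larger sample subsets,
    discarding any whose partial fitness drops below `cutoff`.
    `layer_sizes` is an increasing list of sample counts; weak
    candidates are eliminated cheaply at the small early layers."""
    survivors = list(candidates)
    for n in layer_sizes:
        subset = samples[:n]
        survivors = [c for c in survivors if evaluate(c, subset) >= cutoff]
    return survivors

random.seed(0)
samples = [random.random() for _ in range(100)]
candidates = [random.random() for _ in range(50)]
elite = age_layered_filter(candidates, samples, layer_sizes=[10, 30, 100])
```

The speedup comes from the early layers: most weak candidates are rejected after only 10 evaluations instead of 100.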

Evolving LSTMs with Maximum Information Objective to Achieve Deep Memory

Authors: Aditya Rawal (Sentient Technologies / University of Texas at Austin) and Risto Miikkulainen (Sentient Technologies and the University of Texas at Austin)

Time & location: Friday, July 22, 4:35–5:00 p.m. (Chasm Creek B)

Abstract: Reinforcement Learning agents with memory are constructed in this paper by extending the neuroevolutionary algorithm NEAT to incorporate LSTM cells, i.e. special memory units with gating logic. Initial evaluation on POMDP tasks indicated that memory solutions obtained by evolving LSTMs outperform traditional RNNs. Scaling neuroevolution of LSTMs to deep memory problems is challenging because: (1) the fitness landscape is deceptive, and (2) a large number of associated parameters need to be optimized. To overcome these challenges, a new secondary optimization objective is introduced that maximizes the information (Info-max) stored in the LSTM network. The network training is split into two phases. In the first phase (unsupervised phase), independent memory modules are evolved by optimizing for the info-max objective. In the second phase, the networks are trained by optimizing the task fitness. Results on two different memory tasks indicate that neuroevolution can discover powerful LSTM-based memory solutions that outperform traditional RNNs.
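The two-phase structure can be caricatured in a few lines. To be clear, everything here is a made-up stand-in: `memory_capacity` is a toy variance proxy playing the role of the paper's info-max objective (it is not the actual measure), and the "modules" are plain vectors rather than evolved LSTM cells. The point is only the shape of the pipeline: an unsupervised phase that selects for information content, followed by a phase that selects for task fitness.

```python
import random

def memory_capacity(module):
    """Toy proxy for the info-max objective: variance of the module's
    stored values (hypothetical; not the paper's actual measure)."""
    mean = sum(module) / len(module)
    return sum((v - mean) ** 2 for v in module) / len(module)

def task_fitness(module, target):
    """Hypothetical task objective: negative distance to a target pattern."""
    return -sum(abs(v - t) for v, t in zip(module, target))

random.seed(1)
# Phase 1 (unsupervised): keep the modules that score best on the
# info-max proxy, with no reference to the eventual task.
pop = [[random.random() for _ in range(4)] for _ in range(20)]
pop.sort(key=memory_capacity, reverse=True)
memory_modules = pop[:5]

# Phase 2: rank the surviving high-information modules by task fitness.
target = [1.0, 0.0, 1.0, 0.0]
best = max(memory_modules, key=lambda m: task_fitness(m, target))
```

Splitting the search this way sidesteps the deceptive fitness landscape: memory quality is selected for directly before the task reward ever enters the picture.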

A Many-task Approach to Behavior Characterization for Novelty Search

Authors: Elliot Meyerson (Sentient Technologies / The University of Texas at Austin), Joel Lehman (IT University of Copenhagen), and Risto Miikkulainen (Sentient Technologies and the University of Texas at Austin)

Time & location: Sunday, July 24, 9:00–10:15 a.m. (Mesa Verde B)

Abstract: Novelty search and related diversity-driven algorithms provide a promising approach to overcoming deception in complex domains. The behavior characterization (BC) is a critical choice in the application of such algorithms. The BC maps each evaluated individual to a behavior, i.e., some vector representation of what the individual is or does during evaluation. Search is then driven towards diversity in a metric space of these behaviors. BCs are typically built either from hand-designed features, which are limited by human expertise, or from generic descriptors, which cannot exploit domain nuance. The main contribution of this paper is an approach that addresses these shortcomings. Generic behaviors are recorded from evolution on several training tasks, and a new BC is learned from them that funnels evolution towards successful behaviors on any further tasks drawn from the domain. This approach is tested in increasingly complex simulated maze-solving domains, where it outperforms both hand-coded and generic BCs, in addition to outperforming objective-based search. The conclusion is that adaptive BCs can improve search in many-task domains with little human expertise.
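For readers new to novelty search, the scoring step the abstract alludes to — rewarding distance from other behaviors in the BC's metric space — can be sketched as follows. The behavior vectors and the choice of k here are purely illustrative, not drawn from the paper.

```python
import math

def novelty(behavior, others, k=3):
    """Novelty of a behavior vector = mean Euclidean distance to its
    k nearest neighbors among `others` (population plus archive)."""
    dists = sorted(math.dist(behavior, b) for b in others)
    return sum(dists[:k]) / min(k, len(dists))

# Three individuals behave almost identically; one is an outlier.
population = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
archive = []  # past novel behaviors would accumulate here
scores = {b: novelty(b, [o for o in population + archive if o != b])
          for b in population}
most_novel = max(scores, key=scores.get)
```

Selection then favors `most_novel` (the outlier) rather than whichever individual scores best on the task objective, which is what lets the search escape deceptive local optima.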

Hope to see you in Denver!