About
I bring machine learning to billions of people.
Most recently, I spent a few years…
Articles by Jason
Activity
Sad news at Ford. The Lightning was wicked quick and had so much potential. Hoping the surrounding technologies and groups that supported the…
Liked by Jason Gauci
What do you do when your website needs to run a long job (such as transcribing a long audio file)? What happens when your program that runs on ten…
Shared by Jason Gauci
Check out our latest work applying Reinforcement Learning (RL) at scale to outperform a frontier model on the Deep Research use case. Deep Research…
Liked by Jason Gauci
Experience
Education
University of Central Florida
Co-invented HyperNEAT, a novel method for evolving large artificial neural networks. Created the first HyperNEAT implementation, now adopted by research institutions worldwide. Created a machine learning agent capable of mastering most board games without any knowledge of the rules.
Publications
Evolving neural networks for geometric game-tree pruning
GECCO 2011
Game-tree search is the engine behind many computer game opponents. Traditional game-tree search algorithms decide which move to make based on simulating actions, evaluating future board states, and then applying the evaluations to estimate optimal play by all players. Yet the limiting factor of such algorithms is that the search space increases exponentially with the number of actions taken (i.e. the depth of the search). More recent research in game-tree search has revealed that even more important than evaluating future board states is effective pruning of the search space. Accordingly, this paper discusses Geometric Game-Tree Pruning (GGTP), a novel evolutionary method that learns to prune game trees based on geometric properties of the game board. The experiment compares Cake, a minimax-based game-tree search algorithm, with HyperNEAT-Cake, the original Cake algorithm combined with an indirectly encoded, evolved GGTP algorithm. The results show that HyperNEAT-Cake wins significantly more games than regular Cake playing against itself.
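The pruning that GGTP learns to improve on can be sketched with textbook alpha-beta minimax, the family of search that Cake belongs to. The game interface below (`moves`, `apply`, `evaluate`, `is_terminal`) is hypothetical, for illustration only, and is not Cake's actual API:

```python
def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Return the minimax value of `state`, skipping (pruning) branches
    that provably cannot change the final decision."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if maximizing:
        value = float("-inf")
        for move in game.moves(state):
            value = max(value, alphabeta(game.apply(state, move),
                                         depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:  # remaining siblings cannot improve the result
                break          # -> prune this subtree
        return value
    else:
        value = float("inf")
        for move in game.moves(state):
            value = min(value, alphabeta(game.apply(state, move),
                                         depth - 1, alpha, beta, True, game))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value
```

The search cost still grows exponentially with depth; alpha-beta only trims branches that a fixed bound proves irrelevant, which is the kind of pruning decision the paper proposes to learn from board geometry instead.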
Autonomous Evolution of Topographic Regularities in Artificial Neural Networks
Neural Computation
Looking to nature as inspiration, for at least the past 25 years, researchers in the field of neuroevolution (NE) have developed evolutionary algorithms designed specifically to evolve artificial neural networks (ANNs). Yet the ANNs evolved through NE algorithms lack the distinctive characteristics of biological brains, perhaps explaining why NE is not yet a mainstream subject of neural computation. Motivated by this gap, this letter shows that when geometry is introduced to evolved ANNs through the hypercube-based neuroevolution of augmenting topologies algorithm, they begin to acquire characteristics that indeed are reminiscent of biological brains. That is, if the neurons in evolved ANNs are situated at locations in space (i.e., if they are given coordinates), then, as experiments in evolving checkers-playing ANNs in this letter show, topographic maps with symmetries and regularities can evolve spontaneously. The ability to evolve such maps is shown in this letter to provide an important advantage in generalization. In fact, the evolved maps are sufficiently informative that their analysis yields the novel insight that the geometry of the connectivity patterns of more general players is significantly smoother and more contiguous than less general ones. Thus, the results reveal a correlation between generality and smoothness in connectivity patterns. They also hint at the intriguing possibility that as NE matures as a field, its algorithms can evolve ANNs of increasing relevance to those who study neural computation in general.
Indirect Encoding of Neural Networks for Scalable Go
PPSN 2010
The game of Go has attracted much attention from the artificial intelligence community. A key feature of Go is that humans begin to learn on a small board, and then incrementally learn advanced strategies on larger boards. While some machine learning methods can also scale the board, they generally only focus on a subset of the board at one time. Neuroevolution algorithms particularly struggle with scalable Go because they are often directly encoded (i.e. a single gene maps to a single connection in the network). Thus this paper applies an indirect encoding to the problem of scalable Go that can evolve a solution to 5×5 Go and then extrapolate that solution to 7×7 Go and continue evolution. The scalable method is demonstrated to learn faster and ultimately discover better strategies than the same method trained on 7×7 Go directly from the start.
A hypercube-based encoding for evolving large-scale neural networks
MIT Press
Research in neuroevolution—that is, evolving artificial neural networks (ANNs) through evolutionary algorithms—is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.
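The resolution-independence idea can be illustrated with a toy stand-in for an evolved CPPN: a fixed function of the two neuron coordinates, queried once per source/target pair to fill in a weight matrix. The function below is illustrative only (a real connective CPPN is evolved, not hand-written), but it shows how one spatial pattern yields a connectivity pattern at any substrate resolution:

```python
import math

def cppn(x1, y1, x2, y2):
    # Toy stand-in for an evolved CPPN: a function of the source and
    # target neuron coordinates. Its dependence on distance alone gives
    # the resulting connectivity pattern a regular, local structure.
    d = math.hypot(x2 - x1, y2 - y1)
    return max(0.0, 1.0 - d)  # local excitation, fading with distance

def build_weights(n):
    """Sample the CPPN over an n x n grid of neurons laid out in the
    unit square, producing an (n*n) x (n*n) weight matrix. The same
    CPPN can be re-sampled at any resolution n without re-evolving."""
    coords = [(i / (n - 1), j / (n - 1)) for i in range(n) for j in range(n)]
    return [[cppn(x1, y1, x2, y2) for (x2, y2) in coords]
            for (x1, y1) in coords]
```

Because the weight of every connection is computed from coordinates rather than stored per connection, the genome size stays constant while the substrate scales, which is how the visual discrimination networks in the article reach millions of connections.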
A Case Study on the Critical Role of Geometric Regularity in Machine Learning
AAAI 2008
An important feature of many problem domains in machine learning is their geometry. For example, adjacency relationships, symmetries, and Cartesian coordinates are essential to any complete description of board games, visual recognition, or vehicle control. Yet many approaches to learning ignore such information in their representations, instead inputting flat parameter vectors with no indication of how those parameters are situated geometrically. This paper argues that such geometric information is critical to the ability of any machine learning approach to effectively generalize; even a small shift in the configuration of the task in space from what was experienced in training can go wholly unrecognized unless the algorithm is able to learn the regularities in decision-making across the problem geometry. To demonstrate the importance of learning from geometry, three variants of the same evolutionary learning algorithm (NeuroEvolution of Augmenting Topologies), whose representations vary in their capacity to encode geometry, are compared in checkers. The result is that the variant that can learn geometric regularities produces a significantly more general solution. The conclusion is that it is important to enable machine learning to detect and thereby learn from the geometry of its problems.
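The generalization failure described here can be seen in a toy example (not from the paper): a detector with one independent weight per board cell only fires where its pattern appeared during training, while a detector that applies the same small kernel across the board's geometry recognizes the pattern wherever it is shifted:

```python
def detect_flat(board, weights):
    # Flat representation: one independent weight per cell, with no
    # notion of where cells sit relative to one another.
    return sum(w * v for w, v in zip(weights, board))

def detect_geometric(board, n, kernel):
    # Geometric representation: the same k x k kernel is applied at
    # every location of the n x n board, so a learned local pattern
    # is recognized wherever it appears.
    k = len(kernel)
    best = float("-inf")
    for r in range(n - k + 1):
        for c in range(n - k + 1):
            score = sum(kernel[i][j] * board[(r + i) * n + (c + j)]
                        for i in range(k) for j in range(k))
            best = max(best, score)
    return best
```

Shifting the target pattern by two cells drops the flat detector's response to zero while leaving the geometric detector's response unchanged, which is the same translation sensitivity the checkers comparison exposes.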
Generating large-scale neural networks through discovering geometric regularities
GECCO 2007
Connectivity patterns in biological brains exhibit many repeating motifs. This repetition mirrors inherent geometric regularities in the physical world. For example, stimuli that excite adjacent locations on the retina map to neurons that are similarly adjacent in the visual cortex. That way, neural connectivity can exploit geometric locality in the outside world by employing local connections in the brain. If such regularities could be discovered by methods that evolve artificial neural networks (ANNs), then they could be similarly exploited to solve problems that would otherwise require optimizing too many dimensions to solve. This paper introduces such a method, called Hypercube-based Neuroevolution of Augmenting Topologies (HyperNEAT), which evolves a novel generative encoding called connective Compositional Pattern Producing Networks (connective CPPNs) to discover geometric regularities in the task domain. Connective CPPNs encode connectivity patterns as concepts that are independent of the number of inputs or outputs, allowing functional large-scale neural networks to be evolved. In this paper, this approach is tested in a simple visual task for which it effectively discovers the correct underlying regularity, allowing the solution to both generalize and scale without loss of function to an ANN of over eight million connections.
Patents
TEXT TRANSCRIPT GENERATION FROM A COMMUNICATION SESSION
Filed US 61/529,607
Projects
Programming Throwdown
- Present
Programming Throwdown attempts to educate Computer Scientists and Software Engineers on a cavalcade of programming and tech topics. Every show will cover a new programming language, so listeners will be able to speak intelligently about any programming language.
Trivipedia
See projectTrivia game using content extracted from wikipedia. Over 300,000 questions are generated from wikipedia text automatically.
Honors & Awards
Presidential Doctoral Fellowship
University of Central Florida
Two undergraduate students from each department of the university are selected annually to receive the Presidential Doctoral Fellowship. These awards provide multi-year support to the most qualified PhD students.
National Merit Scholar
National Merit Scholarship Corporation
The National Merit® Scholarship Program is an academic competition for recognition and scholarships that began in 1955. High school students enter the National Merit Program by taking the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT®)–a test which serves as an initial screen of approximately 1.5 million entrants each year–and by meeting published program entry/participation requirements. About 10,000 students go on to become National Merit Scholars.
Languages
English
Native or bilingual proficiency
Organizations
Association for Computing Machinery
Vice President, UCF Chapter
- Present
More activity by Jason
Last night was great hearing a fireside with Jade Wang and Kenton Varda, chatting with Ehren Kret CTO at Signal Messenger. Favorite moment: Ehren…
Liked by Jason Gauci
After 7 incredible years — 5 at Argo and 2 at Latitude — it’s time for me to turn the page. I’m taking an extended break to rest, reset, and figure…
Liked by Jason Gauci
We’re hiring a Distinguished Software Engineer at LinkedIn to shape the next generation of agentic AI and large-scale productivity systems. This is a…
Liked by Jason Gauci
I’m excited to share that I’ve joined SigmaSense to lead development of software-defined, physical AI sensing with high-fidelity, real-time…
Liked by Jason Gauci
I'm deeply sorry to hear about the recent layoffs at Amazon. My time there was filled with wonderful memories, and I truly believe that the greatest…
Liked by Jason Gauci
🚨 Lessons from the AWS us-east-1 outage on Oct 19 🚨 A single low-level DNS automation bug in DynamoDB propagated into a massive multi-service…
Liked by Jason Gauci
We’re hiring! 👋 I’m looking for engineers — both early-career and experienced — to join me in the Search Serving Infrastructure team at OpenAI…
Liked by Jason Gauci
Our paper with Yijia Wang, "Faster RL by Freezing Slow States," was recently accepted at Management Science. We explore a new RL approach…
Liked by Jason Gauci
I’m thrilled to share that I’ve joined Instagram as Head of the Relevance PM team. Instagram is one of those rare, iconic products that has…
Liked by Jason Gauci
Richard Sutton believes that true intelligence is not about behavior cloning but about learning from experience. LLMs predict what humans would say…
Liked by Jason Gauci
we're up in times square 🙂 The future of AI depends on the systems it runs on -- and that’s what we’re building
Liked by Jason Gauci