Jason Gauci

Austin, Texas, United States
7K followers 500+ connections

About

I bring machine learning to billions of people.

Most recently, I spent a few years…

Articles by Jason

  • Squashing bugs using AI and Machine Learning with Boris Paskalev

    The best part of hosting Programming Throwdown is reading emails from people who listened to this show before they had…

  • Episode 94: Search at Etsy

    What actually happens when you type something in the search bar at the top of etsy.com and hit enter? This awesome…

  • Episode 93: A Journey to Programming Mastery

    Every interview we do is such an exciting and unique experience. Patrick and I had great pleasure in hosting Andy and…

  • Basics of UI Design for Engineers with Erik Kennedy

    Surprise! Weekend episode :-D Every piece of code you write is either going to be for computer-to-computer interaction,…

  • Episode 91: Functional Programming with Adam Gordon Bell

    Hey all! Since episode 82, we received a ton of email asking for more info on functional programming (FP). To cover FP…

  • Episode 89: From Combat to Code

    Hey all!! Today we are sitting down with Jerome Hardaway. Jerome is an Air Force Veteran and the founder of Vets Who…

  • Episode 88: Image Processing

    If you use ASCII encoding, the entire Oxford dictionary is about 5 million bytes. A single 4K image contains 25 million…

  • Episode 87: Typescript

    While the web is one of the easiest platforms for deploying software, it can also be one of the trickiest to debug…

  • Episode 86: Wolfram Language and Mathematica

    Happy New Year! Today we are sitting down with Stephen Wolfram, inventor of Mathematica, Wolfram Alpha, and Wolfram…

  • Episode 85: Holiday Party

    Hey all! This is our annual holiday show! We give away prizes and talk about random news stories :-D. Thanks to…


Experience

  • Circuit

    Austin, Texas, United States

  • -

    Austin, Texas, United States

  • -

    Austin, Texas Metropolitan Area

  • -

    Austin, Texas, United States

  • -

    Menlo Park, CA

  • -

  • -

    San Francisco Bay Area

  • -

    Orlando, Florida Area

  • -

Education

  • University of Central Florida

    -

    -

    Co-invented HyperNEAT, a novel method for evolving large artificial neural networks. Created the first HyperNEAT implementation, now adopted by research institutions worldwide. Created a machine learning agent capable of mastering most board games without any knowledge of the rules.

  • -

    -

Publications

  • Evolving neural networks for geometric game-tree pruning

    GECCO 2011

    Game-tree search is the engine behind many computer game opponents. Traditional game-tree search algorithms decide which move to make based on simulating actions, evaluating future board states, and then applying the evaluations to estimate optimal play by all players. Yet the limiting factor of such algorithms is that the search space increases exponentially with the number of actions taken (i.e. the depth of the search). More recent research in game-tree search has revealed that even more important than evaluating future board states is effective pruning of the search space. Accordingly, this paper discusses Geometric Game-Tree Pruning (GGTP), a novel evolutionary method that learns to prune game trees based on geometric properties of the game board. The experiment compares Cake, a minimax-based game-tree search algorithm, with HyperNEAT-Cake, the original Cake algorithm combined with an indirectly encoded, evolved GGTP algorithm. The results show that HyperNEAT-Cake wins significantly more games than regular Cake playing against itself.

    Other authors
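
The baseline the abstract builds on, minimax search with pruning, can be sketched in a few lines. The toy Python below illustrates standard alpha-beta pruning only; it is not the paper's GGTP method (which evolves the pruning itself), and the `tree`/`leaves` data are invented for the example.

```python
# Minimal minimax with alpha-beta pruning on a toy game tree.
# Illustrates the baseline game-tree search described in the abstract,
# NOT the paper's evolved GGTP pruning.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Return the minimax value of `node`, skipping (pruning) branches
    that cannot change the final decision."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: the opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff
                break
        return value

# Invented toy tree: leaves carry their evaluation directly.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: tree.get(n, []), lambda n: leaves.get(n, 0))
print(best)  # → 3 (branch "b" is cut off after seeing b1 = 2)
```

The exponential blow-up the abstract mentions is exactly why cutoffs like these matter: each pruned subtree removes an entire exponential family of continuations from the search.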
  • Autonomous Evolution of Topographic Regularities in Artificial Neural Networks

    Neural Computation

    Looking to nature as inspiration, for at least the past 25 years, researchers in the field of neuroevolution (NE) have developed evolutionary algorithms designed specifically to evolve artificial neural networks (ANNs). Yet the ANNs evolved through NE algorithms lack the distinctive characteristics of biological brains, perhaps explaining why NE is not yet a mainstream subject of neural computation. Motivated by this gap, this letter shows that when geometry is introduced to evolved ANNs through the hypercube-based neuroevolution of augmenting topologies algorithm, they begin to acquire characteristics that indeed are reminiscent of biological brains. That is, if the neurons in evolved ANNs are situated at locations in space (i.e., if they are given coordinates), then, as experiments in evolving checkers-playing ANNs in this letter show, topographic maps with symmetries and regularities can evolve spontaneously. The ability to evolve such maps is shown in this letter to provide an important advantage in generalization. In fact, the evolved maps are sufficiently informative that their analysis yields the novel insight that the geometry of the connectivity patterns of more general players is significantly smoother and more contiguous than less general ones. Thus, the results reveal a correlation between generality and smoothness in connectivity patterns. They also hint at the intriguing possibility that as NE matures as a field, its algorithms can evolve ANNs of increasing relevance to those who study neural computation in general.

    Other authors
  • Indirect Encoding of Neural Networks for Scalable Go

    PPSN 2010

    The game of Go has attracted much attention from the artificial intelligence community. A key feature of Go is that humans begin to learn on a small board, and then incrementally learn advanced strategies on larger boards. While some machine learning methods can also scale the board, they generally only focus on a subset of the board at one time. Neuroevolution algorithms particularly struggle with scalable Go because they are often directly encoded (i.e. a single gene maps to a single connection in the network). Thus this paper applies an indirect encoding to the problem of scalable Go that can evolve a solution to 5×5 Go and then extrapolate that solution to 7×7 Go and continue evolution. The scalable method is demonstrated to learn faster and ultimately discover better strategies than the same method trained on 7×7 Go directly from the start.

    Other authors
  • A hypercube-based encoding for evolving large-scale neural networks

    MIT Press

    Research in neuroevolution—that is, evolving artificial neural networks (ANNs) through evolutionary algorithms—is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.

    Other authors
  • A Case Study on the Critical Role of Geometric Regularity in Machine Learning

    AAAI 2008

    An important feature of many problem domains in machine learning is their geometry. For example, adjacency relationships, symmetries, and Cartesian coordinates are essential to any complete description of board games, visual recognition, or vehicle control. Yet many approaches to learning ignore such information in their representations, instead inputting flat parameter vectors with no indication of how those parameters are situated geometrically. This paper argues that such geometric information is critical to the ability of any machine learning approach to effectively generalize; even a small shift in the configuration of the task in space from what was experienced in training can go wholly unrecognized unless the algorithm is able to learn the regularities in decision-making across the problem geometry. To demonstrate the importance of learning from geometry, three variants of the same evolutionary learning algorithm (NeuroEvolution of Augmenting Topologies), whose representations vary in their capacity to encode geometry, are compared in checkers. The result is that the variant that can learn geometric regularities produces a significantly more general solution. The conclusion is that it is important to enable machine learning to detect and thereby learn from the geometry of its problems.

    Other authors
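
The abstract's central point, that a flat parameter vector hides geometry while a geometric representation exposes it, can be shown with a toy example. This hand-written sketch is not the NEAT variants compared in the paper; `flat` and `relative` are invented names for the illustration.

```python
# Toy illustration: a flat vector changes wholesale when a pattern
# shifts in space, while a geometry-aware representation (offsets
# relative to the pattern itself) is unchanged.

def flat(board_cells, size=4):
    """Flatten occupied cells into a position-indexed vector."""
    vec = [0] * (size * size)
    for (r, c) in board_cells:
        vec[r * size + c] = 1
    return vec

def relative(board_cells):
    """Describe the pattern by offsets from its lexicographically
    smallest cell -- a translation-invariant, geometric view."""
    r0, c0 = min(board_cells)
    return sorted((r - r0, c - c0) for (r, c) in board_cells)

pattern = {(0, 0), (0, 1), (1, 0)}
shifted = {(r + 2, c + 2) for (r, c) in pattern}  # same shape, moved

print(flat(pattern) == flat(shifted))          # → False (looks unrelated)
print(relative(pattern) == relative(shifted))  # → True (same pattern)
```

A learner fed only the flat vectors must rediscover the shifted pattern from scratch, which is exactly the generalization failure the paper attributes to geometry-blind representations.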
  • Generating large-scale neural networks through discovering geometric regularities

    GECCO 2007

    Connectivity patterns in biological brains exhibit many repeating motifs. This repetition mirrors inherent geometric regularities in the physical world. For example, stimuli that excite adjacent locations on the retina map to neurons that are similarly adjacent in the visual cortex. That way, neural connectivity can exploit geometric locality in the outside world by employing local connections in the brain. If such regularities could be discovered by methods that evolve artificial neural networks (ANNs), then they could be similarly exploited to solve problems that would otherwise require optimizing too many dimensions to solve. This paper introduces such a method, called Hypercube-based Neuroevolution of Augmenting Topologies (HyperNEAT), which evolves a novel generative encoding called connective Compositional Pattern Producing Networks (connective CPPNs) to discover geometric regularities in the task domain. Connective CPPNs encode connectivity patterns as concepts that are independent of the number of inputs or outputs, allowing functional large-scale neural networks to be evolved. In this paper, this approach is tested in a simple visual task for which it effectively discovers the correct underlying regularity, allowing the solution to both generalize and scale without loss of function to an ANN of over eight million connections.

    Other authors

Patents

  • Text Transcript Generation from a Communication Session

    Filed US 61/529,607

Projects

  • Programming Throwdown

    - Present

    Programming Throwdown aims to educate computer scientists and software engineers on a cavalcade of programming and tech topics. Each show covers a new programming language, so listeners will be able to speak intelligently about any of them.

  • Trivipedia

    -

    Trivia game using content extracted from Wikipedia. Over 300,000 questions are generated automatically from Wikipedia text.


Honors & Awards

  • Presidential Doctoral Fellowship

    University of Central Florida

    Two undergraduate students from each department of the university are selected annually to receive the Presidential Doctoral Fellowship. These awards provide multi-year support to the most qualified PhD students.

  • National Merit Scholar

    National Merit Scholarship Corporation

    The National Merit® Scholarship Program is an academic competition for recognition and scholarships that began in 1955. High school students enter the National Merit Program by taking the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT®)–a test which serves as an initial screen of approximately 1.5 million entrants each year–and by meeting published program entry/participation requirements. About 10,000 students go on to become National Merit Scholars.

Languages

  • English

    Native or bilingual proficiency

Organizations

  • Association for Computing Machinery

    Vice President, UCF Chapter

    - Present
