I had the pleasure of seeing Gary Marcus give his talk “Making AI More Human” at the NYC Machine Learning meetup last night. For those unaware, Marcus is a Professor of Cognitive Psychology at NYU and recently founded an AI startup, Geometric Intelligence, based on his research on how children learn. It was an entertaining talk, and I agreed with his assessments of deep learning and AI in general. His approach to solving aspects of learning in AI overlaps with my own AI research for Pez.AI. Of course, I’m speculating on the specifics, since he didn’t provide any details. At a high level, the inspiration appears to come from childhood development, Bayesian reasoning, and probably some symbolic reasoning to boot.

Anyway, what’s the point of this post? Many in the audience were unhappy with the talk because it mostly rehashed old arguments and offered next to no information on his research. As a technical talk, it failed miserably. However, as he said at the end, his goal was to recruit, and with that aim in mind the talk should be treated as a pitch. From this perspective he did rather well. If you are a pre-product startup or have a highly technical product, you can learn a lot from his approach.

Most startup playbooks say that investors focus on three things: market/problem, idea/solution, and team. From a startup maturity perspective, Geometric Intelligence is pre-product and pre-revenue. If you don’t have a demo to show, the emphasis needs to be on selling the story. Let’s see how Marcus did that.

The Problem

Marcus has spent years honing his problem statement around AI. He’s published numerous articles on the subject, both academic and general-interest. A good 90% of the talk was selling the problem. In a nutshell, current advances in AI are limited to what’s known as Narrow or Weak AI: domain-specific problems whose solutions are not readily generalizable. For example, DeepMind’s AlphaGo can’t play chess. Of course, one could argue that most human Go players can’t play chess either and would have to go through a similarly long training process (okay, not millions of games). That said, deep learning has numerous well-known limitations, so the argument is not without merit.

Beware the sky that falls with robots

Marcus also presented an entertaining montage (with Charlie Chaplin-esque music, no less) of anthropomorphic robots falling over. In short, he was effective in bursting the AI bubble. So effective, in fact, that many probably missed the sleight of hand in the presentation: Marcus isn’t building robots and therefore isn’t fully addressing the Strong AI problem he meticulously presents.

The Solution

What is the solution that Marcus presents? Since there was no demo nor description of actual AI models, Marcus used his two-year-old son as a proxy for the solution. This is clever. Like sex and cute animals, babies always sell. The essential idea is that by mimicking childhood development, you can create an AI system that learns and adapts on a smaller, “sparse” dataset. All good, right? Except that most AI research is bio-inspired, from neural networks, to genetic algorithms, to swarm intelligence. Where techniques are not bio-inspired, they are still inspired by some aspect of nature, like simulated annealing.

A mature wetware computer interacting with the next generation model

Marcus suggested that their approach is based on probabilistic reasoning. This is reasonable on its own, but there is a fair amount of literature showing that humans are innately bad at probability. He gets around this by saying that we should only mimic/model the useful parts of humans. This doesn’t sound so different from the various other approaches that layer statistical methods onto models to improve them.

So what makes this approach better than all the others?

The Team

The team is what investors say is the most important of the three factors. The reasoning is that it takes a while to find product-market fit, so the initial problem and solution are likely impermanent, i.e. wrong. The team is responsible for both finding the correct product-market fit and executing on it. The team thus trumps the market and the idea, since it is ostensibly the one permanent fixture of the business. Both Marcus and his co-founder, Zoubin Ghahramani, are academics, so they are unproven as entrepreneurs. So what do you do to counter this risk? First you casually mention how smart you are (PhD at 23) and then downplay it by calling yourself a slacker, since your co-founder was recently inducted into the Royal Society. This establishes your credibility, so that when you say being an academic is like being an entrepreneur, everyone believes you.

Social Proof

At this point it’s time to deliver the coup de grâce: social proof. This is a silly invention by otherwise smart, socially awkward people, who hold that popularity is a good indicator of success. Others might call this herd mentality and also recognize that entrepreneurs are mavericks going against the grain of convention. So by the time there’s enough social proof, you’ve probably already missed the boat. Yet this is an important “metric” for many investors, potential employees, and sometimes even potential customers, so it cannot be ignored. Marcus leverages this well by saying that they have investments from a number of prominent CEOs. But what do they know about AI? Are they a good proxy for due diligence or not?


At the end of the day, it’s unclear what exactly geometric.ai has developed. What is clear is that a good sales pitch can be passed off as a technical talk. The real takeaway, to borrow from Peter Norvig, is that Marcus has demonstrated the unreasonable effectiveness of good storytelling.

Brian Lee Yung Rowe is Founder and Chief Pez Head of Pez.AI // Zato Novo, a conversational AI platform for guided data analysis and automated customer service. Learn more at Pez.AI.