The Data-Driven Weekly #1.7


Photo: Nathaniel Welch

It turns out I’m not the only one who thinks AI alarmism is a bit out of hand. The ITIF Luddite Award nominations include “alarmists, even including respected luminaries such as Elon Musk and Stephen Hawking, touting an artificial intelligence apocalypse.” Opinions are stewing on both sides of the issue, with Gizmodo writer George Dvorsky arguing that one shouldn’t be branded a Luddite for warning against potential perils. As with most controversies, the differences are smaller than the similarities: both camps contend that they are promoting a better future for humanity.

The real question is where your faith in humanity stems from. A recent, prosaic example of banning AI is the EU’s blocking of Facebook’s Moments application, which integrates facial recognition technology. Is this a case of Luddite regulators being alarmist about AI? It’s not so clear. The EFF’s open letter advocates that “people should be able to walk down a public street without fear that companies they’ve never heard of are tracking their every movement — and identifying them by name — using facial recognition technology”. Hence, the issue is our distrust of others’ use of AI, and not of AI itself. Will that change when Strong AI becomes a reality?

Deep Learning

All the publicity around AI has motivated more than alarmism. As AI transitions into a marketing term, it’s easy to get lost in our own imagination rather than the science behind the state of the art. Mosaic Ventures provides a nice overview of the different types of “AI” and the challenges these businesses face, while Re/Code gives a layman’s introduction to deep learning.

Digging deeper, it’s worth listening to Greg Corrado’s discussion of Google’s Smart Reply, which includes a brief description of seq2seq learning. Most of the interview is actually about management and how to create healthy, heterogeneous teams of researchers and engineers.

Chatbots

It’s hard to talk about chatbots without mentioning AI. In The Botification of News, Trushar Barot begins to explore how news and content delivery will change if bots become the de facto curators of news. To a certain extent, this has already happened with Facebook’s Timeline, product and movie recommendations, etc. What’s different is that AI personal assistants will act more as agents of the consumer/user, as opposed to the platform. That said, if your personal assistant is Facebook M, I imagine that content recommendations will still optimize for Facebook revenue first and your interests second.

Exploring an alternate reality, Elise Hu writes that it’s Time to Get Serious About Chat Apps. Her point is that content producers should leverage chatbot technology to directly engage with users over chat/messaging platforms. It will be interesting to see whether publishers have enough R&D budget to develop personalized news curators or if they will be relegated to dumb syndicators.

Category Theory

Outside the Haskell community, category theory has largely remained an esoteric branch of mathematics. Applications leveraging category theory are starting to appear, as when I briefly mentioned Combinatory Categorial Grammars. Bartosz Milewski has written a great introduction to category theory. I’ve daydreamed a bit about how to implement categories, and ultimately CCGs, in R, though I’m not sure how difficult it would be in base R. That said, leveraging the type system of my lambda.r package could produce something usable fairly quickly. If anyone is interested in exploring this with me, feel free to get in touch.
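To make that daydream slightly more concrete, here is a minimal sketch in base R of a category of typed functions. This is purely illustrative and rests on my own assumptions: the names morphism, compose, and id are invented for this sketch and are not part of lambda.r or any existing package. A morphism carries its domain and codomain as attributes, and composition is only defined when the types line up.

```r
# A toy category of typed functions in base R.
# Objects are type names (strings); morphisms are functions tagged
# with their domain ("from") and codomain ("to").
morphism <- function(f, from, to) {
  structure(f, from = from, to = to, class = "morphism")
}

# Identity morphism on an object: id_a : a -> a
id <- function(a) morphism(function(x) x, from = a, to = a)

# Composition g . f is only defined when cod(f) == dom(g)
compose <- function(g, f) {
  stopifnot(identical(attr(f, "to"), attr(g, "from")))
  morphism(function(x) g(f(x)),
           from = attr(f, "from"),
           to   = attr(g, "to"))
}

# Example: character -> numeric -> numeric
len <- morphism(nchar, from = "character", to = "numeric")
dbl <- morphism(function(x) 2 * x, from = "numeric", to = "numeric")
h   <- compose(dbl, len)
h("category")                                # 16
compose(h, id("character"))("category")      # identity law: still 16
# compose(len, dbl) would fail the type check, as it should
```

Checking the remaining category laws (associativity of compose, id as a two-sided unit) would be a matter of property tests; lambda.r’s type annotations could replace the hand-rolled attribute bookkeeping.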

Something Wow

Perhaps more aligned with Elon Musk’s vision of AI as an extension of humans is the Cyborg Olympics. These games highlight the advances made in robotics to benefit disabled people, particularly those who are paralyzed. Because of the robotic augmentation, contestants are called “pilots” rather than “athletes”, again highlighting the cooperation of man and machine.

Brian Lee Yung Rowe is Founder and Chief Pez Head of Pez.AI // Zato Novo, a conversational AI platform for guided data analysis and Q&A. Learn more at Pez.AI.

 

12 thoughts on “The Data-Driven Weekly #1.7”

  1. “Hence, the issue is our distrust of others’ use of AI, and not of AI itself.”

    sounds too much like, “guns don’t kill people, people kill people”. “others” get to use AI in ways peons can’t afford. it is asymmetric warfare, but with the Borg winning. by a lot.

    on R-bloggers, for example, there’s a tonne of posts from various folks extolling the wonder of using R to affect consumerism. when using data/R to influence consumers to buy Brand A rather than Brand X of some commodity, at what point does using psychological molding tools go too far? is it ok to use AI tools to convince some group, the poor let’s say, to vote for candidates actually in thrall to the 1%? Reagan managed it 35 years ago, and without such tools. given the motivation and incentive, I suspect the K Street crowd are quite busy. it does not make me view AI more favorably. some examples.


    • There are two core issues: 1) can humanity, as a species, behave ethically? Tools are an extension of human desire. 2) Is sentient AI a threat to humanity? Neither of these questions is easy to answer, and AI alarmism doesn’t help make things clearer.


      • It’s alarmist to be alarmist about alarmists.

        Alarms were raised in Hitler’s Germany. Those Jews (gypsies, gays, etc.) who failed to listen and respond suffered the consequences.

        Musk and Hawking are not alarmists. They are skeptics. Their skepticism is well founded: past is prologue.


  2. People are Complex Adaptive Systems (CAS) that contain and are contained within other CAS. One need not look far to find antagonists happily adapting themselves in opposition, e.g., viruses.

    Those who scoff at OpenAI – and the skeptical perspective it reflects – remind me of Market Fundamentalists and their idea of self-regulating markets. The Market may self-regulate – and exterminate us on the way to equilibrium (or some other disequilibrium). The recent financial debacle is an indication of that.

    Adam McKay’s film The Big Short – based on Michael Lewis’ book – does a good job of showing self-regulation in action. Gosh. It looks a lot like free-to-cheat. That’s a kind of self-regulation – an adaptation – I guess. Just not what was advertised – or expected by the likes of Alan Greenspan. But hey. For decades I’ve thought Greenspan shockingly naive.


    • — or expected by the likes of Alan Greenspan. But hey. For decades I’ve thought Greenspan shockingly naive.

      having followed the Monetarists for decades, they (modulo Laffer, etc.) are neither stupid nor naive. what they are is duplicitous. while their rhetoric is all about “the whole economy”, their actions are for the few. Greenspan was not ignorant of the data; he simply chose to act on other motivations and incentives.

      be careful what you wish for.


      • Duplicity indeed. The trust/distrust tipping point for me: Greenspan’s failure to follow through on his Irrational Exuberance speech. The signs of duplicity were there long before then, as you say.

        It’s possible to be duplicitous and naive. The concentration of wealth and power into ever fewer hands in the US was certainly intended – wished for – by its agents. The consequent instabilities – financial crisis, chaos in the Middle East, etc. – seem unintended. Simple models and simple minds.

        Contrast the “blow shit up” foreign policy of the US with the “invest and befriend” approach of China in recent years. There are – and will continue to be – real consequences.


  3. To bring the Great Recession bit of the comments back on topic, the connection of AI to a housing bubble, say, is this: how would AI help the negatively affected (house buyers, regulators, and taxpayers, to name a few) know of this effect sooner rather than later? (Traditional time series analyses only confirmed the house price movement.)

    The housing crisis was driven by several factors (you can look them up): 1) a global savings glut; 2) risk aversion (unwillingness to invest in private physical capital) on the part of those holding the glut; 3) historical data showing (American) house mortgages to be a stable asset class; 4) (the poisonous bit) the creation of enough mortgages by mortgage companies, and latterly banks, which could only be done by perverting historic underwriting norms; and 5) (more poison) opaque and false derivative instruments built from said mortgages. That all amounts to this: in order to sop up the savings glut thrown at the US housing market, the major actors in that market had to invent more mortgages than historic norms would permit, so they invented new rules for writing mortgages that were financially unsustainable.

    Sounds like a perfect use case for something called AI: find the fault before it cracks catastrophically. Use the data to walk back up to the rule set which generates that data, looking for changes to the rule set. In the Great Recession case, AI researchers would have found the expansion of Liar Loans (and such) long before they exploded. That assumes, of course, that regulators would have acted to rescind the ability to create Liar Loans (and such) upon early confirmation of their existence.
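    [Editor’s note: as a toy illustration of what “walking back up to the rule set” might look like, here is a minimal sketch in base R. It simulates a quiet change in underwriting rules as a jump in the monthly share of low-documentation loans (all numbers are made up for illustration) and scans for the point where the generating process most likely shifted.]

    ```r
    # Toy change-point scan: detect when the rules generating loan data
    # quietly changed, using the monthly share of low-doc mortgages.
    set.seed(42)
    n <- 120  # ten years of monthly observations
    low_doc <- c(rnorm(60, mean = 0.05, sd = 0.01),  # old underwriting rules
                 rnorm(60, mean = 0.20, sd = 0.03))  # rules quietly loosened

    # For each candidate split, compare the mean share before and after;
    # the largest gap marks the most likely rule change.
    splits <- 10:(n - 10)
    gap <- sapply(splits, function(k) {
      abs(mean(low_doc[1:k]) - mean(low_doc[(k + 1):n]))
    })
    cat("Suspected rule change around month", splits[which.max(gap)], "\n")
    # should flag a month near 60, well before any defaults show up
    ```

    A real system would look at loan-level attributes (documentation status, loan-to-value, debt-to-income) rather than a single aggregate, but the idea is the same: the fingerprint of a rule change shows up in the data long before the defaults do.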

    The cynic might ask the other question (which is still being asked, what with the savings glut still growing): how to pervert the rule set of some investment activity such as to absorb more than the historic level; the crash will be paid by government, of course.

    To close. There were some, Shiller in particular, whose natural intelligence divined the problem:
    “Mr. Shiller is sounding the same warning for real estate that he did for stocks.”

    That’s from a 2005 NYT article: http://www.nytimes.com/2005/08/21/business/yourmoney/be-warned-mr-bubbles-worried-again.html?_r=0

    This is precisely the sort of situation that something called AI would deal with before disaster strikes. Any notion of when, or if, that should come to pass?


    • Take a look at Ethereum and their ideas for prediction markets based on the blockchain.


    • In terms of an AI preempting a market crash, are you advocating an AI that behaves like the Chinese government?


      • I don’t think so.

        fact is, though, all data in the human realm is the direct consequence of human rules. there is no Newton, Heisenberg, or Einstein discovering the externally applied rules of conduct; there is no God, not even Adam Smith or Ayn Rand, making and enforcing the rules. some humans make the rules, and others act, creating a data trail. we can look at the data trail and deduce the rules which generated the data trail. when some humans change/bend/break the rules, usually in secret (more or less), then assuming the data trail which results conforms in some way to the pre-existing rule set leads to bad decisions.

        whether the Fed, ECB, Beijing, IMF, or any other rule maker in market activity is “smarter” depends entirely upon whether you are a Favorite Son of said rule maker. markets without rules, such as the 19th century US, lead to mass hardship and autocracy. some, esp. today’s right wing political parties in the US and EU, want to take us back to such times. AI could be the instrument to support such regression. or it could be the weapon opposing it. it’s up to you to decide how to use AI.

        first, of course, AI has to be shown to work, i.e. generate accurate answers for some definition of accurate, more efficiently than simple fiat decision making. that’s not likely.


  4. — In terms of an AI preempting a market crash, are you advocating an AI that behaves like the Chinese government?

    It seems unlikely the Chinese Government wanted the equities return profile – or the general distribution of those returns – it got in 2015. It seems another example of unintended consequences. Some Favorite Sons did well. A certain flexibility of the rules was evident during the collapse. Shocking, I know.

    — AI: find the fault before it cracks catastrophically

    Shiller was special as a Cassandra because he spoke with people at the Highest Levels – and was ignored. The view was based on relatives (cheap vs dear) derived from the data. The motivation for investigation I see as more important, founded on a sense-of-system: things feel out of whack. Are they? How much? I had the same sense at the time – as did friends of mine. We’ve been around markets for a long time and had seen a lot of ups – then downs.

    A willingness to distrust the Authorized Version differentiates those who were not surprised by the collapse and those who were. Would an AI have been willing to distrust the Authorized Version? Heresy is punished. That cuts to your point about “shown to work”. Forecasting complex systems (with options on chaotic behavior) is very hard.

    Back to OpenAI and CAS…. There are hosts, symbiotes and parasites in the world. I view OpenAI (symbiote) as defending society (host) against attack by ClosedAI (parasite). Parasites will attack. That’s what they do. It’s who they are. I see that in your “change/bend/break” remark.

