Happy Weekend and welcome to the latest edition of Technically Sentient!  Based on feedback from the last newsletter, I'm going to experiment with a new format.  I'm splitting the "industry links" section into two parts: a few "must read" links (the ones I think are really important), each with a line or two of commentary, and then an "industry links" section with links to a broad array of things you may want to read.

This week we have 5 sections:  Big Idea, Must Reads, Industry Links, Research Links, and Commentary.  Skip around as you feel necessary.  If you like what you read, please forward it to a friend so they can sign up.  Let's get to it...


-- BIG IDEA --
Last week I linked to Sam Harris' TED Talk, "Can We Build A.I. Without Losing Control Over It?"   I've been mulling this question a lot, and the problem with crafting an answer is that we don't really understand, for neurobiology or A.I., what "motivation" really is.  But it is plausible to think of belief frameworks as a neuronal firing pattern of some kind.  This raises an interesting question: could we somehow take the neuronal firing pattern of someone who is incredibly good and kind (The Pope?) and overlay it on the motivation networks (whatever those end up being) of an A.I.?  Benevolence is clearly evolvable, but is it programmable?  I think the answer to whether or not A.I.s ultimately destroy us may lie in whether we figure out how to instill motivations in an A.I. before it gets smarter than us, or only afterwards.   I'm going to write a broader article on this at some point, so if you have ideas, I would love to hear them.


-- MUST READ --
The Blind Spot in A.I. Research.  Link.
We already rely on A.I. more than we realize, even in cases where it doesn't perform well.  The best quote from this article comes from Pedro Domingos: "People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world."

Barack Obama and Joi Ito on A.I. and the Future of the World.  Link.
Fascinating piece that covers a broad array of topics and highlights some interesting problems for the future.  You've probably already seen this around the web but if you haven't, it's a must read.

The White House Has Released a Paper, "Preparing For the Future of Artificial Intelligence."  Link.
This is important because it's the White House, of course.  I haven't read the whole thing yet, but it's on my list for my next plane flight.

Enhancing the Reliability of Artificial Intelligence.  Link.
Not many people are talking about having machines understand their own inner workings, and possibly change their behavior based on self-observation, so I find this piece really interesting.

-- INDUSTRY LINKS --

Microsoft sets a new record by reaching human parity in conversational speech recognition.  Link.


Stephen Hawking Opens a British A.I. Hub.  Link.

Fooling the Machine:  How to Deceive A.I. Link.

Swarms of small robots can do some interesting tasks.  Link.

How Google Uses Machine Learning In Search Algorithms.  Link.

Neurotechnology Could End Mental Illness for Good, But Should It?  Link.

MIT OpenCourseware on A.I.  Link.

If you don't already listen to it, the Talking Machines Podcast is my favorite podcast on A.I.  Link.

Smart Machines And The Future of Jobs.  Link.

There is now a robot suitcase that follows you through the airport.  Link.

Artificial Intelligence Gets Smarter At Authentication.  Link.

If you run an A.I. startup, CB Insights is ranking the top 100 private A.I. companies.  Go apply.  Link.


-- RESEARCH LINKS --
Fairness as a Program Property.  Link.
Evolving the structure of Evolution Strategies.  Link.


-- COMMENTARY --
George Soros is a big proponent of a concept called "reflexivity."  It deals with circular relationships between cause and effect, and I think it is a very important concept in the development of new products in new markets.  The early attempts at products, and the technologies that rise to the top in an early market, influence how potential customers, entrepreneurs, and investors see that market going forward.

A.I. is a field where early products have sometimes been difficult to build, and everyone has been unsure of what the "killer apps" will be.  As a result, we've seen lots of platforms, which entrepreneurs built in hopes that other entrepreneurs could figure out the real use cases, and we've seen lots of marginal products (existing products that add machine learning to make them slightly better).  And we've seen a few real use cases, like self-driving cars and better predictive analytics.  But it still feels like something is missing.

I think what is missing is clear market demand for "intelligence" built into everything we use.  What I mean is, everyone can nod their heads and say they want smarter software and appliances and whatever, but when push comes to shove, no one agrees on exactly what that should look like.  In most markets you can determine customer needs simply by talking to customers, but as we build intelligence into things, it's different.

To be successful in these markets, entrepreneurs need to embrace product reflexivity.  They need to accept the idea that customer development in brand new markets is a circular, partially self-referential process.  It starts with understanding some potential needs of some potential customers, then showing them ideas to solve those needs, while also suggesting other applications of the same technology set.  Unfortunately, it's also a more ambiguous and uncertain process than more direct forms of market entry.

I was in graduate school during web bubble 1.0, so I didn't work directly in the space, but it seems to me the web 1.0 space went through a similar process.  What is possible?  What is useful?  What is actually likely?  The difference this time around is that A.I. as an industry has a very different set of properties and structure.  The A.I. industry is driven as much by new data sets as it is by new technologies.  Plus, there is a flywheel effect around data acquisition, learning, and algorithm performance, where they strengthen and reinforce each other in ways that build defensibility.  Your success isn't just a product of your approach to the problem you are solving; it's also a product of the data you have access to.

But all of this leads to a conclusion that is possibly counterintuitive for entrepreneurs and investors: your reflexivity process should circle around the data sets you have more than anything else.  People always ask, "What problem are you solving?"  And that's important to answer.  But in previous generations of startups, it was really the only question to worry about.  Now it's one of two questions, the second being "What data do you have, and how do you get more?"  So if you are starting an A.I. company, you have to show customers vision just as much as you ask them about their problems.  Customers don't yet understand what these new technologies are capable of.  And the process is reflexive because what you (and other early startups) do impacts how customers perceive the early market, and thus how they see the problems they have and the potential solutions A.I. can provide.  In other words, it's more complicated than before, but the payoffs could be bigger, so it is still worth pursuing.

But feel free to send me a note if you disagree.

Thanks for reading, and have a great weekend.
@robmay



-- ABOUT ME --
For new readers, I'm the co-founder and CEO of Talla, a ChatOps platform that targets HR and other internal service teams.  I'm also an active angel investor.  My A.I. related investments include Netra, Simbe, LegalRobot, Greppy, Isocline, Hydra.ai, Sensay, and a few more that will be announced soon.   I live in Boston, and spend about 30% of my time in the Bay Area (Talla has a Palo Alto office) and 10% of my time in NYC.
Inside is powered by Mailchimp. Also, we ❤️  Mailchimp.

Copyright © 2016 Inside.com, All rights reserved.



You're receiving this email because you are subscribed to Inside VR & AR. If you don't want to receive it anymore, go ahead and unsubscribe – or just hit reply and tell us how to make it better.