Digital Imaging: Imagine the Possibilities

Future Tech Review #1

Looking at the current state, and future intersection, of the Internet-of-Things and Artificial Intelligence


“Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.”

Bertrand Russell

This is the first post in an ongoing series on some of the most hotly debated and misunderstood terms in technology today. We’re told that artificial intelligence, machine learning, the internet of things, deep learning and the like will totally transform — or perhaps destroy — the imaging industry. Or not. We think it’s worth taking a hard look at what people are saying and what’s going on. And then taking a second, and a third, look for good measure. In this post, we recommend some articles that aim to define AI and what it could mean.

First principles in AI

How Aristotle Created the Computer

While computing is often discussed in terms of hardware and software, it is rooted in the world of ideas. In this impressive article, Chris Dixon traces the philosophical fundamentals that led to Leibniz’s mathematics, then to the work of George Boole (of “Boolean logic” fame), and finally to Claude E. Shannon’s insight that binary concepts could be made reality in the form of electrical circuits. This led to computers, and today underpins the idea of AI. Why do we think that computers can think? Because they are built according to how logic itself works. And as computers get faster and faster, they will be able to think faster than we can.
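The Boole-to-Shannon step that Dixon describes can be sketched in software rather than in wires: a couple of Boolean “gates” compose into a half-adder that performs binary arithmetic. This is a toy illustration of the idea, not code from the article.

```python
# Boole's algebra, realized as tiny "gates" in code. Shannon's insight was
# that these same operations could be built from physical switching circuits.

def and_gate(a: bool, b: bool) -> bool:
    # True only when both inputs are true
    return a and b

def xor_gate(a: bool, b: bool) -> bool:
    # True when exactly one input is true
    return a != b

def half_adder(a: bool, b: bool) -> tuple:
    """Add two one-bit numbers: returns (sum_bit, carry_bit)."""
    return xor_gate(a, b), and_gate(a, b)

# Composing pure logic yields arithmetic: 1 + 1 = binary 10 (sum 0, carry 1)
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> carry {int(c)}, sum {int(s)}")
```

Chain a few of these (full adders built from two half-adders) and you can add numbers of any width, which is exactly the leap from abstract logic to working computers.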

Debating Concepts or Just Words?

Elon Musk and Mark Zuckerberg can’t agree on what AI is, because nobody knows what the term really means

While the title seems to give away the whole article, it’s a little misleading. Some people know what they mean by “AI,” but that doesn’t mean they agree on that particular definition. In this article, author Dave Gershgorn briefly unpacks Musk’s and Zuckerberg’s approaches. He considers Musk a “literalist,” for whom artificial intelligence is one cohesive technology with human-like reasoning ability: it can understand, take action on its own, and then learn from the results. Zuckerberg, on the other hand, is a “generalist,” for whom artificial intelligence is a more user-friendly blanket term for any kind of system that can understand commands and complete tasks. It’s a definition that looks a lot like Facebook: user-friendly and accessible for those who don’t speak tech.

This isn’t just for Silicon Valley types

AI is the new electricity

There are people who debate, and there are people who do. Andrew Ng has already had a stellar career in the world of AI – first as the founding lead of the Google Brain project and then as head of AI at Baidu. Why is Baidu significant? In Ng’s words, “Baidu is now one of the few companies with world-class expertise in every major AI area: speech, NLP (natural language processing), computer vision, machine learning, knowledge graph.” Now he has left Baidu, convinced that the potential of AI is far bigger than its impact on tech companies: it will transform nearly every major industry — healthcare, transportation, entertainment, manufacturing — and enrich the lives of countless people.

Superintelligence: The Idea That Eats Smart People

Why might it be important for us to agree on what we’re talking about and how big a deal it is? Maciej Cegłowski is a bit of a maverick in the tech world (his talk “How to succeed beyond your mildest dreams” goes against a lot of accepted Silicon Valley wisdom), and he believes that vague terms and a murky understanding could be really, really bad for us all. “In the absence of effective leadership from those at the top of our industry, it’s up to us to make an effort, and to think through all of the ethical issues that AI—as it actually exists—is bringing into the world.”

Stay tuned for more instalments in our new Future Tech Review series.