
Take a tour of the Augmented Age


What do you get when you give a design tool a digital nervous system? Computers that improve our ability to think and imagine, and robotic systems that come up with (and build) radical new designs for bridges, cars, drones and much more — all by themselves. Take a tour of the Augmented Age with futurist Maurice Conti and preview a time when robots and humans will work side-by-side to accomplish things neither could do alone.

How many of you are creatives, designers, engineers, entrepreneurs, artists, or maybe you just have a really big imagination? Show of hands? (Cheers) 

That’s most of you. I have some news for us creatives. Over the course of the next 20 years, more will change around the way we do our work than has happened in the last 2,000. In fact, I think we’re at the dawn of a new age in human history. 

Now, there have been four major historical eras defined by the way we work. The Hunter-Gatherer Age lasted several million years. And then the Agricultural Age lasted several thousand years. The Industrial Age lasted a couple of centuries. And now the Information Age has lasted just a few decades. And now today, we’re on the cusp of our next great era as a species. 

Welcome to the Augmented Age. In this new era, your natural human capabilities are going to be augmented by computational systems that help you think, robotic systems that help you make, and a digital nervous system that connects you to the world far beyond your natural senses. Let’s start with cognitive augmentation. How many of you are augmented cyborgs? 


I would actually argue that we’re already augmented. Imagine you’re at a party, and somebody asks you a question that you don’t know the answer to. If you have one of these, in a few seconds, you can know the answer. But this is just a primitive beginning. Even Siri is just a passive tool. In fact, for the last three-and-a-half million years, the tools that we’ve had have been completely passive. They do exactly what we tell them and nothing more. Our very first tool only cut where we struck it. The chisel only carves where the artist points it. And even our most advanced tools do nothing without our explicit direction. In fact, to date, and this is something that frustrates me, we’ve always been limited by this need to manually push our wills into our tools — literally using our hands, even with computers. But I’m more like Scotty in “Star Trek.” 


I want to have a conversation with a computer. I want to say, “Computer, let’s design a car,” and the computer shows me a car. And I say, “No, more fast-looking, and less German,” and bang, the computer shows me an option. 


That conversation might be a little ways off, probably less than many of us think, but right now, we’re working on it. Tools are making this leap from being passive to being generative. Generative design tools use a computer and algorithms to synthesize geometry to come up with new designs all by themselves. All they need are your goals and your constraints. 

I’ll give you an example. In the case of this aerial drone chassis, all you would need to do is tell it something like, it has four propellers, you want it to be as lightweight as possible, and you need it to be aerodynamically efficient. Then what the computer does is it explores the entire solution space: every single possibility that meets your criteria — millions of them. It takes big computers to do this. But it comes back to us with designs that we, by ourselves, never could’ve imagined. And the computer’s coming up with this stuff all by itself — no one ever drew anything, and it started completely from scratch. And by the way, it’s no accident that the drone body looks just like the pelvis of a flying squirrel. 


It’s because the algorithms are designed to work the same way evolution does. 
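To make the idea concrete, here is a minimal sketch of that evolution-style loop: candidate designs compete, the fittest survive, and crossover and mutation generate the next generation. This is an illustrative toy, not Autodesk's actual generative-design engine — the chassis model (four strut thicknesses), the strength constant, and the constraint values are all invented for the example.

```python
import random

# Toy generative-design loop: each candidate "chassis" is 4 strut thicknesses.
# Goal: minimize weight. Constraint: total strength must clear a floor.
STRENGTH_PER_UNIT = 3.0   # assumed strength contributed per unit of thickness
MIN_STRENGTH = 20.0       # assumed constraint: chassis must be at least this strong

def weight(design):
    return sum(design)

def feasible(design):
    return sum(t * STRENGTH_PER_UNIT for t in design) >= MIN_STRENGTH

def fitness(design):
    # Lighter is better; designs that violate the constraint are heavily penalized.
    return -weight(design) if feasible(design) else -1e9

def evolve(generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(1.0, 5.0) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # selection: the fitter half survives
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 4)
            child = a[:cut] + b[cut:]               # crossover: splice two parents
            i = rng.randrange(4)
            child[i] = max(0.1, child[i] + rng.gauss(0, 0.2))  # mutation: small tweak
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

No one "draws" the winning design: the loop converges toward the lightest chassis that still satisfies the strength constraint, which is the sense in which the algorithm works the way evolution does.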

What’s exciting is we’re starting to see this technology out in the real world. We’ve been working with Airbus for a couple of years on this concept plane for the future. It’s a ways out still. But just recently we used a generative-design AI to come up with this. This is a 3D-printed cabin partition that’s been designed by a computer. It’s stronger than the original yet half the weight, and it will be flying in the Airbus A320 later this year. So computers can now generate; they can come up with their own solutions to our well-defined problems. But they’re not intuitive. They still have to start from scratch every single time, and that’s because they never learn. Unlike Maggie. 


Maggie’s actually smarter than our most advanced design tools. What do I mean by that? If her owner picks up that leash, Maggie knows with a fair degree of certainty it’s time to go for a walk. And how did she learn? Well, every time the owner picked up the leash, they went for a walk. And Maggie did three things: she had to pay attention, she had to remember what happened and she had to retain and create a pattern in her mind. 
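Those three steps — attend, remember, form a pattern — can be sketched as a tiny associative learner. This is a hedged illustration of the idea, not any particular AI system; the class name, events, and counts are invented for the example.

```python
from collections import Counter

class Associator:
    """Learns P(outcome | cue) from repeated experience, Maggie-style."""

    def __init__(self):
        self.counts = Counter()       # remember: how often each (cue, outcome) pair occurred
        self.cue_totals = Counter()   # how often each cue was attended to at all

    def observe(self, cue, outcome):
        # attend + remember: record one experience
        self.counts[(cue, outcome)] += 1
        self.cue_totals[cue] += 1

    def predict(self, cue, outcome):
        # the learned pattern: fraction of past cues followed by this outcome
        if self.cue_totals[cue] == 0:
            return 0.0
        return self.counts[(cue, outcome)] / self.cue_totals[cue]

maggie = Associator()
for _ in range(9):
    maggie.observe("leash picked up", "walk")
maggie.observe("leash picked up", "vet visit")   # one surprising exception

print(maggie.predict("leash picked up", "walk"))   # high confidence, but not certainty
```

After ten leash events, nine followed by walks, the learner predicts a walk with 90% confidence — "a fair degree of certainty," exactly as the talk puts it.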

Interestingly, that’s exactly what computer scientists have been trying to get AIs to do for the last 60 or so years. Back in 1952, they built this computer that could play Tic-Tac-Toe. Big deal. Then 45 years later, in 1997, Deep Blue beats Kasparov at chess. 2011, Watson beats these two humans at Jeopardy, which is much harder for a computer to play than chess is. In fact, rather than working from predefined recipes, Watson had to use reasoning to overcome his human opponents. And then a couple of weeks ago, DeepMind’s AlphaGo beats the world’s best human at Go, which is the most difficult game that we have. In fact, in Go, there are more possible moves than there are atoms in the universe. So in order to win, what AlphaGo had to do was develop intuition. And in fact, at some points, AlphaGo’s programmers didn’t understand why AlphaGo was doing what it was doing. 

And things are moving really fast. I mean, consider — in the space of a human lifetime, computers have gone from a child’s game to what’s recognized as the pinnacle of strategic thought. What’s basically happening is computers are going from being like Spock to being a lot more like Kirk. 


Right? From pure logic to intuition. Would you cross this bridge? Most of you are saying, “Oh, hell no!” 


And you arrived at that decision in a split second. You just sort of knew that bridge was unsafe. And that’s exactly the kind of intuition that our deep-learning systems are starting to develop right now. Very soon, you’ll literally be able to show something you’ve made, you’ve designed, to a computer, and it will look at it and say, “Sorry, homie, that’ll never work. You have to try again.” Or you could ask it if people are going to like your next song, or your next flavor of ice cream. Or, much more importantly, you could work with a computer to solve a problem that we’ve never faced before. For instance, climate change. We’re not doing a very good job on our own, so we could certainly use all the help we can get. That’s what I’m talking about: technology amplifying our cognitive abilities so we can imagine and design things that were simply out of our reach as plain old un-augmented humans. 
