Bell Labs: Harnessing the Internet of Things into a Digital Sixth Sense
Future Tech 2024: An Interview with Dr Marcus Weldon (Bell Labs and Alcatel-Lucent)
Marcus is considered one of the luminaries of the telecoms industry for the clarity, depth and breadth of his vision, and he has a phenomenal track record of picking the right technological disruptions and opportunities: from vectoring in access technologies, to the evolution of LTE overlay and small cells, to the emergence of virtualization and SDN as profound industry-changing forces. He is now combining this vision with the power of Bell Labs to create an unrivalled innovation engine for Alcatel-Lucent.
Shara Evans (SE): Today, I am absolutely delighted to be speaking with Dr Marcus Weldon, who is the President of Bell Labs and the Chief Technology Officer at Alcatel-Lucent Global.
Marcus, thank you so much for taking the time to speak with me this morning.
Marcus Weldon (MW): I am delighted to be speaking to you.
Bell Labs Prize — An Opportunity for Innovators
SE: Bell Labs has just announced an amazing innovation award, where you’re challenging global inventors to redefine the future as you launch a Bell Labs Prize that’s dedicated to inventing the future. Can you tell us a little bit about that?
MW: Yes. It’s a very interesting new prize. We obviously have a tremendous amount of innovation in Bell Labs and historically have invented many amazing things, like the transistor, the laser, the CCD device, solar panels, et cetera, et cetera, and we continue to innovate at speed for the industry.
However, we realised that the innovation landscape has changed a bit in that there’s a much more global array of innovators than there had been previously, so we’ve launched this prize to open our doors to the global innovation community. The prize is intended to couple ideas from the outside with Bell Labs researchers on the inside.
It’s a unique prize: the top 50 ideas submitted will each be paired with a Bell Labs researcher to make the idea better, or to help realise a demo of it, then the best of those joint proposals will be judged, and the first prize of US$100,000 will be awarded in December.
It’s a very interesting ‘outside-in’ prize where individuals get the chance to work with the world-renowned Bell Labs researchers, and also potentially win US$100,000 for the best idea and best proposal.
SE: That is amazing. I don’t think I’ve ever heard of anything quite like this. I can just imagine being a young innovator and having an opportunity to work with some of the top scientists in the world, and potentially get a large prize out of it too.
What kind of people do you think will be applying? Will it be individuals, perhaps university students, or do you expect other research labs around the world?
MW: Yes, we’re open to anyone. Obviously, students and people who don’t yet have official employment have an advantage in that they’re not constrained by an employment contract, but frankly we’re open to anyone who has intellectual property rights to their idea, because of course they have to bring an idea that they own the rights to. We have no other constraint on who can cooperate with us, as long as the applicants are not in conflict with any employment terms or conditions, and abide by the prize terms and conditions.
We’re open to ideas from any age and any person. I think we do say 18 years or older, but that’s just an ‘adult’ requirement. Other than that, it’s any of the registered countries, of which there are 43, any status of the individual, any educational background. That doesn’t matter to us. We’re just looking for the best and brightest to come forward with their ideas and collaborate with us.
SE: Is this covering any particular science areas, or are you completely open to anything in the realm of science?
MW: We have a particular interest in information and communications technology, but that is an incredibly broad area of course. It’s not really science per se that we are looking for; it’s actually solving big problems in information and communications technology. If we uncover some interesting science along the way, that’s fantastic. That’s the thing that Bell Labs has always done — discovering unique scientific insights whilst attacking big industry challenges.
In fact if you want to find out more, you can go to www.bell-labs.com/prize, and you can see the entire scope and set of areas that we suggest, but even beyond what we’ve outlined there, if you’ve got an idea in another area we haven’t specifically called out, we’re even open to that. It’s a very expansive call for ideas in, broadly, the area of information and communications technology.
SE: I imagine that you’re going to get a huge swell of people who are interested in this prize and in this whole collaborative innovation process. Well done!
MW: Well, thanks. Now we have an internal bet on how many ideas will get submitted. It ranges by an order of magnitude: at the low end people think 100 ideas, at the high end 1,000 ideas, and some are even more optimistic than that. We will see what happens, and I’ll let you know.
SE: Well, I have a feeling it will be more towards the 1,000 than the 100, but let’s see what happens after you make some announcements.
Inventing the Future at Bell Labs
I’d like to turn now to what you’re actually doing in the labs, and perhaps you can give some insight. Bell Labs is definitely known for inventing the future, so I’d like to know what sort of cool things are cooking in Bell Labs right now that are likely to be commonplace in 10 years’ time. One of the things that you had talked to me about previously was this concept of connecting all kinds of objects to ourselves. Can you perhaps expand on that?
MW: Yes. Firstly, at a high level, the challenge I’ve given to all the researchers in Bell Labs is to improve something, whether that is capacity, latency, scalability, energy consumption, or some other dimension of the problem they’re looking at, by a factor of 10. That’s a fundamental goal in everything we do: improve some dimension by a factor of 10, which is already a tough goal in any particular area.
Then in some specific areas, we combine ideas together to create what we call FutureX Projects. These are special projects that are the combinations of individual pieces of research to solve bigger problems. Think of it as taking an idea that might come from the wireless domain and applying it in the optical domain, or think of it as a math solution that we’re using for optimisation of IP and optical networks. These projects have a larger scope and are cross-disciplinary. We have about 10 of those so-called FutureX Projects.
Future Objects — Coupling Knowledge, Objects and People
But one of the more unique ones is called Future Objects, and it is about connecting people to their objects in interesting ways, so that you can talk to all your critical objects by having a representation of those objects in the cloud. You could find things you own, and their location in the physical world, by using associated functions in the digital world. Also, how you use those things could be tracked, so the purpose of, or connection between, those things could be suggested to you. And then you could ask “Who has x?”, “What do I know?” or “Who knows what I need to know?”
In the end, if you start coupling knowledge, objects and people, you can start having sort of a digital assistant or a network assistant, I would call it, that knows what you need, when you need it, by connecting you to those things and monitoring how you use them. You get your network essentially optimising your life for you, not just optimizing your connectivity. It’s a very grand concept we have, but it’s something we’re going after using the diverse talent in Bell Labs.
SE: Marcus, it sounds to me like something out of many of the science fiction books that I’ve read, where essentially you have a digital avatar which is smart software residing perhaps in the cloud — and in some of the sci-fi books, maybe even a chip in your head — that knows what you want before you even know what it is that you want, and is able to piece together the information. Is that the direction that this is heading in?
Developing a Digital Sixth Sense
MW: It could. I mean, obviously you’re going to start scaring people in terms of the privacy aspects of that, but think of it more as a suggestion: the network enhances your senses. I like to think of it this way: you have five physical senses, developed through human evolution, and you could extend beyond those with a digital sense that provides you additional stimuli. This sixth sense would interact with you to enhance your physical life by helping you do things, find things, remember things, and know and learn things, in a way that is really tied to you as a person.
It’s different from what exists today in search and recommendation, because it is unique to you, and the intent is not to understand your behaviour in a way that commercialises it through advertising or promoting products.
It’s an assistant that is an altruistic engine that really tries to create a sixth sense that allows all your digital objects, all your physical objects to become digital objects, and those digital objects to be managed for you in a way that creates sort of a digital sense of where those things are, what they should be used for, how they should be presented to you. It’s really your personal thing, not a web service that tries to monitor or manipulate your behaviour in order to sell things to you.
So, you see it’s got a very different logic behind it, even though you could imagine some people thinking it sounds like Big Brother. In fact, you could call it Your Brother because it’s just an image of you and only available to you.
Privacy in the Cloud
SE: The whole concept of privacy is something that’s top of mind for a lot of people, even with today’s technology, as so much information shifts to the cloud. How could one go about protecting the privacy of this kind of information? It seems almost a little bit daunting, and perhaps scary.
MW: Yes. I mean, that’s always the question with these things, isn’t it? Clearly, we’re going to see an evolution in privacy infrastructure, where we secure both the storage of information and its transmission to you. That will be a fundamental part of this.
But in this digital assistant paradigm there will be strict control of what you share with others, and what you don’t, because there’s no commercial aspect to this — meaning, it’s not intended to sell advertising or to serve a commercial purpose.
It’s really your own personal assistant, just as if it were your admin or your executive assistant or your wife in some ways, or your family. It’s a way of presenting you information that you need when you need it, with no commercial intent.
Now would someone offer this as a service and make some money from it? Potentially, but the intent is not that. It’s not to actually sell the value of you. It’s really to provide a life-enhancing and life-simplifying service to you. That’s very different. One of the fundamental criteria would be that information about you is only available to you, never shared with others, securely stored, securely transmitted. These would be some of the foundational principles of the service.
Of course you have to make sure there’s never any unlawful or unintended intrusion into that service by third parties, and of course we’re looking at all sorts of interesting physical-layer technologies to prevent anyone from intruding on optical communication or any digital communication. Again, if it’s end-to-end encrypted, even if there’s leakage or snooping of the traffic, it couldn’t be interpreted. That’s a fundamental principle of the security architecture that would have to be applied.
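The principle Weldon describes, that snooped traffic carries no interpretable information without the key, can be illustrated with a toy one-time pad. This is a sketch only, not production cryptography; real deployments would use vetted protocols such as TLS:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple:
    """Toy one-time pad: XOR the message with a random key of equal length."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Reapply the same XOR with the key to recover the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"water quality compromised"
key, wire = otp_encrypt(message)

# A snooper on the link sees only `wire`; without `key` it is
# indistinguishable from random bytes.
assert wire != message
assert otp_decrypt(key, wire) == message
```

Only the intended endpoint, which holds the key, can turn the wire bytes back into the message; intermediate leakage reveals nothing.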
SE: Marcus, do you see biometrics fitting into this whole security paradigm, especially as we’re talking about individual-level information?
MW: Yes, for sure. I think it’s an area that’s really come a long way, hasn’t it, with the thumbprint scanner now becoming commonplace on smartphones. But as we all found, people then cheated that system by taking imprints of people’s thumbprints, putting them over the scanner, and getting access to your phone. In some ways, that was easier to break than the old 4-digit unlock code, because the four digits are stored only in your head and couldn’t be captured without the person telling you, whereas a thumbprint can be captured off a coffee cup or whatever.
SE: Yes, you see that on TV shows all the time.
MW: Exactly. There we thought we’d advanced, and in fact we’d taken a step backward because humans were more ingenious in how to crack it, and we’ve already seen all the TV shows about how to capture fingerprints.
I think retinal scanning and facial recognition approaches make more sense and are increasingly used, for example at border controls. Maybe that will come to your phone and the camera on your phone will become a facial recognition engine plus retinal scanner, so you just have to look at the camera to know it’s you using it. I’m sure innovations like that will be part of the future of wearables and ever-evolving smart phones and tablets.
Of course, it is all very well to recognize and validate the user, but we then need to make sure that the information is securely transmitted to the network. We also need to make sure that the information stored about you in the cloud is not snooped by a third party. There are many levels at which security has to be enforced, and that’s going to be a hot topic for the next decade.
We work in those areas, but fundamentally we’re also working with services that will be enabled when these problems have been solved, and security and privacy solutions have been found that are satisfactory for users. You could obsess about the security and privacy implications and forget to implement novel platform services and capabilities in the meantime, so we’re focused on both.
SE: My mind is racing in a million different directions. In terms of biometrics, I’m thinking that one way of perhaps securing data is to use multiple biometric feeds. For instance, a retinal scan in combination with a thumbprint — that might be a lot harder to fake or pull off of the coffee cup or other object — rather than just one single biometric signal.
It’s certainly a challenging problem because people are becoming much more aware that their information is subject to abuse by third parties. As you so rightly pointed out, it’s not just in the input-to-the-network stage. It’s what happens to your data once it’s stored.
What kind of companies do you think would offer services of this type, personal digital assistant services or avatar services?
Who will offer Personal Digital Avatar Services?
MW: Yes, it’s a very good question. I actually do think it’s a new model that service providers could provide as they are already trusted to provide protection for your data. They have stored your payment information, your address, your family information, your services consumption information for decades, as you’ve been a subscriber to telephony service providers and wireless service providers.
And service providers have suffered very few security breaches relative to some of the newer companies out there, who perhaps haven’t treated their data stores with as much integrity as service providers have.
So I think service providers have an opportunity here, which is frankly why we work in this area. That is our core market. For new service platforms where trust and security are important, service providers have a natural advantage because they have a fairly strong trust relationship with their end customers built over decades of providing service without loss of data or theft of data from their system.
I think that’s one way in which these things may start appearing: as new service provider offerings, perhaps personal avatar services, where guaranteed trust and privacy are part of what the service provider offers.
SE: What kind of timeframe are we looking at, Marcus?
MW: You know, I think there are already applications out there that are optimised for one type of behaviour, that track how you do something.
I’ve recently been using this ‘ski tracks’ application that tracks how you ski during a day by just tracking your location as a function of time, as you are at a ski resort, and showing you how you ski, how fast you ski, which routes you ski, which trails down a mountain.
There are all sorts of individual service-based tracking apps. There are those for cycling and running and all sorts of activities that show your performance and where you have been and what you have done, and you can imagine that will continue.
Services that Track Everything
What we’re really looking at is something that spans across all those things.
Imagine you have a wearable that reports location. If you get in a car and drive around, the personal assistant would know you were moving at a speed characteristic of a car, so it could infer that you were driving, and where you went, how fast, whether you encountered traffic, and whether this was a route you commonly followed. And if the destination location was known, it could understand what you had been doing, e.g. shopping, playing golf, visiting friends, going to work, etc.
In addition, it would also track the objects that go with you in those places. When you’re in your car, your keys and your wallet are typically with you, and so it could know their location by inference. When you’re driving, it’s very likely your wallet is with you. If you drive to work, it’s very likely that your work materials are with you and so on.
That’s what we mean by connecting your physical and digital world. And if you’re clever about it, you can actually infer what might be together in the physical world by its (sensed) proximity to a digital object that can signal to this intelligent assistant.
It’s really a multidimensional multi-inference problem that’s going to make the difference, not an individual app for an individual service, which I think there are many of today. That’s the big shift.
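The first inference step Weldon describes, deducing activity from movement speed, could be sketched as a trivial classifier. The thresholds here are hypothetical and purely illustrative; a real assistant would fuse many richer signals (accelerometer data, route shape, map matching) rather than speed alone:

```python
def infer_mode(speed_kmh: float) -> str:
    """Guess the transport mode from average speed (illustrative thresholds)."""
    if speed_kmh < 7:
        return "walking"
    elif speed_kmh < 30:
        return "cycling"
    else:
        return "driving"

# Hypothetical wearable readings in km/h.
samples = [4.2, 25.0, 62.0]
print([infer_mode(s) for s in samples])  # ['walking', 'cycling', 'driving']
```

Once the mode is known, the assistant can layer on the further inferences from the text, such as which objects (wallet, keys, work materials) are likely travelling with you.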
When do I think this will be a reality? Five years from now we could imagine this future whatever you want to call it — definitional you, your network brain, your network assistant, whatever it is.
SE: That’s quite a relatively short timeframe. As you were talking, I was also imagining how the home automation world would fit into this as well, because a lot of the objects in our homes now have interfaces or could have interfaces that are tied to smartphones and wearables — like turning on lights as you’re in geographic proximity to your house, or turning on the heat, or turning on the television, or other simple things. What you are describing is in effect taking this and ratcheting it up 10 steps.
MW: Exactly. I think the hard part that we see is the mathematics: the math of building a graph where all these objects are connected to each other and to people. Think of the number of objects.
We’re talking about hundreds of objects per person, and we have four billion people. You could have trillions of things that you need to create associations with in a scalable way. There’s a big math and computer-science problem there that fundamentally requires magical expertise to solve that problem. That’s a sort of classic Bell Labs problem to go after.
Many others will find ways of ingesting information about devices. Many others will find ways of presenting the data to you, but fundamentally at its core is a huge challenge in data association and data processing. That is a very interesting challenge for math and algorithmic experts. This is a core competency at Bell Labs, so that’s something we’re very excited about working on, and we think we have a unique viewpoint on how to do it.
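The data-association problem Weldon describes can be sketched at toy scale as a co-occurrence graph: objects sensed together become weighted edges, and the weights support inference about what travels with what. The object names and observations below are illustrative; the real challenge, as he notes, is doing this scalably for trillions of associations:

```python
from collections import Counter
from itertools import combinations

# Each observation is a set of objects sensed in proximity (illustrative data).
observations = [
    {"keys", "wallet", "car"},
    {"keys", "wallet", "car"},
    {"laptop", "badge", "car"},
    {"keys", "wallet"},
]

# Build a weighted association graph: edge weight = co-occurrence count.
edges = Counter()
for obs in observations:
    for a, b in combinations(sorted(obs), 2):
        edges[(a, b)] += 1

def strongest_partner(obj: str) -> str:
    """Return the object most often sensed alongside `obj`."""
    scores = {(set(e) - {obj}).pop(): w for e, w in edges.items() if obj in e}
    return max(scores, key=scores.get)

print(strongest_partner("wallet"))  # keys
```

Here "wallet" is most strongly tied to "keys", so losing sight of one lets the assistant infer the likely location of the other; the production version of this is a graph problem over billions of nodes.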
SE: Well, it’s certainly a massive correlation problem. As you say, that goes right down to mathematics.
One other interrelated area is sensor networks — sensors are being embedded in lots of things, but also being deployed in the agriculture industry, monitoring moisture, salinity and all sorts of environmental aspects, as well as in the automotive industry where there are trials for autonomous cars and automated highways. Does this fit into the vision of the digital avatar in any way?
MW: Yes, I think so. The digital avatar, or the digital assistant, or your sixth sense — those are all terms we’ve used to describe this thing because of course it has the aspects of all of those, and indeed can be used to understand your environmental ‘well-being’ by coupling to water, bridges, infrastructure, traffic sensors. This would be a good thing of course because it could send you alerts — a traffic jam notification is a simple one, or water that shouldn’t be drunk because it’s recently been contaminated is another.
And you wouldn’t have to wait for a web alert or a press release or other media message exposure — the alert would be personalised to you and sent before you were exposed to the situation! It would essentially be able to correlate the fact that you are entering an area where there has been a recent alert about water quality, and it sends you the information: “water quality compromised.”
Think of the type of thing we’re already doing today with those things in the US, which I see. They have AMBER Alerts where alerts are broadcast when someone has done something imminently threatening in a certain location. Those alerts instantly get advertised on the highway screens and even on cell phones. That’s one sort of application that is already of interest.
But I don’t think of our application as motivated by security alerts. It’s more alerts from your environment when things are not as they should be — this is a general class of problem that I think is very interesting and an enhancement of our lives. No one would say that knowing more about our environment and our current physiological interaction with that environment —drinking water, health, exposure to radiation etcetera, etcetera — wouldn’t be a good thing to have, so that we could more intelligently navigate our physical environment.
I think all of that is part of this, but again the key being you need an intelligent entity focused on you that prevents you from being overwhelmed by this extra information. Because in many ways we are already overwhelmed and we don’t need to be doubly, triply, tenfold overwhelmed with information because we can’t process it and make intelligent decisions. But if in fact it’s selectively provided to us only when it’s relevant to us, then in fact our lives are enhanced, and that’s the key.
A Digital Sixth Sense for Business
SE: I would imagine that there are business applications for this as well. For instance, we’ve been talking about how the information can be used in a personal sense, but why couldn’t that same wealth of information also consider the role of a knowledge worker or an employee of a particular company, and when we’re in work mode intelligently feed us information relevant to our job, as well as our personal life?
MW: You’re so clever. You get the bonus points for today.
SE: Thank you.
MW: That is exactly one of the hot areas we see, where your work life is less sophisticated in terms of organisation than your consumer life. With the advent of Bring Your Own Device (BYOD), you’re taking your home device to work with the apps that enhance various aspects of your life: ski-tracker, bike-tracker, running-tracker-type applications. Can you bring that concept into a work environment and have your work life automated? I do think there’s an interesting opportunity in that space, and I won’t say any more than that.
SE: Well, I think we’ve been thinking along very similar lines, because even with today’s devices there are things one can do in an enterprise much more intelligently by using embedded RFID chips and sensors out in the field — using them to know, for instance, what kind of spare parts you need before you actually make a truck roll and send service staff out. Things like that have a real-world application today, and I’d imagine that a huge correlation engine like a digital consciousness or a digital avatar could up the ante there.
MW: You would imagine. It’s a good thought.
SE: Yes. Would you like to add anything else?
MW: We’ve covered a lot. The only other thing I’ll say is that making all this work requires a radical evolution in the network architecture.
It’s fun to talk about these things, but in fact the consequences to the network architecture are huge because you now need to build a massively scalable, flexible, adaptable ‘network of you’. It’s very nice having an assistant who says, “This is what you need,” but you actually need of course the network to be able to adapt to your needs so that your media, content, services and connectivity, are constantly optimised so that your life — your mobile life — can not only connect you to people, to friends, colleagues, relatives, but to all your devices and objects.
The network that we have to build to enable that is at least as big a problem as the intelligent personal assistant that would be using that network to deliver things to you. The network problem itself is harder because you’re talking about building physical things, building an infrastructure that is massively scalable, dynamically adaptive, and moving capacity around to meet your changing needs.
So we are looking at changing the radio landscape so that you can use all radio interfaces at once. We are talking about moving to a small cell architecture, where perhaps the small cells get as close as 30 metres from you, with in-building capacity being a new area where we see massive ‘small cell’ expansion.
We see a massive growth in metro network capacity needed to support distributed cloud architectures, and then of course we need a massive growth in the core to connect those distributed clouds together over a longer distance.
There’s a revolution in networking required to enable this revolution in connecting your physical world to your digital world.
We’re working across all those connections in Bell Labs, but clearly we can’t do it all on our own, so that’s frankly the reason for opening the doors and inviting external innovation through the Bell Labs Prize.
The Cloud of Clouds
SE: I think there’s one other level of integration that needs to happen, and that’s more tightly coupled networks of networks, or clouds of clouds, because people will have different bits of information in different repositories, or connectivity through various mechanisms, through various service providers, perhaps in different parts of the world. Somehow all of these information repositories and physical access networks need to come together into a ubiquitous information repository.
MW: Yes, you are absolutely right. I mean, I think one of the disadvantages of service providers is that they remain national in extent, or multinational for some that have operating companies in different geographies. They have lost out to the web companies who’ve built truly global infrastructure and associated backbone networks.
I think what we need is indeed ways to connect service provider networks, ways to connect different clouds, so that we actually have the ability to move services and support infrastructure with us as we move from one place to another, and as people move from one location to another — for example, to take account of time-of-day variations or travel and things like that.
So it’s not just about the automated national network but the automated dynamically adaptable virtual-machine-enabled, anywhere-anytime-enabled network of networks, as you say.
SE: I think we’re starting to move towards that with cloud integration, but there’s still quite a way to go. It’s just in the infancy stages in terms of emerging services. Is this possible also within a five-year period?
MW: I think it is. I think it’s more a case of the economic and political desire to do this. But we’re seeing service providers actually start work on federation architectures between different networks, so that we can in fact create a global network because they need to do that within their own infrastructure to connect together different data centres, and to connect together different domains of their own networks.
And multinational providers are looking to get the efficiencies of scale of the cloud, which will allow them to centralise some functions. But this requires connecting cloud to network, cloud to cloud, and network to network, in order to get that efficiency and deliver the required service.
We’re beginning to see movement in that direction, and we’re very excited about the potential for that because I think it has the potential to revolutionise the communications landscape in a way that perhaps we haven’t seen until now.
Until now you had two worlds — the web world, and the connectivity world — and those two things have been somewhat disparate in technology and commercial connections. I think we’re seeing a unification of those worlds and a globalification, a global extent appearing that previously we didn’t have.
SE: Marcus, it has been delightful speaking with you, and I am very much looking forward to learning more about the digital consciousness, avatar or sixth sense project, whatever the name ends up being as it evolves over time. It’s something that’s going to impact potentially every human on the planet.
MW: That’s what we’re after. We’re after changing your reality so that 10 years from now, we’ll all be marvelling at the magnitude of the change that we have driven. Just remember that you heard it first here today in this conversation. Thanks very much.
SE: Thank you so much.