

Podcast

AI in Aviation, Aerospace, and Autonomous Driving

Squirro's Podcast: Redefining AI - Season 1

Dr. Arslan is a well-recognised academic whose research has focused on planning, machine learning, and computer vision for robotic systems and autonomous vehicles. He was a Staff Autopilot Software Engineer at Tesla, Inc. and contributed to the development of autopilot software, namely behaviour planning, path planning, and controls for Tesla's Model 3, S, and X cars. He implemented the traffic-aware lane change and merging behaviours in the Navigate on Autopilot feature, as well as the path planner for the Smart Summon feature.

Full Transcript

Lauren:
Hi everyone. My name is Lauren Hawker Zafer, and this is Redefining AI. Redefining AI is a podcast hosted by Squirro and the Squirro Academy. The podcast focuses on key narratives and discussions that drive digital innovation and help people understand artificial intelligence, machine learning, insight engines, and the insight era.

Now today, I've been joined by Dr. Oktay Arslan. And I am, as always, enthusiastically excited about the discussion that's coming up. Oktay is currently an AI robotics research engineer at A³, the Silicon Valley innovation center of Airbus, where he works on various autonomy projects in the Wayfinder team, such as autonomous landing for commercial aircraft, sensor fusion, and certification of autonomy software.

Now, Oktay's academic research focused on planning, machine learning, and computer vision for robotic systems and autonomous vehicles. Previously, he was a Staff Autopilot Software Engineer at Tesla, Inc. In this role, he contributed to the development of autopilot software, namely behavior planning, path planning, and controls for Tesla's Model 3, S, and X cars. He implemented the traffic-aware lane change and merging behaviors in the Navigate on Autopilot feature, and the path planner for the Smart Summon feature.

Now, prior to his role at Tesla, Inc., Oktay worked as a robotics technologist at the NASA Jet Propulsion Laboratory at Caltech, where he contributed to several autonomy projects for various robotic platforms, such as quadrotors, Mars rovers, and spacecraft.

Now, Oktay obtained his PhD in robotics and dual MS degrees in computer science and aerospace engineering, all from the Georgia Institute of Technology. He also earned an MS degree in defense technologies and dual BS degrees in computer engineering and electrical engineering from Istanbul Technical University.

Before we invite Oktay into the conversation, I want to start by setting the scene a little. Automation is the application of machines to tasks once performed by human beings, and, increasingly, to tasks that would otherwise be impossible. Although the term mechanization is often used to refer to the simple replacement of human labor by machines, automation generally implies the integration of machines into a self-governing system. Automation has revolutionized those areas in which it has been introduced, and there is scarcely an aspect of modern life that has been unaffected by it.

Welcome, Oktay.

Oktay:
Thank you, Lauren. Thank you for having me for today’s podcast.

Lauren:
Yeah, it's a real honor to have you here and to have you on the show. And I'm looking forward to what I hope will be a really interesting discussion.

Oktay:
Yeah, likewise.

Lauren:
And where are you just now?

Oktay:
I'm in California right now. My company has an office in Sunnyvale, and I work out of that Sunnyvale office.

Lauren:
And do you enjoy it? What sort of weather are you having at the moment? In Europe, here we are obviously based in Zurich, and it’s a beautiful summer’s day.

Oktay:
I think it's great. We don't have winters here, I would say. It's always sunny here.

Lauren:
Okay. And you come originally from Istanbul, that’s correct?

Oktay:
Yes, I'm originally from Istanbul, Turkey. After graduating from school there, I moved to the U.S. for my graduate-level studies. Then, after I finished school here, I decided to stay and work on autonomy and robotics projects.

Lauren:
Excellent. Now, I mean, as I introduced you at the start: NASA, Tesla, Airbus, robotics, computer science, defense technologies, computer engineering, electrical engineering. My first comment would be, "Wow." I mean, your resume is powerfully impressive.

And my first question then would be: how? What drove you to this type of achievement? I mean, some people speak about patience, persistence, hard work, luck. And many successful professionals, I think, would define the foundational key to success differently, and from divergent angles. So, Oktay, what's your angle?

Oktay:
Well, I always wanted to learn how to implement things in the real world. That's why I wanted to study robotics. I first started my undergrad degree in electrical engineering; it was a good program, and I was learning how an electrical motor works and how to control them. But then I realized all of these things are going to be controlled by software, computer software. So I said, maybe I should learn more about programming, how to write good software. That's why I decided to follow a double major program in computer engineering. So, I learned programming there.

Then, in 2005, I think, there was a competition by DARPA; they were trying to develop an autonomous vehicle that could drive itself. The first two competitions were about driving in a desert-like place. And the last one, the third one, was called the Urban Challenge; there they were trying to drive in an urban city. I loved that project, and I said, "I'm gonna be working on these projects in the future."

Whenever I was trying to work on autonomous driving or robotics, I realized there were missing things that I didn't know. For example, I didn't know the mechanical system; I didn't know how to program on these platforms. That's why I said, "Okay, maybe I should study this subject as well." And I started to study those subjects too.

I think my accomplishments are based on how much I don't know about a subject and my wanting to learn it. Not in a superficial way: if you want to develop something in the real world, you have to know almost all the details. That's why you have to study down to the finest details. And I thought that I could achieve that by taking classes and going to school. University is a very nice environment that gives you so much opportunity to learn things, if you want to work hard, I would say. And I think I went to two great schools. They were technical schools, and they had lots of opportunities to provide for anyone who wants to do research in this field. I just took my time, and I enjoyed that environment. I tried to learn as much as I could before I left academia, I would say.

Lauren:
I mean, that's inspiring itself. And I would maybe say that what I identify with is more of a really close-knit passion to actually do what you do well. I mean, I think it takes a lot for someone to identify components of their industry that they don’t know. And then try and pursue that and learn more. Would you classify yourself as a lifelong learner?

Oktay:
I can say that. But before-- We have to make a definition. Like when you say, I learned something or I know something, what do you mean by that? So, for some people, you buy a book, read a couple of pages, and then you put that in your resume. You can say, “Hey, I learned this subject.” But we know that is not the case most of the time.

I think, in my definition, in order to say that I know a subject or learned a subject very well, you have to take fundamental classes in that area, and you have to know them well. Then you have to work on a problem, and you have to implement something, and you have to make sure it works in your lab. And then, you have to deploy that thing in the real world. Then maybe you can publish something in the academic community about what you achieved and what you learned. And then, maybe you should be able to teach that subject to someone who doesn't know it. If you have all of these things, I think you can say, I know a subject. That is my definition of knowing a subject.

Other than that, if one of those items is missing, I cannot say I know the subject, and I don't want to put something like that on my resume. That's my definition of knowing something.

And throughout my life, I have tried to do these things in different areas. Sometimes I was in school, sometimes I was in industry. But learning the fundamentals, learning by hands-on experience, and being able to explain these things to someone who doesn't know them: I think that's my definition of saying I know a subject very well.

Lauren:
That's quite a lot of success metrics and criteria, which is admirable.

Oktay:
That is a definition of being an expert in a field, I think.

Lauren:
Of course. And today, I mean, you've progressed so far; you are now embodying the role of AI robotics research engineer at A³, which is obviously the Silicon Valley innovation center of Airbus. So, maybe you can start by telling us a little bit about what you are working on in this role? And then my question would be: what lies within the scope, or under the definition, of innovation when it comes to Airbus?

Oktay:
Sure. We are in Silicon Valley, and this place is known for good software engineers and hardware engineers. Now, I think it's the hub for AI talent. I think Airbus opened this office here to attract more AI researchers to work on autonomy projects for the future. There are some autonomy projects, like autonomous landing, autonomous taxiing, autonomous takeoff. And we would like to leverage state-of-the-art AI techniques to solve these problems whenever necessary.

My job is mainly on something called perception. When you try to develop these vehicles, they have to operate in an environment, and they have to understand what their surroundings look like. In order to solve that problem, we use various sensors. This can be a camera, a GPS sensor, a radar, or whatever. Usually, we have a lot of information from the sensors, and you have to filter it and put it together to compute meaningful information about the surroundings, so that the vehicle can make a decision by itself. So that's kind of my role here.

I work as a member of the perception team, and we usually use camera sensors to understand what's around the vehicle. There are also other projects; I think the company is investing a lot in autonomous flight technologies. They want to make things autonomous, or at least more automated, for future aircraft. And that requires many researchers like me, with backgrounds in AI and robotics, to contribute to this field. That's kind of a common trend in the Bay Area. I see more companies trying to put automation, or autonomy, into their products to achieve safety, more productivity, or profit in other businesses. I have started to see this trend in the taxi business, agriculture, truck automation; there are so many different businesses that were previously run by humans, and now companies are investing to automate them in the future.

Lauren:
Okay. Well, I mean, this is a driving question, and I can hear a lot of people outside of the podcast maybe asking this. But what is spurring on innovative approaches to automation, or a more automated approach? What is the necessity? What's the necessity for it to develop more? And is it replacing something? Is it solely a component of innovative product development and design? Like, what's pushing it? What's driving it?

Oktay:
Yeah, I think I can give my share of this answer. It may not be true, but that's my opinion. And because in AI, or robotics, we are trying to do something that hasn't been done before, it's okay to have different opinions on the subject. At some point, we had robots, and they were developed in university laboratories. Then they went from there to factory assembly lines, especially in the automotive industry. Now we see those robots coming and becoming part of our daily life. We have the Roomba, the floor-cleaning machine. And now we have self-driving cars driving on the streets. And probably we will see more vehicles in the sky. I mean, we already have some drones flying around. But maybe we will have passenger aircraft or cargo aircraft for some transportation requirements, I would say.

I ask this question to myself: what happened so that we started to see these vehicles on streets? What happened so that we started to see these vehicles operating very close to humans? Especially when you drive on streets, you have to be careful. There are pedestrians, and you have to understand what they are doing. I think the biggest improvement was in machine learning. By using this data-driven approach, we can understand the surroundings of the vehicle. Previously, we didn't have this capability. We did not have machine learning algorithms that could understand, oh, there's a traffic light in the scene, or there is a pedestrian, or there is a vehicle. We had, I will say, classical vision, and it was helping us to solve some of these problems. So, whenever we couldn't achieve human-level accuracy, we would try to simplify the problem. By that I mean, let's say you want to deploy a vehicle, but you can't understand the surroundings; maybe then you control the surroundings so that nothing can enter the operating environment, like in factories.

When I was a student, I went to a car assembly factory, and there were huge robots doing welding, assembly, all these various tasks. But whenever a human entered the assembly area, the entire robot process would stop. You had to wait until the person left the area so that the robots could return to their automated tasks. Whenever we can't solve the perception problem, we tend to restrict how these things operate: if there is a moving thing and I don't know what it is, the safe thing is to stop. But we cannot do that for self-driving cars. When a car is driving, you cannot say, "I see something, and I don't understand it, so I should just stop." So, what happened is that we took a lot of data, labeled it, and said, "This is a pedestrian, a traffic light, or lane information." We used this labeled information to train these new machine learning models so we can understand what is in the scene. I think that's the key to achieving these levels of autonomy in robotics: before making your own decision, you have to understand what your surroundings look like and what the moving things are.

Lauren:
Of course. And--

Oktay:
Yeah. You have to understand their intention, so that you can be a little bit smart about what to do yourself. Otherwise-- I mean, we cannot even walk on a dark night; we have to use some light and shed it around so we can see what's there. It's very similar for robotics software. You have to understand your surroundings first before doing anything smart.

Lauren:
And how are you-- I mean, you've mentioned a couple of times there that you are trying to implement the latest AI techniques, and you've mentioned machine learning algorithms that are used to build models, etc. Am I right in interpreting, and I'm just trying to interpret what you are conveying there, that there is a lot of image classification, especially in terms of perception? Are you using deep learning models? What sort of AI techniques and machine learning algorithms, what direction do they go in, in terms of capabilities?

Oktay:
I can talk about self-driving cars, because that's more--

Lauren:
Yeah, sure. Yeah.

Oktay:
When you fly in the sky, there are not that many objects. So, it's very, yeah-- We have problems; we have to think about the problems. But I think driving a vehicle in a busy San Francisco street is a challenging task.

Lauren:
Okay.

Oktay:
So, I think when you try to drive something on a highway, you have to have some information to do planning and control. The first thing is, you have to understand: what are your lane markings? How many lanes do you have? For each lane, how many vehicles are there? What are their relative positions with respect to yourself, and what are their velocities? You have to have this minimal information. And among all of this information, the most critical thing is: is there any vehicle in front of you? You have to maintain a safe distance.
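To make that last point concrete, here is a minimal sketch (not from the episode) of a safe-following-gap check using the classic time-headway rule. The function names and constants are illustrative assumptions, not anything from an actual autopilot stack.

```python
# Minimal sketch: is the gap to the lead vehicle large enough?
# Constants are illustrative assumptions (the classic "two-second rule").

def safe_following_gap(ego_speed_mps: float,
                       time_headway_s: float = 2.0,
                       standstill_gap_m: float = 5.0) -> float:
    """Distance to keep to the lead vehicle: a fixed buffer plus
    a speed-proportional term."""
    return standstill_gap_m + time_headway_s * ego_speed_mps

def lead_vehicle_too_close(gap_m: float, ego_speed_mps: float) -> bool:
    """True if the measured gap to the car ahead is below the safe gap."""
    return gap_m < safe_following_gap(ego_speed_mps)

if __name__ == "__main__":
    # At 30 m/s (~108 km/h), the rule asks for 5 + 2*30 = 65 m of gap.
    print(safe_following_gap(30.0))            # 65.0
    print(lead_vehicle_too_close(40.0, 30.0))  # True: 40 m < 65 m
```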

So, you can use some classical, I will say, widely used sensors, like radar or ultrasonics. But I think the main trend is to use cameras. When you use cameras, you have an image. And when you show the image to a human, they can identify those items very quickly. It's easy for a human to understand what a car looks like.

But for computer software, that is a little bit challenging. It doesn't see the way we see things. It just sees a bunch of numbers, values from 0 to 255 in the red, green, and blue channels. So, from the computer's perspective, you see a bunch of numbers, and from those numbers you have to look for a pattern that resembles the object you are trying to find. That is very time consuming and takes a lot of compute power. Plus, we use something called supervised learning. We show a bunch of images, have humans label them, and then train the model. And then, when you deploy that model on the car and show it an image it has not seen before, it tries to classify the objects in it. This is the general trend, because we have more data nowadays. We also have more compute compared to 10 or 15 years ago. The ideas in machine learning are not new; there have been some key developments, but the basic ideas have been used since the 1990s, and there were early papers back then. But at that time, we did not have fast computers, fast hardware, and we did not have enough data. In the last 10 years, we started to have faster processors, and I'm not going to say CPUs, because we have GPUs, graphical processing units. So, we got this new hardware with so much compute power. And then we started to have cell phones and lots of cameras, so we started to have lots of data. These are the two prerequisites for developing a good ML model: you have to have compute, and you have to have data.
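As a rough sketch of the supervised pipeline described here (human-labeled images in, a deployed classifier out), assuming PyTorch and using random tensors as stand-ins for real camera data; the tiny network and the class names are illustrative only, not any production model:

```python
# Minimal supervised-learning sketch: label, train, deploy, classify.
import torch
from torch import nn

NUM_CLASSES = 3  # e.g. car / pedestrian / traffic light (illustrative)

# A deliberately tiny CNN; real perception models are far deeper.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, NUM_CLASSES),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-in "dataset": 64 RGB images (3x32x32) with human-provided labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, NUM_CLASSES, (64,))

for epoch in range(5):                # train on the labeled data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Deployment: classify an image the model has not seen before.
with torch.no_grad():
    new_image = torch.randn(1, 3, 32, 32)
    predicted_class = model(new_image).argmax(dim=1).item()
    print(predicted_class)
```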

We also had some algorithmic challenges, but then we started to have some key ideas in the AI community. These key ideas came from lots of insight and lots of experience from very senior researchers. And when you have these three key developments, faster computers, more data, and smarter algorithms, we started training ML models, and they achieved accuracy better than humans for some classification tasks. We have some classical challenges in the AI community, like ImageNet. So, whenever you say, "I have a new model," you test your model on the standard database and try to benchmark how good your model is. I think there was the AlexNet paper sometime in 2012; it achieved an accuracy much, much higher than the second-best entry, I think. And that was the key moment; people said maybe we should use these deep learning models to solve these machine learning tasks.

Every year, there were more deep learning based approaches in the community, and they did much better than their counterparts. Then, gradually, people tried to collect more data, use more compute, and make the models a little bit deeper. And they just started to see, "Oh, there's a performance increase when we do that." I think that was a key thing. We kind of developed a method, I would say.

In other techniques, we had this problem of overfitting--

Lauren:
Like overfitting with data and overfitting with--

Oktay:
No matter how much you changed your model, you might not achieve a certain performance. But these deep learning models, I think they have this capability: you make the model deeper and put in more data, and you see performance improvements.

Previously, it was more. But I think it’s a little bit slower. But I think it was a key thing.

Lauren:
In the development?

Oktay:
To the industry.

Lauren:
Okay. So, with respect to that, obviously you've addressed a lot of improvements. I mean, you've addressed the computational aspect, the power capacity of the hardware, and smarter algorithms, and also brought the sensors, the cameras, into the mix. Is there anything, then, that still needs to be addressed to actually ensure that there is full-scale autonomy when it comes to self-driving cars?

Oktay:
Achieving level 4 is feasible, but level 5 is still hard.

Lauren:
Can you go over those, Oktay? Obviously, the levels of automation, just in case people are not familiar with them? I mean, I've always had a question in my own mind as well. We spoke previously about the Airbus automation and what you are striving for in terms of innovation there, and then we've got self-driving cars. Are there any similarities in the approach taken, from a software development perspective, in the tools used, and what are the differences? Are we still talking about five levels of automation in cars and planes? Maybe you can explain the five levels of automation and where cars and planes are?

Oktay:
I can give some information about that. I think there's a definition published by the Society of Automotive Engineers (SAE).

So, level zero, there is no automation. The human is supposed to do all driving tasks. For level one, we have some basic driver assistance capabilities, like lane keeping and adaptive cruise control (ACC). Level two, I think, is partial automation: the driver must have hands on the wheel and be ready to take control at any given moment, and the automation can control the speed and steering, I would say. Then there's level three; I think it's called conditional automation. Most of the driving task is done by the automation software, but whenever the software wants to hand over the driving task, the human must be ready to take over.

Lauren:
Okay.

Oktay:
Within a certain amount of time, I think. At level four, I think, in a given location, the car is supposed to drive by itself, and the human is not supposed to control the vehicle. And level five, I think this is the highest: it's not limited to a specific location, it will work at any location, at any time of day, in any weather condition. Level five is the highest one. But if you have something at level four, that is also good.
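For reference, the levels walked through here can be collected into a simple lookup table; the summaries below paraphrase the SAE J3016 levels as described in this conversation rather than quoting the official wording:

```python
# The SAE driving-automation levels as a lookup table (paraphrased).
SAE_LEVELS = {
    0: "No automation: the human does all driving tasks.",
    1: "Driver assistance: e.g. lane keeping or adaptive cruise control.",
    2: "Partial automation: speed and steering automated; hands on wheel, "
       "driver ready to take control at any moment.",
    3: "Conditional automation: software drives, but the human must take "
       "over within a certain time when asked.",
    4: "High automation: the car drives itself within a given location; "
       "no human control needed there.",
    5: "Full automation: any location, any time of day, any weather.",
}

for level, summary in SAE_LEVELS.items():
    print(f"Level {level}: {summary}")
```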

I can try and explain what we need to solve to achieve these levels, I would say.

Lauren:
Yeah, that would be wonderful.

Oktay:
This is more my perspective or opinion; it may not be true. I may have limited information on the subject.

So, when you try to develop an autonomous robotic platform, an autonomous vehicle, you have to solve a couple of problems. The first problem, I would say, is the so-called planning and control, maybe. Or maybe I should start with the hardest one.

So, the first problem you have is called perception. Let's say you have a car and you want to drive in the city. Perception is the task of trying to identify the geometry of the environment, and, if there are moving objects, trying to understand their types. Is it a car, a truck, or a motorcycle? Or trying to understand other static objects that can be used for driving, like traffic lights and which light is on, or a stop sign. These are all perception problems. You also have sensors on the car; you may have lidar, radar, camera, IMU, GPS, ultrasonics. There are a bunch of sensors, and they work on different physical principles.

So, you have all these raw measurements, and you try to fuse them to get a local map around the vehicle. In that map, you want to place geometric shapes, like the road geometry, and then you want to place all the moving agents: other vehicles, pedestrians, whatever. And I think this is the subject of perception. It is a very difficult problem.

Lauren:
And what, I mean-- I’m going to interrupt you quickly. I mean, because you sort of started there by saying, let's begin with the most difficult one. So, why is perception the most difficult one?

Oktay:
Again it goes--

Lauren:
Is it the unpredictability?

Oktay:
When we look at an image, we can understand what is in that image, right? If it's blurry, we will have trouble interpreting what is inside. But most of the time, when we get a camera image, a human can identify things very well. But robotic software, autonomous software, has so much noise in its measurements. When it takes a picture, it's not clear. When you try to get measurements from radar, you always have noise, because it's hard to measure reality without noise in it. That's a challenge. Mother nature is always against us; whatever you measure, it always has noise. And you have to somehow develop algorithms to filter out this noise.

And then, we get this sensing capability. Let's say we have a lidar: we shoot some beams in a certain direction, and they hit something and reflect back. There is something there, I don't know what, but there is something there. From that, you try to understand the shape of the object using just these very raw measurements. That's very hard. It's very hard for computers to extract patterns that are very easy for a human. You don't tell a human what an elephant looks like; you just show some pictures of an elephant. Whenever they go somewhere, they can figure out, "Okay, this is an elephant." For vehicles, or AI software, you have to show maybe hundreds or thousands of images of the same object from different angles so that they can be trained. That's the key challenge: computer software and AI models are different from how the human brain works.
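One standard way to "filter out this noise" is recursive estimation. Below is a minimal one-dimensional Kalman filter sketch, estimating a fixed range to an object from noisy measurements; the variances and data are illustrative assumptions, not values from any real sensor stack:

```python
# Minimal 1-D Kalman filter sketch: blend noisy measurements into an
# estimate, weighting by how uncertain the estimate vs. the sensor is.
import random

def kalman_1d(measurements, meas_var=4.0, init_est=0.0, init_var=100.0):
    """Recursively update (estimate, variance) with each measurement."""
    est, var = init_est, init_var
    for z in measurements:
        k = var / (var + meas_var)   # Kalman gain: trust in measurement
        est = est + k * (z - est)    # correct the estimate
        var = (1 - k) * var          # uncertainty shrinks after update
    return est, var

if __name__ == "__main__":
    true_range = 20.0
    noisy = [true_range + random.gauss(0, 2.0) for _ in range(50)]
    est, var = kalman_1d(noisy)
    print(f"estimate {est:.2f} m (true {true_range} m), variance {var:.3f}")
```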

Lauren:
Yeah. I also think that it's somewhat similar in some respects. Because obviously, if you think about a child and the development of image-to-word association, and how you teach a child, or even an adult: if an adult approaches a situation and they encounter an elephant, a lot of the time it is contextualized by their own schemata, built from previous experience or previous exposure to certain knowledge. That is why they interpret it as an elephant.

So, I think that the component of training a model as well to recognize, or a computer to recognize that it is an elephant, I think there are small similarities. No?

Oktay:
I mean, of course we are always taking inspiration from nature, that’s for sure.

Lauren:
Yeah.

Oktay:
But I always give this example: birds fly by flapping their wings, but we have aircraft with fixed wings, and they don't flap. We know the aerodynamics behind it. I think it's a similar thing here. We know how we learn, how we make decisions and interpret things, but we have a different interpretation; computers work on a different principle.

What I was trying to say for perception tasks: we never get ground truth. We always get measurements. First, we have to filter out all the noise from the measurements. And the other thing is, you have a bunch of raw information, and you have to fuse it to build a map, for example. That is very difficult when you have two sensors that measure conflicting things. A camera may say there is nothing, but the lidar may say, "Oh, there is something." Whenever you have this conflicting information, you have to make a trade-off about which to trust, or how to blend them. That is also a very difficult part. These are the things that make perception a little bit harder. On the other hand, if you give me a map with everything placed in it, it is not hard to plan a path and drive a vehicle there. So, people know these perception tasks are super hard, and they say, "What if we map an environment before we drive there?" And that is something called high-definition maps.
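A classic way to blend two disagreeing sensors, as in the camera-versus-lidar example above, is inverse-variance weighting: trust each sensor in proportion to how little noise it has. A minimal sketch, with illustrative numbers rather than real sensor specs:

```python
# Minimal sketch: fuse two estimates of the same quantity (e.g. distance
# to an object, in meters) by weighting each with the inverse of its noise.

def fuse(camera_est: float, camera_var: float,
         lidar_est: float, lidar_var: float) -> tuple[float, float]:
    w_cam = 1.0 / camera_var
    w_lid = 1.0 / lidar_var
    fused = (w_cam * camera_est + w_lid * lidar_est) / (w_cam + w_lid)
    fused_var = 1.0 / (w_cam + w_lid)  # fused estimate is less uncertain
    return fused, fused_var

# Camera says 12 m (noisy), lidar says 10 m (precise): result leans lidar.
est, var = fuse(camera_est=12.0, camera_var=4.0,
                lidar_est=10.0, lidar_var=0.25)
print(f"fused distance {est:.2f} m, variance {var:.3f}")  # ~10.12 m
```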

There are companies that ask, "Where do you want to drive?" And you say you want to run a little shuttle service. They just go there, they take lots of cameras and sensors, they drive in the environment, and they build a map offline, before driving. That makes the perception problem much more tractable to solve. Because when you have a map, that means you have prior information. You have a sense of what you should see in that environment.

Lauren:
Yeah, what you should see, yeah.

Oktay:
And whenever you have some measurements on your vehicle, you can compare them, and you make a trade-off between what you expect to see and what you really see. Then you can get a sense of where you are in that map and what the objects are doing. So, like I said, perception is a little bit difficult. Some companies say, "We are not going to use maps, because it's costly to maintain a map. Construction sites change all the time, and it's hard to maintain." Some people say, "Well, we are going to map the entire world and deploy a vehicle to drive by itself." So, these are different approaches based on different opinions.

So, let's say that you have good perception software. You understand your surroundings and other vehicles. The next problem, and it's much harder, is called prediction. In the prediction task, you will see a bunch of other vehicles moving with you, and you want to predict their intentions. Are they going to change lanes? Cut in? Or make a full stop? These are very challenging tasks, because humans are very unpredictable with traffic rules. I know there are lots of rules and regulations; there are even penalties. They say, don't drive under the influence, but people still drive under the influence. Or they say there is a speed limit here, don't drive faster than the speed limit. But nonetheless, people do whatever they want.

Lauren:
Exactly. And that must be very different in terms of-- And I know we are not deviating dramatically from traffic rules and systems. But I've noticed myself, you go to one culture and there are certain individuals that don't follow the rules. And then there's making a prediction around how different cultures drive as well. That must really be quite complex.

Oktay:
I mean, I cannot even drive by myself when I go to certain cities, like New York, I would say. It's very difficult to drive there. There are types of driving: there is defensive driving, aggressive driving, and there is road rage. There are so many different problems there. Like I said, when you solve a perception problem, you try to understand what is out there at this moment. And then, you say, I'm going to predict what's going to happen in the next 5 to 10 seconds. You say, okay, I see these objects; they have motion, they are moving around. What are they going to do in the next 5 to 10 seconds? You try to predict their motion. Are they going to cut into my lane, or slow down? Especially when you make a lane change, for example: if you see a car in your destination lane and it's driving towards you, you have to figure out, is it going to speed up or is it going to yield? You have to predict these intentions.

Lauren:
How are they predicted at the moment? Is it a rule-based approach? Is it data-collection models? How are you going about the whole prediction complexity?

Oktay:
If you have a lot of data, you can try to build a model, like an ML model, on top of that. But the problem with ML models is that when they give a prediction, and then you see another prediction in the next time cycle, the two can be completely different from each other. They may not have the history. And when it comes to motion prediction, we know that these vehicles have to obey physical laws. You cannot have the vehicle at one place at one moment and then, 0.1 seconds later, 20 meters ahead. They have to obey some physical law.

And I think people use both ML models and physical models. There are some key variables to understand in motion. Maybe you can ask: is this a motorcycle? Is this a truck, or some other kind of vehicle? These are key variables. And once you figure out a model, for example, then you can say, "Oh, I know how a truck can move."

And then, you can predict its motion. But there are also some other clues. The car is signaling to turn left; that may be a clue, and you can put that clue into the motion prediction as well. Usually you can make advanced models, but you have to provide some data, and usually we cannot measure the data without uncertainty.

Even when a person driving on the highway turns on the left signal, maybe they just forgot to turn off the signal light. That might not be their real intention. That's why it is very challenging to understand the real intention. In that domain, I think people use ML models and physics-based models, and they try-- It's very difficult to make it robust. Because if you go to a different city, you see a completely different driving pattern. That's why there is no one solution that fits all. So, maybe you say, "I'm going to have a motion prediction model for this city, for the drivers of this city." And when you go to a different city, you can tune it.
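The physics-based side of motion prediction contrasted with ML models here can be as simple as a constant-velocity rollout of another vehicle's state. A minimal, purely illustrative sketch (real systems blend this with learned intention models):

```python
# Minimal sketch: constant-velocity prediction of another vehicle's
# position over the next few seconds. All values are illustrative.

def predict_constant_velocity(x: float, y: float,
                              vx: float, vy: float,
                              horizon_s: float = 5.0,
                              dt: float = 0.5):
    """Roll the state forward assuming the vehicle keeps its velocity."""
    trajectory = []
    t = dt
    while t <= horizon_s:
        trajectory.append((t, x + vx * t, y + vy * t))
        t += dt
    return trajectory

# A car 30 m ahead in the next lane (y = 3.5 m), closing at 5 m/s and
# drifting toward our lane at 0.7 m/s:
for t, px, py in predict_constant_velocity(30.0, 3.5, -5.0, -0.7):
    print(f"t={t:.1f}s  x={px:.1f}m  y={py:.1f}m")
```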

So, that's why the prediction problem is a little bit harder than perception. Because you are trying to predict the future and the actions of other human drivers. And that is very, very difficult.

Lauren:
Yeah. And I think that maybe that's where a lot of the-- I mean, automation itself is a controversially discussed topic from many different angles. There are whole avenues of, you know, moral responsibility, of safety, of predictability, which we've obviously just been talking about. And I think you've highlighted a lot of the complex considerations that need to go into building full-scale automation. Do you think that will happen in the near term? Or is it more of a longer-term investment before it happens and we can 100% see it?

Oktay:
I can say that some people ask me why I switched from autonomous driving to autonomous flight. When I say I believe autonomous flight is easier, they are surprised, because they say, hey, I can't fly an airplane; how can it be easier than driving a car? That's why I was trying to lay out the key problems you have to solve for high-level autonomy. First, you have to solve perception. Then you have to solve prediction. And then, the next one: okay, now you know your operating environment and how the vehicles are going to move in the short time horizon. Then, based on your task, if you are driving a car or flying something, you have to go from one point to another point. You have to find a path; that's called path planning. And then, you have to track that path by using vehicle control commands.

So, when you separate autonomy into these tasks, perception, prediction, planning, and control, I would say that autonomous driving is easier when it comes to the control task. It is easier to drive a car. Even if you don't know what you're doing, you can go somewhere. It's relatively simpler than flying an airplane, I would say; otherwise, it wouldn't take lots of hours to train a pilot to fly an airplane. So, when it comes to control tasks, autonomous driving is easier. Autonomous flight is a little bit more challenging. But we already have some technologies there. We have flight control technologies. It is a challenging task, but we have a solution for it, let's say, for the control task.

Lauren:
Makes sense.

Oktay:
Then, for planning, I would say, planning a path and trajectory for an airplane is a little bit harder. You have to consider the orientation of the airplane. You have to plan in higher dimensions, and you have more constraints. But when it comes to perception, I would say autonomous driving is harder, because we don't fly an airplane in a forest.

Lauren:
Yeah, exactly.

Oktay:
You always fly in a very controlled environment. When you take off into the air, there are not many objects; you have predefined routes. So, perception problems are much easier in aviation compared to autonomous driving.

But there is another issue there. We usually require a high level of accuracy in aviation, for safety reasons. And that may be a different challenge. It's a tractable problem, but the expectations are higher. It has to be accurate with a high degree of-- The performance bar is much higher than in autonomous driving.

And the last piece, I would say, is prediction. I think autonomous driving is very challenging for that task. No matter what you do, when you see things that are driven by humans, it's hard to predict what they are going to do.

Lauren:
Yeah. I think it's the emotional complexity of humans, as we touched on, exactly, and that makes it so difficult.

Oktay:
And for aviation, I would say, when you fly and you see another flying thing coming towards you, it has to declare its position, location, and intention by law. That's why those intention-prediction problems are much more structured and much easier to handle. Of course, there are some cases where you have uncooperative objects and you don't know what they are. But then you have certain pre-defined plans, I would say. So, when you see these challenges, I think achieving level four or level five is very hard for autonomous driving, unless we change our infrastructure.

Lauren:
Okay. Yeah. And do you see that happening?

Oktay:
I mean, like I said, I'm trying to interpret the problem in an engineering way, from an engineering perspective. If I cannot solve a problem, I can simplify the problem. And you could change the road infrastructure, because road infrastructures were designed for human drivers, not self-driving cars. And they have been there for 100 years.

So, maybe instead of a traffic light, you have to put in some other sensor that can tell you who can drive and who cannot. Then maybe you don't need to use computer vision to understand traffic lights. I know there were some videos where the moon or the sun was misidentified as a traffic light: the car was driving towards the sun, the sun was an orange color, and it was picked up as a traffic light or something. So, there are some challenges. And maybe the city, or whoever the decision maker is there, should make the infrastructure simpler or more advanced so that self-driving cars can drive. I think that would make perception problems easier. And for prediction problems, if all the cars are autonomous one day, and they communicate and exchange information with each other, I think that will also make the prediction problem easier to solve.

So, that's kind of like-- I won't say it's impossible to achieve level four or level five, but the problem is super challenging. You may have to make some structural change in the operating environment. We are lucky in aviation, because airports are controlled places. They have similar structure, they have nice patterns, and not everybody can--

Lauren:
Yes. And I think that’s--

Oktay:
And also, we have a limited number of airports in the world. We can kind of quantify how many airports we have and these things.

Lauren:
Exactly, yeah.

Oktay:
We have more structure in aviation, I would say.

Lauren:
Yeah, definitely. I mean, I think, as you mentioned, you hit the nail on the head about the controlled environment and the opportunity to bring a little bit more structure and regulation to the environment that is being interpreted.

So, I still have a couple of questions that I just want to squeeze in before we end, Oktay. Obviously, today you've communicated so many different interests and angles here about autonomy and automation. And you can see that you were very passionate in all of the roles that you've embodied. I mean, I don't believe that I've ever spoken to someone who's been so occupied with automation and autonomous systems across such a vast mapping of ecosystems. You've been in the air, you've been on land, and even the solar system, which we haven't had time to touch upon today. And we've spoken a lot about self-driving cars, the challenges there, and what you're striving to achieve in that space, both with self-driving cars and in aviation.

So, you worked for Elon Musk at Tesla. And I'm sure that a lot of people have this question. What's it like working at Tesla, with Elon Musk?

Oktay:
I mean, I was a member of the Autopilot team. And that was a very high-profile, high-performing, fast-paced team. I really enjoyed my time, my experience there, I would say. Elon would personally attend the Autopilot meetings, on a bi-weekly schedule, I would say. He would give us high-level guidance about what to do. And I think Tesla's approach to self-driving cars is different from other companies'. Many companies want to use other sensors, but Tesla wants a vision-based solution. And I think the company is making really great progress toward achieving that, because they have all the pieces to achieve such things. They have bright engineers, good leadership, lots of infrastructure, and data. I mean, Tesla cars can collect data anywhere in the world. That's a big thing. And he would set a goal that was very challenging, and he would expect the engineers to follow his leadership and achieve it. He was good at finding bottlenecks in a big organization and trying to fix them on the spot.

Lauren:
Excellent.

Oktay:
And I think when you try to organize and lead a big organization, these are the key things. Because in other places, whenever I wanted to do something, I had to send an email to my team lead. And then he had to send it to his manager, and the manager to the VP, and the VP to the director. So, there is a long, long line. Tesla was quite a flat organization: it was engineers, my manager, and Elon. So, it was very clear. The information flow through management was very smooth.

Lauren:
Do you think that contributed to the success of the company?

Oktay:
I would say Tesla is more like a startup at scale. Autopilot is one startup; the battery team is one startup. And I think they are a talent magnet, I would say. They attract people who want to work for Tesla. And I think that's the number one condition to achieve something. You could have a good idea, you may have lots of money, but you have to build a good team. If you don't have a good team of engineers, then it's hard to deliver something. And Tesla was one of the most desired places to work, number one, I think, in recent days. That's why they had so many good engineers and so much good infrastructure. And also, people were very enthusiastic about their work, because they knew they were working on something state of the art.

Lauren:
Yeah. The innovation that drives the vision and passion in teams, yeah.

Oktay:
And the people there were super young, to be honest. I was one of the older guys on the team.

Lauren:
You are not that old.

Oktay:
But the thing is, the entire Autopilot team was maybe quite young. And most of the engineers, bright engineers, were people with one or two years of industry experience. But they were success-hungry people, and they would just work and work and work on a problem until it was done. It wasn't a place where you just work nine to five, I mean. Many people, whenever they were given a task, wouldn't let it go until they were done and had solved it. That's really something unique, I would say, compared to working for big companies. You cannot get people to work more just by paying them more. They have to focus on a problem they really want to solve--

Lauren:
And want to solve that problem as well.

Oktay:
And I think Tesla is a good place that attracts those people who are interested in and enthusiastic about solving autonomy, puts them in one place, and gives them good leadership. I think good leadership is very important.

You know, in other companies, there were so many different opinions. Everybody was saying something, and it was hard to reach a consensus. But in Autopilot, the tasks and milestones were always clear, and we just needed to work towards them, without looking at what was around or what other companies were doing.

Lauren:
Well, I'm certainly really excited to see what the future holds. And I just want to thank you again so much for being here with us today on the podcast. I wish you the best of success in your own endeavors and also at Airbus.

Oktay:
Thank you, Lauren. I really enjoyed our conversation. I think this is a great time to be in this field, and I highly recommend that anyone who wants to study engineering go into AI and robotics, because in the future we will see more AI and robotics products in our daily lives. And I'm just looking forward to it.

Lauren:
Thanks. So, I want to thank everyone for listening today. And if you'd like to find out more about Squirro and the Insight Engine, then go to the Squirro Academy at Learn.Squirro.com and access our educational material. Thank you.

Host: Lauren Hawker Zafer

Guest: Oktay Arslan, Ph.D.
