Podcast

Ethical Practices in Use - Responsible Disruptive Technologies

Squirro's Podcast: Redefining AI - Season 1

With Rasmus Hauch

Rasmus is the Chief Technology Officer at 2021.AI, an experienced technical leader, and a self-proclaimed geek with business drive. He is passionate about achieving business objectives and creating a better world using new technologies like AI, ML, and IoT. He is skilled in relationship management and takes pride in people development and company building. He has worked as a Program Manager, Project Manager, Product and Services Sales, Technical Sales, Product Manager, Product Owner, CTO, People Manager, DPO, Chief/Lead Architect, Senior Consultant, and Software Engineer for international customers in the financial, insurance, healthcare, pharma, life science, industrial, supply chain, energy, and telecom sectors.

Full Transcript

Lauren:
The definition of disruptive technology is?

Hi, everyone, my name is Lauren Hawker Zafer and this is Redefining AI, a tech podcast that focuses on key narratives that drive digital innovation and help people understand artificial intelligence, machine learning, insight engines, and the insight era.

In this episode, I've been joined by Rasmus Hauch. Rasmus is the Chief Technology Officer at 2021.AI. He's an experienced technical leader and a self-proclaimed geek with business drive. Now, Rasmus is passionate about achieving business objectives and, in general, creating a better world using new technologies like artificial intelligence, machine learning, and the Internet of Things.

I'm really excited, I have to say, about today's episode, because it covers a topic that we have focused on recently in our podcast series. It's titled Ethical Practices in Use - Responsible Disruptive Technologies.

As usual, the title illuminates the direction that we intend to take today with our guest and technical subject matter expert, Rasmus. Rasmus, it's a pleasure, all the way from Denmark. Welcome. Let's hear a little bit about you and find out why you are currently the CTO of 2021.AI.

Rasmus:
Thank you very much, Lauren, and thank you for having me on the podcast today. So, first of all, I have done a lot of machine learning projects in my time and have been in the IT industry for 25 years. So, I know something about how the technology can affect a person or a group of people in various ways. And this is also what is very close to my heart: teaching organizations what this means for them, and how they should organize themselves around that topic.

You can say my background is that I've been with a lot of different IT companies over the years. And now, I am the CTO of 2021.AI. We are focusing on two things: the first is delivering the software and the services, and the second is delivering the consulting necessary for that software to work for organizations. From a CTO perspective, it's very much about working with the data scientists and the engineering team to create new and better software, basically, right? So, that's my main goal.

Lauren:
So, you've mentioned that you've been in the IT industry for 25 years now. And I'm sure that you've seen a lot of changes, a lot of breakthroughs, a lot of interesting developments. If you were to describe the development of the IT industry over the last 25 years, what words would you use, or what would you say?

Rasmus:
I would say that it's a fast-moving industry, and there are a lot of new trends that you have to follow all the time. I think what is really interesting about the machine learning and data science community is that a lot of the work done over the last 5 to 10 years has been primarily open source. There are a lot of open source things out there that you can connect to and use. And, obviously, part of the overall engineering and data science responsibility is to contribute things back to the open source world. With that said, I think the paradigm of intellectual property has also been changing over the years. In the late 90s and at the start of the century, we saw a lot of commercialization of software. Since then, we have seen a more open-ended way of developing software. It is more about keeping pace, developing new things, and getting them out into the market, rather than protecting everything from an intellectual property perspective.

Lauren:
Are you supportive of this movement?

Rasmus:
A lot, yes, for sure. I mean, for me, and I think for everybody else, it's a very important movement that we need to continue with. So, yeah.

Lauren:
So, we're going to be talking today, Rasmus, about all things disruptive, ethical, and responsible. Before we set off into our discussion and explore everything I've mentioned there, I think it's important to really ground the conversation. It's something our listeners appreciate as well, when we warm up and contextualize them a little into our own space, our own thoughts, and where we're heading in this conversation.

So, to do this, we like to start our discussions with a quick-fire contextualization round. The aim is that I give you the start of a sentence and you complete it, trying to stick to a one-to-two-sentence answer. It's quite difficult, but I think it really brings people to the point of emphasizing what we want to foreground with these terms and concepts. So, the first one is: the definition of disruptive technology is?

Rasmus:
Yeah, I mean, that's a really good question. You can say that disruptive technology, for me, is about what I refer to as trustworthy algorithmic outcomes. So, it's really about technology that is at the forefront of what is happening in the market, but it's also something that has some sort of effect on a person or a group of people. I think the term trustworthy algorithmic outcome is a really good one to highlight what we mean by disruptive technology in this sector, because disruptive technology is something that changes people, or the way that people behave.

Lauren:
Ethical use of AI technologies includes?

Rasmus:
Yes. I think that is a really good sentence. If you think about it, what really characterizes the ethical use of AI technology, I think, is terms around transparency, explainability, fairness, and societal and environmental wellbeing. So, really, when you look at it from an ethical perspective, it's about capturing these concepts and understanding them.

Lauren:
Okay, interesting. I'll come back to a point there.

So, legislation in ethical or responsible AI (these terms are used interchangeably, I think) is?

Rasmus:
Yeah, really, if you think about it in terms of ethical and responsible, and in terms of the legal things happening in the marketplace, you can say that over the last couple of years there has been a lot of new legislation introduced. So, in terms of legislation, I would say responsible AI means adapting to the EU AI Act, to the MDR (the Medical Device Regulation), or to SR 11-7 in the financial sector. So, specific and concrete legislation that you relate to. Or you relate to the principles that you have set in your company, principles that are specific to your organization. Things like, you want to be more transparent in the market. Well, you know, that means your AI needs to do that as well, right? So, I think those things are really what it means to be ethical. And the responsibility is to follow up on the principles that you have in your company, and also, obviously, to follow the legal side of things.

Lauren:
Ethics "motioners" are?

Rasmus:
So, ethics "motioners" is-- I think it's a difficult question. When an organization wants to move in a certain direction when it comes to ethics and responsible AI, I think it is mostly about having transparency in the organization, and also enabling some kind of digital trace, a way to track a certain behavior within the company. And I think it really starts from there: you start moving the ethical concerns in a company in a specific direction using things like digital traces, like conducting an AI audit, and maybe also changing the organization in various ways to fit that agenda.

Lauren:
The last one, mainstream alignment around ethical AI is?

Rasmus:
Yeah. You can say that mainstream alignment is basically about what happens within an organization, for example. And in an organization, I think it's mainly about establishing processes around how the business is using these algorithms, and how they are forming the foundations for this algorithmic impact within the company, right?

So, establishing a set of control mechanisms, and some oversight to ensure what has been done and what is going to be done in the future, I think is very important for any organization out there. On the user side of things, if I am a user of an AI system that I know is impacting me in certain ways, I can set requirements. I can set certain boundaries around the things that I want to be a part of, and hopefully have a choice in the matter, especially when dealing with the private sector. The public sector is a different story, because as a citizen of a country, I don't really have a choice; I have to use whatever is there. And I think that's one of the reasons why the public sector needs to set higher standards for what responsible and trustworthy AI means.

Lauren:
Thank you very much for that quick-fire introduction. There was a lot of useful information passed on there. I think it's important to give you the floor and ask: is there anything from those five statements that we've explored, very briefly, that you want to elaborate on before I ask you a few questions?

Rasmus:
Well, I think there are a lot of things that would be interesting to elaborate on. But maybe we can just move on and do the rest of the questions. I think we will probably touch upon all of them in some shape or form.

Lauren:
Sure. So, we know that you are the CTO at 2021.AI, a role that you've embodied for, I think, the past five years now. How has your role as CTO developed over the last five years, in alignment with the field and with your own mission at 2021.AI?

Rasmus:
Yeah, I think you can say that when we started out in the industry, there was still a lot of hype, and there still is, around AI and machine learning and such. But to a large extent, over the last couple of years, there has been a lot of maturing among the customers we work with: away from just taking the most exciting thing on the market, like some image recognition algorithm, and implementing it because it seems really exciting and fun, and toward creating actual business value in the company and, once you have that, bringing these AI models into production. As you've probably heard, there are a lot of companies talking about developing these models but not really bringing them into the business context, not bringing them into production. I think one of the reasons for that is probably the ethical and data privacy concerns that are out there, but there is also the question of having the right tools and the right data sets. Over the last couple of years, there has been a lot more focus on bringing the models into adoption, and I think that calls for more traditional IT project tools, software, and management capabilities, right? What a lot of people and organizations don't realize is that a machine learning project is typically a traditional IT project with something on top of it. If you don't know from the start that this is going to be a traditional IT project, then it's going to be difficult to get the right budget for it, to get moving, and so on. So, I think that's an important factor that a lot of organizations need to take in. And this is where our customers have matured a lot over the last couple of years.

Lauren:
And what do you think has contributed to this actual maturity?

Rasmus:
Well, we have contributed--

Lauren:
Where has there been the most contribution or what’s brought about this maturity?

Rasmus:
I think it probably started with the movement around MLOps, right? The MLOps movement is about how to take the machine learning models that we have developed and bring them into production. And then, most importantly, how to maintain these models over time. The models change based on the behavior that they see and can detect, and most of the models that I know of need to be retrained in some form on a regular basis. Doing these retrainings is a different paradigm from a traditional IT project. From that perspective, I think that we at 2021.AI have contributed a lot in the MLOps space and have matured our clients to enable them to handle those kinds of models and those kinds of IT projects.
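
As an illustration of the retraining loop Rasmus describes, here is a minimal sketch of a drift check that flags a model for retraining when its live accuracy falls too far below the accuracy measured at deployment. It is illustrative only, not 2021.AI's product code, and the tolerated drop is an assumption.

```python
# Minimal sketch of a retraining trigger (illustrative only, not
# 2021.AI's product code). Assumes you can score recently labelled
# production data and compare it against the accuracy at deployment.

def needs_retraining(baseline_accuracy: float,
                     recent_accuracy: float,
                     tolerated_drop: float = 0.05) -> bool:
    """Flag a model for retraining when live accuracy drifts too far
    below the accuracy measured at deployment time."""
    return (baseline_accuracy - recent_accuracy) > tolerated_drop

# Example: a model deployed at 92% accuracy now scores 84% on fresh data.
if needs_retraining(baseline_accuracy=0.92, recent_accuracy=0.84):
    print("Drift detected: schedule retraining on recent data.")
```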

Lauren:
And what were you using there, Rasmus, to provide that enablement for your clients, or to support them when they are trying to maintain those models?

Rasmus:
We provide the necessary software to run an MLOps process, all the way from data ingestion to developing a model, deploying it, and then monitoring it in production. With everything around that: it can be retraining the model on a regular basis, and it can also be things like computing certain metrics around bias, explainability, or other areas like that.
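
To give one concrete example of the bias metrics he mentions, here is a hand-rolled sketch of the demographic parity difference, the gap in positive-prediction rates between two groups. The groups, predictions, and alert threshold are hypothetical.

```python
# Illustrative sketch of a simple fairness metric: demographic parity
# difference, the gap in positive-prediction rates between two groups.
# Group names, predictions, and the 0.10 alert threshold are hypothetical.

def positive_rate(predictions: list[int]) -> float:
    """Share of positive (1) predictions within a group."""
    return sum(predictions) / len(predictions)

group_a = [1, 0, 1, 1, 0, 1]   # model outputs for members of group A
group_b = [0, 0, 1, 0, 0, 1]   # model outputs for members of group B

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.10:
    print("Gap exceeds threshold: investigate the model for bias.")
```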

Lauren:
So, you are looking at the ethical perspective, the ethical use of the models, as you mentioned at the start: the transparency, the fairness, maybe the societal and environmental wellbeing, everything that comes from that direction?

Rasmus:
Yeah. And I think the important part to mention there is that this is about the technology, yes, but it's also about the people. What I think a lot of companies are missing today is that, you know, you can take in a data set from somewhere, develop a model, deploy it, and monitor it, but you don't really know how it's going to be used out there. Those kinds of things are very hard to see from the code or from the models themselves. If a model is used in a way that is not ethical, well, then that needs to be monitored and spotted too, right? So, there are aspects of machine learning models that you cannot see from the code or the data, and for those you need a process and an owning role in the organization.

Lauren:
I suppose that is a key question as well, that term you brought up: non-ethical. I mean, how would an organization know that a model isn't ethical?

Rasmus:
Yeah, that's a good question. And that leads back, I think, to the quick-fire answers in terms of establishing a process, right? Number one is that the top-level leadership in large organizations needs to realize that this is something they need to care about and start building a process around. That ownership can be taken by a number of people in the organization. The CDO, the chief data officer, is a good candidate for somebody who can take that responsibility. Or it can be the CTO, the CIO, or one of those kinds of roles where there is overall ownership and responsibility within the organization. So, if models are being developed in an organization, there is a process for how to establish them, and it doesn't have to be something big. It can just be a registry somewhere that lists all the models within the company, with some sort of risk indication.
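
To show how lightweight such a registry can be, here is a minimal sketch. The fields and risk levels are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of an internal AI model registry with a risk indication.
# Field names and risk levels are illustrative, not a standard schema.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner: str        # accountable role, e.g. the CDO's office
    purpose: str
    risk_level: str   # e.g. "low", "medium", "high"

registry = [
    ModelRecord("churn-predictor", "cdo-office", "flag at-risk customers", "medium"),
    ModelRecord("cv-screener", "hr-analytics", "rank job applications", "high"),
]

# A simple oversight query: which models need the closest scrutiny?
for record in registry:
    if record.risk_level == "high":
        print(f"Review required: {record.name} ({record.purpose})")
```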

Lauren:
To what extent does this exist at the moment?

Rasmus:
Actually, it mostly doesn't exist. I have met a few customers and companies that are actively doing it at the moment, but most companies are not, and are not thinking about it in that way. And I think that's going to be a big problem for them when the EU AI Act comes into force. A lot of organizations need to think a little more about transparency, building these registries, and so on.

We do see it, though, in public organizations. An example is Canada, where there has been active legislation in the public space for quite a number of years. There is a requirement for all public organizations in Canada to keep a registry of the models they are using and working on that have some sort of impact on a person or a group of persons.

And I think that makes sense, right? They do that because, again, as a citizen of a country, I don't really have a choice about whether I have to use this or not. And sometimes the citizens of a country don't even know that it exists, right? So, making it known that this kind of model exists, and this is what it does, is a very important aspect of being transparent as a public organization.

Lauren:
Yeah, definitely. I mean, it's extremely important.

So, you've highlighted where we are at the moment, and you've had your own journey, which I'm sure has been extremely exciting, over the 25 years but especially over the last five. What would you say has been the most interesting challenge you've come up against so far that you didn't really expect when you started your role as CTO? Obviously, you're engaged in ensuring that there are ethical practices in use and in encouraging this. And, as we mentioned in the introduction, you're passionate about creating a better world through the use of new technologies like machine learning.

Rasmus:
Yeah, in terms of the challenges, I think maybe one of the biggest has been that I am surprised by the number of people who are worried about this right now. When we started out, none of this was going on, but I think there was a more deeply rooted suspicion in many people, maybe brought about by all of the Hollywood movies about AI going crazy and so on. And I think the biggest challenge has been to educate the organizations, our customers, people internally, and everybody else about how to actually go about governing this, right? So, not so much the software or the services that we provide, but more explaining why this is important. That has been a big challenge for me: to be more outgoing in terms of explaining why this is a problem.

Lauren:
I mean, education is an interesting component and angle as well, because you can talk about educating today's generation and you can talk about educating future generations. Who are you educating? Are you educating so that people implement change now? Are you educating for the near future, or for the long-term future?

Rasmus:
Well, I wouldn't say it's directly comparable to climate change and so on, but it's kind of the same in that a lot of the changes we are trying to implement now will probably not be a reality for some years. So, in that sense, you can say that the younger generation is probably the most important group to educate. But it's also true that these worries and problems belong to the older generation, and in my case also to the leadership in organizations.

Lauren:
You yourself are the Chief Technology Officer, and you mentioned at the start that you have a lot of contact with data scientists and engineers. In your position as CTO at 2021.AI, what is your contact with the technical teams? What are you doing there, and what are you trying to bring about if we look at the ecosystem and what you are setting in motion? What role are you playing in communication with the technical teams?

Rasmus:
So, you can say that there are three parts to it. One part is around the product and the product development that we are doing: basically asking what kind of new features would be useful for customers. The second part is listening to our customers and hearing what they have to say, what kind of things they would like to see, what their needs are, and what their problems are. And then, finally, it's to inspire the data scientists and engineers to build new and innovative solutions, right?

Lauren:
Do you also have to bridge the gap between legislation and what is actually happening in technical terms externally?

Rasmus:
Yeah. And maybe that is also a big challenge, right? It is making sure to follow up on the new legislation that is coming out, and then looking at it very carefully and making some conscious decisions about what kinds of controls we can put in place, what we can do to monitor and control these particular items in the legislation. That has actually been, and is, a cornerstone of our company: we are able to take some of the legislation and implement it in an organization in an automated way, so that they don't have to use Excel sheets and a thousand meetings to agree on whether something is right or wrong. I think that's a very important point for us, being able to automate this journey. For the organizations out there, we want to enable them to do this much faster than they would otherwise have been able to. We are seeing this in a lot of organizations and companies out there, especially, obviously, the ones that are rolling out large models, or many models, across the organization.
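
As a sketch of what automating such a control might look like (under assumed requirements, not 2021.AI's actual software), here is a check that evaluates a model's recorded metrics against concrete thresholds derived from a policy, producing findings instead of another meeting:

```python
# Illustrative sketch of an automated governance control: each control
# maps a policy requirement onto a machine-checkable threshold.
# The requirements, metric names, and values are hypothetical.

controls = {
    "accuracy_min": 0.85,     # "appropriate level of accuracy"
    "parity_gap_max": 0.10,   # fairness: largest allowed gap between groups
}

model_metrics = {"accuracy": 0.91, "parity_gap": 0.14}

def run_controls(metrics: dict, limits: dict) -> list[str]:
    """Return human-readable findings for a model's metrics."""
    findings = []
    if metrics["accuracy"] < limits["accuracy_min"]:
        findings.append("FAIL: accuracy below required minimum")
    if metrics["parity_gap"] > limits["parity_gap_max"]:
        findings.append("FAIL: fairness gap exceeds allowed maximum")
    return findings or ["PASS: all controls satisfied"]

for finding in run_controls(model_metrics, controls):
    print(finding)
```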

Lauren:
So, the title of the podcast is Ethical Practices in Use. I think it would be good if you could give us some examples of successful projects at 2021.AI that highlight this. And what do you want organizations to take away and align on from these examples, Rasmus?

Rasmus:
Yeah, that's a good question. From my perspective, we have two projects that I think would be interesting. One is a project around AI governance in public organizations; it's basically a European-based project. Here we are engaging with a lot of different use case owners, different public organizations around Europe that are developing and/or operating some sort of AI model or disruptive technology. They are concerned with doing it correctly, with these technologies being deployed and monitored correctly, and they want some sort of assurance: am I okay with this particular product? What we have done there is basically apply a specific framework, not a legal framework but a number of different principles, and then take those principles and apply them to the specific technologies and projects. The way that was done was by collecting different metrics and also collecting different inputs from the stakeholders of each particular use case. That was the public organization project.

We have another project on clinical imaging: x-rays, MRI scans, and so on. There you have use case owners, which can be large organizations or smaller startups, that are developing some sort of AI or machine learning model for the healthcare industry, where they are doing quite important, highly critical things like detecting strokes in the brain. These kinds of companies are also looking for some sort of validation. When they roll out a model in a hospital, they want to be sure that it actually performs on the patients that the hospital has. And that is one of the big problems with the certification process for these kinds of software packages: it doesn't really take into account the differences between patients across the world, right? The patients in Brazil are very different from the patients in the UK, and that needs to be taken into account when you are rolling out these AI models. So, this is a project where we are working with these different kinds of stakeholders, plus the hospitals, to make sure that the validation is done correctly and that the model is monitored in the right way.
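
One way to make that population concern concrete is to validate the model separately for each deployment site before rollout. Here is a minimal sketch; the site names, data, and the 0.90 recall threshold are hypothetical.

```python
# Illustrative per-site validation sketch: compute recall (sensitivity)
# separately for each patient population before rollout. Site names,
# labels, predictions, and the 0.90 threshold are hypothetical.

def recall(y_true: list[int], y_pred: list[int]) -> float:
    """True-positive rate: the share of actual positives the model catches."""
    true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    actual_pos = sum(y_true)
    return true_pos / actual_pos if actual_pos else 0.0

sites = {
    "hospital_brazil": ([1, 1, 0, 1, 0, 1], [1, 0, 0, 1, 0, 1]),
    "hospital_uk":     ([1, 0, 1, 1, 0, 0], [1, 0, 1, 1, 0, 0]),
}

for site, (y_true, y_pred) in sites.items():
    r = recall(y_true, y_pred)
    status = "OK" if r >= 0.90 else "FAILS local validation"
    print(f"{site}: recall={r:.2f} -> {status}")
```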

Lauren:
I mean, the framework and principles that you are applying to the first project and the second you just mentioned: is there an alignment in the framework and principles? Could we say that it is possible to globalize the frameworks that you're serving your clients with, so that you can set a certain standard for, say, high-risk models or certain use case scenarios? Is that possible? Or is it really customizable depending on the use case you are looking at and the type of model that has been implemented? Does it depend on the type of technology? I mean, if we're looking at, you know, natural language processing, or a model for image recognition, where does the alignment fall in the framework and the principles? Or is there none?

Rasmus:
Yeah, no, I think if you are going to globalize something, the challenge is the political landscape across the globe, which is very different. The Chinese government, for example, released its own set of AI principles and guidelines, which, interestingly enough, has much more focus on society as a whole. It's not so much about the individual person, but about the entire society. Whereas in Europe, and in the Western countries, we are much more focused on individuality, and on the individual person having, you know, all the rights. So, there are very fundamentally different views on how this should be applied across the globe.

With that said, I would say that one of the keys to expanding this across the globe is actually standardization. Because you have the legislation, right? It has terms like appropriate and accurate and other things like that, terms that need to be translated into something specific for a data scientist or a product manager to really understand what they mean for their product and their model. And that translation, I think, is very, very important: establishing a vocabulary across the organization for what it means when we say an appropriate level of accuracy, or terms like that, which are really not very concrete, and turning them into something more concrete, something that somebody in the product team can relate to. I think that is a very, very important aspect.

In order to do that, we have done something, and there are a lot of organizations out there doing something: the OECD, all the different consulting companies, and so on are doing work in that area. But if this is to be rolled out, I think the only way to go is standardization. So, really getting some ISO standards out there that say: if you need to do a risk assessment of your model, then this is the way you do it. And defining the translation between the legal text and what is actually going on in the model software. So, I really think that the standardization organizations have a big role to play when it comes to rolling this out in a large organization or across the globe.

And the good thing about that is that we are seeing a lot of traction, both in the IEC, CEN-CENELEC, and ISO communities, and also in the US with the IEEE and SOC. They are engaging and coming up with new standards in this area. For that reason, we are also taking a very active part in the standardization work around AI governance in different areas. So, for me, and I think for our company, that is what we really think is the future. If you need to develop a model, then here are the standards that you need to comply with, and if you comply with them, then you are in the clear, right? You are okay. You are doing the right stuff. And that, for me, hopefully will be the scenario 5 to 10 years from now.

Lauren:
Yeah, hopefully. I mean, it's fascinating. And it's really great that you're contributing to that standardization; one can see your passion and your willingness to contribute to good standards. And as you've mentioned, if there's good traction at the moment, then it holds real potential for pushing things toward more ethical and responsible AI for everyone.

Yeah, Rasmus, I’m afraid that we’re going to have to wrap up our conversation today. I had a few more questions up my sleeve but time has played its hand.

I want to thank you very much for taking the time to pass on your knowledge and share this with us today.

Rasmus:
Thank you very much, Lauren. I really appreciated being on the show today. So, thank you very much.

Lauren:
Wonderful. So, I want to thank everyone else for listening today. It's been an insightful, passionate subject that’s close to my own heart.

If you want to learn more about AI and machine learning and search, then come and take one of our free courses at Learn.Squirro.com.

Lauren Hawker Zafer

Host

Rasmus Hauch

Guest
