IBM’s Christina Montgomery discusses ethics and getting more women on board.
Europe is ahead of the US in regulating AI, IBM’s Vice President and Chief Privacy & Trust Officer told Euronews in the latest in our digital summer interviews series. Christina Montgomery said it’s critically important to address and prepare the workplace for the future of AI, and discussed how a global technology company deals with emerging AI rules in the EU and in the US.
IBM provides companies with AI and automated cloud solutions, so any incoming rules affect its business.
Montgomery is tasked with overseeing the company’s global privacy, data and AI governance program. She is a member of the United States’ National AI Advisory Committee (NAIAC), advising the President and the National AI Initiative office on a range of topics related to AI, and a member of the Board of Directors and the AI Governance Advisory Board for the International Association of Privacy Professionals (IAPP).
“Because we are a global company, I’ve also been very actively involved in the EU. Obviously, we want to comply with and help answer questions for policymakers in every jurisdiction. Our biggest concerns are not whether AI should be regulated, because we have said for the last four years that it should be, but how it is regulated, and interoperability.
I think we are behind in terms of AI regulation [in the US] relative to yours. You obviously have a very different legal and regulatory system than the US. Here we really started having conversations around AI at the federal level following ChatGPT. And that resulted in the longest executive order in US history being enacted last year.
We think from a risk perspective, that you shouldn’t be regulating the technology, but rather be regulating the use and looking at the risk as it evolves in different use cases, putting more regulation towards the high risk uses.”
“Yes, especially looking at things like focusing on the risk of a model, like deep fakes or the No Fakes Act. There’s also a great bill on accountability in government and we are very supportive of that. And it is sort of similar when you think about the EU; obviously the approach is different, but there is a lot of alignment around risk.
The fact that there’s no federal privacy law in the US means that we now have, I think, 20 states with privacy laws. So states are now acting on AI too. Colorado is the first state to have passed a comprehensive piece of legislation similar to the EU AI Act, and states are talking about how they can align their regulations. Just like we want to see interoperability around the world, we absolutely want to see interoperability across the United States.”
“We’ve been supportive of the EU AI Act from the first day, because we absolutely believe that it takes the right approach in terms of addressing risk, and things like transparency. Overall we are ready, but the devil’s in the details on the EU AI Act.
Right now, for example, the EU is inviting companies in the general-purpose AI space to give feedback, and I’d like IBM to comment on what those rules, which will go into effect next year, will look like.
Another thing is the templates for data transparency. We don’t have that yet, so we’re not exactly sure what the EU is going to be looking for.
We do have foundation models and large language models. They have been recognized by Stanford’s AI centre as number one in data transparency. So we think we’re ready for those types of requirements as well, as we’re already doing a lot of work in the data transparency space.”
“We’ve been on a journey at IBM to establish principles around AI, and those align to many of the OECD principles, for example. They were built by our AI ethics board, and they’re about transparency and explainability. It should preserve privacy. It should be robust.
It’s in the ethics-by-design playbook that our developers and designers use. If you’re building an AI model or a system using AI, it has to follow these rules. And we’ve aligned that program to things like our risk management framework.
We’ve also signed a lot of voluntary commitments, as have a lot of large language model providers. That becomes part of our program. The EU AI Act becomes part of our program. So we’re moving, essentially, from the principles we put in place five years ago to addressing regulation.”
“I think it’s critically important that we address and prepare the workplace for the future of AI, and that’s an area where IBM has also been very active. We recently announced commitments to skill 30 million people in AI and new technologies by 2030 and to train 2 million people by 2026.
So we’re very focused on education. And in part that’s also selfish because we need a workforce that can work with these new technologies in the future. So it’s something IBM is absolutely very focused on and is part of the conversation here in the US as well.”
“It is something I observe and it is something I’m very active in, in terms of advocating for inclusion, in particular for women, but also for people with different backgrounds. Our AI ethics board is co-chaired by two women and I’m really proud of that. So I’m active internally and externally to help ensure that the future of AI is an inclusive one, bringing many contributors to the table and not just, you know, the long-standing list of men involved in AI.”