#3 Champions of Digital Trust <> Ryan Carrier

05 Nov 2023


Summary:

In the third episode of Champions of Digital Trust, Ryan Carrier, Executive Director of the nonprofit ForHumanity, explains the organization's mission: to examine and analyze the downside risks associated with the ubiquitous advance of AI and automation, and to engage in the maximum amount of risk mitigation, on the theory that minimizing the risk from these tools maximizes their benefit for humanity.

ForHumanity does this by checking these technologies much as a teacher checks students' work: it crafts auditable rules to make sure the technologies are used responsibly and don't harm anyone, and it trains people in those rules and in how to verify that AI systems comply with them.

They want to build trust and safety in the digital world by making sure these technologies are used responsibly. People can learn about their work on their website and take free courses to understand more about their practices.

Watch the Interview HERE

Transcript:

Welcome to our new interview series where we speak to champions of digital trust. Who are you, and why should we trust you?

My name is Ryan Carrier. I'm the Executive Director of a nonprofit public charity called ForHumanity. ForHumanity has a mission, and the mission statement is to examine and analyze downside risk associated with AI, algorithmic, and autonomous systems, accept those systems where they are, and try to engage in the maximum amount of risk mitigation. The theory being that if we can minimize the risk from these tools, then we maximize the benefit for humanity. And that's a great reason to trust us: we are well aligned with the people who would be giving that trust. We are aiming to mitigate risk for all humans from these tools.

I think that's a simple and fast answer, and I love that. I'd love you to go into a bit more detail: how does your organization contribute to trust in the digital world?

Sure. So we are, as I mentioned, a nonprofit public charity. We've grown from just me in March of 2020 to more than 1,700 people from 92 countries around the world, and our primary mission is the independent audit of AI systems. The theory behind independent audit of AI systems is to borrow from the 50-year track record, the experience, of financial audit and bring that to the world of artificial intelligence, to create governance, oversight, and accountability in how these tools are developed, used, deployed, and so on. Independent audit gets into operational rules, binary rules, that a third-party independent auditor can go in and verify compliance with. And they do that verification not on behalf of the company but on behalf of the public at large, and create this infrastructure of trust that changes corporate behavior. Many corporations would like to do the right thing, until they get busy, until they get distracted, until it becomes maybe hard, or hurts profits a little too much.

We learned this behavior centuries ago, honestly, when we put teachers in front of students. All of us could read a set of textbooks and say we learned something, and maybe we did, and maybe we read it, and maybe we finished. But eventually what we learned is that when we put a teacher in the classroom, we have somebody providing oversight, and it changes our very human nature: we want to please that person, we want to be found compliant, we want to demonstrate and document our knowledge and our expertise. The same thing is true with independent audit of AI systems. When you know that third-party independent auditor is coming in to check your work, you know what you do: you complete it, you build everything to satisfaction, to ensure that you have sufficient compliance with the law or the guidelines or the regulations or the best practices, whatever they might be.

There are two elements of how ForHumanity participates in this system. Number one, we are trying to support the system for all who would commercialize it, so we don't have a commercial role in this ecosystem. We want to support people who would be pre-audit service providers, or auditors, or teachers, or who would build technology to satisfy compliance; all of this ecosystem is designed to support them. In establishing that, the role we play is that we aim to make the rules. Now, people will hear that and they'll kind of go, ooh, who's ForHumanity to make the rules? Well, actually, we do it with a humility that says it's not our job or our role to make the rules. We take the law, guidelines, best practices, and standards, and we craft them into these auditable rules; it's like a secretarial role. And then we take those rules and go back to the authorities and say: is this what you mean by GDPR? Is this what you mean by the EU AI Act? And they have every right to say, no, you didn't get it right, or you didn't interpret it the way we wanted you to, and then we fix it. So we support, obviously, democratically elected legislators and lawmakers, but where they haven't made laws, where they haven't mandated independent audit, we will also seek simple market adoption: does the production of these rules help you to feel comfortable about the risks that you're taking with these tools? So corporate adoption, government adoption, for implementation.

The other role that we play is that we train individuals on how to be ForHumanity Certified Auditors, the CPAs of this world. We train them to be experts in what compliance looks like, how to evaluate risk controls and treatments, and so on, to know that a company has met the required standard for compliance. Does that all make sense?

Yeah, incredible, it does. I like "the infrastructure of trust".

Here's the third and last question: if you had a magic wand, what would be the one thing you would change in today's digital world?

That's an easy one: we would mandate third-party independent audits on an annual basis. We believe that the governance, oversight, and accountability this introduces is too critical to go without, because it protects humans from harms that run from ethics, bias, privacy violations, and trust failures to cybersecurity insufficiencies. Those are the five different ways that we look at risk to humans in this equation. And when we build all of our audit criteria, we know that some of it doesn't ever change, right? You make a plan, you put, let's say, a monitoring policy in place; that might be static for five or six years, because your system is static in terms of its monitoring for five or six years. But we also know that these models learn and move and evolve and change, so there are elements of real-time evaluation that must go on, and that's impractical. So what that is, is a balance between long-term static stuff and things that are changing in real time. We settled on what we think matches the financial audit world: an annual review. Some of that review will be like, oh, same as last year, same as last year, you're good, you're good. And some of it will be: what's changed now, how have you reacted to it, and have you abided by your own rules to determine that the model is still doing what it's meant to do? So again, just to reiterate, we would call for a mandatory annual audit of high-risk AI systems. If they're low risk, they don't need to go through this. It's a risk-based approach that basically says: if you have a reasonable amount of risk to humans, then we want you to go through this independent audit process. Does that make sense?

It does, absolutely.

Ryan, how can people find out a bit more about what you're doing?

Well, there are two ways to get plugged into ForHumanity. First off, there's our website, which is forhumanity.center, and there's a lot of information there. Joining the community, as more than 1,700 people have done, is a great way to be active in the space of AI governance and AI ethics. There is no cost to you, and there is no requirement of you except that you provide your email address and agree to the code of conduct. Once you have done that, that is 100% of what ForHumanity demands of you. Now you are welcome into the community of more than 130 different working groups, and it's a place where you can just exist and be and learn and watch and take it in. You can be the ultimate lurker if you want, and that's okay, right? That's totally fine, and it's all about transparency, giving everyone access. Once you're in there, it's an all-volunteer organization, and you may find: I want to get involved, I want to get plugged in. You raise your hand, and I guarantee we call. It's very simple.

The other way to plug in is ForHumanity University: nine different free courses. You should start with Foundations of Independent Audit of AI Systems, which explains the lexicon, the infrastructure of trust, and the ecosystem in the world that we kind of live and work in. Those courses are all free; if you want to be certified in them, there's a small fee associated with certification. Once you take Foundations, you can go on to become a ForHumanity Certified Auditor, which doesn't mean you have to be an auditor; it just says you are an expert in what the compliance requirements look like. You can do any one of six different FHCA courses, or you can do the risk management expert course, or algorithm ethics, which is designed for people who would sit on an ethics committee. So there are lots of different ways to plug in. Hopefully they fit with your career path, hopefully they fit with your interest and your passion in the space, and I think there's something for everybody.

Ryan, thank you so much; it's been an absolute pleasure speaking to you. Lydia, do you have anything to follow up with?

No, other than I really recommend the courses, because I have been looking into them and the content is really very high quality and very interesting.

Amazing, we'll all be getting registered and certified. I just want to say one last thank you very much for joining us; it's been an absolute pleasure. Thank you, thank you.

My pleasure. Thanks to both of you.