As fears of the potential misuse of artificial intelligence grow, Microsoft is joining other big players, like OpenAI, in a public commitment to the responsible use of the technology.
Antony Cook, Microsoft’s corporate vice president and deputy general counsel, published a statement announcing three ‘AI Customer Commitments,’ part of the company’s effort to foster trust in its responsible development of AI.
Cook also said Microsoft is prepared to play an active role in working with governments to promote effective AI regulation.
“Microsoft has been on a responsible AI journey since 2017, harnessing the skills of nearly 350 engineers, lawyers, and policy experts dedicated to implementing a robust governance process that guides the design, development, and deployment of AI in safe, secure, and transparent ways,” Cook added.
The three commitments are: sharing Microsoft’s expertise and teaching others to develop AI safely, establishing a program to help ensure AI applications meet legal and regulatory requirements, and supporting customers within Microsoft’s partner ecosystem in implementing the company’s AI systems responsibly.
“Ultimately, we know that these commitments are only the start, and we will have to build on them as both the technology and regulatory conditions evolve,” Cook wrote in the statement shared by Microsoft.
Although the company released its Bing Chat generative AI tool only recently, Microsoft will start by sharing key documents and methods that detail the expertise and knowledge it has gained since beginning its AI journey years ago.
The company will also share training curriculums and invest in resources to teach others how to create a culture of responsible AI use within organizations working with the technology.
Microsoft will establish an “AI Assurance Program” that draws on its own experience and adapts the financial-services concept of “Know your customer” to AI development. The company calls this “KY3C” and is committing to work with customers to apply the obligation to “know one’s cloud, one’s customers, and one’s content,” Cook noted.
As part of its pledge to support partners and customers in developing and using their own AI systems responsibly, Microsoft is drawing on a team of legal and regulatory experts around the world; the company announced that PwC and EY are the first partners to join this program.
The commitment to support customers in Microsoft’s partner ecosystem will involve helping them evaluate, test, adopt, and commercialize AI solutions.