Experts provide criteria for safe artificial intelligence systems.

A global group of artificial intelligence professionals and data scientists has released a new voluntary framework for producing safe artificial intelligence products.

The World Ethical Data Foundation has 25,000 members, including employees of companies such as Meta, Google, and Samsung.

The framework includes an 84-question checklist for developers to consider when starting an AI project.

Members of the public are also invited to submit their own questions to the Foundation.

It says all of them will be considered at its next annual conference.

The framework was released in the form of an open letter, which appears to be the AI community's favored format, and has already been signed by hundreds of people.

AI allows a machine to act and respond almost like a person.

Computers can be fed massive volumes of data and trained to see patterns in it, allowing them to make predictions, solve problems, and even learn from their own mistakes.
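To make that description concrete, here is a minimal sketch of the train-then-predict loop it outlines, using the scikit-learn library and its bundled iris dataset. The code is purely illustrative and is not from the article or the Foundation's framework.

```python
# Minimal sketch, assuming scikit-learn is installed: feed a model
# labelled example data, let it find patterns, then have it predict
# labels for data it has never seen.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)          # example data with known labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0                   # hold some data back for testing
)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)                # "training": learn patterns in the data

accuracy = model.score(X_test, y_test)     # "prediction": score on unseen data
print(f"Accuracy on unseen data: {accuracy:.2f}")
```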

AI relies on algorithms, which are sets of rules that must be followed in the correct order to finish a task.
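As a toy illustration of that definition (ours, not the Foundation's), the short function below is an algorithm in exactly this sense: a fixed sequence of rules that, followed in order, completes the task of averaging a list of numbers.

```python
# Toy illustration of an algorithm: ordered rules that complete a task.
def average(values):
    total = 0.0
    for v in values:            # rule 1: add up every value, in order
        total += v
    return total / len(values)  # rule 2: divide the sum by the count

print(average([3, 5, 7]))       # prints 5.0
```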

The Foundation, founded in 2018, is a global non-profit organization that brings together professionals working in technology and academia to examine the development of new technologies.

Its questions for developers include how to prevent an AI product from incorporating bias, and how to handle a situation in which a tool's output results in law-breaking.

Yvette Cooper, the Labour Party's shadow home secretary, said this week that people who deliberately use AI tools for terrorist purposes would face criminal charges.

Prime Minister Rishi Sunak has appointed tech entrepreneur and AI investor Ian Hogarth to oversee an AI taskforce. Mr. Hogarth told me this week that he wanted “to better understand the risks associated with these frontier AI systems” and hold the businesses that develop them accountable.

Other factors considered in the framework include data-protection laws in different countries, whether it is clear to users that they are interacting with AI, and whether the human workers who input or label the data used to train the product were treated fairly.

The complete list is separated into three sections: questions for individual engineers, questions for teams to examine together, and questions for people testing the product.
