U.S. military: guidelines for AI

The U.S. military leadership has developed ethical guidelines for the use of artificial intelligence.

Militaries worldwide are working to integrate artificial intelligence into their arsenals. In the USA, there are already several areas of application, in which the corporation Alphabet (Google) also played a part. After protests by its workforce, the company drew up ethical guidelines of its own.

Drone

The United States Department of Defense has now developed such guidelines as well. The aim is to prevent unwanted behavior in the technology: if an AI does not act as intended, it must be possible to deactivate it.

It took the U.S. military 15 months to develop its guidelines for dealing with artificial intelligence, consulting specialists from industry, government agencies and research. The result was five rules. One of the central rules is that the technology must always remain under human control.

The AI must be designed so that it acts as ordered. It must be ensured at all times that undesirable behavior can be identified and prevented; to that end, it must be possible to isolate the systems and switch them off.

In addition, the systems should not exhibit unintentional bias and should make appropriate decisions. Deliberate discrimination, however, is not ruled out. The Joint Enterprise Defense Infrastructure Cloud, or JEDI for short, is intended to monitor the implementation of the guidelines.

Critics call the guidelines a seal of decency without substance, pointing to vague wording such as "appropriate".

About David Fluhr

I have been writing about autonomous and connected driving since 2011 and also report on it on other sites, such as the Smart Mobility Hub. I studied social sciences at the HU Berlin and have been working as an independent journalist since 2012. Contact: mail@autonomes-fahren.de
