A cura di @suppamax.
Un articolo di TechCrunch spiega il funzionamento di un’AI studiata da Google e dall’Università di Stanford .
Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal. But in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.
Immagine da pixhere.
Commenta qui sotto e segui le linee guida del sito.