Governance of Human AI


How can we regulate digital technology via good governance for the benefit of humans?
When it comes to digital technology and AI, we must recognize that it is increasingly possible to place our knowledge and technological know-how into machines. This is a fundamental change: so far, millions of IT professionals have worked on this planet, spending on the order of one million person-years learning and teaching so that our IT can be maintained and run. In the future, super-intelligent systems - in the sense in which, for example, the Berkeley professor Stuart Russell describes them - will be able to run IT systems for us and even manage our lives. Such systems are capable of comprehensively storing and applying know-how that is vital to maintaining and developing our civilization. Therefore, we need to ask ourselves how to make the best use of such super-intelligent systems and AI in the future.

AI is not supposed to hold authority over humans. Nor are the single tech companies that own today's globally leveraged AI platforms. There is a chance to develop proper AI regulation, i.e., to make sure AI benefits humans. Let's call this “Human AI”. The current AI Act of the European Union is a good first step. The time to act is now, as super-intelligent systems - which will provide Everything-as-a-Service and have the potential to either dominate or significantly benefit humans - are developing fast.
DSI outlines the gaps, or white spaces, with respect to Human AI regulation, as well as the guiding principles and next steps to get there.
