AI with a Human in the Loop
AI systems that do things together with humans, supporting and empowering the human collaborator, writes Hedvig Kjellström, Principal AI Scientist at Silo AI.
AI represents a paradigm shift. Whereas earlier computer systems were programmed by people down to every line of code, an AI system is given only a skeleton structure through programming. It then fills this structure with experience gained from training examples (as in a learning system), or reasons about its given building blocks in new ways (as in an expert system). This new mode of functioning means that careful thought must go into the development of AI systems.
I claim that all technology development (including AI) should aim at fulfilling the 17 UN goals for sustainable development. I also have profoundly positive expectations that AI gives us unique opportunities to take significant steps towards these goals, e.g. by helping us improve transportation, energy use, food production, medical drugs and treatments, gender equality, governance, and education. However, there are also major hurdles and pitfalls along the way that could drive development in the opposite direction.
Considering these opportunities and risks, I present three aspects of AI development that are important in our quest to develop intelligent systems to reach the UN goals.
Aspect 1. Does the computer have to do it all by itself?
Human intelligence is amazing, and it is no small task to try to model it on a computer. However, computers indeed surpass human capability in some tasks, e.g. complex board games and precise evaluations such as medical image classification.
But as soon as something about the task is unclear, humans perform vastly better. The remarkable human ability of intuition is nothing other than accessing knowledge very quickly and unconsciously. We are also very good at compensating for uncertain and missing information, and at applying things we have learned in completely new situations.
A good strategy is thus to let the AI system collaborate with a human, each doing what they are best at. This is called Intelligence Augmentation.
But would not superhuman intelligence offer an opportunity to address major challenges such as the climate? Such a system would be bogged down by a human in the loop – it should rather work on its own, just as AlphaGo Zero learned to play Go without a human teacher.
Aspect 2. What should we worry about?
There are of course dangers with developing a technology that we do not fully control. What happens when AI systems become more intelligent than humans, and we can no longer understand them? This is indeed an important philosophical discussion to maintain. However, there are more immediate challenges.
Firstly, there is conscious misuse of AI in conflict with the UN goal of peace, justice, and strong institutions, e.g. extensive surveillance, fake news, and systematic disruption of democratic processes.
Moreover, even without harmful intent, there are unconscious side effects of AI that run contrary to the UN goals. Data bias has been found in the training of face recognition, text evaluation, and risk estimation systems, and small languages are at a disadvantage in automatic news summarization since their language models are less accurate.
A third frequently debated risk of AI is that machines will replace humans in the workforce. I strongly believe that AI development will generate more jobs than it removes; in other words, AI systems will not take jobs from people. However, there is a risk that people using AI will take jobs from people not using AI. This is a problem since access to AI is unequally distributed, and this gap will grow, contrary to the UN goal of reduced inequality.
Aspect 3. Do we need to understand?
To what degree do we need to understand what is going on inside AI systems? Transparency – meaning that a system can be inspected and understood – is important for two reasons relating to aspects 1 and 2 above. Firstly, a human collaborating with the system needs to understand how it reasons in order to "chip in" to the process. Secondly, for humans to be able to trust the system, they must be able to inspect it.
But back to the idea of using superhuman intelligence to solve the climate crisis: a system that is forced to be transparent might lose some representational power. Part of the success recipe of Deep Learning is that its architectures are immensely complex – at the cost of losing control.
To summarize, the three aspects above are highly intertwined. I claim that the right approach to AI development is to put a human in the loop and to ensure that the system is transparent.
But is that a blind alley on the way to solving the great challenges of our time – will we manage this without superhuman intelligence?