Algorithmic amplification, the black box problem, legal and moral responsibility, and the concentration of the most advanced AI capabilities are key challenges that must be addressed if AI is to realise its potential benefits.

Applications of narrow artificial intelligence are already prevalent in our everyday lives. Global financial markets, militaries, police forces, medical providers, energy and industrial operations as well as service industry businesses utilise AI systems in different capacities.
Algorithmic amplification
Algorithmic amplification is closely linked to bias. It occurs when platforms suggest content that is consistent with, and reinforces, what a user has previously viewed, often exploiting emotion to keep them engaged and watching more. By reinforcing views or prejudices people already hold, it can lead to ideological radicalisation.
Techno-sociologist Zeynep Tufekci argues that the content shown on platforms such as YouTube gets progressively more extreme, until viewers are shown not only content they agree with but content that amplifies their views on race, gender and social class, often mixed with misinformation.
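The feedback loop described above can be sketched as a toy simulation. This is purely illustrative: the `recommend` function, the size of the extremity nudge and the drift rate are assumptions for the sake of the sketch, not any platform's actual algorithm.

```python
# Toy model of algorithmic amplification (illustrative assumptions only).
# Content items sit on a 0-10 "extremity" axis. The recommender suggests
# items near the user's current taste but nudged toward more extreme
# (higher-engagement) material, and the user's taste drifts toward what
# they are shown -- a self-reinforcing loop.

def recommend(user_pos: float, nudge: float = 0.5) -> float:
    """Suggest content close to the user's taste, biased toward extremity."""
    return min(user_pos + nudge, 10.0)

def simulate(start: float = 2.0, steps: int = 20) -> list[float]:
    """Track how a user's taste drifts over repeated recommendations."""
    pos = start
    history = [pos]
    for _ in range(steps):
        shown = recommend(pos)
        pos = 0.7 * pos + 0.3 * shown  # taste moves toward what is shown
        history.append(pos)
    return history

drift = simulate()
```

Even with a modest nudge at each step, the user's position ratchets steadily toward the extreme end of the axis, because each recommendation shifts the baseline from which the next recommendation is made.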
The black box problem
Even when we identify a bias or questionable output from an AI agent, interpretability is not a simple matter for complex AI systems. We do not yet fully understand how deep neural networks arrive at their outputs, and the better performing and more accurate a system is, the more complex its computations and learning tend to be.
Understanding how AI makes decisions is central to trust in AI systems, both for their users and for those affected by their outputs. Given that AI will be used in fields including medical diagnosis, judicial process, government service provision and military combat, solving the black box problem is critical to trust and fairness in the use of this technology.
Responsibility
The use of AI systems so far has not been free of accidents and mistakes. In many cases, decisions that affect us are made by algorithms with no human involved unless the outcome is questioned. Whoever makes decisions holds control: those who program the algorithms underpinning AI define the parameters for how decisions will be made, conferring great power on those with the capability to build AI systems.
AI systems pose challenging questions for negligence law and liability. So far, technologies and platforms that have entered markets and amassed huge user bases have, in effect, experimented on society to build their businesses. When they have enabled real-world social misconduct, personal tragedies or crimes, their makers have promised better self-governance but have not been held legally responsible for any wrongdoing.
Monopolisation
The most advanced AI capabilities are limited to a handful of global companies. Whilst ‘the big 9’ are investing huge resources to develop AI systems, they do so as businesses seeking maximum economic return. This impending monopolisation of the most powerful AI systems would give a small group of corporations or governments unprecedented economic and societal influence.
To counter possible misuse of AI technologies by a select few, and to encourage more sustainable ecosystems of innovation, data access and AI development must be distributed around the world, with diverse stakeholders driving AI progress towards a range of outcomes, including those without profit-maximisation aims.
Benefits outweigh risks
Artificially intelligent systems and related technologies have the potential to deliver unprecedented gains in efficiency, safety and cost effectiveness compared to humans undertaking the same tasks. They will transform the nature of work, improve quality of life and help find solutions to previously unresolved challenges for humanity.
Considering the potential gains AI could bring to human progress, the question is not whether opportunities to develop the technology should be taken. Rather, we must determine how to ensure it lives up to the moral imperative to benefit humanity sustainably, rather than primarily serving corporate profit maximisation or political regimes.
