
Divinatory Computing: Artificial intelligence and the African continent

First published in the Sunday Independent

By Faeeza Ballim and Keith Breckenridge

We are, apparently, on the cusp of revolution. So significant is our impending transition that President Cyril Ramaphosa has established the Presidential Commission on the Fourth Industrial Revolution (4IR), after promising as much in his State of the Nation Address in February 2018. But beyond a murky image of intensified technological use, the actual meaning of 4IR remains obscure.

According to Google’s global search trends, South Africans are strangely obsessed with the fourth industrial revolution. South Koreans and Malaysians have some curiosity about it, but much less so than we do, and there is close to zero interest in the idea in Brazil, China, Germany, Japan and the US. Google also shows that Chinese interest in machine learning matches our distinctive interest in 4IR and, in truth, that’s a much better term for the technological novelty of the coming age. Machine learning is the currently dominant artificial intelligence technique, and understanding its inner workings and its dissemination is an important step towards grasping its portents.

To start with, machine learning offers many opportunities for enthusiasts. There is currently an unusually generous imparting of techniques. Open, free networks and training workshops are matched by a broad suite of online resources that are freely accessible to anyone interested in studying it. In stark contrast to the other main areas of science and engineering research (which have long been bitterly under-resourced and unavailable on most of this continent) the tools of machine learning are all freely available online. This includes the main programming tools – the Python language and libraries like NumPy – and platforms for developing applications like Kaggle and OpenML. Powerful computational platforms like Google’s TensorFlow Research Cloud and Amazon’s Machine Learning cloud, which are used by researchers at MIT and Cambridge, are available for free to African researchers who have access to the Internet. This is something like a revolution – the opposite of what has been true about world-leading scientific research for at least a century.

There are also now many programmes offering young, mathematically inclined Africans training and exposure in the core skills of machine learning and artificial intelligence. The most influential is a network of training projects called the African Institute for Mathematical Sciences (AIMS), started by Neil Turok, who was at the time a professor of physics at Cambridge. It draws on the enthusiasm of global experts to support the activities of six schools – in Cameroon, Ghana, Rwanda, Senegal, Tanzania and South Africa – that offer free training in the mathematics and computer science of data science and, increasingly, machine learning.

Half a dozen loosely related projects, offering similar kinds of exposure and connections, are now operating on the continent, including the Deep Learning Indaba, which is supported by young South African researchers based at Google’s famous DeepMind artificial intelligence lab in London. The Indaba held large training workshops in Stellenbosch and Johannesburg over the last year, and plans a third next year in Nairobi. Perhaps the most startlingly elitist of these projects, which aims to leverage the enormous and growing demography of talent on the continent, is the Next Einstein Forum, which draws on the AIMS network to anoint a handful of young researchers from across the continent each year. This creates, in science and mathematics research, an economy of talent similar to what has long been true in football and music.

In this brave new world of freely disseminated technology, data is the kingmaker. Corporate developers do not jealously guard machine learning techniques because the real treasure lies in the control of the data that ordinary users generate every second on ubiquitous internet platforms, Facebook being the most notorious of these. The abundance of data has enabled the supremacy of machine learning over other approaches to artificial intelligence, in what has historically been a fraught and contested field. Scientists have dreamed of endowing machines with human-like intelligence since at least the 1950s. But the intractable difficulty of mimicking a core component of this intelligence – the capacity to learn – meant that it remained a pipe dream for much of the twentieth century.

The way forward was muddied by deep disagreement among designers about the nature of knowledge and the way in which humans learn. Machine learning is but one subset of the vast field of artificial intelligence, and at its heart lies the neural network, which aims to simulate the functioning of the human brain. While biological in name, the neural network is a statistical model which uses data to build complex mathematical functions that describe the underlying relationships in the data. It is rooted in principles of behavioural psychology: the perceptron, an early neural network introduced by a New York-based psychologist, Frank Rosenblatt, in the late 1950s, was taught to respond correctly to certain stimuli through a process of trial and error. While rudimentary by contemporary standards, the same principle – training the network by adjusting weights from some random initial distribution – persists in a popular optimisation technique called gradient descent.
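Rosenblatt's trial-and-error scheme can be sketched in a few lines of Python. The task here (learning the logical AND function), the function names and the parameter values are all illustrative choices, not drawn from Rosenblatt's original apparatus; what the sketch shares with his perceptron is the core idea the paragraph describes: weights start random and are nudged whenever the output is wrong.

```python
import random

def train_perceptron(samples, epochs=20, lr=0.1, seed=0):
    """Train a two-input perceptron by trial and error."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(2)]  # random initial weights
    b = rng.uniform(-1, 1)                      # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                  # zero when the guess is right
            w[0] += lr * err * x1               # nudge weights only on error
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Logical AND: output 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After a few passes over the data the weights settle into values that separate the cases correctly, even though they began as random guesses.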

Gradient descent draws on statistical methods that trace patterns through scattered data in order to predict the future. While the term artificial intelligence conjures the image of a robot with human capabilities, a far less dramatic, but no less significant, function of artificial intelligence is this ability to predict. In 2016, the British-based political consultancy firm Cambridge Analytica allegedly used machine learning technology to assist Donald Trump’s election campaign. This involved gathering data on the mundane behaviour of ordinary Facebook users, such as their likes and dislikes, and matching this against a host of established behavioural patterns to determine the person they most closely resemble and predict their likely behaviour at the polls.
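The pattern-through-scattered-data idea can be made concrete with a minimal gradient descent sketch: fitting a straight line to a handful of noisy points and then using the fitted line to predict an unseen value. The data points, learning rate and variable names below are invented for illustration; nothing here comes from any real campaign dataset.

```python
# Noisy observations roughly following the line y = 2x + 1
points = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]

m, c = 0.0, 0.0           # slope and intercept, starting from an arbitrary guess
lr = 0.02                 # learning rate: the size of each correction step

for _ in range(5000):
    # Gradients of the mean squared error with respect to m and c
    grad_m = sum(2 * (m * x + c - y) * x for x, y in points) / len(points)
    grad_c = sum(2 * (m * x + c - y) for x, y in points) / len(points)
    m -= lr * grad_m      # step "downhill", against the gradient
    c -= lr * grad_c

prediction = m * 5 + c    # extrapolate: predict y at the unseen point x = 5
```

Each iteration shifts the line slightly towards the data; after many thousands of small corrections it converges on the best-fitting pattern, which is then used to predict beyond the observed points.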

This project of categorisation relies on machine-created stereotypes and prizes conformity above all else, calling into question the place for individual autonomy and expression. It is important to note that generating the patterns that best fit the data through gradient descent is a process riddled with uncertainty for practitioners. Over thousands of iterations of trial and error, it is not always clear that the most accurate pattern has been established, and this uncertainty led some of the leading machine learning practitioners at the Conference on Neural Information Processing Systems in 2017 to suggest that the current models more closely resemble alchemy than an exact science.

The opportunities that machine learning opens for authoritarian surveillance and control are concerning. It is worrying that certain Chinese corporations have found easy accommodation in the African countries – including Ethiopia, Tanzania, Uganda and Rwanda – that share a common vision of bureaucratic control and surveillance, and weak privacy laws. In April 2018, the Zimbabwean government announced that it had selected a Chinese firm, CloudWalk, to provide an AI-based facial recognition system using the national identity database. Only later did it emerge that the state, which is notoriously short of foreign currency and in the middle of a contested military seizure of power, gave the Chinese company millions of records from the national population register – including photographs, real names and identity numbers – in exchange for the new surveillance tools.

The companies involved benefit from a wide expansion of the training data at their disposal through the inclusion of millions of African faces and names, which could potentially correct the racist bias that has long plagued facial recognition systems. The potential dependency of African states on corporations outside the continent, as well as the prospect of greater authoritarian control, are reasons to pause before embracing the fourth industrial revolution with open arms.

Faeeza Ballim is a Senior Lecturer at the University of Johannesburg.

Keith Breckenridge is Professor and Deputy Director of Wiser, Wits University.
