Artificial intelligence

With the technological advances made in computing over the last few decades, the subject of Artificial Intelligence (AI) has increasingly gained traction. As computing power grew faster than expected, computer programs became able to beat expert human players at games such as chess, Jeopardy! and Go. However, the question of whether humans can create artefacts that exhibit some form of (human-like) intelligence is much older than the computer.

What is meant by AI? In defining AI, the ‘artificial’ component of the term is usually not the issue; the crux lies in ‘intelligent’. Alan Turing suggested that the question of whether computers can think is not very useful, and proposed that we can say something about the intelligence of computers without answering the question of what thinking is. He called this ‘the imitation game’; it is now widely known as the ‘Turing Test’ (TT). Everyone who uses the internet will have participated in a type of TT at some point: the CAPTCHA. The human input for this human-response test is used to further train automated recognition systems. With machine learning systems becoming ever better at recognising text, images and so on, the challenge is to find something that all humans can do easily and quickly but that computers struggle with.

Turing’s method for determining AI remains controversial. One reason is that it determines intelligence by reference to human capacities, whereas artefacts can also exhibit behaviour that goes beyond human intelligent behaviour. We might therefore look for a way to understand AI as rationality that does not have to be human-like. If we widen the scope of what counts as intelligent to doing the right thing in the right context at the right time, however, then even a simple thermostat can be seen as intelligent. A further question is whether intelligence is something that has to express itself in behaviour, or whether a form of rationality is central in determining the intelligence of a system. It is difficult to see a thermostat as something that reasons; rationality seems to require some form of reflection on, or understanding of, the process itself. A similar point was made in 1980 by John Searle, who argued against ‘strong AI’, the view that the right inputs and outputs suffice to label an AI as ‘having a mind’ in the sense that humans have a mind.
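The thermostat point can be made concrete with a minimal, hypothetical sketch. The rule below reliably ‘does the right thing at the right time’, yet nothing in it looks like reflection on, or understanding of, what it is doing:

```python
def thermostat(current_temp: float, target_temp: float) -> str:
    """Return a heating command based purely on a threshold comparison."""
    if current_temp < target_temp:
        return "heat on"
    return "heat off"

# The same fixed rule applied blindly to a series of readings.
for reading in [17.5, 19.0, 21.2]:
    print(reading, "->", thermostat(reading, target_temp=20.0))
```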

Apart from these different viewpoints on what artificial intelligence would look like and how we could recognise it, there is also the technical matter of actually creating it: the artificial part of AI. Techniques such as machine learning are used to make progress on this front.
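To give a slightly more concrete sense of what machine learning involves, the sketch below (assuming the scikit-learn library is available) fits a simple classifier to images of handwritten digits. The point is only illustrative: the program is not told how to recognise digits, it infers a mapping from labelled examples.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small greyscale images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)  # a simple linear classifier
model.fit(X_train, y_train)                # 'learning' = fitting parameters to examples

print("accuracy on unseen images:", model.score(X_test, y_test))
```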

Luciano Floridi has argued that while AI has been very successful as a means of augmenting our own intelligence, as a branch of cognitive science interested in producing intelligence it has been a dismal disappointment. Others have also pointed out that current AI is very good at performing tasks with well-defined parameters, which allow it to work through all the different possibilities, but that it performs much worse when presented with something new. The opposite opinion has also been defended, for example by those who claim that the Turing Test has already been passed, or at least that programmers are on the verge of passing it.
