Characterisations of
Artificial Intelligence

Artificial Intelligence is not an easy science to describe, as it has fuzzy borders with mathematics, computer science, philosophy, psychology, statistics, physics, biology and other disciplines. It is often characterised in a number of ways, some of which are given below. I'll use these characterisations to introduce various important issues in AI.

1.1 Long Term Goals

Just what is the science of Artificial Intelligence trying to achieve? At a very high level, you will hear AI researchers categorised as either 'weak' or 'strong'. The 'strong' AI people think that computers can achieve consciousness (although they may not be working on consciousness issues). The 'weak' AI people don't go that far. Other people talk of the difference between 'Big AI' and 'Small AI'. Big AI is the attempt to build robots with intelligence equal to that of humans, such as Lieutenant Commander Data from Star Trek. Small AI is all about getting programs to work for small problems and trying to generalise the techniques to work on larger problems. Most AI researchers don't worry about things like consciousness and concentrate on some of the following long term goals.
Firstly, many researchers want to:
  • Produce machines which exhibit intelligent behaviour.
Machines in this sense could simply be personal computers, or they could be robots with embedded systems, or a mixture of both. Why would we want to build intelligent systems? One answer appeals to the reasons why we use computers in general: to accomplish tasks which, if we did them by hand, would be error prone. For instance, how many of us would not reach for our calculator if required to multiply two six digit numbers together? If we scale this up to more intelligent tasks, then it should be possible to use computers to do some fairly complicated things reliably. This reliability may be very useful if the task is beyond some cognitive limitation of the brain, or when human intuition is counter-productive, such as in the Monty Hall problem described below, which many people - some of whom call themselves mathematicians - get wrong.

The Monty Hall Problem
Imagine you're on a TV game show called 'Let's Make a Deal', hosted by Monty Hall. You're shown three doors and Monty says: "Behind one is the big cash prize, behind the others is nothing. Please choose a door". After you choose a door, Monty says "Now I know where that prize is...", and he opens one of the other two doors, behind which there is nothing. Monty does this every week (he's very dramatic). This leaves only two doors shut: the one you chose and one other. Finally Monty asks: "OK, do you want to change your mind and choose the other door?"
Should you?
Q. Would an AI program, sufficiently programmed with probability theory and given a correct specification of the problem, get the answer wrong?
Click HERE for links to the Monty Hall problem.
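It's easy to check the answer empirically. Below is a minimal Monte Carlo simulation sketch in Python (my own illustration - the door numbering and the trial count are arbitrary choices), which estimates the win rate with and without switching:

    import random

    def monty_hall_trial(switch):
        """Play one round; return True if the player wins the prize."""
        doors = [0, 1, 2]
        prize = random.choice(doors)
        choice = random.choice(doors)
        # Monty opens a door that is neither the player's choice nor the prize.
        opened = random.choice([d for d in doors if d != choice and d != prize])
        if switch:
            choice = next(d for d in doors if d not in (choice, opened))
        return choice == prize

    trials = 100000
    for switch in (False, True):
        wins = sum(monty_hall_trial(switch) for _ in range(trials))
        print(f"switch={switch}: win rate ~ {wins / trials:.3f}")

Sticking wins about a third of the time; switching wins about two thirds. So yes, you should switch - and a program reasoning correctly with probability theory gets this right, even where human intuition rebels.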
Another reason we might want to construct intelligent machines is to enable us to do things we couldn't do before. A large part of science is already dependent on the use of computers, and more intelligent applications are increasingly being employed. The potential for intelligent software to extend our abilities is not limited to science, of course, and people are working on AI programs which can have a creative input to human activities such as composing, painting and writing.
Finally, in constructing intelligent machines, we may learn something about intelligence in humanity and other species. This deserves a category of its own. Another reason to study Artificial Intelligence is to help us to:
  • Understand human intelligence in society.
AI can be seen as just the latest tool in the philosopher's toolbox for answering questions about the nature of human intelligence, following in the footsteps of mathematics, logic, biology, psychology, cognitive science and others. Some obvious questions that philosophy has grappled with are: "We know that we are more 'intelligent' than the other animals, but what does this actually mean?" and "How many of the activities which we call intelligent can be replicated by computation (i.e., algorithmically)?"
For example, the ELIZA program discussed below is a classic example from the sixties where a very simple program raised some serious questions about the nature of human intelligence. Amongst other things, ELIZA helped philosophers and psychologists to question the notion of what it means to 'understand' in natural language (e.g., English) conversations.

The ELIZA Program
ELIZA was one of the earliest AI programs, and implemented some notions of Rogerian psychotherapy (whereby the patient is never given answers to questions, only more questions based on previous ones and prompts for more information). ELIZA worked by simple linguistic translation of statements such as "I'm feeling ill" into questions such as "Are you really feeling ill?" and statements such as "I'm sorry you are feeling ill". It was remarkably successful, and did fool some people into believing they were conversing with a human psychotherapist.
Have a go with an online version of ELIZA HERE. Read the original paper HERE.
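At its core, ELIZA is just pattern matching and substitution. Here is a minimal sketch of the idea in Python - the rules below are invented for illustration and are much cruder than Weizenbaum's original script:

    import re

    # Toy ELIZA-style rules: (pattern, response template), tried in order.
    RULES = [
        (re.compile(r"i'?m feeling (.*)", re.I), "Are you really feeling {0}?"),
        (re.compile(r"i'?m (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    ]

    def respond(statement):
        for pattern, template in RULES:
            match = pattern.match(statement.strip())
            if match:
                return template.format(match.group(1).rstrip(".!"))
        return "Please tell me more."

    print(respond("I'm feeling ill"))   # Are you really feeling ill?
    print(respond("I need a holiday"))  # Why do you need a holiday?

The striking thing is how little machinery is needed to sustain the illusion of understanding, which is precisely what worried the philosophers.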
Having stated that AI helps us understand the nature of human intelligence in society, we should note that AI researchers are increasingly studying multi-agent systems, which are, roughly speaking, collections of AI programs able to communicate and cooperate/compete on small tasks towards the completion of larger tasks. This means that the social, rather than individual, nature of intelligence is now a subject within range of computational studies in Artificial Intelligence.
Of course, humans are not the only life-forms, and the question of life (including intelligent life) poses even bigger questions. Indeed, some Artificial Life (ALife) researchers have grand plans for their software. They want to use it to:
  • Give birth to new life forms.
A study of Artificial Life will certainly throw light on what it means for a complex system to be 'alive'. Moreover, ALife researchers hope that, given time, intelligent behaviour will emerge from their artificial life-forms, much as it did through human evolution. Hence, there may be practical applications of an ALife approach. In particular, evolutionary algorithms (where programs and parameters are evolved to perform a particular task, rather than to exhibit signs of life) are becoming fairly mainstream in AI.
A less obvious long term goal of AI research is to:
  • Add to scientific knowledge.
This is not to be confused with the applications of AI programs to other sciences, discussed later. Rather, it is worth pointing out that some AI researchers don't write intelligent programs and are certainly not interested in human intelligence or breathing life into programs. They are really interested in the various scientific problems that arise in the study of AI. One example is the question of algorithmic complexity: how badly will a particular algorithm perform at solving a particular problem (in terms of the time taken to find a solution) as the problem instances get bigger? These kinds of studies certainly have an impact on the other long term goals, but the pursuit of knowledge itself is often overlooked as a reason for AI to exist as a scientific discipline. We won't be covering issues such as algorithmic complexity in this course, however.

1.2 Inspirations

Artificial Intelligence research can be characterised in terms of how the following question has been answered:
"Just how are we going to get a computer to perform intelligent tasks?"
One way to answer the question is to say that:
  • Logic makes a science out of various forms of reasoning, which play their part in intelligence. So, let's build our programs as implementations of logical theories.
This has led to the use of logic - drawing on mathematics and philosophy - in a great deal of AI research. This means that we can be very precise about the algorithms we implement, write our programs in very clear ways using logic programming languages, and even prove things about the programs we produce.
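To give a flavour of the logic-based approach, here is a minimal forward-chaining sketch over propositional Horn clauses, written in Python for continuity with the other examples in these notes (a logic programming language would express it even more directly); the rules themselves are invented:

    # Toy Horn-clause rules: (premises, conclusion). The premises
    # jointly imply the conclusion. These rules are invented.
    RULES = [
        ({"human"}, "mortal"),
        ({"mortal", "greek"}, "in_a_syllogism"),
    ]

    def forward_chain(facts):
        """Repeatedly apply the rules until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain({"human", "greek"}))
    # {'human', 'greek', 'mortal', 'in_a_syllogism'}

Because the program is a direct implementation of the logical theory, proving properties of the program largely reduces to proving properties of the theory.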
However, while it's theoretically possible to do certain intelligent things (such as prove some easy mathematical theorems) with programs based on logic alone, such methods are held back by the very large search spaces involved. People began to think about heuristics - rules of thumb - which they could use to enable their programs to get jobs done in a reasonable time. They answered the question like this:
  • We're not sure that humans reason with perfect logic all the time, but we are certainly intelligent. So, let's use introspection and tell our AI programs how to think like us.
In answering this question, AI researchers started building expert systems, which encapsulated factual, procedural and heuristic knowledge about particular domains. A good example of an early expert system is MYCIN, described below.

The MYCIN Program
MYCIN was an expert system designed to diagnose bacterial blood diseases. Through extensive interviews with medics who were expert in this domain, the MYCIN team extracted around 450 rules for determining the cause of a disease. Using uncertainty reasoning to model these rules, MYCIN performed as well as some experts and, on the whole, outperformed junior doctors.
A criticism of the lack of common sense in expert systems by John McCarthy - using MYCIN as a case study - is HERE.
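To give a flavour of how such rules and their uncertainty might be encoded, here is a toy sketch in Python. The rules, findings and numbers below are invented for illustration (they are not from MYCIN's actual rule base); only the combination formula is the standard one for positive certainty factors:

    # Toy MYCIN-flavoured rules: (required findings, conclusion, certainty
    # factor of the rule). All details here are invented for illustration.
    RULES = [
        ({"gram_negative", "rod_shaped"}, "e_coli", 0.7),
        ({"gram_negative", "anaerobic"}, "bacteroides", 0.6),
    ]

    def combine(cf_old, cf_new):
        """Standard combination of two positive certainty factors."""
        return cf_old + cf_new * (1 - cf_old)

    def diagnose(findings):
        belief = {}
        for required, conclusion, cf in RULES:
            if required <= findings:  # all required findings are present
                belief[conclusion] = combine(belief.get(conclusion, 0.0), cf)
        return belief

    print(diagnose({"gram_negative", "rod_shaped", "anaerobic"}))
    # {'e_coli': 0.7, 'bacteroides': 0.6}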
Using introspection also led to the development of planning programs (we plan our actions), reasoning programs (we make inferences about the world), learning programs (we learn and discover new things and adapt to our situations), and natural language processing (we communicate our ideas).
If we stop thinking about how we behave intelligently, and think about why we have intelligent abilities, then we can answer the AI question in the following manner:
  • Our brains make us intelligent, so we can try to mimic the biological activity of the brain to produce intelligent programs.
Given that biologists tell us the brain is made up of networks of billions of neurons, work on simulating such networks started the subject of artificial neural networks (ANNs) (often shortened to just neural networks). It has many overlaps with statistics and has had a stop-start history, as described below.

Trends in AI Research
As in all sciences, funding can greatly affect trends. Funding of AI research has had a turbulent history. For instance, the Lighthill report in Britain was highly critical of AI research, and as a result, funding for AI projects in the UK was almost wiped out for a decade. In general, extreme claims about either how good or how bad AI is often end in reduced funding. People pointing out so-called fundamental limits of AI research can get funding agencies to withhold money. On the other hand, when people make wild claims about how well their system will perform and their project then fails, funding may be slashed across the board.
In some cases, trends in AI are affected as much internally as externally. In particular, in a book on perceptrons (simple neural networks), Minsky and Papert showed some fundamental limitations of the learning ability of perceptrons. This was unfairly projected onto the rest of neural network research and led to a 'winter' in neural network research for roughly all of the 1970s and early 80s, until the subject was rejuvenated by the application of work done by physicists and psychologists who were less affected by the trend.
A web page on the history and limitations of perceptrons is HERE.
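To see the limitation concretely, here is a minimal perceptron trained with the classic learning rule, sketched in Python. It masters AND, which is linearly separable, but can never get XOR completely right - the kind of result Minsky and Papert made precise:

    def train_perceptron(samples, epochs=100, lr=0.1):
        """Train a two-input perceptron; return the learned classifier."""
        w0, w1, bias = 0.0, 0.0, 0.0
        for _ in range(epochs):
            for (x0, x1), target in samples:
                output = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
                error = target - output
                w0 += lr * error * x0
                w1 += lr * error * x1
                bias += lr * error
        return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + bias > 0 else 0

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    for name, data in (("AND", AND), ("XOR", XOR)):
        classify = train_perceptron(data)
        correct = sum(classify(*inputs) == target for inputs, target in data)
        print(f"{name}: {correct}/4 correct")

AND comes out 4/4; XOR can never do better than 3/4, however long we train, because no single line separates its positive and negative examples. Multi-layer networks overcome this, which is partly why the field revived.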
Noting that our brains didn't just leap into existence, we could suggest another answer to the question:
  • Our genes, which build our intelligent brains, have evolved through natural selection. So, let's try to evolve intelligent programs.
Clearly, evolution has solved myriad problems of how best to build an organism to survive in a particular environment. Hence, evolutionary approaches to problem solving, including genetic algorithms and genetic programming, have been studied in AI. Also, mimicking evolution has been an important tool for some researchers interested in various aspects of life (ALife).
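A minimal genetic algorithm sketch on the classic 'one-max' toy problem (evolve a bit-string whose fitness is the number of 1s it contains) shows the essential loop of selection, crossover and mutation; all the parameters here are arbitrary choices for illustration:

    import random

    LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

    def fitness(bits):
        return sum(bits)  # one-max: count the 1s

    def crossover(a, b):
        cut = random.randrange(1, LENGTH)  # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(bits):
        return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

    population = [[random.randint(0, 1) for _ in range(LENGTH)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Fitness-proportional (roulette-wheel) selection of a parent pool.
        parents = random.choices(population,
                                 weights=[fitness(p) + 1 for p in population],
                                 k=POP_SIZE)
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE)]

    best = max(population, key=fitness)
    print(f"best fitness: {fitness(best)}/{LENGTH}")

The same loop, with a representation for programs rather than bit-strings, is essentially genetic programming.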
Our intelligent brains evolved through survival of the fittest in the physical system known as Earth. Some argue that our true intelligence is borne out by our ability to survive in a dynamic, dangerous world, not our ability to add up or play chess. Certainly, in terms of evolutionary epochs, we took longer to learn to stand up than we did to learn how to translate French into German.
Hence, maybe we should answer the question like this:
  • We should build robots able to survive in the real world, from which intelligent behaviour will emerge over time.
Hence robotics and vision have featured in AI research. In particular, behaviour-based robotics, as proposed by Rodney Brooks of MIT, aims to see what intelligent behaviours arise from the implementation of basic behaviours such as following walls and avoiding obstacles, rather than from higher functions such as perception, learning and reasoning.
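The flavour of the approach can be seen in how an action gets chosen: simple behaviours are layered, and the highest-priority behaviour whose trigger fires takes control. Here is a toy sketch in Python (the sensor fields and actions are invented, and Brooks's actual subsumption architecture is a network of finite state machines, not a priority list):

    from dataclasses import dataclass

    @dataclass
    class Sensors:
        obstacle_ahead: bool
        wall_on_right: bool

    # Each behaviour proposes an action, or None if its trigger doesn't fire.
    def avoid_obstacles(s):
        return "turn_left" if s.obstacle_ahead else None

    def follow_wall(s):
        return "forward" if s.wall_on_right else None

    def wander(s):
        return "forward"  # default behaviour, always fires

    BEHAVIOURS = [avoid_obstacles, follow_wall, wander]  # highest priority first

    def select_action(s):
        for behaviour in BEHAVIOURS:
            action = behaviour(s)
            if action is not None:
                return action

    print(select_action(Sensors(obstacle_ahead=True, wall_on_right=True)))   # turn_left
    print(select_action(Sensors(obstacle_ahead=False, wall_on_right=True)))  # forward

There is no world model and no reasoning here; any intelligent-looking behaviour emerges from the interaction of these reflexes with the environment.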
We may live in the physical world, but we are not alone. We build organisations and work in teams to achieve much more than we could do individually. Hence, another way to answer the question is like this:
  • We should write individual intelligent agents able to interact with each other cooperatively and competitively in order to achieve intelligent tasks.
This has led to the introduction of a relatively new aspect of Artificial Intelligence research, namely the study of multi-agent systems. As mentioned previously, these comprise numerous agents each with their own abilities, beliefs and tasks to perform, which interact with each other. The agents cooperate (or compete) towards the completion of a larger overall goal.
While we've been worrying about how to get computers to perform intelligently like humans, computer science has advanced and computer hardware has become very fast. Hence, to some extent, we can stop thinking too much about human intelligence and start playing to the advantages of the computer, answering the question thus:
  • We should design computational models for brute force approaches to intelligence, and fine tune our programs to be ever more efficient.
Nowhere has the effect of brute force computation on performance been seen more than in computer chess. The latest episode in this story took place in Bahrain in 2002, as described below.

The Brains in Bahrain
Forget Deep Blue: Deep Fritz is currently the number one chess program, and in 2002 it took on Vladimir Kramnik, the world (human) number one, in a competition which the organisers nicknamed "the Brains in Bahrain". The match was drawn 4-4 and Kramnik gained a lot of respect for the program. To quote the organisers: "Well, the last time an opponent escaped from Kramnik with a 21-move draw with the black pieces, it was Garry Kasparov!"
A website on the latest competition is HERE.
Computer chess still manages to stir up emotions in Artificial Intelligence with many people saying that it isn't AI at all (because it's just brute force), and others disagreeing strongly. For instance, Drew McDermott has stated that: "Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings."
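The brute force at the heart of chess programs is minimax search, made feasible by alpha-beta pruning, which safely ignores lines of play that cannot affect the result. Here is a minimal sketch on an invented toy game tree (real engines add move ordering, transposition tables and much more):

    def alphabeta(node, maximising, alpha=float("-inf"), beta=float("inf")):
        """Minimax value of a tree whose leaves are static evaluations."""
        if isinstance(node, (int, float)):  # leaf node
            return node
        if maximising:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:  # prune: the minimiser will avoid this line
                    break
            return value
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

    # Depth-2 toy tree: we choose a branch, then the opponent picks a leaf.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(alphabeta(tree, maximising=True))  # 3

With deep search, fast hardware and a good static evaluation function, this simple idea is enough to play chess at world championship level.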

1.3 Methodology Employed

If you're reading some fairly old articles on AI, you may come across the notion of 'neats' and 'scruffies', which refer to two different methodologies in AI.

  • Neats approach AI by grounding their programs firmly in mathematical rigour: their programs can often be described in logics and things can even be proved about them.

  • Scruffies approach AI by writing some programs and seeing which methods work best computationally.
It has long been accepted that both approaches are valid, and indeed AI researchers should use both methodologies. It is worth bearing in mind that if you claim that your AI program does intelligent things, but it has very little theory behind it, then it will most likely be viewed in the AI community as a 'big hack'. Ideally, the program should be based on a well established technique, but if your techniques are new, they should be explained theoretically, away from details of the implementation. On the other hand, if you have only theoretical results about proposed AI algorithms, then it is difficult to justify their validity without implementing the techniques and trying them out on some test problems. People learned very early on in AI research that something being theoretically possible doesn't mean that an implementation will succeed in practice.
I prefer to describe approaches to AI as 'scientific' and 'technological'. The scientific approach is to write down a hypothesis, such as: "this algorithm performs faster than this one on these problems", perform some experimentation and demonstrate that the hypothesis is true or false. Good science like this is difficult to fault, but progress of this kind can take a long time when the aim is to automate tasks requiring a great deal of intelligence. Having said that, if AI techniques are going to be employed in industry and other sciences, particularly in safety critical circumstances, then the scientific approach is ultimately the only acceptable one.
The technological approach is to take on larger, less well defined problems and to combine systems, possibly as multiple programs running at once (e.g., multi-agent systems). Often, as the intelligent task is so complicated, it is a success just to show that it can be automated at all. This was the case in the 'look-ma-no-hands' period of AI research, where researchers picked off many intelligent tasks (such as mathematical integration problems). This captures the imagination, in terms of AI programs able to do things only humans could do before, but such technological approaches have to be robust (so that they are shown to work in a variety of situations) and the theory behind them has to be understandable.
Many early (and some current) technological attempts suffered from two problems: firstly, they were restricted to microworlds, where the only problems they could solve were 'toy problems', and the methods didn't scale up at all to the larger problems which might occur in a real application of the program. Secondly, the methods behind the intelligence were highly fine-tuned and difficult to understand or generalise. When people find out that the 'intelligent' task can only be accomplished on small examples, and that the methods employed are ad-hoc and fine-tuned, this tends to lessen enthusiasm for the technological approach. Having said that, projects such as the Interactive Museum Tour-Guide Robot described below show how technological projects which capture the imagination can scale up, be clearly described (the write-up won a best paper award at the National Conference on AI (AAAI) in 1998) and be rigorously grounded in established technology.

The Interactive Museum Tour-Guide Robot
This robot was demonstrated in the Deutsches Museum in Bonn, where it guided hundreds of visitors through the museum over a period of six days. It navigated at high speeds, reliably avoiding objects and coping with dense crowds. Once guided to the exhibits, museum visitors could choose to hear and see a variety of mixed-media descriptions of the objects on show and interact with the robot's on-board panel. Using a variety of probabilistic reasoning, planning and high-level, first-order problem-solving abilities, the robot travelled over 18 kilometres around the museum during the six days, and guided more than 2000 people. Museum attendance rose by at least 50%.
Read the paper about this project HERE.
See this article for Alan Bundy's 1981 note on how "smart, but casual" should be the norm in AI education.

1.4 General Tasks to Accomplish

Once you've worried about why you're doing AI, what has inspired you and how you're going to approach the job, then you can start to think about what task it is that you want to automate. AI is so often portrayed as a set of problem-solving techniques, but I think the relentless shoe-horning of intelligent tasks into one problem formulation or another is holding AI back. That said, we have identified a number of problem solving tasks in AI - most of which have been hinted at previously - which can be used as a characterisation. The categories overlap a little because of the generality of the techniques. For instance, planning could be found in many categories, as it is a fundamental part of solving many types of problem.
This characterisation is important, because after some preliminary material, the characterisation into general problems to solve will be used as the format for the rest of the lecture course.
  • Getting a program to reason rationally.
    General techniques here include automated theorem proving, proof planning, constraint solving and case-based reasoning.

  • Getting a program to learn and discover.
    General techniques here include machine learning, data mining and scientific knowledge discovery.

  • Getting a program to play games.
    General techniques here include minimax search and alpha-beta pruning.

  • Getting a program to communicate with humans.
    General techniques here include speech recognition, natural language understanding and generation (taken together as Natural Language Processing, NLP) and speech synthesis.

  • Getting a program to exhibit signs of life.
    General techniques here include genetic programming and genetic algorithms.

  • Enabling a machine to manoeuvre intelligently in the real world.
    General robotics techniques here include planning, vision and locomotion.

1.5 Generic Techniques Developed

In the pursuit of solutions to various problems in the above categories, various individual techniques have sprung up which have been shown to be useful for solving a range of problems (usually within the general problem category). These techniques are established enough now to have a name and provide at least a partial characterisation of AI. The following list is not intended to be complete, but rather to introduce some techniques you will learn later in the course. Note that some of these overlap with the general techniques above.
  • Forward/backward chaining (reasoning)
  • Resolution theorem proving (reasoning)
  • Proof planning (reasoning)
  • Constraint satisfaction (reasoning)
  • Davis-Putnam method (reasoning)
  • Minimax search (games)
  • Alpha-Beta pruning (games)
  • Case-based reasoning (expert systems)
  • Knowledge elicitation (expert systems)
  • Neural networks (learning)
  • Bayesian methods (learning)
  • Explanation-based (learning)
  • Inductive logic programming (learning)
  • Decision tree (learning)
  • Reinforcement (learning)
  • Genetic algorithms (learning)
  • Genetic programming (learning)
  • STRIPS (planning)
  • N-grams (NLP)
  • Parsing (NLP)
  • Behaviour-based (robotics)
  • Cell decomposition (robotics)

1.6 Representations/Languages Used

Many people are taught AI with the opening line: "The three most important things in AI are representation, representation and representation". While choosing the way of representing knowledge in AI programs will always be a key concern, many techniques now come with well-chosen representations which have been shown to suit them. Along the way, much research has been undertaken into discovering the best ways to represent certain types of knowledge. The way in which knowledge can be represented is often taken as another way to characterise Artificial Intelligence. Some general representation schemes include:

  • First order logic
  • Higher order logic
  • Logic programs
  • Frames
  • Production Rules
  • Semantic Networks
  • Fuzzy logic
  • Bayes nets
  • Hidden Markov models
  • Neural networks
  • STRIPS
Some standard AI programming languages have been developed in order to build intelligent programs efficiently and robustly. These include:

  • Prolog
  • Lisp
  • ML
Note that other languages are used extensively to build AI programs, including:

  • Perl
  • C++
  • Java
  • C

1.7 Application Areas

Individual applications often drive AI research much more than the long term goals described above. Much of AI literature is grouped into application areas, some of which are:
  • Agriculture
  • Architecture
  • Art
  • Astronomy
  • Bioinformatics
  • Email classification
  • Engineering
  • Finance
  • Fraud detection
  • Information retrieval
  • Law
  • Mathematics
  • Military
  • Music
  • Scientific discovery
  • Story writing
  • Telecommunications
  • Telephone services
  • Transportation
  • Tutoring systems
  • Video games
  • Web search engines

1.8 Products

We can make a final characterisation of AI in terms of the successful products (both software and hardware) that are currently in use in both academia and industry.
Software
  • Otter (resolution theorem prover)
  • LambdaClam (proof planner)
  • Progol (inductive logic programming)
  • C4.5 (decision tree learner)
  • HR (automated theory formation)
Hardware
  • Rod Brooks's vacuum cleaner
  • Museum Tour guide
Some older programs/robots include:
Software
  • ELIZA (Natural Language Generation)
  • AM (mathematical theory formation)
  • SHRDLU (Natural Language Understanding)
Hardware
  • SHAKEY

The EQP Theorem Prover
It's not very often that an AI program (other than Deep Blue) makes the pages of the New York Times. When the EQP program solved a tricky mathematical problem known as the Robbins Algebra problem in 1996, the New York Times reported that: "Computers are whizzes when it comes to the grunt work of mathematics. But for creative and elegant solutions to hard mathematical problems, nothing has been able to beat the human mind. That is, perhaps, until now."
They were excited because the problem had remained unresolved since the 1930s, even though various mathematicians had attempted to solve it. After more than ten years of concerted development using automated tools such as EQP, a run lasting around eight days eventually produced a fairly succinct solution. The solution of the Robbins algebra problem remains one of the most important achievements by any AI program. "Isn't that marvellous," said Herbert Robbins, who originally formulated the conjecture.
Read more about EQP's success HERE.


© Simon Colton 2006.
