An old discussion cropped up at the IEEE SMC UK&RI 6th Conference on Cybernetic Systems today when the session chair, J. S. Quinton, posed the question: can we make intelligent machines? The question was then double-barrelled by asking whether we actually want to make machines intelligent.

The audience was almost evenly split on both topics. Our boat of discussion wasn’t about to sink. The first question quickly turned out to hinge largely on each individual’s definition of intelligence, which, as I gather, is the route the debate often takes. One participant reckoned that any living thing that can manipulate its environment is intelligent. Ants are therefore intelligent; we have already realised machine intelligence without knowing it, through Swarm Intelligence. Another participant recognised that we had already realised machine intelligence in computer games; his definition was also satisfied. So why are we still researching the area? Apparently, intelligence is whatever we cannot yet make a machine do.
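To make the swarm point concrete: the textbook demonstration (not one given at the session) is an ant-colony style simulation, in which agents following trivially simple rules collectively find the shorter of two routes. Here is a minimal Python sketch of that idea; the parameter values and names are entirely illustrative, not anything presented at the conference:

```python
import random

# Two routes of different cost; ants prefer routes with stronger pheromone.
LENGTHS = {"short": 1.0, "long": 2.0}     # route costs (illustrative)
pheromone = {"short": 1.0, "long": 1.0}   # initial trail strength
EVAPORATION = 0.1                          # fraction of trail lost per round
ANTS = 50                                  # colony size per round

for _ in range(30):
    choices = []
    for _ in range(ANTS):
        # Each ant picks a route with probability proportional to pheromone.
        total = pheromone["short"] + pheromone["long"]
        route = "short" if random.random() < pheromone["short"] / total else "long"
        choices.append(route)
    # Evaporate old trail, then reinforce: shorter routes earn more per ant.
    for r in pheromone:
        pheromone[r] *= (1 - EVAPORATION)
    for r in choices:
        pheromone[r] += 1.0 / LENGTHS[r]

share = sum(1 for r in choices if r == "short") / ANTS
print(f"final round: {share:.0%} of ants chose the short route")
```

No single ant “knows” which route is shorter; the knowledge lives in the pheromone trail, which is exactly the sense in which that participant claimed we had built intelligence without noticing.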

Others, including myself, weren’t so optimistic about the prospect of machine intelligence. What about consciousness or free will? Can we make machines conscious? Our chair responded with a nice analogy. Consciousness is similar to the notion of “wetness”. Wetness is a property of water, the molecule created by bonding two atoms of hydrogen to one atom of oxygen; neither element possesses this property on its own. The property of consciousness is similarly created by fusing cells together. Can we mimic consciousness in a machine, then? Well, can we mimic “wetness” using digital circuitry?

With one cartridge smoking on the floor, we reloaded to fire the second shot: supposing we can achieve machine intelligence, do we really want it? Again, the audience was split. Some said “yes”: it can help us in dangerous situations and do all of the things that we don’t want to do. It allows us to “play God”. We could ask intellectual questions and get answers derived through processing power that we could never hope to equal.

If machines do everything we don’t want to do, what will we do? A few thousand people may create the intelligent machines; what will the other six billion do? Reproduce?! What if machine intelligence gets into the wrong hands (politicians), which it would? Would we have a society where machines tell us what we can and cannot do? Suppose our definition of intelligence includes the ability to replicate and evolve: will machines introduce a new type of evolution, and where will they stop? Will there be a conference far into the future consisting of a bunch of machines trying to answer where those bloody nuisances of humans ever disappeared to?

Something that wasn’t mentioned, but that came up during lunch, was responsibility. For a machine to be intelligent, does it have to bear responsibility? Do we even want to go there?

This was a surprisingly fun debate, experienced by a PhD student who’s happy, for now, to keep machine intelligence at the “peripherum” of his research area.