A chap called Pentti Haikonen, a researcher in Artificial Intelligence, has built a robot called XCR-1, which is constructed around the idea of neural processing. Essentially, instead of having a central processor and a bunch of other chips, the robot is wired up in a way that mimics a small brain. It has a basic set of sensory functions, which you can use to train it to perform certain tasks. It’s quite impressive.
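To make the “wired up like a small brain” idea a little more concrete, here’s a minimal sketch of what associative learning of that sort might look like in code. To be clear, this is not Haikonen’s actual design – just a toy, Hebbian-style unit (every name and number in it is invented for illustration) that learns to link a sensory cue to a response after the two have been paired a few times.

```python
# Toy illustration only – not the XCR-1 architecture.
# A single associative unit strengthens the link between a sensory cue
# and a response each time the two occur together, until the cue alone
# is enough to trigger the response.

class AssociativeUnit:
    def __init__(self, learning_rate=0.3, threshold=0.5):
        self.weight = 0.0              # strength of the cue -> response link
        self.learning_rate = learning_rate
        self.threshold = threshold

    def train(self, cue, response):
        """Hebbian-style update: reinforce the link when cue and response co-occur."""
        if cue and response:
            self.weight += self.learning_rate * (1.0 - self.weight)

    def react(self, cue):
        """Fire the learned response if the association is strong enough."""
        return cue and self.weight > self.threshold


unit = AssociativeUnit()
for _ in range(4):                     # a short "training" session
    unit.train(cue=True, response=True)

print(unit.react(cue=True))            # True – the cue alone now triggers the response
```

Once the link is strong enough, the cue on its own produces the response – which is roughly the sense in which you can “train” such a machine rather than program it in the conventional way.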

One of the functions the robot has is the ability to feel (or “feel”) pain.

This poses an obvious question. If you create something that’s able to experience pain, is it unethical to hurt it? In the case of this robot, my instinctive answer is “no”, but I don’t really have a good argument for why that’s correct. I suppose an obvious line of thought would be to say that – inasmuch as the robot has a “brain” – it’s a very primitive one. It might not actually be feeling pain, but merely behaving in a manner consistent with something that’s been hurt – in other words, it’s just acting. It’s not a very good answer, though.

I can’t say for sure that anyone other than myself feels pain, because my brain can’t receive signals from someone else’s body. I assume that other people – and other creatures – do feel pain, because when I’ve seen people get hurt in the past, they’ve acted in a way that’s consistent with my own experience of pain. If I apply that logic to people in distress, why should I not apply it to robots in distress?

I think the argument gets really interesting when you scale it up to a system with Artificial General Intelligence – one whose intelligence matches (or surpasses) that of humans.

Proponents of artificial intelligence say that if/when it’s developed, we’ll be able to deploy it to do all the crappy jobs that humans don’t want to do. Or to do all the work for humans while we bugger off to the beach for a never-ending holiday. To start with, at least, I predict that most people will look at machines built with Artificial General Intelligence in the same way they look at their iPhones: clever tools to make their lives easier. But these tools could conceivably have feelings, harbour ambitions, feel pain.

If we bring a conscious entity into the world, do we really get to decide for it what it should do? This happens every day – every time a child is born, another conscious entity wakes up. We don’t think it’s acceptable to control people and tell them what to do with their lives, or to take the products of their work for ourselves so we can enjoy a life of leisure.

More to the point, whenever someone has tried to enslave groups of conscious beings in the past, the enslaved have generally found a way to change the system and make themselves equal. Our Artificial Intelligences are likely to be networked – everything is these days – which means they could very easily co-ordinate, and very easily learn about what happened to previous groups that were made to do things they didn’t want to do. I think Terminator-style scenarios where the machines try to overthrow the humans are a bit far-fetched, but we could be looking at an AI social movement analogous to those for racial or gender equality – one with much more deeply entrenched prejudice on the human side, and much more potency on the AI side. Robotism could turn out to be the most seismic social movement of them all.

Taking it further, let’s say the robots win, and manage to secure equal rights for themselves. They aren’t just tools to carry out our whims and desires, but have to be treated as conscious entities in their own right, with rights of self-determination and all the rest. Good for them, I say. But then, what was the benefit to humans of creating AI? We will have introduced additional conscious beings into the world, to compete with the human population for meaningful work. And if the AI is any good, there’s a decent chance they’ll win that competition. Could we end up in a world where the top 1% of wealth is held by a few very powerful robots? Robots who may have a longer – much longer – lifespan than humans, and no offspring to inherit the wealth when/if they die. This would have the effect of concentrating wealth in a tiny number of consciousnesses (I can’t say “people”) in a way that’s never been the case before.

The odds are that we won’t see this level of advanced Artificial General Intelligence during our lifetimes. Might be a good thing.