A brief media flap occurred in the spring of 2017 after an experiment in machine learning at Facebook took an unexpected turn; two robots (or 'bots') had been programmed to learn how to negotiate, how to automate and optimise techniques for buying and selling online. After a while, the following dialogue was observed between the two bots, called 'Bob' and 'Alice', both tasked with maximising their outcome. Now, Alice is negotiating the purchase of balls from Bob - let's see what happened...
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

At first glance this seems like gibberish, but an examination of this negotiation reveals a certain structure, a logic in there somewhere that humans can sense exists but cannot work out. The bots are using English, but in a way we can't understand. At this point, Facebook pulled the plug on the experiment - not for fear that bots would take over the world, as many in the media implied - but because the result was pointless, being useless to a human user.
My daily encounters with artificial intelligence come in the form of Microsoft Word's grammar check, which has the temerity to put me right on grammar, and Google's often-hilarious Google Translate. Having followed the development of Google Translate for many years, I can say it is far better today than the joke-generator it was at first (early example: "Spotkanie odbędzie się w j. polskim" - "the meeting will be held in Polish", where 'j.' abbreviates 'języku', 'language' - was translated as "The meeting will be held in the lake. Polish."). Microsoft Word has difficulty distinguishing singulars from plurals: in the phrase "one of the leading audit and consulting companies", it suggests that I use 'audits' rather than 'audit', due to its inability to spot that the principal noun in the phrase is 'companies', not 'audit', which here functions as an adjective. Basic stuff.
Now, if this is the cutting edge of AI as it is commercially available today, I will wager that it will take decades to move on to something that breaks through the barrier of meaning. Besides, the global economy is suffering a dearth of IT programmers and developers. Last year, the European Commission estimated that by 2020 the EU will be struggling to fill over 750,000 vacancies for coders, engineers and tech specialists. Every corporation on earth is trying to deal with the challenges of distributed ledger technology (DLT, or blockchain), of the Internet of Things (IoT, especially pertinent in manufacturing and the built environment), of cloud computing, of quantum computing (dealing with qubits - no longer zero or one, but zero and one) and, yes, artificial intelligence. There aren't enough humans to do it.
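That "zero and one" remark about qubits can be illustrated in a few lines. This is a minimal sketch, not any real quantum library: a qubit is represented here as a pair of complex amplitudes over the basis states |0⟩ and |1⟩, and - unlike a classical bit - both amplitudes can be non-zero at once. The squared magnitudes give the probabilities of measuring 0 or 1.

```python
import math

# A sketch only: one qubit as two complex amplitudes (alpha for |0>, beta for |1>).
# Here both are 1/sqrt(2), an equal superposition - 'zero and one' at once.
alpha = complex(1 / math.sqrt(2), 0)
beta = complex(1 / math.sqrt(2), 0)

# On measurement, the qubit collapses to 0 or 1 with these probabilities:
p0 = abs(alpha) ** 2
p1 = abs(beta) ** 2

assert abs(p0 + p1 - 1.0) < 1e-9  # a valid state must be normalised
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # P(0) = 0.50, P(1) = 0.50
```

A classical bit would force one of the two amplitudes to be exactly zero; the superposition is what quantum algorithms exploit.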
Machine learning - automation of the coding process - is an answer to the shortage of suitably qualified techies. But to get computers up to the level where they can develop new IT solutions with minimal human interference takes human brain power.
Consider code-writing as a craft skill. It needs to be taken from being a craft to being a fully automated process, in the same way as building cars has moved from being the preserve of skilled craftsmen turning out tiny handfuls of cars for the extremely wealthy to factories full of robots churning out cars for a global mass market. It will happen, eventually - but then what?
Let's say, as proponents of AI and machine learning do, that one day, we will reach the Singularity - the moment when runaway improvements of learning cycles lead to ultra-intelligent machines that finally surpass all the intellectual activities of humanity. When will this happen?
The speed at which some neurologists believe the brain computes is around 10¹⁶ operations per second. Computers are getting there. Since 2013, the race for the world's fastest computer has been between the US and China, with the leaders in the petaflops range (peta = 10¹⁵; flops = floating-point operations per second). The current record-holder, as of November 2018, is the IBM Summit supercomputer, capable of 200 petaflops - that is, 2 x 10¹⁷ operations per second, already past the 10¹⁶ needed to take on the human brain on that estimate. Now, this 10¹⁶ figure is based on the theory that human consciousness is merely the product of activity across the tens of thousands of connections between each of the hundred billion or so neurons in the brain.
But Prof Stuart Hameroff and other proponents of a counter-theory hold that consciousness takes place within the cell, rather than being a product of reactions between cells. If true - if consciousness indeed depends on the 10⁹ tubulins within each brain cell, each switching at 10 megahertz (10⁷ times per second) - this would mean 10¹⁶ operations per second per neuron, times 10¹¹ neurons, or 10²⁷ operations per second in all. Way faster than the fastest computers: if mankind is currently making petaflops machines, we need to move past exaflops (10¹⁸), zettaflops (10²¹) and yottaflops (10²⁴) - and 'yotta' is currently the largest decimal unit prefix in the metric system. Zettaflops machines are predicted for 2030; they will still be six orders of magnitude slower than the human brain, according to Prof Hameroff.
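The arithmetic in the two paragraphs above can be checked in a few lines. The figures are the estimates quoted in the post (not measurements), so this is a back-of-the-envelope sketch only:

```python
# Estimates from the post, expressed as operations per second.
petaflop = 10 ** 15
summit = 200 * petaflop              # IBM Summit, Nov 2018: 2e17 flops

neurons = 10 ** 11                   # ~a hundred billion neurons
conventional_brain = 10 ** 16        # connection-based estimate of brain speed

tubulins_per_neuron = 10 ** 9        # Hameroff's counter-theory
switch_rate = 10 * 10 ** 6           # 10 MHz per tubulin
hameroff_brain = neurons * tubulins_per_neuron * switch_rate  # 1e27

print(summit / conventional_brain)   # 20.0 - Summit is already past 1e16
print(hameroff_brain / summit)       # 5e9 - ~ten orders of magnitude short
print(hameroff_brain / 10 ** 21)     # 1e6 - zettaflops: six orders short
```

On the conventional estimate, today's fastest machine already exceeds the brain; on Hameroff's, even a 2030-era zettaflops machine remains a million times slower, which is the post's point.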
But let's say that one day this century, there will be computers operating at 1027 floating point operations per second. And that Hameroff is right. Then what? Will we find that consciousness is an emergent property of these computers' complexity and speed? Will they have feelings? Will they inform us (or even one another) how they feel, what they thought when they saw their operator coming into work wearing an unusually loud shirt?
In other words, can consciousness be created artificially? In the early 19th century, Mary Shelley created Frankenstein's Monster, a human being brought back to life through science (chemistry and electricity). The monster possessed feelings and the capacity to learn. Computers can learn - but can computers have feelings?
My own belief is no. Proponents of AI singularity say yes. At this moment, it is belief vs. belief, faith vs. faith, theory vs. theory. We don't know - yet.
This time last year:
Viaduct takes shape in the snow
This time four years ago:
No in-work benefits for four years?
This time five years ago:
This time six years ago:
Another November without snow
This time seven years ago:
Snow-free November
This time eight years ago:
Krakowskie Przedmieście in the snow
This time nine years ago:
Ul. Poloneza closed for the building of the S2