Angela Natividad's Live & Uncensored!

08 March 2011

Making the Singularity

A technological singularity is a hypothetical event in which technological progress becomes so rapid that the post-singularity future is qualitatively different and far harder to predict. Many of the best-known writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence, and argue that a post-singularity world would be unpredictable to humans because we simply cannot imagine the intentions or capabilities of superintelligent entities.

Although Claude Shannon made inroads, in general we're finding that it's incredibly difficult to teach robots how to learn. The process involves a number of factors: recognising mistakes, deciding which elements of a situation are worth folding into the refinement of a method, weighing in real time a factor whose importance may change from one situation to the next, and so on.

But we've also discovered that the best way to teach a robot how to learn is to expose it to as many humans as possible - humans willing to teach it how to be better by giving the robot naked insight into their behaviour.

The rock-paper-scissors-playing computer that The New York Times has built is a perfect example of this. Instead of generating moves at random, it gathers data on your decisions so it can "exploit a person’s tendencies and patterns to gain an advantage over its opponent." Throughout the game, the computer tells you what it is "learning."
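
If you're curious how that kind of opponent works under the hood, here's a minimal sketch of the general idea - not the Times' actual code, which I haven't seen, just a toy bot I'm calling PatternBot purely for illustration. It counts which move you tend to throw after your previous one, predicts your most likely next throw, and plays the counter to it.

```python
import random
from collections import defaultdict

MOVES = ["rock", "paper", "scissors"]
# What beats each move: rock loses to paper, paper to scissors, scissors to rock.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}


class PatternBot:
    """Toy rock-paper-scissors opponent that learns the human's tendencies.

    It counts how often each of the human's moves follows their previous move,
    predicts the most likely next move, and plays the counter to it.
    """

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_human_move = None

    def choose(self):
        # With no history yet, fall back to a random move.
        if self.last_human_move is None or not self.transitions[self.last_human_move]:
            return random.choice(MOVES)
        counts = self.transitions[self.last_human_move]
        predicted = max(counts, key=counts.get)  # the human's most likely next throw
        return BEATS[predicted]                  # play whatever beats that prediction

    def observe(self, human_move):
        # Record the transition from the human's previous move to this one.
        if self.last_human_move is not None:
            self.transitions[self.last_human_move][human_move] += 1
        self.last_human_move = human_move


if __name__ == "__main__":
    bot = PatternBot()
    # A human who habitually follows rock with paper gets punished over time.
    for human in ["rock", "paper", "rock", "paper", "rock", "paper"]:
        bot_move = bot.choose()
        bot.observe(human)
        print(f"human: {human:8s}  bot: {bot_move}")
```

The real thing is presumably far cleverer than this little sketch, but the principle is the same: stop guessing at random and start exploiting your habits.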

Even if you don't want to play ro-sham-bo with a machine whose ultimate goal is to be better than you are or ever will be (if it isn't already), you're participating in the intelligence-building of machines all the time. Every time you run a Google search or browse in Chrome, you are teaching Google's "smart" algorithms what you like, what you don't, and how to serve you results faster and more efficiently. You teach them your interests, your compulsive shopping habits. You teach them how you operate in private.

These machines are already smarter than we are if you judge solely by specialisation. (Could you serve search results efficiently and quickly to your best friend? She'd probably have you out on your ass after 8 minutes of your faffing.) The trick is making that intelligence more comprehensive. It is only natural that this challenges us; one of the ideas behind the technological singularity legend is that technology will eventually be unable to advance at its natural pace without help from artificially enhanced minds stronger than organic human minds alone, at which point we won't be able to predict what happens next. It will be bigger than us.

But sometimes mythology becomes reality in a way so banal we don't even realise it's happened. At Le Web, Salim Ismail told me that the singularity has already happened. Every time man built something to extend his own mind or body beyond its native limits (the wheel, harnessing fire, telescopes, graphing calculators), he gave himself superhuman capabilities in the purest sense of the definition.

A supporting anecdote claims that our standard of living is equivalent to that of a medieval inhabitant presiding over 200+ servants. If you had servants to begin with, your iPhone alone would probably have robbed them of most of their jobs.

We are already part machine, now more than ever. Doesn't that warm you to the idea of playing rock-paper-scissors with NYT's baby beastie?

Cyborg image credit, Singularity Symposium. NYT robot image/story credit, Design Taxi.
