Wednesday, March 16, 2011

Plug-and-play machine learning

To belabor a point, science fiction is fun in part because of the big-picture concepts that are its province: things like the emergence of synthetic minds, alternative mentalities, and alien modes of thought. It's also an intriguing genre because when it's done right it's a preview of potential coming attractions for our reality. We might not have sentient software at the moment, but software is growing smarter all the time. And it's not just that software is moving out into our flesh-and-blood world in the form of armed combat drones and vehicles that drive themselves; the very platforms that run it continue to grow increasingly capable. By some predictions, in another twenty years we will have laptops that each possess the raw processing power of a human brain.

In my story "Lisa with Child" I look at a world in which the difference between biological life and machinery is growing increasingly blurry, and in which a decommissioned cyberwarfare platform lives out her existence in an intimate mental relationship with the woman who is the center of her being. So how might we get from here to there, to an era in which machines have not replaced humanity but are very much an extension of being human? What are some of the actual steps already moving us forward into a future of symbiotic dreams or invasive nightmares?

I came across one such component of the future while doing research for a Cyber Stride blog post on Microsoft's Infer.NET library of algorithms earlier this year. That particular element of the world to come is the field of machine learning: the development of software that can compare, make judgments, and reach conclusions from incomplete information, and do so in situations in which there is more than one possible interpretation of the existing data.

How does software go about doing this? One approach is machine learning software that uses the same basic epistemological operation the human mind does: inference. In other words, it weighs conditional probabilities against one another in a multi-variable situation to determine likely outcomes. Such an inference problem could be something as arcane and grim as comparing the known rates of a type of cancer against the reliability rates of cancer tests to figure out the odds of actually having that cancer in the event of a positive test result. Or the odds of actually having it despite a negative result. Inference can also be employed in more delightful and intuitive deliberations, such as attempting to determine how likely it is that a friend might enjoy album A, and given that probability, the likelihood of her also enjoying album B.
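To make that cancer-test example concrete, here is a minimal sketch in Python. The base rate and test-accuracy figures below are invented purely for illustration; real numbers would depend on the particular cancer and the particular test.

```python
# Toy Bayesian inference: the probability of actually having a cancer
# given a test result. All numbers below are illustrative assumptions.

prior = 0.01          # assumed base rate: 1% of the population has the cancer
sensitivity = 0.90    # assumed P(positive test | cancer)
false_positive = 0.05 # assumed P(positive test | no cancer)

# Total probability of seeing a positive result at all
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_cancer_given_positive = sensitivity * prior / p_positive
print(f"P(cancer | positive test) = {p_cancer_given_positive:.3f}")  # roughly 0.154

# And the odds of having it despite a negative result
p_negative = (1 - sensitivity) * prior + (1 - false_positive) * (1 - prior)
p_cancer_given_negative = (1 - sensitivity) * prior / p_negative
print(f"P(cancer | negative test) = {p_cancer_given_negative:.4f}")  # roughly 0.0011
```

Even with a fairly accurate test, the low base rate means a positive result still leaves the odds well under one in five, which is exactly the kind of judgment that's hard to make by gut feel.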

The mathematical formulation of this process of conditional probabilities is known as Bayesian inference. The algorithms for working out these operations are complex and bulky. The only reason the human brain can rapidly power through such inference judgments is that it is a massively parallel data-processing system.
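For the curious, the rule at the heart of all of this is Bayes' theorem, which relates the probability of a hypothesis H given observed data D to the quantities we can usually measure directly:

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}$$

In the cancer example above, H is "has the cancer" and D is "the test came back positive."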

Needless to say, writing code for silicon-based computers to carry out such procedures is daunting. Especially when it comes to internet-scale problems like trying to determine relationships or shared tastes within social network groups with millions of members, hundreds of millions of possible connections between them, and who knows how many potential communities or sub-networks with common interests and likes. A similar level of difficulty is faced by online game hosting companies that would like to match players of similar ability in groups based on multiple performance variables, or by spam detection in junk email filters. Other large-scale inference problems arise in the study of complex ecologies and economies.
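The spam-filter case gives a feel for how these problems grow. Here is a toy naive Bayes score in Python; the word probabilities below are made up for illustration, and a real filter would have to estimate them from millions of training messages, which is exactly where the scale problem bites.

```python
# Toy naive Bayes spam scoring. Word probabilities are invented for
# illustration; a real filter would learn them from labeled training mail.

p_spam = 0.4  # assumed prior probability that any given message is spam

# Assumed per-word likelihoods: (P(word | spam), P(word | not spam))
word_probs = {
    "winner":  (0.30, 0.01),
    "meeting": (0.02, 0.20),
    "free":    (0.25, 0.05),
}

def spam_probability(words):
    """Combine the prior and per-word evidence with Bayes' rule."""
    spam_score, ham_score = p_spam, 1 - p_spam
    for w in words:
        if w in word_probs:
            p_w_spam, p_w_ham = word_probs[w]
            spam_score *= p_w_spam
            ham_score *= p_w_ham
    return spam_score / (spam_score + ham_score)

print(spam_probability(["winner", "free"]))   # high: looks like spam
print(spam_probability(["meeting", "free"]))  # lower: the evidence is mixed
```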

Traditionally, creating an analytical engine capable of taking on such problems has meant developers writing thousands of lines of code in order to carry out millions of statistical analysis runs. But programmers are nothing if not clever as a group, and so inference engines have gotten modular.

Or to put it differently: the ability of machines to infer has now become a plug-and-play application.

Microsoft Research has created a rather impressive algorithm library that allows a developer working with one of its .NET programming languages to graft in twenty to thirty lines of customizable code that makes use of the most suitable algorithm in the library to tackle the problem at hand. And it does so by using Bayesian inference to calculate statistical probability distributions for the entire system being modeled in a single operation, rather than running millions of time- and processing-power-consuming brute-force analyses of the contingent probabilities of each set of data points in the system.
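Infer.NET itself is a C# library, so the snippet below isn't Infer.NET code; it's just a hypothetical toy in Python to show the shape of the workflow: declare a small probabilistic model, hand it to an inference engine, and get back a posterior distribution for the whole system. (The enumeration here is exactly the kind of brute force that doesn't scale; Infer.NET's efficient message-passing algorithms are what make the same pattern workable on huge systems.)

```python
# Toy "plug-in" inference engine: declare variables and factors, then
# ask for a posterior. NOT Infer.NET; a brute-force sketch for tiny models.

from itertools import product

def infer(variables, factors, query):
    """Posterior distribution of `query`, by summing over all joint states."""
    totals = {}
    for state in product(*[[(name, v) for v in vals] for name, vals in variables.items()]):
        assignment = dict(state)
        weight = 1.0
        for factor in factors:
            weight *= factor(assignment)
        totals[assignment[query]] = totals.get(assignment[query], 0.0) + weight
    normalizer = sum(totals.values())
    return {value: w / normalizer for value, w in totals.items()}

# Model: does a friend like album B, given that we know she likes album A?
variables = {"likes_A": [True, False], "likes_B": [True, False]}
factors = [
    lambda a: 0.6 if a["likes_A"] else 0.4,          # prior: she probably likes A
    lambda a: (0.8 if a["likes_B"] else 0.2)         # P(likes B | likes A)
              if a["likes_A"]
              else (0.3 if a["likes_B"] else 0.7),   # P(likes B | doesn't like A)
    lambda a: 1.0 if a["likes_A"] else 0.0,          # observation: she does like A
]

print(infer(variables, factors, "likes_B"))  # approximately {True: 0.8, False: 0.2}
```

The appeal of the plug-and-play approach is that the model description stays small and readable while the heavy statistical machinery lives inside the library.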

Rather than needing weeks or months, a developer can now create and integrate a powerful machine learning function for large-scale systems analysis in just a few hours.

Neat!
