Machine learning is, regrettably, not one of the day-to-day chores assigned to most programmers. However, with data volumes exploding, and high-profile successes such as IBM’s Jeopardy-beating Watson and the recommendation engines of Amazon and Netflix, the odds are increasing that an ML opportunity will knock on your door one day.

From the 1960s to the 1980s, the emphasis in artificial intelligence was on “top-down” approaches, in which expertise from domain experts was somehow transcribed into a fixed set of rules and the relations among them. Often, these would be a series of small “if-then” rules, and the “magic sauce” of expert systems was that they could draw conclusions by automatically chaining together the execution of those rules whose “if” conditions were satisfied. The technology for inferencing worked well enough, but it turned out that very large rulebases were hard to debug and maintain, while modest rulebases didn’t produce many compelling applications (for instance, my expert system for identifying seabirds failed to make me a billionaire).
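
To make the chaining mechanism concrete, here is a minimal forward-chaining sketch in Python; the rules and facts are invented for illustration (a nod to that seabird system), and a real inference engine is considerably more sophisticated:

```python
# A minimal forward-chaining sketch: a rule fires when all of its "if"
# conditions are among the known facts, adding its conclusion as a new fact.
# The rules and facts here are invented for illustration.
rules = [
    ({"has_webbed_feet", "seen_near_ocean"}, "is_seabird"),
    ({"is_seabird", "has_black_back"}, "maybe_cormorant"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:              # keep firing rules until nothing new is learned
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # rule fires; conclusion is now known
                changed = True
    return facts

print(forward_chain({"has_webbed_feet", "seen_near_ocean", "has_black_back"}, rules))
# -> includes "is_seabird" and then, by chaining, "maybe_cormorant"
```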

The late 1980s saw a shift toward algorithms influenced by biological processes, and the rebirth of artificial neural networks (which were actually developed in the early 1960s), genetic algorithms, and such things as ant colony and flocking algorithms. There was a flurry of interest in fuzzy logic, which was particularly well suited to control systems, as it provided continuous response curves.
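
To illustrate that last point, here is a toy fuzzy-control sketch; the membership ramps, setpoints, and fan-speed rules are all invented. Instead of a hard threshold flipping an output on or off, overlapping membership grades blend the rules into a smooth response:

```python
# Toy fuzzy control sketch: set a fan speed from a temperature reading.
# The membership ramps and rule outputs are invented for illustration.
def mu_cold(t):
    # Degree (0..1) to which t is "cold"; ramps down from 20 to 30 C
    return max(0.0, min(1.0, (30 - t) / 10))

def mu_hot(t):
    # Degree (0..1) to which t is "hot"; ramps up from 20 to 30 C
    return max(0.0, min(1.0, (t - 20) / 10))

def fan_speed(t):
    # Blend the rules "if cold, fan = 0%" and "if hot, fan = 100%"
    # by their membership grades: a weighted average, not an if-else.
    w_cold, w_hot = mu_cold(t), mu_hot(t)
    return (w_cold * 0.0 + w_hot * 100.0) / (w_cold + w_hot)

for t in (18, 22, 25, 28, 32):
    print(t, round(fan_speed(t), 1))   # ramps smoothly; no step change anywhere
```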

The 1990s saw increasingly sophisticated algorithms in all these areas and began the march toward today’s world of machine learning, with its emphasis on statistical techniques applied to large datasets. Perhaps the most significant development was the invention of support vector machines, which provide a robust way to determine the hyperplanes that most cleanly separate classes of data in high-dimensional spaces.

As that last sentence demonstrated, it doesn’t take long before the techniques of AI and ML flirt with becoming unintelligible. A sentence can often be a confusing mashup of mathematics, jargon from AI’s five-decade history, and flawed metaphor (artificial neural networks aren’t a great deal like real-world neurons, and genetic algorithms don’t have much in common with meiosis and DNA recombination). But while there is a temptation to use a technique as a black box, I strongly believe that sustained success requires gaining an intuition into the underlying technique. That intuition doesn’t need to be an academic-level understanding of the mathematics, but it does need to be at a level where you can make reasonable guesses as to what type and volume of data you need, what kind of preprocessing you might need and why, and what problems are likely to crop up during processing.

Neural nets and genetic algorithms were hot topics when I was editing “AI Expert,” but 20 years later, the most common StackOverflow.com questions about these techniques treat them as black boxes and often reveal misguided use cases. Genetic algorithms, in particular, seem to be woefully misunderstood. If you’re thinking about solving a problem with a GA, please buy a copy of Goldberg’s “Genetic Algorithms in Search, Optimization, and Machine Learning” or Mitchell’s “An Introduction to Genetic Algorithms,” and spend two days reading before you begin coding. I guarantee you’ll get results faster than you would without understanding the model.
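
For the record, the model those books teach looks roughly like this minimal sketch, in which a population of candidate solutions is improved by repeated selection, crossover, and mutation. The fitness function here (maximize the number of 1-bits) and every parameter are invented for illustration:

```python
import random

# Minimal genetic algorithm sketch: evolve bitstrings toward all 1s.
# Population size, rates, and the fitness function are invented for illustration.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.01

def fitness(genome):
    return sum(genome)                       # count of 1-bits

def select(pop):
    # Tournament selection: pick the fitter of two random individuals
    return max(random.sample(pop, 2), key=fitness)

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:point] + b[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best), best)                   # typically at or near all 1s
```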

I think it’s fair to say that support vector machines are the most complex of the techniques in standard AI/ML (whatever you want to call it). Peter Harrington’s recent book “Machine Learning in Action” does a good job of promoting understanding and intuition while retaining a good deal of the cookbook-style format that has become popular. Of course it doesn’t start with SVMs, but it’s no coincidence that the techniques in the initial chapters introduce the terminology and domain (at the expense of some of the other techniques I’ve mentioned). The code is in Python, which is one of the clearest languages even if it’s not your day-to-day work language; no matter what language you program in, you can understand an algorithm written in Python, although if you port it to another language you naturally lose access to powerful libraries such as NumPy and SciPy. (Python is increasingly popular among scientists, but that’s a topic for another column.)
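
As a taste of the destination, here is a minimal classification sketch using the scikit-learn library rather than coding the algorithm directly as the book does; the six training points and their labels are invented:

```python
# A minimal SVM classification sketch using scikit-learn's SVC; the 2-D
# training points and labels are invented for illustration.
from sklearn import svm

X = [[0, 0], [1, 1], [0, 1], [8, 8], [9, 9], [8, 9]]   # two well-separated clusters
y = [0, 0, 0, 1, 1, 1]                                  # class labels

clf = svm.SVC(kernel="linear")   # linear kernel: a flat separating hyperplane
clf.fit(X, y)

print(clf.predict([[2, 2], [7, 7]]))   # -> [0 1]
print(clf.support_vectors_)            # the few points that pin down the margin
```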

A simple heuristic for choosing among the major AI algorithms: if the problem is a set of rules that change over time with many interdependencies, consider an expert system (though it’s probably easier just to write the code). If it’s finding the best parameters for a function, consider a genetic algorithm. If it’s finding a winning move in a complex sequential game, consider tree search with pruning (or the infuriatingly successful Monte Carlo Tree Search). If it’s pattern recognition and classification, try an SVM or, if you’re feeling nostalgic, a neural network.
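
For the tree-search case, here is a minimal alpha-beta pruning sketch; the hand-built game tree and its leaf scores are invented stand-ins for a real move generator and position evaluator:

```python
# Minimal alpha-beta pruning sketch on a hand-built game tree: interior nodes
# are lists of children, leaves are invented payoff scores for the maximizer.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):           # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # the opponent would never allow this
                break                        # ...so prune the remaining children
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [2, [9, 1]], [6, 4]]         # a tiny invented game tree
print(alphabeta(tree, True))                 # -> 4; the [9, 1] subtree is pruned
```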

One way or the other, if you have the opportunity, seize it. It’s always a thrill to write a program that solves a user’s problem, but it is pure joy to write a program that solves new ones.

Larry O’Brien is a technology consultant, analyst and writer. Read his blog at www.knowing.net.