
Understanding Inductive Machine Learning

On what grounds can we assume the future will resemble the past?

Machine learning is based on inductive inference. Unlike deductive inference, where the truth of the premises guarantees the truth of the conclusion, a conclusion reached via induction cannot be guaranteed to be true. Inductive inferences are therefore inherently probabilistic.

In the context of classification, we use training data, collected in the past, and extrapolate from the patterns we find into the future. For instance, if 90% of people with profile X defaulted on their loans during the training period, we’ll assume that roughly 90% will default in the future. We can check the validity of this inference using an independent test set and verify the accuracy of our inferential jump. These kinds of inferences are quite parsimonious. Occam would be proud of this razor.
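To make the train-then-verify loop concrete, here’s a minimal sketch using scikit-learn on fabricated loan data. Everything in it (the features, the labels, the 70/30 split) is invented for illustration, not drawn from any real default dataset.

```python
# A minimal sketch of the train-then-verify loop described above.
# The synthetic "loan default" data below is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Fabricated borrower features: two numeric attributes for 1,000 people.
X = rng.normal(size=(1000, 2))
# Fabricated labels: default is more likely when the second attribute is high.
y = (X[:, 1] + 0.3 * rng.normal(size=1000) > 0).astype(int)

# Past observations become training data; the held-out test set stands in
# for the "unobserved future" we are extrapolating to.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# The test accuracy is our check on the inductive leap. Note what it does
# and doesn't tell us: the pattern held on *this* sample, not that it must
# keep holding going forward.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```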

But What Justifies This Logical Jump?

On what grounds or principles can we base this leap from observed events in the past to unobserved events in the future? One natural principle for grounding the jump is the belief that the future will resemble the past. David Hume called this important assumption the “Principle of the Uniformity of Nature.” Hume was the first philosopher to grapple with the so-called problem of induction, all the way back in the 1740s. Not much has changed since then: philosophers today still struggle to provide a logical justification for inductive inference.

Induction Without Circularity?

So inductive inference gets its power from the uniformity of nature. But why assume the uniformity of nature? How might we ground this assumption?

Because in the past, the future resembled the past. But wait: this argument uses inductive inference to prove the validity of inductive inference itself! We are justifying the practice of induction by induction. This is clearly circular. Hume famously concluded we had no good way of justifying this inferential leap: there are no good reasons for believing inductive conclusions drawn from true (or probable) premises. You can also probably guess that Hume was skeptical about such “metaphysical” notions as cause and effect, but that’s another blog post.

We Haven’t Come Too Far Since Hume

Writing on the problem of induction in 1953, the philosopher of science Wesley Salmon concluded that the “admission of unjustified and unjustifiable postulates to deal with the problem [of induction] is tantamount to making scientific method a matter of faith.” If inductive inference ultimately rests on faith, what does that mean for machine learning, a set of ingenious methods for automating the practice of inductive inference? Are the logical and scientific bases of machine learning also matters of faith? Or might there be deeper logical principles to underpin our reliance on inductive inference?

Nelson Goodman’s Challenge for Induction: The Grue Example

Nelson Goodman wrote a short but influential book in the 1950s called Fact, Fiction, and Forecast. In the book, he gave several “riddles” designed to highlight some of the logical issues surrounding inductive inference and its scientific application.

Goodman imagines there exists a predicate (a property we could ascribe to an object) called “grue.” An object is grue just in case it is green up until time t and blue thereafter. Let’s say we observe that in the past, up to time t, 1,000 emeralds were green. Then we state, on the basis of this observational evidence, a hypothesis like “in the future all emeralds will be green.”

The interesting thing Goodman pointed out is that our evidence could support two equally valid yet contradictory hypotheses: that in the future all emeralds will be green, and that in the future all emeralds will be grue. Which is the right one? On the face of it, Goodman’s riddle suggests that we could imagine any number of inductively justified hypotheses about objects, each as valid as any other. As the riddle cleverly illustrates, grounding scientific knowledge on inductive inference may not be such a wise choice after all.
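Goodman’s riddle is easy to restate in code. Below is a toy sketch (the time t and the observation schedule are arbitrary choices for the example) of two hypothesis functions that agree on every observation made before t yet contradict each other about everything after it.

```python
# A toy illustration of Goodman's point: two hypotheses that fit all of
# the evidence gathered before time t but diverge completely afterward.
T = 100  # the critical time t (an arbitrary choice for this sketch)

def h_green(time):
    """Hypothesis 1: all emeralds are green, always."""
    return "green"

def h_grue(time):
    """Hypothesis 2: all emeralds are grue (green before t, blue after)."""
    return "green" if time < T else "blue"

# All of our evidence was gathered before t...
observations = range(0, T)
agree_on_evidence = all(h_green(s) == h_grue(s) for s in observations)
print("identical on all past evidence:", agree_on_evidence)  # True

# ...yet the two hypotheses contradict each other about the future.
print("prediction at t + 1:", h_green(T + 1), "vs", h_grue(T + 1))
```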

Goodman’s solution to his own riddle is to suggest we be more careful about what kind of predicates we ascribe to objects. He says that predicates might be more usefully grouped into two classes: projectible and non-projectible. For describing objects, a predicate like “green” is better “entrenched” than a predicate like “grue,” and so we might consider grue an example of a non-projectible predicate.

What Does This Mean for Machine Learning?

Assuming the uniformity of nature doesn’t seem too problematic when we’re talking about atomic particles, molecules, or sensible objects like rocks and trees. We feel fairly confident projecting various attributes and properties onto these things long into the future, given our experience of them in the past. But the principle seems clearly misguided when applied to people. This is important because the recorded behavior of persons, otherwise known as behavioral big data (BBD), is the lifeblood of machine learning. Without BBD, the automated recommendations we so often receive in our daily lives would cease to function.

I’ve written about some conceptual issues surrounding machine learning personalization in another article entitled Putting the Person Back into Machine Learning. Basically, my point there is that if we truly wished to infer a person’s preferences, needs, desires, and interests, we’d need to take into account their social and moral identities, self-narratives, and other aspects of their rich inner life. Currently, ML has no way of doing this and philosophers have argued this might be impossible anyway, due to an Explanatory Gap between subjective experience (my feeling of pain) and objective, physical facts (C-fibers firing).

Why Induction is Problematic For Machine Learning Personalization (MLP)

People change and grow morally and socially in non-transitive, non-linear ways. These kinds of abrupt structural changes to our moral identities are not well accounted for by inductive inference. When we realize there are both inner (subjective) and outer (objective) descriptions of persons, MLP runs into further problems. The person I see myself as at period A, at period B, and at period C may differ each time. Yet, from the point of view of an outsider, if A=B and B=C, then we might assume, by transitivity, that A=C. But transitivity fails in the case of moral progress and growth, as the sketch below illustrates. This is especially problematic because MLP ostensibly works by leveraging our observed behaviors to infer our interests, preferences, and desires. But our interests, preferences, and desires are just one part of a much larger set of moral values we hold. When our moral values evolve, our interests, preferences, and desires evolve with them.
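To see how the transitivity assumption can fail, here is a toy sketch. The “preference profiles” and the similarity threshold are entirely made up; the point is only that a chain of small, individually negligible changes can add up to a difference the outside observer’s test does register.

```python
# A sketch of the transitivity failure described above, using a crude
# numeric stand-in for "sameness of self" between two periods. The
# threshold and the profile vectors are invented for illustration.
import numpy as np

def same_person(u, v, tol=1.0):
    """An outside observer's test: profiles within tol count as 'the same'."""
    return np.linalg.norm(u - v) <= tol

# Snapshots of a person's (fabricated) preference profile at periods A, B, C.
A = np.array([0.0, 0.0])
B = np.array([0.8, 0.0])  # a small drift: A and B look "the same"
C = np.array([1.6, 0.0])  # another small drift: B and C look "the same"

print(same_person(A, B))  # True
print(same_person(B, C))  # True
print(same_person(A, C))  # False: gradual growth breaks transitivity
```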

Unfortunately for MLP, our values tend to change in abrupt, unpredictable ways, like when important events or tragedies occur in our lives. Even more difficult for the science of prediction is that moral progress seems inherently unpredictable. Looking back, moral progress, such as the abolition of human slavery, seems inevitable. But seen from the point of view of those living at the time, prevailing moral practices may have seemed to reflect basic “laws of nature.”

Where to Go From Here?

Researchers working on recommender systems have begun to grapple with the problem of induction and the assumption of the uniformity of nature. Recommender systems now often introduce serendipity into their predictions, essentially adding an element of randomness in order to avoid a stale recycling of recommendations of the same content or user accounts. Instagram’s Explore does this, for example. Serendipity is a small first step in mirroring the non-linear and non-transitive process of moral growth in persons. A rough sketch of the idea appears below.
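As an illustration of serendipity injection (not a description of how Instagram or any other platform actually implements it), here’s a sketch that swaps a fraction of the top-N slots for randomly sampled catalogue items. The epsilon parameter, the scores, and the catalogue are all invented for the example.

```python
# A minimal sketch of serendipity injection: replace a slice of the top-N
# recommendations with randomly sampled catalogue items. All names and
# values here are hypothetical.
import random

def recommend(scores, n=5, epsilon=0.2, seed=None):
    """Return n items: mostly top-scored, with ~epsilon of the slots random."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_random = max(1, int(epsilon * n))  # always leave room for surprise
    top = ranked[: n - n_random]
    # Sample serendipitous items from everything not already picked.
    pool = ranked[n - n_random:]
    return top + rng.sample(pool, n_random)

# Fabricated catalogue: 20 items with random relevance scores.
catalogue = {f"item_{i}": random.random() for i in range(20)}
print(recommend(catalogue, n=5, epsilon=0.2, seed=42))
```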

I would also like to see more social scientists involved in data science, especially where they might use their knowledge of qualitative data collection to provide new inputs for personalized predictions and recommendations. Until machines can learn to distinguish between a mere objective chronicle of events and a personal narrative, true personalization will remain a pipe dream.
