Concept Learning Search Algorithms (ML)

See Concept Learning As Search First.

Find-S

Algorithm

  1. Initialise h to be the most specific hypothesis in H (the null hypothesis, which covers no instances; after the first positive example it simply becomes that example).
  2. For each positive training instance d:
    • For each attribute constraint $a_i$ in $h$:
      • If it's satisfied by d (as in, if $a_i = d_i \text{ or } a_i = ?$)
        • Do nothing (the constraint already covers d, as it should)
      • Else
        • Replace $a_i$ in $h$ by the next more general constraint satisfied by d (perhaps '?')
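
A minimal Python sketch of the loop above, assuming hypotheses are tuples of attribute constraints with '?' for "anything" and '0' for the null (match-nothing) constraint; the names find_s and satisfies are just illustrative:

def satisfies(constraint, value):
    # An attribute constraint is satisfied if it is '?' or matches the value exactly.
    return constraint == '?' or constraint == value

def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs, label True = positive."""
    n = len(examples[0][0])
    h = ['0'] * n                      # most specific hypothesis: covers nothing
    for d, positive in examples:
        if not positive:
            continue                   # Find-S ignores negative examples
        for i, (a, v) in enumerate(zip(h, d)):
            if satisfies(a, v):
                pass                   # constraint already covers d: do nothing
            elif a == '0':
                h[i] = v               # first positive example pins the value
            else:
                h[i] = '?'             # conflicting values: generalise to '?'
    return tuple(h)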

Example

d1 = <Sunny, Warm, Normal, Strong, Warm, Same> +
d2 = <Sunny, Warm, High, Strong, Warm, Same> +
d3 = <Rainy, Cold, High, Strong, Warm, Change> -
d4 = <Sunny, Warm, High, Strong, Cool, Change> +

Hence we start at:

  • h1 = <Sunny, Warm, Normal, Strong, Warm, Same> (the first positive training example; generalising the initial null hypothesis simply yields d1 itself)

Then we hit d2, and realise that the 3rd attribute, Normal vs High, must be irrelevant (as both are positive). So:

  • h2 = <Sunny, Warm, ?, Strong, Warm, Same>

Then we hit d4 (skipping d3, since Find-S ignores negative examples), and realise that the 5th and 6th attributes must also be irrelevant, so we get:

  • h3 = <Sunny, Warm, ?, Strong, ?, ?>

And then we stop, having processed all the positive training examples.
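
Feeding the four examples into the find_s sketch above (encoded as tuples with True/False labels) reproduces this trace's final hypothesis:

examples = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),    # d1
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), True),      # d2
    (('Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'), False),   # d3
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), True),    # d4
]
print(find_s(examples))   # ('Sunny', 'Warm', '?', 'Strong', '?', '?')  i.e. h3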

Does it Work?

If we assume that $h_c$ is a hypothesis within H that describes the target function c, and that the training data is error-free, then by definition $h_c$ is consistent with all the positive training data, and can never cover a negative example.

Each h that Find-S generates is the most specific hypothesis consistent with the positive examples seen so far; since $h_c$ also covers all of those positives, $h_c$ must be more general than or equal to h.

Since every instance covered by h is also covered by $h_c$, and $h_c$ never covers a negative example, h cannot cover a negative example either; and h covers every positive by construction. Hence yes, Find-S outputs a hypothesis consistent with the training data.
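
"More general than or equal to" has a simple concrete meaning for these conjunctive hypotheses; a small sketch (the helper name is illustrative, and '0' is the null constraint from the Find-S sketch):

def more_general_or_equal(h1, h2):
    # h1 >=_g h2: every instance covered by h2 is also covered by h1.
    # Position by position, h1's constraint must be '?', match h2's constraint,
    # or h2's constraint must be '0' (which covers nothing at all).
    return all(a == '?' or a == b or b == '0' for a, b in zip(h1, h2))

For example, more_general_or_equal(('Sunny', '?', '?', '?', '?', '?'), ('Sunny', 'Warm', '?', 'Strong', '?', '?')) is True.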

Problems with Find-S

  • Cannot handle noisy/incorrect data
  • Can't tell whether it has actually converged on the target concept (other consistent hypotheses may exist)
  • Picks maximally specific hypotheses (maybe we want a general one).

List-Then-Eliminate

  1. Build the version space VS (i.e. initialise it to contain every h in H)
  2. For each training example <x, c(x)>:
    • Remove from VS any h for which $h(x) \neq c(x)$
  3. Output the list of hypotheses in VS
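
A minimal sketch of this, assuming the same tuple representation as above (list_then_eliminate and covers are illustrative names):

def covers(h, x):
    # h covers instance x if every attribute constraint is '?' or matches x.
    return all(a == '?' or a == v for a, v in zip(h, x))

def list_then_eliminate(H, examples):
    """H: iterable of candidate hypotheses; examples: (instance, label) pairs."""
    VS = list(H)                                        # 1. start with every hypothesis in H
    for x, label in examples:
        VS = [h for h in VS if covers(h, x) == label]   # 2. drop anything that misclassifies
    return VS                                           # 3. what survives is the version space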

Example

Eventually you end up with something like:

Specific: <Sunny, Warm, ?, Strong, ?, ?>
Middle: <Sunny, ?, ?, Strong, ?, ?> <Sunny, Warm, ?, ?, ?, ?> <?, Warm, ?, Strong, ?, ?>
General: <Sunny, ?, ?, ?, ?, ?> <?, Warm, ?, ?, ?, ?>
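
Assuming H contains every conjunction over the attribute values that actually appear in the data (plus '?'), the list_then_eliminate sketch above, run on the examples list from the Find-S section, yields exactly these six hypotheses:

from itertools import product

value_sets = [sorted({x[i] for x, _ in examples}) + ['?'] for i in range(6)]
H = product(*value_sets)                  # 486 candidate conjunctions over these value sets
for h in list_then_eliminate(H, examples):
    print(h)                              # prints the six hypotheses listed above (in some order)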

Candidate Elimination

G: Maximally General Hypotheses in H
S: Maximally Specific Hypotheses in H

  1. For each training example d:
    • If d is positive:
      • Remove from G any h inconsistent with d
      • For each hypothesis s in S that is inconsistent with d:
        • Remove s from S
        • Add to S all next-level-down (children, basically) generalisations h of s such that:
          1. h is consistent with d
          2. Some member of G is more general than h
        • Remove from S any h that is more general than another hypothesis in S

For negative examples do the opposite of positive ones (i.e. swap S and G)

    • If d is negative:
      • Remove from S any hypothesis inconsistent with d
      • For each hypothesis g in G that is inconsistent with d:
        • Remove g from G
        • Add to G all next-level-up (more specialised) specialisations h of g such that:
          1. h is consistent with d
          2. Some member of S is more specific than h
        • Remove from G any hypothesis that is less general than another hypothesis in G

Essentially remove any inconsistent hypotheses from the two sets, and:

  • (if positive) when removing a hypothesis from the specific set S, add its more-general children.
  • (if negative) when removing a hypothesis from the general set G, add its more-specific children.

Then ensure the sets are still maximally general and maximally specific.

The reason we add the more-general children for positive examples is that we're broadening down from the top (the specific boundary) towards a general hypothesis (i.e. a positive example shows that some attributes obviously don't matter).
For negative examples we're narrowing up from the bottom (the general boundary) towards a more specific hypothesis (i.e. a negative example shows that some attributes obviously do matter).
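
A sketch of the whole procedure in the same tuple representation, run on the examples list from the Find-S section (helper names like min_generalization are illustrative, and the attribute value sets used for specialisation are assumed to be just the values seen in the training data):

def covers(h, x):
    return all(a == '?' or a == v for a, v in zip(h, x))

def more_general_or_equal(h1, h2):
    # h1 >=_g h2, with '0' as the null constraint that covers nothing.
    return all(a == '?' or a == b or b == '0' for a, b in zip(h1, h2))

def min_generalization(s, x):
    # Next-level-down generalisation of s: the smallest change that covers positive x.
    return tuple(v if a == '0' else (a if a == v else '?') for a, v in zip(s, x))

def min_specializations(g, x, value_sets):
    # Next-level-up specialisations of g: each excludes negative x by construction.
    return [g[:i] + (v,) + g[i + 1:]
            for i, a in enumerate(g) if a == '?'
            for v in value_sets[i] if v != x[i]]

def candidate_elimination(examples):
    n = len(examples[0][0])
    value_sets = [sorted({x[i] for x, _ in examples}) for i in range(n)]
    S = {('0',) * n}                      # maximally specific boundary
    G = {('?',) * n}                      # maximally general boundary
    for x, positive in examples:
        if positive:
            G = {g for g in G if covers(g, x)}
            for s in [s for s in S if not covers(s, x)]:
                S.remove(s)
                h = min_generalization(s, x)
                if any(more_general_or_equal(g, h) for g in G):
                    S.add(h)
            S = {h for h in S if not any(h != h2 and more_general_or_equal(h, h2) for h2 in S)}
        else:
            S = {s for s in S if not covers(s, x)}
            for g in [g for g in G if covers(g, x)]:
                G.remove(g)
                for h in min_specializations(g, x, value_sets):
                    if any(more_general_or_equal(h, s) for s in S):
                        G.add(h)
            G = {h for h in G if not any(h != h2 and more_general_or_equal(h2, h) for h2 in G)}
    return S, G

S, G = candidate_elimination(examples)
print(S)   # {('Sunny', 'Warm', '?', 'Strong', '?', '?')}
print(G)   # {('Sunny', '?', '?', '?', '?', '?'), ('?', 'Warm', '?', '?', '?', '?')}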

Example

Running Candidate Elimination on the four training examples above (d1–d4) converges to S = {<Sunny, Warm, ?, Strong, ?, ?>} and G = {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}, with the three 'middle' hypotheses from the List-Then-Eliminate example lying between the two boundaries.