CS7637 - Knowledge-based AI

Lesson 01 ~ 12 (Intro to KBAI ~ Logic)

Lesson 01 - Intro to KBAI

Fundamental Conundrums of AI

  • Intelligent agents have limited resources.
  • Computation is local but problems have global constraints.
  • Logic is deductive, but many problems are not.
  • The world is dynamic, but knowledge is limited.
  • Problem solving, reasoning, and learning are complex, but explanation and justification are even more complex.

Characteristics of AI problems

  • Knowledge often arrives incrementally.
  • Problems exhibit recurring patterns.
  • Problems have multiple levels of granularity.
  • Many problems are computationally intractable.
  • The world is dynamic but the knowledge of the world is static.
  • The world is open-ended, but knowledge is limited.

Characteristics of AI agents

  • Agents have limited computing power.
  • Agents have limited sensors (limited perception).
  • Agents have limited attention.
  • Computational logic is fundamentally deductive (vs inductive).
  • AI agent's knowledge is incomplete relative to the world.

Case-study: Watson AI

What it does, given a Jeopardy answer:

  1. Read the clue
  2. Search its knowledge base
  3. Decide on answer
  4. Phrase the answer in the form of a question

AI knowledge components:

  1. Reasoning: Understanding the question, generating response.
  2. Learning: Learning, error-correction.
  3. Memory: Storage of knowledge to be accessed again later.

Put together, we call these three components deliberation.

Overall architecture of KBAI

arch

Spectrum of AI schools of thought

4thoughts

4thoughts examples

What are cognitive systems?

  • Cognitive: dealing with human-like intelligence
  • Systems: multiple interacting components such as learning, reasoning and memory
  • Cognitive systems: Systems that exhibit human-like intelligence through processes like learning, reasoning and memory.

Cognitive system and the world

Reaction

  • Can get input from the world, and also act on it.
  • Multiple cognitive systems can exist.

Metacognition

  • Thinking about your own actions on the world, and reconfiguring/fixing the deliberation that led to a suboptimal action.

Intelligence is the ability to take in input signals, use reaction/deliberation/metacognition, and enact an output action from them.

Lesson 02 - Introduction to CS7637

Learning Goals

  1. Core methods of KBAI (structuring knowledge, memory organization, reasoning, learning, meta-reasoning)
  2. Tasks addressed by KBAI (classification, understanding, diagnostics, design)
  3. How KBAI agents use these methods to address these tasks
  4. Relationship between AI and human cognition.

Computational psychometrics

Computational psychometrics: AI that can perform intelligence tests that a human would take.

Raven's Progressive Matrices

  • Test written in the 1930s to examine general intelligence.
  • Consists of 60 multiple choice visual analogy problems.
  • Problems are strictly visual.

Problems come in 2x1, 2x2, and 3x3 matrix forms.

Principles of CS7637

  1. KBAI agents represent and organize knowledge into knowledge structures to guide and support reasoning.
  2. Learning in KBAI agents is often incremental.
  3. Reasoning in KBAI agents is top-down as well as bottom-up.
  4. KBAI agents match methods to tasks.
  5. KBAI agents use heuristics to find solutions that are good enough, though not necessarily optimal.
  6. KBAI agents make use of recurring patterns in the problems they solve.
  7. The architecture of KBAI agents enables reasoning, learning and memory to support and constrain each other.

Lesson 03 - Semantic Networks

Intro to Semantic Networks

sn

  • Represent each shape for image A and B
  • Show relationship within each image, "inside, above"
  • Show relationship across image, "unchanged, expanded"

Structure of Semantic Networks

smstructure

A knowledge representation will contain:

  • lexicon: Tells us the vocabulary of the representation language.
  • structure: tells us how the words in the lexicon can be composed into complex representations.
  • semantics: tells us how the representation allows us to draw inferences so we can reason.

The image above contains nodes (lexicon), directional links (structure), and application-specific labels, e.g. "expanded/deleted" (semantics).

Characteristics of good representations

  • Makes relationships explicit: The semantic network makes the relationships between shapes explicit.
  • Expose natural constraints
  • Brings objects and relations together
  • Exclude extraneous details
  • Transparent, concise, complete, fast, computable

Guards & Prisoners problem

Semantic network of the guards & prisoners problem:

  • Each image containing people and boat is a node
  • Arrow represents structure

gp

Each image shows a possible move, but not all moves are legal. Only 2 of the 5 possible moves are legal and productive (ie. doesn't go back to original state).

gp2

Solution: Most efficient is 11 moves

gp3

Represent & Reason for Raven's + Weighted Matches

If the lexicon, structure, and semantics all match between two images, we can assume that image is the correct one. But sometimes there could be multiple correct answers. In these cases, we can weight each answer by the number of transformations necessary.

An AI agent can assign weights to each transformation method (e.g. "expand", "rotate", etc). In the example, the easier the image transformation, the greater the weight. You can use these weights to compute a total weight for each valid answer.

weights
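
A minimal sketch of this weighting idea in Python (the transformation names and weight values are illustrative, not the lecture's):

# Hypothetical weights: simpler transformations score higher.
TRANSFORMATION_WEIGHTS = {
    "unchanged": 5,
    "reflected": 4,
    "rotated": 3,
    "scaled": 2,
    "deleted": 1,
    "shape_changed": 0,
}

def similarity_score(transformations):
    """Sum the weights of the transformations needed to turn one figure into another."""
    return sum(TRANSFORMATION_WEIGHTS[t] for t in transformations)

# Two candidate answers that are both "valid"; prefer the higher-scoring one.
answer_1 = ["unchanged", "reflected"]
answer_2 = ["shape_changed", "scaled"]
best = max([answer_1, answer_2], key=similarity_score)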

Connections from RPM exercise

  1. Memory: The existing shapes (A, B, C) will be stored into memory, and answers 1-6 are probed using these memories. The task is to decide which answer is most similar to what's stored in memory, using a similarity metric.
  2. Reasoning: Correspondence problem ("which shape in A corresponds to B?"). Given two situations, which object in the first situation corresponds to which object in the second situation.
  3. Cognition: In KBAI, focus is on the relationships between entities.

Lesson 04 - Generate & Test (GT)

Context: guards & prisoners problem

  • GT has two functions: generate solutions, and test the solution's validity.
  • A dumb GT isn't as efficient as a smart GT, but can solve the problem by brute computational force. It will recreate duplicate states it has seen before, though.
  • A smart tester should have memory functionality to dismiss new states that are duplicates of past states.
  • Also, identical states reached via different paths can be merged.

  • (aside) If you had complete (correct) knowledge of the world, infinite compute power, and a method of reasoning guaranteed to be correct, you wouldn't need to test your hypotheses.

Smart generators & testers

Instead of the "generate all possible states, then check the output" method, we can prevent the generator from ever creating invalid or duplicate states (a minimal sketch follows).
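
A minimal sketch of a smart generate & test for the guards & prisoners problem, assuming a (guards, prisoners, boat) state representation; the generator skips illegal and previously seen states, and the tester checks for the goal:

from collections import deque

# State: (guards on left bank, prisoners on left bank, boat on left bank?)
START, GOAL = (3, 3, True), (0, 0, False)
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # people the boat can carry

def is_valid(state):
    g, p = state[0], state[1]
    if not (0 <= g <= 3 and 0 <= p <= 3):
        return False
    # Guards may never be outnumbered by prisoners on either bank.
    left_ok = g == 0 or g >= p
    right_ok = (3 - g) == 0 or (3 - g) >= (3 - p)
    return left_ok and right_ok

def generate(state, visited):
    """Smart generator: only yields legal states that haven't been seen before."""
    g, p, boat = state
    sign = -1 if boat else 1
    for dg, dp in MOVES:
        nxt = (g + sign * dg, p + sign * dp, not boat)
        if is_valid(nxt) and nxt not in visited:
            yield nxt

def solve():
    visited = {START}
    frontier = deque([[START]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == GOAL:           # tester: have we reached the goal?
            return path
        for nxt in generate(path[-1], visited):
            visited.add(nxt)
            frontier.append(path + [nxt])

print(solve())   # shortest solution is 11 moves (12 states including the start)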

Semantic Networks for GT

  • GT struggles in an unconstrained domain.
  • Semantic network representation provides a level of abstraction at which the problem is represented and analyzed.
  • SN simplifies the problem and removes lower level of detail that would make the problem harder to solve.

Generate & Test in RPM context

We can use SN and GT for the RPM test, either by:

  1. Generating multiple answers and testing against the answer options, or
  2. Generating one answer and testing it more intelligently against the different options available.

Generate & Test and Human Cognition

  • Humans don't have complete knowledge of the world, unlimited compute resources, or a method of reasoning that always gives us the right answer.
  • Thus, we need to generate solutions and test whether they are valid, and iterate.
  • Genetic algorithms are related to GT.

Lesson 05 - Means-Ends Analysis

Core Idea:

Means = Operators
Ends = End State

Topics:

  • State spaces
  • Means-ends analysis
  • Problem solving with means-ends analysis

Exercise: The Block Problem

Goal: Move blocks from initial state to the goal state while obeying rules.

  1. Only move 1 block at a time.
  2. Only move blocks that have nothing on top of them.
  3. Available operator is move(object, location).

block

Answer:

Move(C, Table)
Move(B, C)
Move(A, B)

State Spaces

A state space is a space where all possible states of the problem are laid out. The goal of the AI agent is to find a path that starts from the initial state and reaches the end state while following the rules.

Differences in State Spaces

Q: Given a state, how does an AI agent know which state to go to next? One way is to think in terms of differences between states.

  • Each state in a state space can be compared to the goal state (see red line in image).

states

For each possible operation:

  • Apply the operator to the current state.
  • Calculate difference (ie. edit distance) between new state and goal state.
  • Prefer the state that minimizes the distance between the new state and the goal state (see the sketch below).
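
A minimal sketch of this greedy selection for the block world (the state representation is illustrative); note, as discussed in the next section, that plain means-ends analysis has no principled way to break ties:

# State: {block: what it rests on}. Goal from the lesson: A on B, B on C, C on Table.
GOAL = {"A": "B", "B": "C", "C": "Table"}

def clear_blocks(state):
    supports = set(state.values())
    return [b for b in state if b not in supports]

def moves(state):
    """Generate (operator, resulting state) pairs for every legal single-block move."""
    for block in clear_blocks(state):
        for dest in clear_blocks(state) + ["Table"]:
            if dest != block and state[block] != dest:
                nxt = dict(state)
                nxt[block] = dest
                yield ("move", block, dest), nxt

def distance(state, goal=GOAL):
    """Edit distance: number of blocks not yet on their goal support."""
    return sum(1 for b in state if state[b] != goal[b])

def means_ends_step(state):
    """Greedily pick the move whose resulting state is closest to the goal."""
    return min(moves(state), key=lambda m: distance(m[1]))

state = {"A": "Table", "B": "Table", "C": "A"}   # initial state from the lesson
op, nxt = means_ends_step(state)
# e.g. ('move', 'B', 'C') at distance 2 -- it ties with moving C to the table,
# and plain means-ends analysis cannot tell which tie leads to the real 3-move solution.
print(op, distance(nxt))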

Cons of Means-Ends Analysis

  • Important: Means-ends analysis by itself doesn't help an AI agent choose between states with the same edit distance.
  • Given a state, there may not be any valid operation that reduces the difference; all operations may increase it.
  • Means-ends analysis can get stuck in loops.

Takeaway:

  • Means-ends analysis, like generate & test, is another example of a universal AI method.
  • Applicable to large classes of problems.
  • On the other hand, they come with few guarantees of success (optimality) and have high compute costs.

states

Example 2

The example below shows a given state and the possible moves from it. For each state prime (successor state), we count the number of differences between it and the end goal. For the top state prime, the edit distance is 2 because we need 1) B on top of C and 2) A on top of B.

states

Example 3

Below is another, more complex example. In this given state, option 3 would be chosen by means-ends analysis.

states

Below is another example.

states

How many possible states are there?

match = 1, no match = 0

state_1 = [(B on C, 1), (C on table, 0), (A on D, 0), (D on table, 1)] # 2 match

state_2 = [(B on A, 0), (C on table, 0), (A on table, 0), (D on table, 1)] # 1 match

state_3 = [(B on C, 1), (A on B, 1), (C on table, 0), (D on table, 1)] # 3 match

state_4 = [(B on C, 1), (D on A, 0), (C on table, 0), (A on table, 0)] # 1 match

state_5 = [(B on D, 0), (C on table, 0), (A on table, 0), (D on table, 1)] # 1 match

state_6 = [(D on B, 0), (B on C, 1), (C on table, 0), (A on table, 0)] # 1 match

state_7 = [(B on table, 0), (C on table, 0), (A on table, 0), (D on table, 1)] # 2 match

Problem Reduction

Problem reduction = Deconstructing a large problem into smaller problems, then combining the solutions of these problems to solve the large problem.

Block problem, revisited

  • Problem reduction (in the context of block problem) works backwards from the end state.
  • Among the four conditions in the goal state, choose C on D as a subgoal. (Note that the act of choosing a subgoal is not itself part of problem reduction.)
  • Given the subgoal, apply operations and calculate the distance of each resulting state from the subgoal.
  • Choose the state prime that has the smallest edit distance. (Note that choosing among states with the same edit distance is not covered by means-ends analysis.)

subgoal

Applying Means-end Analysis and Problem Reduction with RPM

Means-end Analysis:

  • How would you apply "distance" to the RPM problem?
  • What are the possible individual operators that will get you closer to the goal?
  • Can combine with semantic networks to achieve analogical reasoning ("square inside circle in A, circle inside triangle in C").
    • Semantic-nets describe similarity + correspondence
    • Combine it with means-end analysis to do a systematic analysis of transformations.

Problem reduction:

  • From image A to B, we can reduce the transformations to smaller subgoals (seen in image).
    • Reduced to 3 subgoals: 1) find the transformation from A to B, 2) transfer that transformation to C and find candidate states for D, 3) compare the candidate states for D to each possible answer.

Generate and test: Generating solutions (predicted transformations) and testing them against the possible answers.

Semantic network: The same semantic network supports all three of the above strategies.

Takeaway:

  • Complex problems require a combo of AI techniques.
  • The coupling between these AI techniques and the semantic net is weak (each technique uses only a little bit of knowledge).

Cognitive Connection: Means-Ends Analysis, Problem Reduction

  • Weak vs strong methods: Weak doesn't need as much knowledge, strong relies on a lot of available knowledge.
  • Humans, as experts, use knowledge-intensive methods. As novices, they use weak methods because they lack knowledge.

subgoalrpm

Lesson 6 - Production Systems

How to use knowledge and its architecture to make complex decisions?

Topics:

  • Cognitive architectures
  • Production systems
  • Chunking

Example problem used throughout:

baseball

Function of a cognitive architecture

The cognitive agent is a function that maps a perceptual history into an action.

The function looks like below:

P = percepts
* = history of percepts
A = action

function: P* -> A

Levels of cognitive architectures

From High to Low:

  1. Task / knowledge level: Knowledge required to perform a task.
    • e.g. Selecting a pitch, playing baseball
  2. Algorithm / symbol level: Symbolic structure that serves as the foundation of knowledge.
    • e.g. means-ends analysis, semantic networks.
  3. Hardware / implementation level: Lowest level, provides architecture for algorithm/symbol level.
    • e.g. brain, transistor

Example: Watson's layers

From high to low:

  1. Answering the input clue
  2. Searching and decision-making
  3. The physical computer

Assumptions of a cognitive architecture

  1. Goal-oriented: AI agents take actions in pursuit of goals.
  2. Rich environment: AI agents live in a rich, complex environment with many signals.
  3. Knowledge: They use knowledge from this environment to pursue their goals.
  4. Symbol/abstractions: Knowledge is abstracted to remove extraneous signals.
  5. Learning: Agents constantly learn as they interact with the world.

The basic equation capturing the intuition behind cognitive architectures:

Behavior = Architecture + Content

  • Important: If the architecture is fixed, we can replace just the content to get different behavior.
  • What makes a good architecture?

Cognitive architecture for production systems

Soar

What is it:

  • Originally created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University
  • Goal: Develop the fixed computational building blocks necessary for general intelligent agents.

Architecture:

  • Long-term memory: 3 kinds of knowledge
    • Procedural: How to do things (e.g. how to pour water into a cup)
    • Semantic: Generalizations in the form of concepts/models of the world.
    • Episodic: Specific events (e.g. what you ate for dinner yesterday)
  • All three of the above feed into working memory.

Action selection

Decision-making, conceptualized. The left side is qualitative; the right side turns it into a state space. S0 is the initial state and S101 is the end goal state.

actionselection

Putting content into the architecture

Applying the baseball situation to a knowledge representation.

content

Invoking memory

  • Production rules capture procedural knowledge and store it in long-term memory.
  • Procedural knowledge is expressed as "if...then" rules that query the state and take an action if the condition is true.
    • Some may activate other rules (e.g. suggest goal) and some may activate an operator (e.g. intentional walk operator).
    • Rules may include multiple conditions (boolean and/or). Each rule (in red) in the image below is considered procedural knowledge.
  • Working memory is the immediate state.
    • Working memory changes more rapidly than the long-term memory.

memory

Summary of Soar:

  1. Based on the contents of the working memory, some rules get activated in the long-term memory.
  2. As rules get activated, some consequences are established.
  3. As consequences are established, they are written into the working memory.
  4. As working memory changes, we repeat from 1 again.
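
A minimal sketch of this match-fire cycle in Python; the rules and working-memory contents are illustrative, not the lecture's exact baseball rules:

working_memory = {"goal": "get-batter-out", "first-base": "open", "outs": 2, "batter": "power-hitter"}

def rule_intentional_walk(wm):
    # If first base is open and a power hitter is up, suggest the intentional-walk operator.
    if wm.get("first-base") == "open" and wm.get("batter") == "power-hitter":
        return {"operator": "intentional-walk"}

def rule_pitch_normally(wm):
    if "operator" not in wm:
        return {"operator": "throw-fastball"}

long_term_memory = [rule_intentional_walk, rule_pitch_normally]

changed = True
while changed:                        # 4. repeat while working memory keeps changing
    changed = False
    for rule in long_term_memory:     # 1. match rules in long-term memory against working memory
        consequence = rule(working_memory)
        if consequence and not consequence.items() <= working_memory.items():
            working_memory.update(consequence)   # 2-3. establish consequences, write them back
            changed = True

print(working_memory["operator"])     # 'intentional-walk'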

Chunking

Conflicting rules:

  • One pitfall of the working/long-term memory strategy is assuming that there will always be exactly one rule that tells us what action to take.
  • There are situations where more than one rule is valid. If there is no disambiguation policy, we may get multiple operators. This is called an impasse.
  • Soar breaks impasses by invoking episodic knowledge.

Chunking: A learning technique that Soar uses to learn rules that can break an impasse.

  • Only triggered when impasse occurs.
  • Searches working memory to find some knowledge that might break the impasse.
  • From episodic memory, it finds an episode whose characteristics match the current working memory and uses that episode's outcome to rule out one of the conflicting operators.

RPM & Production Systems

2 levels:

  1. Production system that can address any incoming problem with its set of pre-designed rules.
  2. An agent (with no preconceived production rules) receives a new problem, induces production rules that govern the transformation between certain figures, and then transfers them to other rows/columns.
    • How is it going to write production rules from a given problem?
    • What is the benefit of doing it this way?

Summary of Foundation Unit

  • Cognitive architectures
  • Production systems (specifically those implemented in Soar framework)
  • Action selection: Enabled by production system. Helps us map percepts in the world into actions.
  • Chunking: Used when production system hits an impasse. First instance of "learning".

Cognitive Connection - Production Systems

  • Working memory is analogous to short term memory in human cognition.
    • Only has capacity of 7 +/- 2 elements.
  • When comparing a human vs an AI agent (Soar) on arithmetic problems, the behavior of the two is very similar. But this is only partial cognition in a closed world.

Lesson 07 - Frames

Frames: A knowledge structure. Bigger than production rules.

  • Represent stereotypes.
  • Slots can have default values or be left empty.
  • Exhibit inheritance (Animal class -> Ant class)
  • Each frame contains slots, which can be populated with fillers.
  • Structure is similar to OOP
  • Frame representations are equivalent to semantic nets
  • Allows story understanding by creating complex frame systems
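
A minimal sketch of a frame as a Python class with slots, default fillers, and inheritance (the Animal/Ant slot names are illustrative):

class Frame:
    defaults = {}

    def __init__(self, **fillers):
        # Inherit default fillers from parent frames, then override with given fillers.
        self.slots = {}
        for cls in reversed(type(self).__mro__):
            self.slots.update(getattr(cls, "defaults", {}))
        self.slots.update(fillers)

class Animal(Frame):
    defaults = {"alive": True, "number_of_legs": 4}

class Ant(Animal):
    defaults = {"number_of_legs": 6, "size": "small"}

ant = Ant(color="red")
print(ant.slots)   # {'alive': True, 'number_of_legs': 6, 'size': 'small', 'color': 'red'}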

"Production systems are atoms (ie. smallest) of knowledge representation, while frames are the molecules"

Example: Earthquake - making sense of a sentence through frames

earthquake

Complex Frame Systems

complexframes

Frames and RPM

  • Record object relations by referencing each other in slots within each frame.

framesrpm

Qs to consider:

  • Is the agent going to receive the problem in terms of frames initially, or is it going to generate frames based on its own reasoning?
  • Frames could represent each object, but what about frames representing the problem as a whole?

Frames - Cognitive Connection

How frames are connected to cognition:

  1. Frames are a structured knowledge representation, larger than production rules.
  2. Enable a theory of cognitive processing that is bottom-up as well as top-down (expectation-driven).
  3. Frames capture the notion of stereotypes, which exist for cognitive efficiency by supplying default attributes.

Lesson 08 - Learning by Recording Cases

Topics:

  • learning by recording cases
  • nearest neighbor method
  • k-nearest neighbor
  • cases irl

Learning by Recording Cases

General idea:

  • Given a new problem a, retrieve the most similar prior problem, b, from memory and apply b's solution to problem a.
  • Memory directly gives us the answer, we don't need to think about it.
  • Example: Given symptoms a, b, c, a previous case with the same symptoms was disease X, so the doctor diagnoses the patient with disease X.

Block world problem:

  • 6 cases recorded in memory (shape and color)
  • Problem gives shape, not color
  • Agent browses memory, retrieves the closest shape.
  • Agent chooses color attribute that nearest shape has.

Application: Retrieving Cases by K-Nearest Neighbor (KNN)

We can choose the "nearest" answer by calculating the euclidean distance between the new problem and past examples using shared attributes (e.g. width, height).

  • Problems can be multi-dimensional, ie. more than 2 attributes.

euclidean

Example: Automating directions for cars

Problem: Given new start and destination, find the closest example from memory by calculating euclidean distance.

  • You can calculate the euclidean distance for the 2 attributes, origin and destination, separately. However, this can choose different routes for the origin and the destination, as some routes are a closer match at the origin and others at the destination.

routes

KNN Formula

You can calculate euclidean distance across multiple dimensions.

Given existing case at (c1, c2 ... ck) and new problem at (p1, p2 .. pk):

knn

The result from the multidimensional calculation shows that D has the smallest distance.

knnresult
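
A minimal sketch of the calculation, assuming the distance in the image above is the standard Euclidean distance D = sqrt(sum_i (p_i - c_i)^2) over the k shared attributes (the route coordinates are made up):

import math

def euclidean_distance(case, problem):
    """D = sqrt(sum_i (p_i - c_i)^2) over the k shared attributes."""
    return math.sqrt(sum((p - c) ** 2 for c, p in zip(case, problem)))

def nearest_cases(cases, problem, k=1):
    """Return the names of the k recorded cases closest to the new problem."""
    return sorted(cases, key=lambda name: euclidean_distance(cases[name], problem))[:k]

# Hypothetical recorded routes as (origin_x, origin_y, dest_x, dest_y) vectors.
cases = {
    "A": (1, 1, 9, 2),
    "B": (2, 7, 8, 8),
    "D": (3, 2, 7, 3),
}
print(nearest_cases(cases, (3, 1, 8, 3)))   # ['D'] under these made-up coordinates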

Limitations

  1. KNN might lead to a very high-dimensional space, which is computationally expensive.
  2. The nearest neighbor being nearest does not guarantee its solution applies to the new problem. We're just choosing the closest thing available.
  3. We might need to store cases as qualitative labels rather than quantitative values, which makes calculating distance trickier.

RPM and Recording Cases

  • What should you consider to be "cases" (each figure, each transformation between figures, each problem, etc)?
  • How to evaluate similarity (between figures, between transformations, or previous problems and current problem)?

Cognitive Connection - Recording Cases

  • Humans learn from routines, and require less thinking to do it.
  • Recording cases is related to the memory component of intelligence (reasoning, learning, memory)

Lesson 09 - Case-based Reasoning (CBR)

4 steps of case based reasoning:

  1. Retrieval: Retrieving a case from memory similar to the current problem.
  2. Adaptation: Adapting the solution to that case to fit the current problem.
  3. Evaluation: Evaluate how well the adapted solution addresses the current problem.
  4. Storage: Storing the new problem and solution as a case.

Assumptions of CBR:

  1. Patterns exist in the world.
  2. Similar problems have similar solutions.
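
A minimal sketch of the four-step cycle above; the retrieval, adaptation, and evaluation functions are illustrative placeholders standing in for domain-specific logic:

case_library = [
    {"problem": {"origin": "home", "destination": "store"},
     "solution": ["Main St", "Oak Ave"]},
]

def retrieve(problem):
    # Nearest-neighbour style retrieval: case with the most shared attribute values.
    overlap = lambda case: sum(case["problem"].get(k) == v for k, v in problem.items())
    return max(case_library, key=overlap)

def adapt(case, problem):
    # Placeholder tweak: reuse the old route and extend it toward the new destination.
    return case["solution"] + [problem["destination"]]

def evaluate(solution, problem):
    return problem["destination"] in solution        # stand-in success test

def solve(problem):
    case = retrieve(problem)                         # 1. Retrieval
    solution = adapt(case, problem)                  # 2. Adaptation
    if evaluate(solution, problem):                  # 3. Evaluation
        case_library.append({"problem": problem, "solution": solution})   # 4. Storage
    return solution

print(solve({"origin": "home", "destination": "library"}))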

CBR Step 2: Case Adaptation

Memory provides a similar example, so we don't have to reason (ie. compute) from scratch. We can "tweak" the memory we retrieve to reach the solution. "All designers redesign".

How do we adapt old cases to meet the requirements of a new problem?

  1. Model-based method
  2. Recursive-case method
  3. Rules method

1) Model-based Method

Definition: Adapting an earlier case, using some model of the world.

Example: Map to new location, nearby a place you've visited before.

  1. Retrieve from memory a case that is similar to current problem (nearby store).
  2. From memory location, search from that location to the final destination.

2) Recursive Method

Definition: Use case-based reasoning recursively. Each case provides a partial solution. Take the remaining part that wasn't solved, make it a new problem, and send it back into case memory. Case memory finds a new case, and the agent composes it with the previous case to get a full solution.

Compound analogy: Specific type of adaptation by recursive reasoning.

3) Rules Method

Definition: Heuristics in the form of rules.

Example: To get to downtown walk towards where all the big buildings are located.

CBR Step 3: Evaluation

Definition: Executing or simulating our adapted case to determine whether it is successful or not.

CBR Step 4: Storage

Definition: Once an adapted case is successful, we store the problem and solution as a case.

Types of Storage: Index and Discriminating Trees

Index: Each individual case is stored under an index.

  • As the number of cases grows, the search time grows linearly.

Discrimination Trees: Build a decision-tree-like structure. With each new case, we create more branches to discriminate between overlapping cases; no two cases should share a leaf node.

  • Example of incremental learning. With each case, a new knowledge structure is learned.
  • By asking the right questions (ie. decision points) we can make the search process more efficient.

Big O Notation: With indexing, searching the case library is linear, O(n), while discrimination trees are logarithmic, O(log n).

Advanced Case-based Reasoning

Things don't always go linearly from Retrieval -> Adaptation -> Eval -> Storage.

  • Adaptation created, but eval failed. Go back and adapt again.
  • Modified adaptation failed, try retrieving a different solution.
  • Retrieved case perfectly matches current problem; No adaptation needed.
  • We can store failures in case memory, which can help anticipate future failures that might occur with new problems.

RPM and Case-based Reasoning

  • How is this different from learning by recording cases?
  • What's the adaptation phase?
  • How are you adapting past solutions to the new problem?
  • How is evaluation done?
  • Are you going to record the cases your agent encounters as they're being solved, or will it be equipped with past cases beforehand?

Cognitive Connection - Case-based Reasoning

  • Analogical reasoning is a core part of human cognition.
  • Depends upon a spectrum of similarity. At one end are problems that are exactly the same as previous ones; at the other are problems that are completely novel. The middle of this spectrum is the most common in human cognition: we retrieve past cases, tweak them, then apply them to the problem.

Lesson 10 - Incremental Concept Learning

  • Learning is often incremental. We get one label at a time, and try to generalize based on limited data.
  • Most learning we do doesn't rely on millions of examples, nor do we have labels for the correct answer (ie. supervised learning).
  • Different from case-based reasoning, because we don't have multiple examples stored in their raw form in memory. In this case, we're abstracting concepts from them.
  • The number of examples we are given is very small.
  • When abstracting from small set of examples, knowing what to abstract becomes a very hard problem. There's a tendency to often overgeneralize or to overspecialize.
    • Overgeneralization: Seeing a furry dog, deduce any furry animal is a dog.
    • Overspecialization: Seeing a black cat, deduce all cats must have black fur.
  • We variabilize our current understanding to arrive at a more abstract notion of the concept.

inclearn

Heuristics for specializing and generalizing

incheur

droplink

  • If an agent sees a new, positive example of a concept (e.g. "arch") that has fewer "links" than its current conceptualization of it, it can "drop" these links and make a new conceptualization (e.g. "new concept").
  • Useful when structure of original concept definition and new example have a lot of overlap.

requirelink

  • If an agent sees a new, negative example of a concept that has fewer links than its current conceptualization, it can learn that the concept requires the links that exist in the current concept but are missing from the new example.

forbidlink

  • If an agent sees a new, negative example of a concept that has additional links compared to its current conceptualization, it can learn that the concept forbids the links that appear in the new example but not in the current concept.

Enlarge-set heuristic - Generalization of abstract features

enlargeset

  • If an agent sees a new, positive example of a concept that has a different entity in one of its current roles (e.g. "brick" vs "wedge"), it can add an "either-or" condition to that role to accommodate both.

Climb-tree heuristic - Generalization with background knowledge

climbtree

  • If the agent has background knowledge that two distinct entities can be generalized (e.g. "brick" or "wedge" can both be generalized to "block"), it can use that to generalize the current concept's entity to the more general one.

Close-interval - Generalization using new example

  • If the agent sees the same entity with a different attribute value (e.g. a new block that is red whereas all the others were green), it can expand the range of values that count as positive examples of the concept (see the sketch below).
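
A minimal sketch of the drop-link, require-link, and forbid-link heuristics, representing a concept as a set of links; the arch-style links are illustrative, and enlarge-set/climb-tree are omitted:

# A concept is a set of links (relations); "must" and "must_not" record constraints.
concept = {
    "links": {("brick1", "supports", "top"), ("brick2", "supports", "top"),
              ("brick1", "left-of", "brick2")},
    "must": set(),
    "must_not": set(),
}

def learn(concept, example_links, positive):
    if positive:
        # drop-link: keep only the links shared with the positive example.
        concept["links"] &= example_links
    else:
        missing = concept["links"] - example_links
        extra = example_links - concept["links"]
        if missing:
            concept["must"] |= missing        # require-link
        if extra:
            concept["must_not"] |= extra      # forbid-link
    return concept

# Negative example where the bricks touch: forbid that extra link.
learn(concept, concept["links"] | {("brick1", "touches", "brick2")}, positive=False)
print(concept["must_not"])   # {('brick1', 'touches', 'brick2')}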

Gotchas - Non-examples

Remember, if a new negative example already doesn't fit our concept, we don't have to do anything. For the example below, we don't have to adjust our concept.

notfoo

Aside: Visualizing incremental learning

  • We can view the concept as having borders that constantly change as new positive and negative examples appear. It will change its border shape to include/exclude new examples.

Final concept of a foo: semantic network

finalconcept

Takeaways:

  • Incremental concept learning is dependent on the agent's background knowledge. We can't generalize if no background knowledge exists.
  • ICL is different from ML because it doesn't require a large number of examples.

ICL and RPM

  • What are the concepts you're trying to learn?
  • ICL at the level of individual problem, or the whole problem set?

Cognitive Connection and ICL

  • Humans rarely encounter millions of examples to learn (like ML)
  • We abstract our concepts by iterating on limited examples.

Lesson 11 - Classification

Look at other notes PDF

Lesson 12 - Logic

Look at other notes PDF

Lesson 13 ~ 25 (Planning ~ Advanced Topics)

Lesson 13 - Planning

Block Problem Revisited

  • Approaches like means-ends analysis are weak methods. There are more systematic ways to approach this.

Setting Goals

States

  • In order to specify the goal fully, we need to specify not only the goal state but also the initial state.
  • In general, if there's information about the initial state, those assertions about the world can be propagated to subsequent states, provided that no operator applied in between negates or deletes any of these assertions.

Operators

  • All conditions in the precondition are positive; the postcondition may contain negative literals.
    • This convention stems from STRIPS, an early AI planner developed in the late 60s at the Stanford Research Institute. It made computation more efficient.
  • An operator can be applied if and only if its preconditions are true in the current state.

Planning and State Spaces

  • Knowledge about how to do the operator selection - this knowledge is tacit and sometimes called control knowledge. Goals provide us with control knowledge, ie. deciding how to select between different operators.
  • The problem becomes how to do operator selection (which is the same as "how to do action selection?").

planning

  • Plan: The initial state followed by a sequence of successor states.
  • Operators are placed between the two states.
  • Each operator contains a precondition and a postcondition (see the sketch below).
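
A minimal sketch of this operator representation and its application, using the block world (names illustrative):

# State: a set of assertions; preconditions are all positive literals,
# postconditions either add or delete assertions.
state = {"On(C, A)", "On(A, Table)", "On(B, Table)", "Clear(C)", "Clear(B)"}

operator = {
    "name": "Move(C, Table)",
    "preconditions": {"Clear(C)", "On(C, A)"},    # all positive
    "add": {"On(C, Table)", "Clear(A)"},          # postconditions asserted
    "delete": {"On(C, A)"},                       # postconditions negated
}

def apply(operator, state):
    # The operator applies if and only if every precondition holds in the state.
    if not operator["preconditions"] <= state:
        return None
    return (state - operator["delete"]) | operator["add"]

print(apply(operator, state))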

Partial Order Planning

Problem: MEA and conflicting goals

  • MEA can't handle cases where there are conflicting goals. This requires more pre-planning.
  • Partial order planning (POP) allows us to forecast conflicting goals and plan accordingly to avoid them.
  • Example: Buy a couch, have it assembled, it doesn't fit through the entrance, disassemble it, fit it through the entrance, reassemble it. The plan for assembling the couch clobbers the final goal of getting the couch inside the home.

Partial Order Planning ideas:

  • In POP, goal becomes the control knowledge.
  • Work backwards: find which operator has a postcondition that meets the end goal.
  • If the operator reaching the goal has preconditions, we must find another operator whose postcondition produces that precondition. These become subgoals.

Example of partial order planning (right side):

partial

Control knowledge: Painted(Ceiling)

  1. climb-ladder operator
  2. paint-ceiling operator

Detecting Conflicts

For each precondition in current plan:

  • If precondition for an operator in the current plan is clobbered by a state in another plan:
  • Promote current plan's goal above other plan's goal

  • In example below, climb-ladder operator conflicts with paint-ladder operator's postcondition (Dry(Ladder) vs !Dry(Ladder)).

  • To avoid conflict, we must promote the Painted(Ceiling) goal over the Painted(Ladder) goal.
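
A minimal sketch of detecting such a clobbered precondition between two partial plans (the operator structure follows the planning sketch above; the resolution by promotion is just printed):

ladder_plan = [{"name": "paint-ladder", "add": {"Painted(Ladder)"},
                "delete": {"Dry(Ladder)"}}]
ceiling_plan = [{"name": "climb-ladder", "preconditions": {"Dry(Ladder)"}},
                {"name": "paint-ceiling", "preconditions": {"On(Ladder)"}}]

def clobbers(other_plan, plan):
    """Yield preconditions of `plan` that some operator in `other_plan` deletes."""
    deleted = set().union(*(op.get("delete", set()) for op in other_plan))
    for op in plan:
        for pre in op.get("preconditions", set()):
            if pre in deleted:
                yield op["name"], pre

for op_name, pre in clobbers(ladder_plan, ceiling_plan):
    # Resolution: promote the ceiling goal so it finishes before the ladder plan runs.
    print(f"{op_name} needs {pre}, clobbered by the ladder plan -> promote the ceiling goal")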

Open Preconditions

  • When there are gaps between the postcondition of one operator and the precondition of another, these unsatisfied preconditions are called open preconditions.
  • We need to fill these by finding an operator that gets from the postcondition state to the precondition state.
  • In the example below, to get from the paint-ceiling postcondition to the paint-ladder precondition, we need to fill the open precondition with a descend-ladder operator.

finalplan

Multiple Agents

What does all this tell us about intelligence?

  1. Knowledge is not just about the world, knowledge is also control knowledge that is often tacit. Helps us select between operators.
  2. Goals provide control knowledge. Goals can be used to decide between different operators. We select an operator that helps us move closer to the goal.
  3. POP is an interaction between several different kinds of agents. Each agent represents a small "micro-ability".
    • One agent plans for each of the goals independently
    • Another agent detects conflict
    • Another agent resolves these conflicts
    • Minsky's "society of mind": Society of intelligent agents that work together to produce complex behavior. Each agent is simple.
    • Comparatively, humans are able to carry out all of these procedures almost instantaneously. An AI, however, must make all of these processes explicit: it must specify each operator, its pre/postconditions, each goal, etc.

Block World Planning

bw1

  • Each subgoal has an ordered set of operators.
  • Some subgoals' postconditions clobber another subgoal's preconditions.
  • Using pre/postconditions, we can order the subgoals to avoid conflict.

Hierarchical Task Network Planning

Idea: Group lower-level operations into "macro" operators in order to reduce the problem space. Then, we can expand those macro operators back into the individual move operations.

Hierarchical Decomposition

In the block world problem, we can create 2 macro operators: unstack, to put all blocks onto the table, and stack-ascending, to place blocks in ascending order.

hd

Each macro operator contains pre/postconditions and a set of low-level operators.

hd2

RPM and Planning

  • What are our initial/final states?
  • What are the operators that allow the transition between them?
  • How do you select those operators?
  • What conflicts are possible when trying to solve the problem, how to detect them beforehand and avoid them?
  • Two levels: the agent plans how to address any new problem, or the agent discerns an underlying plan behind a new problem.

Cognitive Connection - Planning

  • Cognitive agents have multiple goals that interact.
  • Avoiding conflicting goals is a central process for achieving multiple goals at the same time.

Lesson 14 - Understanding

Thematic Role Systems

TRS are the most structured type of frame representation.

Types of textual analysis:

  1. Lexical - categorize each word into different categories (POS tagging; Ashok: proper noun, made: verb)
  2. Syntactic - Analyze structure of a sentence (noun phrase, verb phrase, etc)
  3. Semantic - Analyze meaning of the sentence (Ashok: agent, David: beneficiary, griddle: instrument)
  • For KBAI, semantic analysis is at the forefront, with lexical and syntactic analysis serving as support.
  • Meaning is validated by the inferences we can draw from it. You understand the meaning of the sentence if you can draw the right inferences.

Semantic analysis of the sentence:

Ashok made pancakes for David with a griddle

This can be captured in a frame, with a focus on the verb:

thematic_role: {
  verb: make
  agent: Ashok
  beneficiary: David
  thematic object: pancakes
  instrument: griddle
}

Another example:

David went to the meeting with Ashok by car

Frame:

thematic_role: {
  verb: go
  agent: David
  coagent: Ashok
  destination: meeting
  conveyance: car
}

How meaning is extracted out of a sentence

Constraints

An intelligent agent may use the structure of the sentence to make sense of the story.

Prepositional Constraints

Preposition | Thematic Roles
by          | agent, conveyance, location
for         | beneficiary, duration
from        | source
to          | destination
with        | coagent, instrument

Resolving ambiguity in prepositions

nlu1

  • The "by" preposition constrains the thematic role of the associated noun, cutting the options down to 3.
  • But "by" still has 3 thematic roles - how do we know what "by" is referring to?
  • We can use an ontology of the world. It becomes the vocabulary of the knowledge representation. This ontology can classify nouns into leaf nodes.
  • We can narrow down the meaning by matching the associated noun to a leaf node. This lets us whittle the options down to 1 (see the sketch below).
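
A minimal sketch of this constraint-based disambiguation; the preposition table and a tiny hypothetical ontology each cut down the candidate roles:

PREPOSITION_ROLES = {
    "by": {"agent", "conveyance", "location"},
    "with": {"coagent", "instrument"},
    "for": {"beneficiary", "duration"},
}
# A tiny made-up ontology: the leaf category each noun belongs to, and the roles
# that category can play.
NOUN_CATEGORY = {"train": "vehicle", "Ashok": "person", "lake": "place"}
CATEGORY_ROLES = {"vehicle": {"conveyance"},
                  "person": {"agent", "coagent", "beneficiary"},
                  "place": {"location"}}

def thematic_role(preposition, noun):
    candidates = PREPOSITION_ROLES[preposition]            # constraint from the preposition
    candidates &= CATEGORY_ROLES[NOUN_CATEGORY[noun]]      # constraint from the ontology
    return candidates

print(thematic_role("by", "train"))   # {'conveyance'}
print(thematic_role("by", "Ashok"))   # {'agent'}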

Resolving ambiguity in verbs

Verbs can also be ambiguous. Given "I took the candy from the baby" which verb meaning is it referring to?

nlu2

  • Each frame-like representation specifies the thematic roles that go with a verb.
  • Using background knowledge, "candy" eliminates a number of options (frames that include medicine, prize, target agents, etc)
  • "from the baby" implies a source. This eliminates frames that don't contain a source attribute.
  • Hence, only 1 verb meaning for "take" remains: "to steal".

nlu3

Earthquake problem

  • "kill" has two meanings.
  • The 1st sentence contains a prepositional association with a victim, so it fits the first definition.
  • The 2nd sentence contains a prepositional association with an object, so it fits the second definition.

Limitations

  • These rule-driven heuristics mean that as more words are considered, the number of rules leads to a combinatorial explosion.

Understanding and RPM

  • A frame could dictate certain expected values for different transformations. The degree of fit to those expectations can dictate the likely accuracy of a particular answer.
  • How do you first understand what's happening in the problem?
  • How do you fit that into a thematic role frame representation?
  • How would that representation help you solve the problem?

Cognitive Connection

How do we make sense of the world, which involves more than text (audio, visual, taste, touch)? There are 3 sources of power:

  1. Exploit constraints about the world.
  2. Use structured knowledge representations. These representations in memory capture not only knowledge but also its organization.
  3. Low-level, bottom-up processing helps us activate these knowledge structures from memory. Once activated, these structures generate expectations that make the processing top-down. Generating these expectations is important.

Lesson 15 - Common Sense Reasoning

Primitive Actions

NLU issues:

  • Homonyms: the same word used with different meanings ("ate away", "eat the losses")
  • Synonyms: words that sound different but have the same meaning (eat, dined, ate)

Synonyms can be collapsed by using primitive actions, a representation that captures many forms of similar actions.

14 types of primitive actions:

pm

  • Includes percepts as well (hear, feel).
  • Creates an ontology of actions. Ontology: a set of concepts and categories in a subject area or domain that shows their properties and the relations between them.
  • Each primitive action is a frame with relevant slots and fillers.

How to interpret sentences where subject and direct object are the same? "I am moving across the room"

  • Use the "Move-object" primitive action where object being moved and object moving it are the same.

Thematic Roles and Primitive Actions

  • Example of a "structured" knowledge representation.
    • predicates and logic are atoms of knowledge rep, thematic roles are the molecule-level knowledge reps.
  • Each "action frame" contains a primitive action slot
  • There are rules for filling other slots (agent, object)
    • e.g. if there is a noun after the primitive action and it is inanimate, assign it to the object slot.

Error correction if assigning words to wrong slot:

  1. Try to fill the slots of another frame (e.g. a question frame); if it struggles to answer, change how the sentence is slotted.
  2. Force the (mistaken) interpretation into another frame; if it struggles, that interpretation doesn't work.
  3. This helps with words that can fill multiple slots (bat: verb or noun).

Implied Actions

Example: "John fertilized the field",

  • Ambiguous as to what John put on the field; the action is implied.
  • Better formed as "John put fertilizer on the field"
  • Adding the verb, we can then fill slots as needed.

Example: "Bill shot Bob"

  • Reworded using primitive: "Bill propelled a bullet into Bob"
  • Makes the ambiguous meaning of "shot" (gun? camera?) into an unambiguous form.

Frame
  primitive: propel
  agent: Bill
  object: bullet
  destination: Bob

Linking multiple action frames:

Example: "Ashok put the wedge on the block."

  • Can make implicit actions explicit.

Frame
  primitive: move-object
  agent: Ashok
  object: wedge
  destination: block

Frame
  primitive: move-body-part
  object: fingers
  destination: closed

Frame
  primitive: move-body-part
  object: hand

Changed states

Example: "Ashok enjoyed eating the frog"

Frame
  primitive: ingest
  agent: Ashok
  object: frog
  result: State-Change Frame

State-Change Frame
  object: Ashok's mood
  destination: happy

Implied Actions and State Changes

Example: "Susan comforted Jing"

Frame
  primitive: do
  agent: Susan
  result: State-Change Frame

State-Change Frame
  object: Jing's mood
  destination: happy

Actions and Resultant Actions

Example: "Maria told Ben to throw the ball."

Frame
  primitive: speak
  agent: Maria
  result: Frame2

Frame2
  primitive: propel
  agent: Ben
  object: ball

RPM and Common-Sense Reasoning

  • Connect the primitive actions of agents to primitive transformations in these matrices.
  • What would primitive transformations be? What would the action frames involved in each transformation look like?
  • Connect individual primitive actions to higher level transformations. What would the higher level transformations look like?

Wrap-up

  • Primitive actions are like primitives in programming (the simplest things we can interpret out of the world)
  • These PA's are composed into bigger actions to build an ontology of the world.
  • Primitives can cause state changes, which let us predict the effect or cause of certain events.

Lesson 16 - Scripts

Hierarchy: Frames -> Understanding -> Common Sense Reasoning -> Scripts

  • Scripts are the culmination of frames, understanding et al.

Story Understanding for AI Agents

For an agent to understand stories, it needs the ability to take in disparate events, apply causal inference, and generate expectations to make sense of what is going on.

Definition of Scripts

Scripts are a causally coherent set of events:

  1. Each event sets off, or causes, the next event.
  2. The causal connections between events make sense.
  3. The parts are actions or scenes in the world.

Parts of a script

  1. Entry conditions: conditions necessary to execute the script.
  2. Results: Conditions that will be true after the script has taken place.
  3. Props: Objects involved in the execution of the script.
  4. Roles: Agents involved in the execution of the script.
  5. Track: Variations of "subclasses" of the particular script.
  6. Scenes: The sequence of events that occur during the execution of the script.

Example of parts in a restaurant script (in order of parts):

  1. Customer is hungry
  2. Customer is not hungry, restaurant makes money.
  3. Food, money.
  4. Customer, waiter, cook.
  5. Fast food situation, coffeehouse situation, buffet situation, etc.
  6. Enter restaurant -> order food -> eat food -> pay check -> leave

In frame form:

Script
  script: restaurant
  track: formal dining
  props: tables, menu, check, money, F = food, P = place
  roles: S = customer, W = waiter, C = cook, M = cashier, O = owner
  entry: S is hungry, S has money
  result: S has less money, O has more money, etc
  scenes: Scene1, Scene2, etc.

Scene 1 - Entering

Action Frame - enters the restaurant
  prim: move-object
  agent: S
  object: S
  destination: P

Action Frame - sees a table
  prim: see
  agent: S
  object: table

Action Frame - decides to sit at the table
  prim: conclude
  agent: S
  result: Action Frame (S, move-object, table)

...

Form vs Content

  • Each script can be understood as a class that can be instantiated.
  • An AI agent can have a multitude of scripts in its memory. Certain triggers evoke a certain script, which tell it what actions to take, and what expectations to have from taking those actions.
  • Primitive actions provide the fundamental units of frames. These frames are composed together to create a script.
  • Knowledge structures can be composed out of other knowledge structures.

Using a Script to Generate Expectations

A script is a sequence of actions with expected outcomes. Expectation violations occur when:

  1. Expected action did not occur (waiter doesn't come to the table)
  2. Things happen in a sequence that we were not expecting (get a check before ordering).
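
A minimal sketch of using a script's scene order to generate expectations and flag violations (the scene names are illustrative):

restaurant_script = ["enter", "order", "eat", "pay", "leave"]

def check_expectations(observed, script=restaurant_script):
    pending = list(script)
    for event in observed:
        if event not in pending:
            yield f"unexpected event: {event}"
        elif event != pending[0]:
            yield f"expected {pending[0]!r} before {event!r}"   # out-of-order scene
            pending.remove(event)
        else:
            pending.pop(0)
    for scene in pending:
        yield f"expected {scene!r} but it never happened"

print(list(check_expectations(["enter", "pay", "order"])))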

Tracks

Tracks are different permutations of a script. In a restaurant script this can be the location, for example (coffee shop, fast food, formal dining, etc).

Example of a track

Restaurant track
  ├─ Coffeehouse script
  ├─ Fast food script
  ├─ Casual dining script
  └─ Formal dining script

Topics that help an agent learn a script

Part I:

  1. Semantic networks: Knowledge representation
  2. Frames: Same as semantic networks
    • Storing information necessary to construct a thorough script.
  3. Incremental concept learning: The AI agent encounters the world and develops a categorization scheme to determine which script to run.
  4. Planning: Agent has initial/goal state and figures out how to navigate between the two. Once agent figures out how to navigate between the two states, that becomes a script that could be transferred to new, similar situations without having to completely replan it from scratch.
  5. Commonsense reasoning: Gives AI agent a language of primitive actions in which to learn the script in the first place
  6. Production systems and learning by recording cases don't apply as much because they work at a very low level of abstraction.

Part II:

  1. Problem reduction: Script breaks down the overall scene into smaller scenes, and furthermore smaller actions.
  2. Classification: Helps identify which script to execute in a given situation based on what we see.
  3. Logic: You could use formal logic to consider whether a plan has already been executed.
  4. Understanding: Helps us disambiguate similar events in different situations.
  5. Generate & Test / MEA don't fit because they are problem-solving methods.
  6. Case-based reasoning keeps things at the level of individual cases that can be adapted to our current problem, whereas scripts serve as an abstraction over a number of cases.

RPM and Scripts

  • Might have scripts for different broad categories of RPM problems.
  • What would your entry conditions be for each script? What would your tracks/scenes be?

Wrap-up

  • Scripts are causally coherent series of events.
  • Provides a prototype for what to expect in certain situations.
  • Form of the general script shows us the form of the overall prototype for the situation.
  • A specific instantiation of the script specifies the content.

In short: scripts help us make sense of complex, repeated events with relative ease and generate expectations about the world around us.

Cognitive Connection - Scripts

  • Brains work by doing quick bottom-up processing followed by top-down processing with general expectations of the world. Then, we act on it.
  • When these expectations fail, we are surprised/amused/angered.
  • Do we keep scripts in our head or generate them at runtime?
  • Scripts are similar to mental models.
  • Scripts are culturally specific (e.g. leaving a tip in US vs Japan)

Lesson 17 - Explanation-based Learning

Hierarchy: Learning by Recording Cases -> Case-based Reasoning -> Explanation-based Learning -> Analogical Reasoning

  • Similar to incremental learning - Learning from small bits of data.
  • Similar to chunking - Create rule based on small number of previous episodes.
  • Related to creativity: choosing a solution not from previous memory but by generating a new one (carrying soup to the kitchen but there's no bowl... use a pitcher).

Concept Space

AI problem: What prior knowledge should the agent have so that it can build an explanation?

Prior Knowledge

priorknow priorknow2

  • Knowledge representation characterizing the example object.
  • Top line is the explanation
  • Yellow arrow shows causality

Abstraction

abstraction

  • AI agent abstracts knowledge from particular example
  • Agent only abstracts objects that are causally related (e.g. bowl). Simple features with no causal relationships can be dropped.

Transfer

transfer

  • Agent uses abstractions from multiple objects to generate an explanation of another object (e.g. Glass, from Bowl and Briefcase)
  • Can continue to chain this to get to the final explanation (e.g. combine "Object is stable" with Glass to reach Cup).
  • Agent actually works backwards, from explanation downwards. Keeps trying to prove explanation one level down by using lower-level characterizations.
    • For definition of a cup, it knows that in order for an object to be a cup, it must be stable and must enable drinking.
    • Hits memory to find examples of objects that are stable (brick), and enables drinking (glass)

Tying it back to previous concepts:

  • Uses problem reduction: decompose the problem into smaller and simpler problems.
  • Uses planning. Planning involves open preconditions (preconditions of a goal that still need to be satisfied). We use these open preconditions to select operators.
    • When we want to prove an object is a cup, there are two open preconditions (stable, enables drinking)

Exercises

  • Explanation-based learning fails if the agent is unable to connect similar but not equal concepts together (protects against heat vs limits heat transfer)
  • Strategy: In order to accomplish a goal, what is the minimum amount of information required to solve it?
  • Depending on the background knowledge available, the agent will opportunistically build the right kind of causal proofs.
  • Sometimes called "Speed-up learning"

Cognitive connection - Explanation-based learning

  • Humans have a hard time explaining memory processes.
  • Difficult to explain physical actions.
  • Explanations are not necessarily 1:1 with the logic used to make a decision/action.
  • The act of explanation can interfere with the reasoning process.
  • Acceptance in society will require "explainable" AI.

Lesson 18 - Analogical Reasoning

Hierarchy: Learning by Recording Cases -> Case-based Reasoning -> Explanation-based Learning -> Analogical Reasoning

Analogical reasoning helps when dealing with finding similarities in cross-domain spaces.

Tumor-problem vs ruthless king - Different domains but similar solutions (multiple weak lasers vs multiple small armies)

Spectrum of similarity

  • Ranges between "More similar" <---> "Less similar"
  • Recording cases / CBR / analogical reasoning sits in different parts of the spectrum.

similarity

Processes of analogical reasoning

  • Analogical reasoning allows us to look at new problems in terms of familiar problems.
  • It also allows us to transfer knowledge from familiar problems to new problems.

Analogical reasoning vs CBR

Analogical reasoning

  • Retrieval -> mapping -> transfer -> evaluation -> storage

CBR

  • Retrieval -> adaptation -> evaluation -> storage

  • Both share retrieval, eval and storage.

  • In AR, one needs to map concepts from different domains and transfer it to target domain before eval.

Analogical retrieval

Q: What criteria should be used to decide on the similarity between the target problem and the source case, given that they come from different domains?

A: Distinguish between superficial similarity and deep similarity between two situations.

  • superficial: features/counts of objects
  • deep: relationships between objects, or relationships between relationships. higher order relationships.
    • Two situations are deemed similar if the relationships between the objects in them are similar.

In general, as we go from unary relationships to binary relationships to tertiary relationships to even higher order relationships, the similarity becomes deeper and deeper.

3 Types of Similarity

  1. Semantic: Conceptual similarity between target and source case.
    • Woman climbing up a ladder is conceptually similar to a woman climbing up a step ladder.
  2. Pragmatic: Similarity of external factors, like goals. Similarity external to the representation.
    • Physician killing the tumor as to soldiers raiding a castle.
  3. Structural: Similarity between the representational structures.

Superficial vs Deep similarity

analog1

Why people have different analogical reasoning

analog2

  • Arrows note causality

Analogical mapping

Physician destroying a tumor as to soldiers raiding a castle analogy:

  • unary: patient as to a castle
  • binary: physician's laser beam as to army's soldiers
  • tertiary: between the goal and the resource there is an obstacle (human tissue vs castle walls)
  • king and patient are superficially similar but not deep

Deep and rich models are essential for deciding how to align analogies.

Analogical Transfer

Re: Physician vs army analogy

  • Given problem (patient with tumor), analogical retrieval gave us the source case (army capturing king).
  • Given model of two, analogical mapping gives us the corresponding elements (tumor as to king, laser as to army).
  • Analogical transfer gives us the ability to transfer solutions.
    • Because we know we must divide the army to raid the castle, and we have the corresponding elements, we can make the analogy to divide the laser beams into smaller ones to destroy the tumor.

Design by Analogy Retrieval

Designing a robot that walks on water, inspired by a basilisk lizard:

lizard

  • The model is called a structure-behavior-function model.
  • The top shows the initial state and its goal state.

Mapping & Transfer

How to adapt robot that can walk on ground to something that can walk on water?

walkwater

  1. Imitate structure model
  2. From structure model, transform info to create behavior model.
  3. Finally, transfer info from the behavior model to create the function model.

Method:

# Mapping + Transfer can loop
Retrieval -> Mapping -> Transfer -> Eval -> Storage

Adapting from multiple sources

squid

  • Multiple retrievals occur to get designs for both a copepod and a squid, each having its own design advantages we want to incorporate.

Method:

# Retrieval + Mapping + Transfer can loop
Retrieval -> Mapping -> Transfer -> Eval -> Storage

Evaluation & Storage

Evaluation of a swimming bot:

  • How: Simulation, or build physical prototype
  • If success: Encapsulate solution as a case and store it in memory
  • If fail: revisit transfer and see whether we want to transfer other characteristics, map the source case and target problem differently, or retrieve a completely different model.

Advanced Open Issues in Analogy

  • Common Vocab: Since analogical reasoning uses cross-domain transfer, do we need a common vocabulary across all the domains? If same vocab isn't used, what alternatives are there?
  • Abstraction & transformation: Analogical reasoning entails problem abstraction and transformation. The problem is not always fixed; often the agent needs to abstract and transform the problem to retrieve the source case.
  • Compound and compositional analogies. Often we retrieve from several cases, and we transfer knowledge from several cases. Compositional analogy (comparing corporations, human level, management level, org-level)
  • Visual-spatial analogies.
  • Conceptual combination - learning a new concept by combining parts of familiar concepts. e.g. combining solar system and nucleus to get new idea of nucleus.

Recap

  • Similarity: How can we design agents that can do the same kind of similarity evaluation as humans?
  • Analogical retrieval: How can we structure our knowledge to facilitate this kind of retrieval? How can a system know that, given a model of the atom, it should retrieve a model of the solar system?
  • Analogical mapping: figuring out which parts of different systems correspond (troops correspond to lasers, the king to the tumor)
  • Analogical transfer: moving knowledge from the concept we know to the concept we don’t. e.g. solar system to atom
  • Evaluation and storage: How to evaluate analogies? e.g. simulation, physical modeling. How to store analogies.
  • Design by analogy: Use something that we know a lot about to infor our design of something new.

Cognitive connection - Analogy

  • We use metaphor all the time. "We've grown apart" - spatial metaphor.
  • Rubin's test of intelligence - common intelligence test based entirely on analogies.

Lesson 19 - Version Spaces

Learning hierarchy

Learning by Recording Cases -> Incremental Concept learning  -> Version Spaces | Classification

Problem:

  • Agents can either overgeneralize (the concept learned is too general to be useful) or undergeneralize (the concept learned is too specific).

Abstract Version Spaces

Version spaces logic:

if is_example_of_concept
  generalize the 'specific' model
else
  specialize the 'general' model

versionspace

  • Specific model: Positive samples generalize specific descriptions
  • General model: Negative samples specialize general descriptions
  • Prune models that no longer match the data
  • Positive and negative samples force models to converge - the right generalization of the concept for the given examples.

Unlike incremental concept learning, which requires successive examples that are identical except for one feature, version spaces can handle differences in multiple features at once with each successive example.

Visualizing version spaces

  • Incremental concept learning nudges a single concept along the specific-general spectrum.
  • Version spaces place 2 models and keep nudging each (the specific model moves toward general for positive examples, the general model moves toward specific for negative examples) until they meet somewhere in the spectrum.
  • Concepts and models are similar (representations of the world)

Example: Food allergies

allergies

Case objects:

  1. Visit1: {restaurant: Sam's, meal: breakfast, day: Friday, cost: cheap}
  2. Visit2: {restaurant: Kim's, meal: lunch, day: Friday, cost: expensive}
  3. Visit3: {restaurant: Sam's, meal: lunch, day: Friday, cost: expensive}
  4. Visit4: {restaurant: Kim's, meal: lunch, day: Friday, cost: expensive}
  5. Visit5: {restaurant: Kim's, meal: lunch, day: Friday, cost: expensive}

Convergence:

final

  • Convergence occurs if there are enough examples.

Version Space Algorithm

if is_positive(example):
  generalize() all specific models to include it
  prune() away general models that cannot include it

if is_negative(example):
  specialize() all general models to exclude it
  prune() away specific models that cannot exclude it

prune() away any models subsumed by other models
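
A minimal runnable sketch of this algorithm for flat feature dictionaries like the restaurant visits above. It is simplified (a single specific model, "?" meaning "any value", first example assumed positive) and is not the lecture's exact implementation.

ANY = "?"

def covers(model, example):
    """True if every constrained feature of the model matches the example."""
    return all(v == ANY or example[f] == v for f, v in model.items())

def generalize(specific, example):
    """Relax the specific model just enough to cover a new positive example."""
    return {f: (v if example[f] == v else ANY) for f, v in specific.items()}

def specialize(general, specific, example):
    """Minimal specializations of a general model that exclude a negative example."""
    children = []
    for f, v in general.items():
        if v == ANY and specific[f] != example[f]:
            child = dict(general)
            child[f] = specific[f]           # pin this feature to the known positive value
            children.append(child)
    return children

def version_space(labelled_examples):
    """labelled_examples: list of (feature_dict, is_positive); first example must be positive."""
    (first, _), rest = labelled_examples[0], labelled_examples[1:]
    specific = dict(first)
    general = [{f: ANY for f in first}]
    for example, is_positive in rest:
        if is_positive:
            specific = generalize(specific, example)
            general = [g for g in general if covers(g, specific)]     # prune
        else:
            general = [child for g in general
                       for child in (specialize(g, specific, example)
                                     if covers(g, example) else [g])]
    return specific, general

With the restaurant visits above, once each visit is labelled positive or negative, the specific and general models converge as in the diagrams that follow.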

Specialization example:

kimg

  • In the lecture's implementation, there's a single pathway coming from the most specialized concept model. Hence, this implementation doesn't need to prune away models subsumed by other models.

Convergence:

kims2

Identification Trees

  • In CBR, we used discrimination tree learning, where we split a node on whatever feature discriminates the current cases. DTL provides no guarantee of optimality (i.e. at retrieval time, the tree being traversed may not be the most efficient possible tree).
  • Inductive decision tree learning (splitting on the most discriminating feature first) creates a more efficient tree structure.
    • The best split is on a feature whose values (e.g. "brown" for hair color) lead to subsets containing only positive or only negative target labels (e.g. sunburned); a small split-selection sketch appears at the end of this section.
  • Requires all the target labels to train on first, though.
  • When number of examples and features increase, the training becomes more computationally expensive and also more difficult to determine which feature to split on.

dt

  • Ideal solution, as this creates the most efficient decision tree

Less efficient trees are made if split using different features: dt

Takeaway:

  • Decision tree learning leads to more optimal classification trees. But need all the examples right up front.
  • Discrimination tree learning may lead to suboptimal trees, but you can learn incrementally.
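
As a concrete illustration of picking the split feature described above, here is a small sketch. The impurity measure and the sunburn-style data are invented for illustration, not taken from the lecture.

from collections import defaultdict

def split_impurity(examples, labels, feature):
    """How 'mixed' the subsets are after splitting on the feature (0 = perfectly homogeneous)."""
    groups = defaultdict(list)
    for ex, label in zip(examples, labels):
        groups[ex[feature]].append(label)
    impurity = 0
    for group in groups.values():
        majority = max(set(group), key=group.count)
        impurity += sum(1 for label in group if label != majority)
    return impurity

def best_split(examples, labels, features):
    return min(features, key=lambda f: split_impurity(examples, labels, f))

# Invented sunburn-style data: 'hair' separates the labels better than 'lotion'.
examples = [
    {"hair": "blonde", "lotion": "no"},
    {"hair": "blonde", "lotion": "yes"},
    {"hair": "brown",  "lotion": "no"},
    {"hair": "brown",  "lotion": "yes"},
    {"hair": "red",    "lotion": "no"},
    {"hair": "red",    "lotion": "yes"},
]
labels = ["burned", "ok", "ok", "ok", "burned", "burned"]
print(best_split(examples, labels, ["hair", "lotion"]))   # -> hair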

Wrap up

  • Version spaces: algorithm for converging onto an understanding of a concept, even in the absence of prior background knowledge or an intelligent teacher.
  • Algorithm: iteratively refine a general and specific model of a concept, until they converge down onto one another
  • Complex problems with version spaces
  • Limitations and question: what to do if there’s no single correct concept, or what to do in the absence of either positive or negative examples.
  • Identification trees: Another way to classify target labels based on features.

Lesson 20 - Constraint Propagation

Hierarchy

Constraint Propagation -> Visuospatial Reasoning

Definition

Constraint propagation is a method of inference that assigns values to variables characterizing a problem in such a way that some conditions (called constraints) are satisfied.

  • Some problems are characterized by a set of variables, and each variable may locally take on some values.
  • Strategy: How can we locally assign values to variables in a way that some global constraints are satisfied?
  • Sometimes multiple interpretations can simultaneously satisfy all constraints.
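
A minimal sketch of the core idea, assigning values to variables until all constraints hold, using plain backtracking. Real constraint propagation also prunes variable domains as assignments are made, which this sketch omits; the surface-orientation example is invented.

# Minimal sketch: assign values to variables so that every constraint is satisfied.
# Plain backtracking; real constraint propagation would also prune domains as it goes.

def solve(variables, domains, constraints, assignment=None):
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        # A constraint returns False only when it is violated by the values assigned so far.
        if all(constraint(candidate) for constraint in constraints):
            result = solve(variables, domains, constraints, candidate)
            if result is not None:
                return result
    return None

# Invented toy usage: two adjacent surfaces may not get the same orientation label.
variables = ["surface1", "surface2"]
domains = {"surface1": ["up", "left"], "surface2": ["up", "left"]}
constraints = [lambda a: "surface1" not in a or "surface2" not in a
               or a["surface1"] != a["surface2"]]
print(solve(variables, domains, constraints))   # e.g. {'surface1': 'up', 'surface2': 'left'}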

CP in shapes - Determining 3D in 2D:

  • Variables: surfaces, orientations
  • 3D shapes determined by constraints:
    • For trihedral objects, 3 surfaces meet at a particular point.
  • One can see multiple 3D shapes in a 2D image, and any interpretation is valid as long as it meets the constraints.

CP in NLP - Nonsensical sentence but correct grammar:

  • Constraints are defined with the rules of the English grammar.
  • As we assign a different POS to each word (variable), that assignment must satisfy the constraints of English grammar in order for us to accept the sentence as grammatically correct.
  • If the sentence is not grammatically correct, we won't be able to assign values to all the variables in a way that satisfies the constraints set by English grammar.

Pixels to 3D

Computer vision exercise as example of constraint propagation.

Pixels -> Lines -> Surfaces -> 3D object

Takeaways:

  • The process of generating a 3D object from pixels is a series of decomposed tasks (problem reduction).
  • Before implementing an algorithm, we want to understand how a task gets decomposed into subtasks.
  • Problem reduction is a general-purpose method for decomposing complex tasks into smaller tasks, regardless of the algorithm.

Line labelling

Line labelling is the task of grouping lines into surfaces; the orientations of the surfaces are identified via their perpendiculars. To do this, we rely on "junctions". Each junction type imposes a constraint (Y, W, T, and L junctions).

ll

Example:

  • Y-junctions: each line represents a fold (at least 2 surfaces meet)
  • L-junction: represents a "blade" where we can't infer that two surfaces are getting connected with each other.
  • W-junction: two of the connecting lines are blades and one is a fold
  • T-junction: the two lines meeting at 180 degrees are blades, and the remaining line is a fold

fl

  • The actual ontology of these constraints is more complex, however. Each constraint can have multiple interpretations that lead to different fold/blade combos.
  • The more complex the object, the more complex our ontology needs to be.
  • This is an example of an abduction task: coming up with the best interpretation of the given data.

Constraint propagation for NLP

Problem: How to decide if a sentence is grammatically correct.

nlp

Constraints:

Syntactic constraints (NP,VP) -> Lexical constraints (POS)
  1. We know an NP can expand into one or more adjectives followed by a noun/pronoun.
  2. Looking at the lexicon, we can identify what POS each word may be.
  3. When each word can be assigned a POS that satisfies these constraints, we can decide that the sentence is grammatically correct (a toy sketch follows this list).
  4. Once constraint propagation is done using syntactic/lexical constraints, we can then use this tree to support semantic analysis.
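
A toy sketch of the idea: try every POS assignment the lexicon allows and accept the sentence if some assignment satisfies the syntactic constraint. The lexicon, the regular-expression "grammar", and the sentence are invented for illustration.

import itertools
import re

lexicon = {                # each word may admit more than one part of speech
    "colorless": ["ADJ"],
    "green": ["ADJ", "NOUN"],
    "ideas": ["NOUN"],
    "sleep": ["VERB", "NOUN"],
    "furiously": ["ADV"],
}

# Toy syntactic constraint: zero or more adjectives, a noun, then a verb optionally followed by an adverb.
grammar = re.compile(r"(ADJ )*NOUN (VERB|VERB ADV)")

def grammatical(sentence):
    words = sentence.split()
    # Try every POS assignment the lexicon allows; accept if one satisfies the grammar.
    for tags in itertools.product(*(lexicon[w] for w in words)):
        if grammar.fullmatch(" ".join(tags)):
            return dict(zip(words, tags))
    return None

print(grammatical("colorless green ideas sleep furiously"))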

Assignment - Constraint Propagation

  • If constraint propagation leverages a library of primitive constraints, what will your constraints be for the final project?
  • How will you propagate those constraints into the image to understand it?
  • Once you’ve actually propagated those constraints, how will you actually use those inferences?
  • Will you abstract out propositional representations? Or will you stick to the visual reasoning and transfer the results directly?

Wrap-up

  • CP: Method of inference where we assign values to satisfy certain constraints. By doing so, we arrive at strong inferences about the problem.
  • Vision: Using prior knowledge of constraints to anticipate predictable shapes.
  • NLU: Prior knowledge of grammar and POS allow us to make sense of new sentences.
  • CP is much more complex than simplified examples given in lecture.

Cognitive Connection

  • Constraints can be numeric (waterfall of formulas in a spreadsheet)

Lesson 21 - Configuration

Hierarchy

First part of 'designing creativity' section

Configuration -> Diagnosis -> Design -> Creativity

Design

Design: Takes in needs/goals/functions, and returns specification that satisfies those needs/goals/functions.

  • Can be concrete or ephemeral.
  • Problem evolves as solution evolves
  • AI can do design. Goal is to have AI agents that can design other AI agents.

Problem & solution evolution: Agent may come up with a design, but it encounters problems with it

  • Example: The agent designs a computer processor that operates at a higher speed, but it overheats. The agent must now change the specification so that the processor is fast and doesn't overheat.

Configuration

Configuration: A problem-solving activity that assigns values to variables to satisfy constraints.

  • Configuration is the most common, simple type of design.
  • Problem remains fixed, unlike complex design where the problem specification can evolve as the solution evolves.
  • In configuration, you already know all components. The task is to figure out an arrangement that will satisfy constraints.
  • Examples:
    1. Configure existing furniture in a room to meet specifications.
    2. Configure existing camera features (focus, flash, aperture) to take a good photo.

Configuration Process

  1. Specifications
    • Can be all the constraints on a configuration problem (e.g. constraints on designing a basement layout)
  2. Arrangement model
    • Output of specifications.
    • Arrangement of all the components. Components are already known.
  3. Design plan
    • Starts with the most abstract and partial solutions, then iterates toward something more concrete.
    • Each plan specifies a subset of all the variables (e.g. basement design, first floor design, etc).
  4. Abstraction hierarchy
    • Diagrammatic representation of the plans from most abstract to most concrete.
  5. Component hierarchy
    • All components are already known, but we may be able to further specify these components (ie. need TV, but we can pick brand, size, etc)
  6. Relationship
    • Each step of the process is 2-way.
    • Specifications guide configuration, but the configuration can also influence specifications
    • (IMPORTANT) Major difference between design and problem-solving is that in problem solving, the problem remains fixed when we come up with a solution. In design, the problem evolves as the solution evolves.

config

Example: Representing a chair

chari

Exercise: Representing a chair

chairex

Steps to solve:

  1. Write out POC design that fulfills initial constraints.
  2. Use a "minimize cost" heuristic (see the sketch after this list). Reduce costs by:
    • Removing the chair arms from the design.
    • Fulfilling the next most important component (chair legs).
    • Given the leftover budget, designing the chair back.
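
A hypothetical sketch of configuration as value assignment under a cost constraint, using the greedy "minimize cost" heuristic from the steps above. The components, materials, prices, and priority order are invented.

# Hypothetical configuration sketch: the components are fixed, and we only assign
# values (a material per component) so the total cost fits the budget.
# Components, prices, and the priority order are invented for illustration.

materials = {"metal": 20, "wood": 10, "plastic": 5}     # cost per component

def configure_chair(budget, components=("seat", "legs", "back", "arms")):
    """Greedy 'minimize cost' heuristic: cheapest material first, and drop the
    lowest-priority components if the budget still doesn't stretch far enough."""
    plan, remaining = {}, budget
    for component in components:                         # listed in priority order
        affordable = [m for m, cost in materials.items() if cost <= remaining]
        if not affordable:
            break                                        # drop the remaining components
        choice = min(affordable, key=materials.get)
        plan[component] = choice
        remaining -= materials[choice]
    return plan, budget - remaining

print(configure_chair(budget=18))
# -> ({'seat': 'plastic', 'legs': 'plastic', 'back': 'plastic'}, 15)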

Takeaways:

  • Decomposing a large design into its components is a fundamental rule of KBAI
  • Allows us to address the problem efficiently.
  • Related to constraint propagation - the design constraints dictate what we choose for the materials.

Configuration + Classification

  • Classification and configuration are both hierarchical.
  • Configuration leverages classification's notion of a prototype concept.
  • Classification makes sense of the world by perceiving details in the world and deciding what they are.
  • Configuration acts on the world: given something to design, we decide on its individual components.

Contrast with CBR

  • Both are applied to routine design problems.
  • CBR
    • Assumes we've already designed other chairs, and have those stored in memory.
    • CBR will go into case memory, retrieve closest matching case, and then start tweaking the case.
  • Configuration
    • Given a prototypical concept, we assign values to all the variables.
    • Assumes we've designed enough of the prototype that we can extract the plan.
    • Calls upon the plan hierarchy and starts refining plans.

Contrast with Planning

  • Configuration uses a skeletal plan, where variables exist but values are not filled.

Wrap-up

  • Configuration is tweaking the variables to meet constraints.
  • Routine design.

Lesson 22 - Diagnosis

Hierarchy

Configuration -> Diagnosis -> Design -> Creativity

Definition

Diagnosis is the task of determining what is wrong with a malfunctioning device.

  • Discrepancy between expected behavior and actual behavior. Find the fault that is causing the actual behavior.
  • Solve by rule-based, case-based, and model-based reasoning.

Steps:

  1. Principle of coverage - Make sure the diagnostic conclusion accounts for all input data.
  2. Principle of parsimony - Choose a single hypothesis over a combination of hypotheses.
  3. Hypotheses can interact, and these interactions can make the diagnostic task complicated.
  4. Explanation - We want a set of hypotheses that can explain the input data.

Data Space and Hypothesis Space

Heuristic classification

  • Initial set of data may range from specific to abstract.
  • Data Space: Bottom-up process. Don't deal with raw data, but abstract them to make it easier to map to hypothesis space.
  • Hypothesis space: Top-down process. Abstract hypothesis can then be refined into something that can explain all available data.

dshs

Problems with Diagnosis as Classification

  1. One data point, multiple hypotheses (Hyp: many, Data: 1)
  2. One hypothesis, multiple sets of data. (Hyp: 1, Data: many)
  3. Multiple hypotheses, multiple sets of data (Hyp: many, Data: many)
  4. Mutually exclusive hypotheses. If H3 and H6 have mutually exclusive data points, what if patient exhibits data for both?
  5. Interacting data points. Multiple data points may cancel each other out.

Deduction / Induction / Abduction

  1. Deduction: Given the rule and cause, deduce the effect.
    • Rule: It rains when cloudy, Cause: It is cloudy, Effect: It will rain.
  2. Induction: Given a cause and an effect, induce a rule.
    • Cause: It is cloudy, Effect: It rains, Rule: When it is cloudy, it will rain.
  3. Abduction: Given a rule and an effect, abduce a cause.
    • Rule: It rains when cloudy, Effect: It is raining. Cause: It was cloudy.
    • Diagnosis is a type of abduction.
  • Unlike induction and abduction, deduction does guarantee the conclusion: if the rule and the cause hold, the effect follows.
  • Induction doesn't guarantee that rule is correct. Multiple rules can explain cause + effect.
  • Abduction doesn't guarantee that the cause is correct. Multiple causes can have the same effect.

Criteria for Choosing Hypothesis (using abduction)

  1. Coverage: Hypotheses must cover as much of the data as possible.
  2. Parsimony: The smallest number of hypotheses ought to be used.
  3. Confidence: Some hypotheses may be more likely than others.

Diagnosis as Abduction

Strategy: Bias for coverage or parsimony?

  • If coverage: Betatosis + Zetad
    • Using both, we get max coverage of data space
    • Conflicting symptoms cancel each other out (e.g. elevated/reduced E)
  • If parsimony: Thetadesis
    • Explains B, C, H but doesn't explain F.
  • Other strategy: Thetadesis + Kappacide + Mutension
    • Covers all data space, but not as parsimonious as Betatosis + Zetad.
    • But this might be a better answer if these hypotheses have a higher likelihood than Betatosis + Zetad.

Solution steps:

  1. Look at the data space and find which hypothesis currently matches the data the best (explains the most data points with the fewest conflicts).
    • Best match: Thetadesis (3 data points), followed by Betatosis (3 data points, but 1 conflict)
  2. Using best match, find other hypotheses that fill out the rest of data space.
  3. Using the next best match, find other hypotheses that fill out the rest of the data space.
  4. Choose the option that needs the fewest hypotheses (Betatosis); a greedy sketch of this process follows this list.
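
A greedy sketch of these steps. The hypothesis names follow the lecture's example, but the symptom letters each hypothesis explains are illustrative, not reproduced from the slide, and confidence weighting is omitted.

# Greedy sketch of diagnosis-as-abduction: pick hypotheses that together cover the
# observed data with as few hypotheses as possible (coverage + parsimony).
# The symptom letters each hypothesis explains are illustrative, not the slide's values.

explains = {
    "Thetadesis": {"B", "C", "H"},
    "Betatosis":  {"A", "B", "C"},
    "Zetad":      {"D", "F", "H"},
    "Kappacide":  {"A", "F"},
}

def diagnose(observed, explains):
    chosen, uncovered = [], set(observed)
    while uncovered:
        # Pick the hypothesis that explains the most still-unexplained data points.
        best = max(explains, key=lambda h: len(explains[h] & uncovered))
        if not explains[best] & uncovered:
            break                        # nothing left can explain the remaining data
        chosen.append(best)
        uncovered -= explains[best]
    return chosen, uncovered             # confidence/likelihood weighting is omitted

print(diagnose({"A", "B", "C", "F", "H"}, explains))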

diagnosis

Completing the process

This looks a lot like classification

process


Lesson 23 - Learning by Correcting Mistakes

Hierarchy - Meta-cognition

Learning by Correcting Mistakes -> Meta-reasoning -> Ethics in AI
  • related to explanation-based learning and incremental concept learning.

Questions for Learning from Mistakes

  1. How can the agent isolate the error in its former model?
  2. How can the agent explain the problem that led to the error?
  3. How can the agent repair the model to prevent the error from recurring?

Algorithm for isolating mistakes

The error may occur in the AI's knowledge, reasoning, or architecture. This lecture focuses on "classification" errors in knowledge.

  • Think of venn diagram of correct example and incorrect example
  • Some features overlap, while others don't.
  • Focus on the features that only the negative example has. Called "false suspicious" features.
  • But which false suspicious feature to focus on?
    • Trial and error with each feature.
    • Check overlap with other false examples to see which ones overlap.

Algorithm

To find suspicious features in true-positive relations (get list of only the features that exist for true-positives):

  • ∩T: Intersect features shared across all true positives (∩ = intersect)
  • ∪F: Union (combine) all features among all false positives (∪ = union)
  • ∩T - ∪F: Remove assertions in union from intersection

To find suspicious features in false-positive relations (get list of only the features that exist for false positives):

  • ∩F: Intersect all false positives features
  • ∪T: Union all true positive features
  • ∩F - ∪T: Remove assertions in union from intersection

  • The more features, the more examples we need to identify the features that were responsible for a failure.

  • Explanation is a key feature of knowledge-based learning.
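
The ∩T − ∪F / ∩F − ∪T computations above map directly onto Python set operations; here is a small sketch in which the feature assertions are invented for illustration.

def suspicious(examples_in, examples_out):
    """Assertions shared by every example in `examples_in` minus any assertion that
    appears in some example of `examples_out` (i.e. the ∩in − ∪out computation)."""
    intersection = set.intersection(*examples_in)
    union = set().union(*examples_out)
    return intersection - union

# Invented feature assertions for a handful of examples.
true_positives  = [{"has_handle", "has_bowl", "upward_concave"},
                   {"has_handle", "has_bowl", "red"}]
false_positives = [{"has_handle", "flat", "red"}]

print(suspicious(true_positives, false_positives))   # features only the true positives share
print(suspicious(false_positives, true_positives))   # features only the false positives share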

Explaining & correcting the mistake

(Lesson video 23.09 - too longwinded to take notes)

  • metacognition is "thinking about thinking".
  • metacognition reasons what went wrong that caused the error.
  • Fixes the mistakes made in the "deliberation" module, which contains learning, reasoning and memory.

Lesson 24 - Meta-reasoning (thinking about thinking)

Meta-reasoning: How do you know what you don't know?

Mistakes in reasoning and learning

Knowledge gaps: There are multiple explanations of relationships, but there are gaps in the network.

  • Once agent detects a gap, it sets a learning goal, ie. how to fill that gap.

Metacognition and deliberation overlap.

Strategy selection

How might an agent know all problem solving methods and choose an appropriate one? Via metacognition.

  • Understand requirements for each strategy (CBR, Problem reduction, Generate & Test, etc)
  • Understand which one is the most computationally efficient.

Strategy integration

As the agent works on a problem, it might shift which strategy it uses.

Example - Block world: Means-ends analysis is used first, but it reaches a cul-de-sac. Metacognition sets a new reasoning goal and switches to problem reduction to resolve the cul-de-sac. After resolving it, the agent may switch back to MEA.

Process of meta-reasoning

Metacognition can choose from existing solving strategies for its own reasoning (hence meta).

  • Example: metacognition might use CBR to remember how it solved a problem previously.
  • Example: metacognition might use Generate & Test across all existing strategies and test results to decide on best strategy.
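
A hypothetical sketch of that second example, generate & test at the meta level over the available strategies. The strategy callables and the scoring function are stand-ins, not a real library.

# Hypothetical sketch of strategy selection as generate & test at the meta level:
# run each available reasoning strategy on the problem and keep the best-scoring result.
# The strategy callables and the scoring function are stand-ins, not a real library.

def select_strategy(problem, strategies, evaluate):
    """strategies: mapping of name -> callable(problem) -> candidate solution."""
    results = {name: solve(problem) for name, solve in strategies.items()}
    best = max(results, key=lambda name: evaluate(results[name]))
    return best, results[best]

# Toy usage with fake strategies that just return placeholder answers.
strategies = {
    "means_ends_analysis": lambda problem: {"steps": 12},
    "problem_reduction":   lambda problem: {"steps": 7},
}
best, solution = select_strategy("block world", strategies,
                                 evaluate=lambda s: -s["steps"])   # fewer steps is better
print(best, solution)   # -> problem_reduction {'steps': 7}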

Meta-meta-meta reasoning

Meta x N reasoning is NOT helpful because each level of metacognition is conceptually identical. They're better represented as self-referential. Think of it working recursively.

(See lesson 24.09 for an example of metacognition in action, of robot building camera learning how to disassemble a camera)

Metacognition gives an AI agent the flexibility to solve new problems.

Cognitive connection

Creativity is not just about creating new products, but also about creating the processes that lead to interesting products.


Lesson 25 - Advanced Topics

Visuospatial knowledge

Knowledge wherein causality is, at most, implicit

Two views of reasoning:

  1. propositional representations - explicit representation (like a class object) that records various attributes
  2. analogical representations (not to be confused with analogical reasoning). Given a representation, run affine or set transformations to get the target image.
    • Human cognition uses analogical representations; most computer programs are not good at them.

Symbol Grounding Problem

symbol

  • Content: Visual, Encoding: Form
  • Verbal example: scripts for going to a restaurant.
  • Reasoning and knowledge can be visuospatial, and representations can be analogical. But we have yet to fully understand their role in human cognition, and we have yet to build AI agents that can deal with visuospatial knowledge and analogical representations.

Visual-spatial reasoning

Galatea example

Duncker problem: Hear a story, then receive a problem, then try to find an answer to the problem. (Goes back to the king/soldiers : laser/tumor problem.)

  • Most AI agents try to solve this using propositional representations, ie. extracting a causal pattern and projecting it to another problem.
  • Could this be solved just by visuo-spatial knowledge? One GA grad tried.
    • Causality is inferred, but not explicit.
    • Solves the problem by transferring visuospatial knowledge to the new problem one step at a time.

In this way, Galatea was able to solve the radiation problem without abstracting any causal pattern from it. One might say that the causal pattern is implicit here, and that is true. But the entire point of visuospatial knowledge here is that the causal pattern is not abstracted: as long as the problem-solving procedure is one in which each step is represented only visuospatially, it is possible to transfer that procedure to the new problem.

Archytas example

The Archytas agent can extract causal information from visuospatial representations to support analogical reasoning.

  • Given its knowledge memory and an input drawing, Archytas can assemble a causal/functional model for the new drawing (i.e. how the device in the input drawing should operate).

Procedure:

  1. Create a library of source drawings, containing knowledge of lines, shapes, and composite shapes.
  2. Abstract from components -> connections -> behavioral model -> functional spec.
  3. Given an input drawing, it maps the drawing's components to the source drawings, then abstracts them to higher-order knowledge.

archytas

Systems Thinking

Connected topics: Frames, Scripts, Diagnosis (e.g. bee populations dropping due to pesticides, where the causal action itself is invisible)

Structure-Behavior Function

There are various ways of representing structure-behavior-function relationships. Example:

sbf

Design Thinking

Definition: Reasoning about ill-defined, unconstrained, open problems.

Connected-topics: Configuration, Case-based Reasoning.

IDOL: a Georgia Tech grad project on creative design. It learns creative design by learning design patterns from software engineering; analogical transfer leads to creative design thinking (e.g. designing a water pump despite gaps in knowledge, via cross-domain transfer).

Creativity

Creativity: Ability to create something novel, valuable and unexpected.

Processes of creativity: Emergence, re-representation, serendipity.

"Do you agree with David’s assessment that most AIs aren't creative because we can trace through the underlying process that led to them?"

  1. Yes, because in order for a result to be creative, it must be novel, and an output of an algorithm can't be novel.
    • For a small problem, yes. But for large problems with many interacting subproblems, the solution can be novel.
  2. Yes, because given a set of inputs, the output will always be the same; therefore, the product can never be unexpected.
    • Output is not just dependent on input, but also of its context/situation. This could lead to novel solutions.
  3. No, because it defines creativity in terms of the output rather than the process.
    • But we can find something creative yet know nothing about its inner-workings.
  4. No, because by this definition, humans are only considered creative because we don't know how the brain works yet.

AI Ethics

Problems:

  1. AI replacing human jobs.
  2. Much of modern AI is driven by the military. Should we build AI soldiers?
  3. Civil rights for AI agents.

Lesson 26 - Wrap-up

overview

RPM agent could use meta-reasoning to use different strategies for 2x2 / 3x3 problems / different problem sets.
