Shoestring AI, Part 2, Nodal FORTH

Background: Sometime in the early ’80s (either 1981 or ’82) I got hold of GraFORTH, not because it was a nifty-neat programming language, but because it could do real 3D graphics on the Apple II… amazing, because for a super-slow 8-bit machine, doing any kind of real 3D seemed ‘impossible’. So, motivated to do cool graphics, I learned FORTH, and then after a while moved on with the march of technology. But it would remain a benchmark for me of simplicity and elegance.

Before I implement any code here, I noticed that some of the implementations of AI (or AGI… whatever…) are … messy. By that I mean that to some degree they have code or algorithms ‘bolted on’ to do specific things with knowledge, logic, and so on. Yeeech.

I also noticed that a graph-type KB (Knowledge Base) needs traversal and maintenance routines… which leads down the path of these ‘bolt ons’. Double Yeeech.

I then thought about a self-referential system, which might be built on primitives; the fewer the primitives, the fewer the bolt-ons. Hmmm… Thinking more about the qualities of the information in the KB I noticed that from any node, two bits of information might be interesting to know:

1. ‘Upward’ relationships – is the information ‘well placed’ in the KB?

2. ‘Downward’ evaluations – do the subcomponents of this node ‘evaluate’?

I was initially thinking along the lines that downward evaluation into atomic actions / expressions / values would be neat; caching-in-context (virtual pruning), downward expansion if not, etc. I was going for the identification of atomic actions, starting first with the graph tree itself, then it would expand into logical constructs between objects, and so on.

What became fairly apparent very quickly is that for atomic actions what I was trying to do was a tree-traversal-evaluation, whereby the branch of the tree could be ‘evaluated’ and the actions and logic were expressed either as primitives/atomics, or within the nodes themselves as more complex constructs. That starts to look a whole lot like FORTH.

FORTH, in a nutshell.

Consider this statement:

2 2 + .

It puts a 2 on the stack, then another 2; the `+` pops two values off the stack, sums them, and puts the result back on the stack; the `.` then pops and prints the result. It illustrates something common to FORTHs: ‘Reverse Polish Notation’, or RPN, which is super slick for expressing FORTH primitives in machine language, since even small chips like AVRs handle stack operations much like any other modern processor.
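To make the stack mechanics concrete, here is a minimal sketch of that read-push-pop loop in Python (the `eval_rpn` name and the output-list convention are my own, purely for illustration; a real FORTH `.` prints immediately):

```python
def eval_rpn(source, stack=None):
    """Evaluate a whitespace-separated RPN string; return what '.' printed."""
    stack = [] if stack is None else stack
    out = []
    for token in source.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()  # pop two values...
            stack.append(a + b)              # ...push their sum back
        elif token == ".":
            out.append(stack.pop())          # pop and 'print' the top of stack
        else:
            stack.append(int(token))         # literals just go on the stack
    return out

print(eval_rpn("2 2 + ."))  # -> [4]
```

Tokens are consumed strictly left to right with no parsing tree at all, which is why RPN translates so directly into machine code.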

You could also define a useful word, ‘PLUSTWO’, that adds 2 to whatever is on the stack:

: PLUSTWO 2 + ;

Now there is a word-definition for ‘PLUSTWO’. By expanding the dictionary of FORTH this way you not only write the language, but you write the program; the two are the same.
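The dictionary mechanism can be sketched the same way (again in Python; `make_interpreter` and its internals are invented names for illustration): a defined word is just a stored list of tokens that the interpreter re-enters, so user words and built-ins are indistinguishable to the caller.

```python
def make_interpreter():
    stack, words = [], {}  # the data stack and the dictionary

    def run(tokens):
        tokens = iter(tokens)
        for token in tokens:
            if token == ":":                 # ': NAME body ;' starts a definition
                name = next(tokens)
                body = []
                for t in tokens:
                    if t == ";":             # ';' ends the definition
                        break
                    body.append(t)
                words[name] = body           # extend the dictionary
            elif token in words:
                run(words[token])            # re-enter the user-defined word
            elif token == "+":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            else:
                stack.append(int(token))
        return stack

    return lambda src: run(src.split())

forth = make_interpreter()
forth(": PLUSTWO 2 + ;")
forth("5 PLUSTWO")  # stack is now [7]
```

Defining `PLUSTWO` changed the interpreter itself; that is the sense in which writing the program and writing the language are the same act.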

This self-coding is why FORTH is so interesting; the definitions of words, and the relationships between words within definitions, start to look somewhat like a KB graph-tree.

And now with a little extra stack-ness we can store not just stack-friendly integers, but really anything, like references to objects in the KB, confidence weights, probabilities, whatever.

We now have (glossing over the obviously missing implementation) a downward-evaluating KB model. It is self-expressive, so any node in the KB tree can be ‘executed’ to some degree with definitions that only exist in the tree itself. This means that code concepts become portable, and it possibly opens the door to behaviour and pattern modelling.
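A hedged sketch of what downward evaluation might look like (all names here, `kb`, `PRIMITIVES`, `eval_node`, and the example nodes, are hypothetical): each node's definition is a token list, and a token names either a primitive or another node, so the tree evaluates itself by expanding downward until it bottoms out in atomics.

```python
PRIMITIVES = {
    "+": lambda s: s.append(s.pop() + s.pop()),
    "*": lambda s: s.append(s.pop() * s.pop()),
}

kb = {  # node name -> definition, expressed only in tree-resident words
    "DOUBLE": ["2", "*"],
    "QUADRUPLE": ["DOUBLE", "DOUBLE"],  # composed entirely from other nodes
}

def eval_node(tokens, stack):
    for token in tokens:
        if token in kb:
            eval_node(kb[token], stack)  # downward expansion into subnodes
        elif token in PRIMITIVES:
            PRIMITIVES[token](stack)     # bottom out in an atomic action
        else:
            stack.append(int(token))     # literal value
    return stack

eval_node(["3", "QUADRUPLE"], [])  # 3 doubled twice -> [12]
```

Since the stack here is an untyped Python list, the same machinery would carry KB object references, confidence weights, or probabilities just as happily as integers, which is the ‘extra stack-ness’ mentioned above.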

Upward-evaluations are the logical ones, asking questions about the consistency and placement of concepts within the KB. I think they are executed in atomics during object creation and modification, but I’m not sure yet. Maybe tomorrow I’ll be sure.

For now let’s settle on this as ‘Nodal FORTH’, or nFORTH, since it operates on nodes (and there are some more cool stack-like things we can do there, also).


Alan Turing statue @ Bletchley Park [ Copyright Christine Matthews and licensed for reuse under this Creative Commons Licence. ]

