A Meta-Intelligence Graph Manager For Step-Level Behavioral Analytics

Ron Itelman
4 min read · Mar 5, 2022


To combine arbitrary inputs in a fully generalized way that can be temporally sequenced under the meta-intelligence paradigm, we will be well served by a meta-intelligence graph manager. This article covers the basic concepts, how they fit together, and a proposed design and architecture.

The intended audience is anyone interested in data-optimized marketing, Customer Journeys, Behavioral Influence, Computational Psychometrics, Robotic Process Automation, and A/B/n testing.

The goal is to propose a layout for an architecture, a design, and a knowledge graph integration, using the meta-intelligence paradigm.

SEQUENCER PANE
First, we want all input to operate like GarageBand: time moves from left to right, and each row is a user. This is the same as Stephen Wolfram's cellular automaton format with the axes swapped. Clicking on a particular user lets us inspect the individual attributes of the current view in what we will call the attributes view pane. A preview of the view's state sits on the right side, along with any rules associated with the attribute selected in the attributes view pane (below). Each unit of time is called a 'step'; we track sequences of steps rather than clock time (though one theoretically could track time as well).
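To make this concrete, here is a minimal sketch of the sequencer's data model in Python. The dict-of-lists representation, the user IDs, and the helper name `step_state` are illustrative assumptions; the attribute naming convention itself is described in the next section.

```python
# Each cell holds a map of attribute name -> tri-state value:
# True (1), False (0), or None (unknown). The attribute naming
# convention is covered in the Attributes View Pane section.
AttributeMap = dict[str, bool | None]

# Rows are users and columns are steps, like tracks in GarageBand; the cell
# at sequencer["user_1"][t] is that user's state at step t. We track step
# order, not wall-clock time.
sequencer: dict[str, list[AttributeMap]] = {
    "user_1": [{"IS_BUTTON_1&&IN_VIEW_1": True}, {"IS_BUTTON_1&&IN_VIEW_1": None}],
    "user_2": [{"IS_BUTTON_1&&IN_VIEW_1": False}],
}

def step_state(user_id: str, step: int) -> AttributeMap:
    """What the attributes view pane would show for a clicked user/step cell."""
    return sequencer[user_id][step]
```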

ATTRIBUTES VIEW PANE
Each attribute is listed, and each is in effect an edge label. These attributes act as counterfactuals: they describe which properties can and cannot be assigned. At each step, the value of a counterfactual attribute is either true (1), false (0), or unknown (null). Attribute names are written out like "IS_BUTTON_1&&IN_VIEW_1": true, "IS_BUTTON_1&&IN_VIEW_1&&LINK_TO_VIEW_2": true, "IS_BUTTON_1&&IN_VIEW_1&&CLICK_EVENT": false.
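As a concrete illustration, one step's record under this convention might look like the Python dict below. The last attribute name is a hypothetical addition to show the unknown case.

```python
# One step's counterfactual attribute record: every value is True (1),
# False (0), or None (unknown) -- never an arbitrary scalar.
step_record = {
    "IS_BUTTON_1&&IN_VIEW_1": True,                  # button 1 is present in view 1
    "IS_BUTTON_1&&IN_VIEW_1&&LINK_TO_VIEW_2": True,  # and it links to view 2
    "IS_BUTTON_1&&IN_VIEW_1&&CLICK_EVENT": False,    # but was not clicked this step
    "IS_BUTTON_2&&IN_VIEW_1": None,                  # unknown whether button 2 is shown
}
```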

A key difference here from popular data architectures is that a table cell only ever holds true, false, or null. We will need a way to convert JSON information into this paradigm, and into a graph database, but that is for another article.

Additionally, conditionals are part of the attribute name, where '&&' is "AND" in combinatorial logic. You could also have logical attributes, where rules are triggered from data stored in the same format: "IF_BUTTON_1_CLICKED::GO_TO_VIEW_2": true. Reward levels (for a reinforcement learning paradigm) may also be included: "IF_BUTTON_1_CLICKED::REWARD_IS_1": true. This should (in theory) be completely generalizable, as the sketch below suggests.
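Here is a hedged sketch of how such rules could be evaluated. The `::` split and the tri-state storage come from the paragraph above; the `CONDITION_MAP` linking a condition token to the data attribute that satisfies it, and the helper name `evaluate_rules`, are assumptions for illustration.

```python
# Maps a rule condition token to the data attribute that satisfies it.
# This mapping is an assumption; the source only specifies the name format.
CONDITION_MAP = {
    "IF_BUTTON_1_CLICKED": "IS_BUTTON_1&&IN_VIEW_1&&CLICK_EVENT",
}

# Rules live in the same tri-state format as data.
rules = {
    "IF_BUTTON_1_CLICKED::GO_TO_VIEW_2": True,   # navigation rule
    "IF_BUTTON_1_CLICKED::REWARD_IS_1": True,    # reinforcement-learning reward
}

def evaluate_rules(step_record: dict, rules: dict) -> list[str]:
    """Return the consequence of every enabled rule whose condition holds."""
    fired = []
    for rule_name, enabled in rules.items():
        if not enabled:
            continue
        condition, consequence = rule_name.split("::")
        if step_record.get(CONDITION_MAP.get(condition)) is True:
            fired.append(consequence)
    return fired

# If the click attribute is True this step, both consequences fire:
print(evaluate_rules({"IS_BUTTON_1&&IN_VIEW_1&&CLICK_EVENT": True}, rules))
# ['GO_TO_VIEW_2', 'REWARD_IS_1']
```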

[Figure: this maps to Wolfram's visualization of cellular automaton instructions.]

As Wolfram proposes, the data itself can now carry rules and combinatorial logic that automatically trigger the values of other cells.

TAXONOMY PANE
Each attribute can be clicked to view its meaning and any associated design reference.

PATHS VIEW
We need to map all possible paths through the UI, and the attribute format should let us generate that map. This in effect turns our system into a continuous A/B/n testing system. The psychometrics labelling strategy is outlined in the Theory Of Meta-Intelligence, which describes meta-labels that can be added to each individual UI unit and that map 1:1 to our attributes list.
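One way this generation could work, sketched under the assumption that every true "…&&IN_VIEW_X&&LINK_TO_VIEW_Y" attribute contributes a directed edge from VIEW_X to VIEW_Y; the regex and helper names are illustrative:

```python
import re
from collections import defaultdict

# Assumed convention: a True "...IN_VIEW_X&&LINK_TO_VIEW_Y" attribute
# means view X links to view Y.
LINK = re.compile(r"IN_(VIEW_\d+)&&LINK_TO_(VIEW_\d+)")

def build_view_graph(attributes: dict) -> dict:
    """Collect view-to-view edges from True link attributes."""
    graph = defaultdict(set)
    for name, value in attributes.items():
        match = LINK.search(name)
        if match and value is True:
            graph[match.group(1)].add(match.group(2))
    return graph

def enumerate_paths(graph, start, end, path=()):
    """Depth-first enumeration of all simple paths (candidate A/B/n arms)."""
    path = path + (start,)
    if start == end:
        yield path
    for nxt in graph.get(start, ()):
        if nxt not in path:  # skip cycles
            yield from enumerate_paths(graph, nxt, end, path)

attrs = {
    "IS_BUTTON_1&&IN_VIEW_1&&LINK_TO_VIEW_2": True,
    "IS_BUTTON_2&&IN_VIEW_1&&LINK_TO_VIEW_3": True,
    "IS_BUTTON_1&&IN_VIEW_2&&LINK_TO_VIEW_3": True,
}
print(list(enumerate_paths(build_view_graph(attrs), "VIEW_1", "VIEW_3")))
# [('VIEW_1', 'VIEW_2', 'VIEW_3'), ('VIEW_1', 'VIEW_3')] (order may vary)
```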

PUTTING IT ALL TOGETHER
Now we have step-level interaction tracking, the UI paths, the attributes of each individual item in a particular view, and the observed behavior. This combination is useful for influencing user behavior, behavioral analytics, and A/B/n testing, and it can learn to personalize a user journey, guiding each user toward the most efficient path at the step level.

TESTING OUR ASSUMPTIONS WITH SYNTHETIC DATA & KNOWLEDGE GRAPH QUERIES
One way we could test our assumptions is to create synthetic agents that follow known rules. For example, "User 1" could have a 75% chance of clicking on red buttons, and "User 2" could have a 100% chance of clicking the second option in a menu of three items.

These hard-coded behavioral patterns are themselves attributes of agents (human or machine).
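A minimal sketch of those two synthetic agents as Python policies. Only the probabilities come from the example above; the option encoding (dicts with `color` and `label` keys) is an assumption.

```python
import random

def user_1_policy(options: list[dict]) -> dict | None:
    """User 1: clicks a red button with 75% probability, otherwise nothing."""
    red = [o for o in options if o.get("color") == "red"]
    if red and random.random() < 0.75:
        return random.choice(red)
    return None

def user_2_policy(options: list[dict]) -> dict | None:
    """User 2: always clicks the second option in a three-item menu."""
    return options[1] if len(options) == 3 else None

menu = [{"label": "A"}, {"label": "B"}, {"label": "C"}]
print(user_2_policy(menu))  # {'label': 'B'} -- deterministic
```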

Our knowledge graph should map all of the vertices and edges laid out in the step-level observed behavior of agents, with the attributes acting as the labels and schema for views and events.
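One hedged sketch of that mapping, assuming each known attribute value at a step becomes a labeled edge from a per-step vertex; the (subject, predicate, object) encoding and vertex naming are assumptions for illustration.

```python
def to_edges(user_id: str, steps: list[dict]) -> list[tuple[str, str, bool]]:
    """Convert a user's step records into (vertex, attribute-label, value) edges."""
    edges = []
    for t, record in enumerate(steps):
        step_vertex = f"{user_id}/step_{t}"     # one vertex per observed step
        for attribute, value in record.items():
            if value is not None:               # unknowns produce no edge
                edges.append((step_vertex, attribute, value))
    return edges

print(to_edges("user_1", [{"IS_BUTTON_1&&IN_VIEW_1": True}]))
# [('user_1/step_0', 'IS_BUTTON_1&&IN_VIEW_1', True)]
```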

To test our assumptions and show the methodology works, it should be tried against an actual knowledge graph, so that we can store data and run queries against it. For example: "How likely is User X to click on a button if the button is blue?", "How likely are they to go to the next step from View Y?", "What button should we show to maximize the chance they go down the desired path?", or "What step sequence generates the highest value (where a reward is generated at each step)?"
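In production these would be graph queries, but the first question reduces to a conditional frequency, which we can sketch in Python over step records in the format defined earlier. The `COLOR_IS_BLUE` attribute name is a hypothetical illustration.

```python
def p_click_given_blue(steps: list[dict]) -> float | None:
    """Estimate P(click | blue button shown) from observed step records."""
    shown = clicked = 0
    for step in steps:
        if step.get("IS_BUTTON_1&&IN_VIEW_1&&COLOR_IS_BLUE") is True:
            shown += 1
            if step.get("IS_BUTTON_1&&IN_VIEW_1&&CLICK_EVENT") is True:
                clicked += 1
    return clicked / shown if shown else None  # None: no evidence yet

user_x_steps = [
    {"IS_BUTTON_1&&IN_VIEW_1&&COLOR_IS_BLUE": True,
     "IS_BUTTON_1&&IN_VIEW_1&&CLICK_EVENT": True},
    {"IS_BUTTON_1&&IN_VIEW_1&&COLOR_IS_BLUE": True,
     "IS_BUTTON_1&&IN_VIEW_1&&CLICK_EVENT": False},
]
print(p_click_given_blue(user_x_steps))  # 0.5
```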
