There is a forward-chainer issue that shows up in the self-model prototype: the need for multiple steps or stages of rule-sets. That is, first, one must apply some rules from ruleset A, and only when these are done, then apply rules from ruleset B. After that, C and D can be run in parallel, but ruleset E cannot be used until both C and D have completed. The current robot-chatbot pipeline really does have about this many stages: ruleset A is relex2logic, B is the LG-to-control-language, C is the self-model update, D is the robot animation update, and E is a mish-mash of poorly implemented "action orchestration". Right now, this is done with a pile of ad-hoc Scheme code to plumb everything together. Better plumbing would be nice.
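To make the staging requirement concrete, here is a minimal sketch of the A->B->(C||D)->E ordering as a dependency graph, using Python's standard topological sorter. The ruleset names and the `apply_ruleset` placeholder are purely illustrative; this is not the URE or OpenPsi API, just the scheduling pattern the plumbing would need.

```python
# Sketch only: schedule rulesets so each runs after its dependencies.
# A -> B -> (C || D) -> E; C and D become "ready" together and could
# run in parallel.
from graphlib import TopologicalSorter  # Python 3.9+

# Each ruleset maps to the set of rulesets it must wait for.
deps = {
    "A": set(),
    "B": {"A"},
    "C": {"B"},
    "D": {"B"},
    "E": {"C", "D"},
}

def apply_ruleset(name, trace):
    """Placeholder for one forward-chaining pass with the named ruleset."""
    trace.append(name)  # just record execution order for illustration

def run_pipeline(deps):
    trace = []
    ts = TopologicalSorter(deps)
    ts.prepare()
    while ts.is_active():
        ready = ts.get_ready()       # all rulesets whose deps are done;
        for name in ready:           # these could be dispatched in parallel
            apply_ruleset(name, trace)
            ts.done(name)
    return trace

print(run_pipeline(deps))
```

The point of using a generic dependency graph rather than hard-coded stages is that adding a new ruleset is one dictionary entry, not more ad-hoc glue code.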
A very different issue is the narrowing-down of rulesets. The current behavior infrastructure has maybe 50 or so rules, and so we can afford to run all of them at each inference step. However, this isn't scalable as the number of rules goes up.
We've already got a scalable solution for this in OpenPsi, but maybe the forward-chainer should do it too. The current working example is the AIML subsystem, where there are 20K or more rules, of which only a dozen or two are appropriate for the given input sentence. The cog-recognize function is used to narrow these down; it's mostly integrated with OpenPsi, but needs polishing and enhancing.
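For readers unfamiliar with why this matters, here is a toy sketch of the narrowing idea, in the spirit of cog-recognize but not its actual implementation: index rules by a coarse discriminant (here, the first token of the trigger pattern) so that only a handful of candidates are matched against an input sentence, instead of scanning all 20K rules. The matcher and rule format below are invented for illustration.

```python
# Illustrative only -- a crude stand-in for cog-recognize-style narrowing.
from collections import defaultdict

# Toy AIML-like rules: (trigger pattern, action name).
rules = [
    ("HELLO *", "greet"),
    ("WHAT IS *", "define"),
    ("WHAT TIME *", "clock"),
    ("BYE", "farewell"),
]

# Index rules by their first token, so lookup is O(candidates), not O(rules).
index = defaultdict(list)
for pattern, action in rules:
    index[pattern.split()[0]].append((pattern, action))

def matches(pattern, tokens):
    """Tiny glob-style matcher: '*' absorbs the rest of the sentence."""
    pat = pattern.split()
    for i, p in enumerate(pat):
        if p == "*":
            return True
        if i >= len(tokens) or tokens[i] != p:
            return False
    return len(pat) == len(tokens)

def candidate_rules(sentence):
    tokens = sentence.upper().split()
    return [action for pat, action in index.get(tokens[0], [])
            if matches(pat, tokens)]

print(candidate_rules("what is a rule"))  # only the WHAT-rules are even tried
```

The real thing narrows by structural pattern recognition over Atomese, not by first tokens, but the payoff is the same: rule-selection cost scales with the number of plausible candidates rather than the total ruleset size.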
More generally, we are currently using OpenPsi as a single-step chainer. Exactly how to manage that as a multi-step forward chainer is unclear. However, this discussion really does not belong in this bug report.
I don't know whether that sort of control (A->B->(C||D)->E) needs to be part of the URE, but why not; really, it's more like a meta-URE process that knows how to chain URE processes.
Regarding the narrowing-down of rules, yes, I agree the current code is really slow and should use cog-recognize or something in that vein. The BC has the same shortcoming.
Ah-ha! Yes, I am very, very interested in having an infrastructure, say, for example, for the BC, where you could do the narrowing (using cog-recognize, or a meta-unifier of some kind) that, given one step, would tell you what rules are runnable in the next step, without having to actually run those rules. I.e., determine the rule chains without running the rules.
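A toy sketch of that idea, under loudly hypothetical assumptions: represent each rule's premise and conclusion as simple tuples with `?x`-style variables, and decide which rules are runnable next by checking whether a rule's premise unifies with the previous step's conclusion, without ever firing the rule. The `unifiable` function below is a deliberately naive unifier, nothing like the real URE unifier.

```python
# Illustrative only: find which rules *could* fire next, without running them.
def unifiable(a, b, subst=None):
    """Naive one-pass unification of two flat tuples; '?'-prefixed
    strings are variables. Toy code, not the real unifier."""
    subst = dict(subst or {})
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        x = subst.get(x, x)
        y = subst.get(y, y)
        if x == y:
            continue
        if x.startswith("?"):
            subst[x] = y
        elif y.startswith("?"):
            subst[y] = x
        else:
            return False
    return True

# Hypothetical rules with premise/conclusion patterns.
rules = {
    "deduce":  {"premise": ("inherits", "?a", "?b"),
                "conclusion": ("inherits", "?a", "?c")},
    "animate": {"premise": ("expression", "?face"),
                "conclusion": ("motor", "?face")},
}

def next_runnable(prev_conclusion):
    """Rules whose premise unifies with the previous conclusion."""
    return [name for name, r in rules.items()
            if unifiable(r["premise"], prev_conclusion)]

print(next_runnable(("inherits", "cat", "animal")))  # ['deduce']
```

This is exactly the "determine the rule chains without running the rules" step: the planner only inspects premise/conclusion patterns, so chain discovery is decoupled from (possibly expensive) rule execution.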
Moving comments from opencog/atomspace#1004