FP - I'm still thinking over your comment and considering how best to implement AID from a gameplay standpoint. But my priority right now is to rewrite the AID system. I'll look at implementation after this rewrite is done.
Everyone else - Getting close to a complete rewrite of the emote parser that integrates DMFI and AID.
Improvements so far:
- DMFI and AID get text from the player chat event rather than using a listener.
- DMFI and AID can be processed together, which limits overhead when using both systems.
- Emotes are parsed regardless of where they are located in a chat string. Previously the string had to start with an *; now emotes are understood to be text enclosed by *'s.
Example: Hello! *bows in greeting* It is good to see you. *looks around at the group*
Result: your character will bow. Previously you had to start the emote on a fresh line. I have not, however, created a way to queue multiple emotes in succession into the action queue, so "looks around" will not result in another emote animation. Only the first "actionable" emote results in an animation. AID, however, will not respond to "bows" and will instead keep looking for the first actionable verb to respond to. See below.
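To make the new *-delimited parsing concrete, here is a minimal sketch of pulling emote snippets out of a chat line. This is illustrative Python only (the real DMFI/AID code is NWScript, and the function name here is hypothetical):

```python
import re

def extract_emote_snippets(chat_text):
    """Return the text enclosed by paired *'s, in order of appearance.

    Anything outside the *'s (ordinary speech) is ignored by the
    emote parser.
    """
    return re.findall(r"\*([^*]+)\*", chat_text)

snippets = extract_emote_snippets(
    "Hello! *bows in greeting* It is good to see you. *looks around at the group*"
)
# snippets[0] is "bows in greeting", snippets[1] is "looks around at the group"
```

The animation side then acts on the first actionable snippet, while the rest of the line is left alone.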
- AID's parser is smarter and more efficient. It is now smart enough to recognize words, and then to determine whether each word is a verb, an ignored word, or an object. It no longer needs to strip "ignored words" from the emote string. In addition, it is easy to make it smarter by creating more word lists (prepositions, conjunctions, etc.) following the example set by the "ignored word" list.
The parser looks for the first actionable verb, then starts looking for an object while skipping over ignored words. If it encounters the end of the emote snippet before finding a suitable verb or object, it starts looking for the first verb in the next emote snippet. See below.
Example: Hello! *bows in greeting* It is good to see you. *looks around at the group*
emote snippet 1: bows in greeting
emote snippet 2: looks around at the group
No actionable verb is found in the first snippet. "looks" is then found in the second snippet, and "around" is identified as an object. AID parsing stops and the "looks around" function is executed.
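The verb-then-object scan above can be sketched like this. Again, this is an illustrative Python sketch, not the NWScript implementation; the word lists here are made-up stand-ins for AID's real lists:

```python
# Hypothetical word lists; AID's real lists live in its NWScript data.
VERBS = {"looks", "waves", "sits"}
IGNORED = {"in", "at", "the", "a", "of"}

def find_action(snippets):
    """Scan each emote snippet word by word.

    Within a snippet: find the first actionable verb, then take the
    first non-ignored word after it as the object. If the snippet ends
    before a verb/object pair is found, move on to the next snippet.
    """
    for snippet in snippets:
        verb = None
        for word in snippet.lower().split():
            if verb is None:
                if word in VERBS:
                    verb = word
            elif word not in IGNORED:
                return (verb, word)
    return None

find_action(["bows in greeting", "looks around at the group"])
# "bows" is not an actionable verb, so the parser moves on and
# returns ("looks", "around") from the second snippet.
```

Because ignored words are classified rather than stripped, the original snippet text is never rewritten, which is where the efficiency gain comes from.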
I am considering rewriting the DMFI emote parser further so that it responds similarly to the way AID does - looking at each word one by one until it finds a match it can do something with. OR ... I might give AID precedence - if AID finds an actionable verb, that verb is the only one that will trigger an animation. If AID finds nothing it can work with, then any emote will do for triggering an animation.
Edited by henesua, 28 December 2011 - 05:56.