I believe this was BioWare's initial idea when they created NWN; if you look in the game files with NWN Explorer there are one or two beard files, and I think there are also hair files.
I think they began to move back in that direction with robes, and cloaks are a derivation of that principle. My offhand, untested guess is that any emitter slot can do what the robe slot does, and that I'll probably have to stick it on the supermodel for each race. So I'm fairly confident I can at least get a model that'll track the supermodel.
What I have no clue about is whether the emitter will call animations the way the robe slot does, which would allow for things like a swaying, flapping, or flailing ponytail. Still, that would be something of an extra bonus: having independently selectable hair that'll flex a bit as the head and neck move is already a leap forward over what we've got now.
However, to do what you're musing upon regarding hair models, they would presumably require multiple animations for different situations and, I guess, something to govern what basic danglymesh would previously have achieved with regard to blowing in the wind, etc. So I suppose the question is: can an emitter govern multiple animations selectively? That I haven't tried.
Interesting idea though.
Have to say I'd never even thought of danglymesh; I could never get it to work for me. Personally, I'm not too happy with its effects for hair: it applies a single stretch to the entire model based on its horizontal motion, treating loose strands the same way as bound dreadlocks. It's not bad for other things, certainly. I've just never liked that hair didn't seem to respond to gravity or to being gathered on the shoulders.
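For anyone who hasn't poked at it, a danglymesh node in a decompiled ASCII MDL looks roughly like the sketch below. The node name, parent, and values here are illustrative, not from an actual hair model; the key point is that `period`, `tightness`, and `displacement` are single whole-node settings, which is why the motion reads as one uniform stretch. The per-vertex `constraints` list is the only per-strand control you get (if I remember right, 0 pins a vertex in place and values up to 255 let it swing more).

```
node danglymesh hair_dangly
  parent head_g            # illustrative parent node name
  period 1.0               # oscillation speed, whole node
  tightness 2.0            # spring stiffness, whole node
  displacement 1.5         # max sway distance, whole node
  # ...normal trimesh data here (verts, faces, tverts)...
  constraints 4            # one value per vertex, 0 = pinned
    0.0
    0.0
    128.0
    255.0
endnode
```

So you can anchor the roots and free the tips, but you can't make one lock behave differently from its neighbor beyond that single constraint value.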
The main thing I'm after is the idea of creating a model that will parent to the player's supermodel like a robe does. That way, even if I don't muck about with animating really long hair, I've still got independently selectable hair that'll track head/neck/shoulder movement (depending on length). If the concept proves sound, I'd also want to play around with doing a reverse of this split by merging the head and neck with a movement-tracking emitter-based model. If content creation has taught me anything, though, it's that I'm going to hit some unexpected bumps in the road, and possibly even find the reason we ended up with play-doh-ball model composition rather than skeleton-based flexing models.
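The robe-style parenting I'm describing is done in the ASCII MDL header with `setsupermodel`, which is how a robe model inherits the body's skeleton and animations. A minimal sketch, with the model and supermodel names made up for illustration (real robes follow the per-pheno naming convention, e.g. a `pfh0_`-prefixed part for a female human pheno 0):

```
newmodel hypo_hair001
setsupermodel hypo_hair001 pfh0   # inherit pfh0's bones/animations
classification character
beginmodelgeom hypo_hair001
  node dummy hypo_hair001
    parent NULL
  # ...hair trimesh nodes parented to the body's node names...
endmodelgeom hypo_hair001
donemodel hypo_hair001
```

If that works the way it does for robes, the engine should drive the hair with whatever animation the supermodel is currently playing, which is the behavior I'm hoping to confirm.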
While I'm here, I've noticed that my toolset and game will hang for a tick if I load a model or player with more than 5000 polygons at once, and I take this to be the practical limit of the engine's ability to quickly load new geometry. Has anyone else encountered this? I'm no computer expert, so I couldn't say with any authority where the bottleneck might be.