Not that I use DDS a lot, but defining your own smoothing for distant objects by authoring your own DDS mipmaps might be the way to go. I was discussing the possibility of supplying dissimilar frames at different mipmap levels with (I think) rjshae on one of my vault entries, and it seems you could scale down your distant variants with whatever smoothing you choose, or none at all, so distant objects render more or less blurry, then pack those mipmaps into the DDS. I haven't looked around at what is already out there, but given how simple the various DDS formats are internally, it seems like it would be an easy thing to write a converter for NWN-style DDS and manipulate those frames individually.
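To illustrate how simple the layout is: here's a minimal sketch (not an existing tool, and assuming a plain DXT1-compressed DDS with a standard 128-byte header and a full mip chain) of computing where each mipmap's bytes sit in the file, which is the part a converter would need in order to splice in hand-made "blurry distance" frames.

```python
# Sketch: byte offsets of each mipmap in a DXT1 DDS file.
# Assumes a 128-byte header (4-byte "DDS " magic + 124-byte DDS_HEADER)
# and a full mip chain down to 1x1. Illustrative, not a finished converter.

HEADER_SIZE = 128

def dxt1_level_size(w, h):
    # DXT1 stores 4x4-pixel blocks at 8 bytes each; partial blocks round up.
    return max(1, (w + 3) // 4) * max(1, (h + 3) // 4) * 8

def mipmap_offsets(width, height):
    """Return [(level, width, height, byte_offset, byte_size), ...] down to 1x1."""
    levels, offset, level = [], HEADER_SIZE, 0
    w, h = width, height
    while True:
        size = dxt1_level_size(w, h)
        levels.append((level, w, h, offset, size))
        offset += size
        if w == 1 and h == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
        level += 1
    return levels

if __name__ == "__main__":
    for lvl, w, h, off, size in mipmap_offsets(64, 64):
        print(f"mip {lvl}: {w}x{h} at byte {off}, {size} bytes")
```

With those offsets in hand, replacing a frame is just overwriting `byte_size` bytes at `byte_offset` with your own pre-blurred, recompressed level.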
Much of the TGA blurring seems to stem from using a large image with high noise or high contrast, like most of the stuff I put out in texture packages a few years ago. But it seems you can curb some of that blurring by supplying TXI commands on those high-quality textures so they don't degrade as much, limiting how far they can be rescaled. At least that is how I interpret some of those functions in TXI.
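For reference, the sort of TXI I mean would look something like this (the command names and semantics are from memory, so check them against a TXI reference before relying on them):

```
// mytexture.txi -- sits next to mytexture.tga
// keep the engine from downsampling this texture as far at lower
// texture-quality settings
downsamplemax 0
downsamplemin 0
```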
Another thing I have noticed is that it depends on how you have your visual settings in the game. Setting my video settings to the highest seems to mess textures up less, but I assume that is driven by the video card more than the game, past the basics. The same goes for lighting: two of the computers I have here render light entirely differently in game. This one has an issue transitioning from light source to light source, snapping directly from one to another; the other computer transitions over a number of milliseconds.
I actually get far worse texture output in GMAX than I do in game, but I think that is because I am working under GMAX's optimal scale, which seems to be 10x the scale characters are portrayed at in NWN. It has actually caused me to ponder rewriting NWMax so that it downscales your model to 1/10, or another value you set in the rollout, so that you can benefit in GMAX from the more accurate texture representation. My guess is these representation issues are non-existent in newer versions of Max.
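The rescale idea above is just a uniform scale applied on the way in and divided back out on export. Sketching it in plain Python (standing in for MAXScript; the function names and the default factor are mine, not NWMax's):

```python
# Sketch of the NWMax rescale idea: build at GMAX's comfortable scale,
# divide back down on export. Python stands in for MAXScript here;
# names and the default 10x factor are illustrative only.

WORK_SCALE = 10.0  # model is edited 10x larger than NWN's in-game scale

def to_work_scale(verts, factor=WORK_SCALE):
    """Scale in-game vertex positions up for editing in GMAX."""
    return [(x * factor, y * factor, z * factor) for x, y, z in verts]

def to_game_scale(verts, factor=WORK_SCALE):
    """Scale edited vertex positions back down for export to NWN."""
    return [(x / factor, y / factor, z / factor) for x, y, z in verts]
```

The round trip is lossless for practical purposes, so the only cost is remembering to export through the downscale step.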
Generally, I prefer to use 1024 pixels to represent 10m of space. For certain parts of character models I prefer more pixels per cm than that. I find that unless I am going to fake geometry with a texture, 512-pixel textures are insufficient for a tile's main textures. Depending on how far players zoom in, I find that the optimal size on my machine for the largest texture on a tile is actually 2048, which almost nobody agrees with. I want to be able to look at something on a wall and see that it is not made of big squares of the same color, so large that stretching in the engine cannot hide the square shape. Another method I use for getting rid of squares is to map element shapes at 45-degree angles, at least on one plane; this sort of prestretches the texture, reducing visible squares. Go too far, though, and you get a weird plastic look. Another method is to wrap the texture first, then add a noise modifier with a tiny offset, randomly nudging the verts and pulling the texture with them. There is a bug in GMAX where you can do that, then delete an earlier modifier in the stack, and it will revert to the original shape while keeping the tvert offset you just created in reverse. It doesn't work every time, and sometimes it crashes GMAX.
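The texel-density numbers above work out like this (a quick sanity check, not a rule; the 10m tile span is my stated preference from above):

```python
# Texel density for the texture sizes discussed above, assuming one
# texture stretched across a 10 m tile.

TILE_METERS = 10.0

def texels_per_cm(texture_px, span_m=TILE_METERS):
    """Pixels of texture per centimeter of world space."""
    return texture_px / (span_m * 100.0)

if __name__ == "__main__":
    for px in (512, 1024, 2048):
        print(f"{px:>4} px over {TILE_METERS:.0f} m -> "
              f"{texels_per_cm(px):.3f} px/cm")
```

So 512 gives about half a pixel per cm, 1024 about one, and 2048 about two, which is why the 2048 case is the first one where per-centimeter wall detail stops looking like big same-colored squares.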