Author Topic: A second type of "bumpmapping" possible but not yet discovered. (?)  (Read 2867 times)

Legacy_Zarathustra217

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #75 on: November 09, 2015, 03:18:45 pm »


               

As a note, bumpreplacementtexture is in all likelihood intended to point to a substitute texture for when bumpmapping is not enabled or supported by the end user (when you use bumpmaps/normalmaps, you make your textures unshaded, which looks quite flat when there's no bumpmap overlay). Using it to point to a bumpmap instead simply generates the same effect as if you referred directly to the bumpmap (8-bit greyscale with isbumpmap 1) in the model file.
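To make that concrete, here is a minimal sketch of how the two TXI files could be laid out under that reading. The file names are hypothetical; only isbumpmap, bumpmaptexture and bumpreplacementtexture are keywords actually discussed in this thread:

mywall.txi (the regular diffuse texture; mywall_shaded would be a pre-shaded fallback):
  bumpmaptexture mywall_bump
  bumpreplacementtexture mywall_shaded

mywall_bump.txi (the 8-bit greyscale bump source):
  isbumpmap 1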


               
               

               
            

Legacy_MerricksDad

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #76 on: November 09, 2015, 03:45:52 pm »


               

I've put something together, but I think I need somebody to test it who has the ability to turn on shiny water. I don't.


 


It appears I could get the built-in bump mapping to work properly if I had that option. Shinywater.tga uses the full 4 channels, and all it has turned on is ISBUMPMAP; the rest just animates the texture and otherwise has nothing to do with the bumpmap. The thing is, shiny water is not the base texture; another texture is set to point to it.


 


When I create an alternative texture to shinywater.tga, give it a txi file with just ISBUMPMAP set to 1, and then point another texture to it using BUMPMAPTEXTURE, it does in fact try to load that image. I then tracked the error message to the same functions that crash when I try to use shiny water on this machine. I think it will work on another machine. That does me no good, but it might help someone else.
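For anyone wanting to reproduce the test, the two-file setup being described is roughly this (texture names invented for illustration):

floor_stone.txi (the texture actually applied to the mesh):
  bumpmaptexture floor_stone_b

floor_stone_b.txi (the stand-in for shinywater.tga):
  isbumpmap 1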


 


 


 


In addition to playing with actual bump mapping, I did a three-mesh stack, each mesh using a separate texture with a different alpha channel. I grabbed a 3-channel normal map and separated it into 3 axes. I think R was X, B was Y, and G was definitely Z. What I found was that I can combine them all by placing the Z shine image on the bottom, because it has the most coverage at 100% opacity. I then stack the X or Y images above it, with mesh alpha set to 0.5 in addition to the texture alpha. This lets all three channels function on the same object. I then set the envmap in the txi for each texture, allowing me to specify different maps. I can put that together for viewing too, later today.
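As a rough sketch of the per-layer TXIs under that scheme (envmaptexture is the only keyword involved; the layer and envmap names are made up, and the 0.5 alpha sits on the mesh in the model rather than in the TXI):

layer_z.txi:
  envmaptexture env_z

layer_x.txi:
  envmaptexture env_x

layer_y.txi:
  envmaptexture env_y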


 


Setting all three envmaps to the same image just blends them together, and it is not very spectacular; the same result could be created using one image. But with separate envmaps, they generate something interesting.



               
               

               
            

Legacy_MerricksDad

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #77 on: November 10, 2015, 12:13:53 am »


               

I'll probably pack the true bump map attempt tomorrow. Here is a multi-layer effect with three envmapped layers sandwiched between two true textures. It responds well to at least two camera regions, if not three. The sandwich starts with some base texture onto which you add the other layers. Technically you could start with no bottom bread and set the first envmapped layer to 100% opacity. The envmapped layers cover a normal map's split channels, which represent X, Y and Z lighting. Note that it will not function that way in NWN, so this is just faking it in 2D. The 5th layer, or top bun, is the colorant texture. After the three inner layers are created, there is NO color left in the texture, as the envmapping turns it gray.


 


I chose 50% gray for the base color of the envmapped regions because it neither got too dark nor too light, and could be repainted afterward. For a darker result you can obviously go black; go white for a bright texture. I think of this as painting minis: if I need something to seem opaque, I prime with black; if I need it to seem transparent, or filled with otherworldly glow, I prime it white.


 


I tried about 100 ways of setting up the envmapping, but mostly all I can get is two clearly visible channels. The third kind of flits in real quick as it transitions from X to Z lighting. I tried concentric rings so that the texture for the third channel would get hit harder. Previously I tried a combination of X-edge half-circles and Y-edge half-circles. Imagine the first as a "dot" texture straddling the left and right edges instead of being centered in the middle of the image; the second would straddle the top and bottom edges. It was interesting, but didn't get me exactly what I wanted, as there was too much visual overlap with the dot in the center.


 


You'll also see I needed to invert some of the channels to properly show the lighting change. Leaving them as-is makes it start to "glow" in a way, with light coming from nowhere to illuminate the whole thing like weird plastic.


 


To force the transition to be more visible, I burned the envmap texture in the centers so that there is a harder edge to the transition. Without that, the transition is so gradual you almost miss the change.


 


If you poke around at how I did this, you'll find that the majority of the work is done in the txi files. Even the envmap files have txi files which force the painting to be done as additive, rather than normal channel blending.
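A minimal guess at what the envmap-side TXI contains, going only by the "additive" remark above (file name invented):

env_x.txi:
  blending additive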


 


Anyway, here is the multi-layer kit for inspection. I certainly hope all the textures are there. I also modified placeables.2da, but the changes were mostly non-functional; I was trying to force more light onto the object, and I managed that a little by unchecking static on the 4th object.


 


https://www.dropbox....mo_ret2.7z?dl=0


With that said, as fascinating as this has been, and while I can certainly use it on characters, I don't think this is a good use of my time while trying to make non-creature stuff. Definitely try it on creatures though, especially low-poly ones! It will give them new life. Do note that I'm using !!! 5 !!! meshes where you would normally use one, so don't do this on high-poly models, or on skins. You'll probably not be happy with the output, or the engine load.



               
               

               
            

Legacy_Zarathustra217

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #78 on: November 10, 2015, 10:18:47 am »


               

Hmm, I'm sorry to say it, but as far as I can tell, there's no bumpmapping effect.


 


The same applies to the other effect discussed here. The vibrant/"rich lit" effect is simply generated by the overlay texture's alpha layer making it seem more pronounced. You can get the same effect by merging the alpha layer into the texture using the "multiply" blend mode in Photoshop. This screenshot shows the result of that, compared to the original:


 


richliteverything_zps4qszog2l.jpg


 


One of these is just a plain 2D texture with no alpha and no environment map, where I've merged the alpha layer into the main texture using the "multiply" blend mode in Photoshop (I leave it up to you to guess which one it is, though!).


 


In the end, it is just the same technique chico used in NWNCQ - i.e. pseudo-bumpmapping through an overlay. What we are all hoping to do here is create diffuse-light bumpmapping, but where things ultimately still stall is in making the texels of the texture react correctly to the position of lights in the scene. Despite everyone's tireless efforts, I'm afraid we still haven't broken that barrier.



               
               

               
            

Legacy_OldTimeRadio

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #79 on: November 10, 2015, 11:36:07 pm »


               
@MerricksDad - I'm having the oddest problem with that demo.  When I load it up, this is what I see.  I copied your demo module into my HAK directory several times just to make sure, as well as making sure my override directory was completely empty.

 

@Zarathustra217 - Your concern has been noted.  Happy modding!


               
               

               
            

Legacy_MerricksDad

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #80 on: November 11, 2015, 02:42:31 am »


               

Yeah, that's how it's supposed to be. I'm showing the multi-layered texture top-down so that when you move the camera angle you can see the change, rather than picking up imagined change from light sources. I needed a site with no grass, so I went into the castle for a look. Are you able to see all three envmap/alpha channel combos on the floor? I can't tell from your video.



If the question is about the delay at startup, I don't understand that either. You also had a point where it was not textured, just before the color was added. I don't have that on mine.



               
               

               
            

Legacy_OldTimeRadio

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #81 on: November 11, 2015, 04:12:36 am »


               

@MerricksDad - For some reason the top layer wasn't working out for me and I had to fiddle with it a bit.  I may only be catching one of the layers below, but it still looks good.  If you revisit this effect, you might consider inverting anything that looks like your env_test2.tga and see if you don't like the low-angles-produce-details effect.  I like the addition of the blending additive, too.  Speaking of which, I was using gDebugger to poke into NWN's normal maps to get an idea of what grayscale ranges they use, and it turns out NWN makes a 24-bit, 2-axis (and a height map?) normal map.


 


:blink:



               
               

               
            

Legacy_MerricksDad

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #82 on: November 11, 2015, 02:18:07 pm »


               

24 bits would be three channels, where RGB = XYZ, as I found by separating the normal maps from Skyrim into color plates. I assume the height map is then stored in the alpha, so technically what they export is a 32-bit file, not 24.


 


I read that Dragon Age dumbs it down a bit and only has two unique channels. They use the whole of RGB as a single channel, from which they grab only the grayscale values (or maybe they just get lazy and assume red is the same as green and blue). The other useful channel is stored in the alpha. Looking at the images, they don't store a height map: just a grayscale main image and a grayscale alpha, which combined give you the red and blue channels of a regular normal map. They omit green and height mapping entirely. So the height values must be calculated or assumed, such as in an NWN TXI where you set the depth of bump maps with bumpmapscaling or something similar.


 


The NWN shiny water is a 32-bit, 4-channel, full normal map, so I assume they're using it all. That also makes sense: they'd use all 4 channels, or the effect would be wrong without the height map. It was amazing to find that they simply swirl the colors around using the water effect on all 4 channels to create the shiny water. I don't know if anybody has tried this (I cannot see it myself), but they also left the arturo version in there, which should take less memory to process than the water procedure. It probably doesn't look as good as the water procedure, but arturo does the job most of the time. Amazing that all 4 channels are just common static.
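Piecing the above together, a shinywater-style TXI would presumably pair the bump flag with one of the procedural animations. This is only a guess at its general shape, not a dump of the actual file:

shinywater-like .txi (speculative):
  isbumpmap 1
  proceduretype water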


 


Here's an example of the 24 bit image, minus the alpha channel.


 


UUG5992.png


 


I've split it into its constituents, and you can definitely see they have an X, Y and Z. The third image appears to be a height map, but it isn't, because that would require the mortar region to be darker than the rock tops. So it's definitely a top-down view. I grabbed this from a bitmap, not a 32-bit TGA, so I can't tell whether it has a height map stored in the alpha or not. I'd really like to see that.


 


If you go where I got the test texture for the package I sent over, you can get the height map, flat diffuse, normal map, and some other things I don't even know how to use at all.


 


http://emmacharnley....-continued.html


 


Very fun to see what all goes into one texture for the newer games out there!



               
               

               
            

Legacy_MerricksDad

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #83 on: November 11, 2015, 02:33:51 pm »


               

If I had time, I would extend my earlier example into a 5-layer envmap stack that should be able to nearly duplicate bump mapping, but again only relative to the camera angle. I'd do a positive map for every cardinal direction plus up. The envmaps would be concentric rings set up in a way that makes it look like the lit side is ALWAYS facing the camera. While that doesn't look good for lighting, it definitely makes things pop out more correctly. With only the three layers I put in that previous package, it has some really interesting depth in the stones; with 5 it should look as accurate from the X axis as it does on the Y axis.

The trick is finding blending ranges for forward and backward which, when combined with the Z-axis map, don't turn either too white or too black. I tried cutting their values in half using 50% opacity, but that really didn't help. I also tried 75%, which was better, but with the blending additive needed to pull off the effect, I needed higher values (100%) to really show the difference and make it worth the trouble. Because you can't blend additive black to get darker values, you'd have to have two of the cardinal direction maps use normal blending, which will be really tricky.



               
               

               
            

Legacy_MerricksDad

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #84 on: November 11, 2015, 03:24:25 pm »


               

After examining what you showed in your link, I can see that you aren't supplying a normal map at all. Turning on isbumpmap expects a height map, and the engine creates the three-channel normal map itself. I think we're getting somewhere now.


 


So if we supply a height map instead of preconstructed normal maps, the engine should be able to do regular normal maps for us. Try that. Unfortunately I can't ever use it on any machine in my house, but it would be really interesting to see.


 


For instance, have your object painted with this texture, then set its txi so that bumpmaptexture points to this texture. Then set the second texture's txi to isbumpmap 1 with nothing else. Give your first texture any common envmaptexture entry as well and see what it does. It should do the whole bumpy shiny water effect on that mesh, and I expect it to be accurate.
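Spelled out as files, that recipe is the same two-file pattern as before plus the envmap entry; all names here are placeholders, and only bumpmaptexture, isbumpmap and envmaptexture come from the thread itself:

mytex.txi (the texture painted on the object):
  bumpmaptexture mytex_height
  envmaptexture default_env

mytex_height.txi (the height map):
  isbumpmap 1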



               
               

               
            

Legacy_OldTimeRadio

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #85 on: November 11, 2015, 11:20:01 pm »


               

Well, I am mostly trying to do things that don't use shinywater per se... but maybe use some of its effects.  For instance, isbumpmap 1 on an 8-bit texture will give me a 24-bit texture which seemingly responds to light like a bumpmap would, and can even be roughened up with bumpmapscaling in a TXI.  The problem is... it's blue, because it's a normal map.  This is where (facial twitch) that isdiffusebumpmap stuff would come in handy, if it could be coaxed to produce some effect.  Anyway, I'll poke around with those textures (thanks for the site link) and see what's worth seeing.
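For reference, the combination described here amounts to just the following in the 8-bit greyscale texture's TXI; the scaling value is an arbitrary example, not one taken from this thread:

height8bit.txi:
  isbumpmap 1
  bumpmapscaling 2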


 


Edit 11/25/15:


I couldn't find any function being called in OpenGL to convert grayscale to a normal map, and right now the "isbumpmap 1" process appears to be the result of a Sobel edge detection procedure.  How "noisy" that procedure is is determined by "bumpmapscaling".  The results might be normalized to a certain range after that.  From some digging, this also appears to have been "how it was done" back in the day.


* In checking out the oddball "life" proceduretype ("Conway's Game of Life"), it occurred to me that I can't find the !!vp1.0 vertex programs that generate it and the other proceduretypes.  I'm not sure what to make of that.  


*  When the proceduretypes are running as bumpyshiny, they appear to be doing their work by modifying the alpha (and just the alpha) of a fabricated texture I will now describe: a 128x128 texture that's pure black in the Red and Green channels, pure white in the Blue channel, and gray at (127,127,127) in the Alpha channel.  This is "mixed" with the cubemap (the reflection), the NormCubeMap (the "normalisation cube map") and the vertex program, where the grayscale alpha is used to determine faces.  The only place I really see this sort of thing being done is in material from right around the time NVidia released the "Bumpy Shiny Patch" demo.


*  I've been using GLIntercept quite a bit and confirmed that every single texture that gets loaded by NWN... has a cubemap in the pipeline for it AND the normalisation cube map.  I've been seeing this for about as long as I've been using a GL debugger on NWN (a couple of years now), and what it tells me is that, at least in some way of looking at it, NWN has what you need for simple bumpmapping (or other decently interesting effects) in every single texture it loads and sends to OpenGL.  That's one approach.  Another is trying to add face normals into a model that's compiled with just a regular number of vertices, OR hex editing a pointer in an existing binary model to use something other than the normals that were produced when it was compiled.  Basically, anything to either hack more detail into a model, or get more detail into a model in such a way that it's "lighter-weight" than a model with a normal map... and then be able to confirm that.  Remember, the other side of the same coin of bumpy shiny is basically causing shading because of a bump map, kind of like this "disease" demo NVidia did back in the day.



               
               

               
            

Legacy_MerricksDad

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #86 on: November 26, 2015, 04:39:38 pm »


               

I can't seem to let go of this idea of using multiple env maps in layers on a single object. When I was testing that weird brick texture, I originally let the separated envmap channels be colored while applying them, so I could see where one transitioned into the other. To see the transition I needed to switch to blending additive, and that's actually how I stumbled across that for later use. I let the three separated env maps be represented by cyan, magenta and yellow, which made them very identifiable in game.


 


I've since forced my internet speed down when loading Drakensang so I can see how they place all the layers on top of each other to give the great appearance they have. I've also been looking at the spectacular mods for Star Wars Battlefront. There is a screenshot where Vader's helmet is modified by two envmaps: his outer helmet is set up with a standard envmap which shows some lit-room color, but mostly shows light on the right, while the face mask's env map is set up with something like a red photo-developing room, probably to make it look like he's in front of a control panel on his ship. Most of that red light shows on the left. This is something you could easily duplicate with multiple envmap layers and additive blending, and I'd like to play with that more.


 


One of the areas in Drakensang makes use of this a lot with crystal and some magical glows, and I'd really like to bring that over to NWN... just because we can.



               
               

               
            

Legacy_OldTimeRadio

A second type of "bumpmapping" possible but not yet discovered. (?)
« Reply #87 on: November 27, 2015, 05:54:50 pm »


               

Quote from: MerricksDad
"I can't seem to let this idea go of using multiple env maps in layers on a single object. When I was testing that weird brick texture, I originally let the separated envmap channels be colored while applying them, so I could see where one transitioned into the other. To see the transition I needed to switch to blending additive, and that's actually why I stumbled across that for use later. I let the separated 3 env maps be represented by cyan magenta and yellow, which made them very identifiable in game."



 

Well, if we're talking about something like a static placeable, you can have an unlimited number of environment maps.  But it has to be static (and maybe classification tile?) in order to work.  You're still "limited" to one envmap for each mesh in the model, because you can only do one texture per mesh and the envmaptexture is called in the texture's .TXI.  I've done probably 4-5 envmaps on meshes within the same model that way.

 



Quote from: MerricksDad
"I've since forced my internet speed down when loading Drakensang so I can see how they place all the layers on top of each other to give the great appearance they have. I've also been looking at the spectacular mods for star wars battlefront. There is a screenshot where Vader's helmet is modified by two envmaps. His outer helmet is set up with a standard envmap which shows some lit-room color, but mostly shows light on the right. The face mask env map is set up with some red-room like for photo developing, probably to make it look like he's in front of a control panel on his ship. Most of that red light shows on the left. This is something you can easily duplicate with the multiple envmap layer with additive blending, and I'd like to play with that more.

One of the areas in Drakensang makes use of this a lot with crystal and some magical glows, and I'd really like to bring that over to NWN... just because we can."



 


I would love to see how some of this stuff plays out if you experiment!


 


General notes:


* The graphics pipeline changes for the following settings (at least):

-Enable Texture Animations

-Environment Mapping on Creatures

-Visual Effects High Enabled (this might just mess with emitter functions)

-Enable Shiny Water

 

The full domain of what some of the above do and how (exactly) they do it is a bit of a mystery to me, but I can see the difference in the individual frames I grab via GLIntercept.  With them turned on, it's pretty common to have a base texture and envmap pair occupying TEX0 and then other maps, usually cube maps, hanging out in TEX1-4 or 5.  It may be that some TXI commands which appear not to work actually do something, just not when some of the above settings are turned on.

* After reading this fascinating approach to hacking models into submission, I've started hex editing compiled models a bit.  Because of the ambiguity in testing presented by the first bullet point about the graphics pipeline, flipping bits or changing pointers from one thing to another hasn't produced anything interesting yet.  I have hex edited DDS files even less; however, the last bit of information stored in the Bioware header is mostly unknown, and someone once posted that they believe it is an alphamean setting.  Though I don't know how this would differ from a .TXI-based alphamean change, if that's really what the setting is for, it could have some uses.  If anyone is interested in hex-editing binary models to see what can be seen, I will produce a brief guide to get them started.

* Reminding myself again that transparencyhint goes up to 5 in the Bioware export scripts.


 


Edit (11/30/2015):


*  To understand this next bit, you have to understand that compiling a model with a supermodel sometimes requires that supermodel to be available to the compiler.  A perfect in-a-nutshell description from Brian Chung in  "Change Phenotype Lag" in the Omnibus:


"BioWare uses a separate, external compiler for that, and yes, the source MDLs in the supermodel hierarchy need to be in the same folder during compile and play, as they're linked in some manner after being compiled."


If anyone is looking for more specifics, see 0x0068 in the model header.  Think "This is probably about memory management and letting the engine know when it can safely unload a model which is only used as a supermodel, during gameplay".


 


Years ago, I was looking at some raw compiled VFX_*.mdls  and found them referencing other VFX files.  But not files they were supermodeled into.  I remember little except puzzling over the oddity of it. 


 


Fast forward to this last week.  After examining a simple animesh ASCII file, I noticed some unusual additional occurrences of "bitmap".  You can probably see this by making a plane and animating a noise (?) modifier with a length of 90 frames and a sample period of 3.  Anyway, being the imp I am, I changed each of the bitmap references to something different, compiled the model and checked with a hex editor.  The first of the three occurrences was not found in the compiled model, but the second was.  The third happened to be a tga (with a rather convoluted TXI) that was not referenced in the model at all; it merely happened to be in the directory with the ASCII model at the time.

I bring all this up because I am starting to suspect that what the .MDX is in Knights of the Old Republic is probably the "raw data" area of a compiled model, just split into a separate file.  Why a file which was not specifically mentioned in an ASCII model (a texture, no less!) was referenced in a compiled model, in the correct place for a texture, is a mystery.  Using processmon to see what files the Bioware model compiler is looking for doesn't yield anything; it just gets a directory listing and then does (whatever) under the sheets.  I guess what I'm thinking is there may be some "relationship" between a TGA (or a TGA with a TXI, or whatever) and a model which happens to be compiled with it.  This is all with the internal Bioware model compiler, BTW.