Acolyte of the Butterfly
Posts: 109 from 2013/12/5
@thellier
Quote:
So all is said : It is up to you to find another way ....
As said before:
I already have a workaround as a fallback that doesn't need manual multiplying (at the cost of some image quality).
And no, actually it's not up to me, it's up to the driver coder to fix it.
Quote:
He certainly use rectangular textures for a gui
In this particular case, yes, although my textures are most often rectangular regardless of what I do with them.
And again: it doesn't matter what I want to do with them. A core feature is missing, period. We have a bug here.
Quote:
Doing it manually is only
having/testing 2 flags : card_cant_do_npot uvarray->already_scaled
if(card_cant_do_npot)
if(!uvarray->already_scaled)
for each U V in uvarray
multiply U V with 2 floats resizeu resizev (no need for a full matrix)
Obviously you didn't get the point:
I want to use a core GL feature for that task.
Your little snippet misses the most important parts of a real-world implementation, and it misses most of the important implications I already mentioned above. It is over-simplified and therefore not a valid argument. This is a more valid snippet:
Code:
if(uv_array_type==GL_FLOAT) DoFloatScaleImplementation(pointer,count,stride,factors);
else if(uv_array_type==GL_INT) DoIntScaleImplementation(pointer,count,stride,factors);
else if(uv_array_type==GL_SHORT) DoShortScaleImplementation(pointer,count,stride,factors);
else if(uv_array_type==GL_DOUBLE) DoDoubleScaleImplementation(pointer,count,stride,factors);

void DoFloatScaleImplementation(void *data,unsigned int count,int stride,const float *uv_scale)
{
    unsigned char *p=(unsigned char*)data;
    for(unsigned int t=0;t<count;++t) {
        float *uv=(float*)p;
        uv[0]*=uv_scale[0];
        uv[1]*=uv_scale[1];
        p+=stride; /* step a byte pointer; a cast expression is not an lvalue */
    }
}
// repeat for each array type, or use templates in the first place
Also you need additional management. Yes, in the end it boils down to a check against an "already-multiplied" boolean, but in the real world you first have to add such a flag to your array-management code.
In contrast to all that:
by using the texture-matrix all that hassle is gone. All you need to do is add a flag/factors to the texture-info (which you'd have to do with the manual approach anyway) and wrap BindTexture to catch that info and modify the texture-matrix if necessary. Done. No need to flag or modify texcoord-arrays. Plus you can do more with it: it is a matrix, so you can also combine translation with scaling to zoom into a texture, for example.
By the way: your approach won't work when the texcoord-array is const (which is not an unlikely situation at all!).
In such a case you'd have to make a copy of everything, which would also imply that your scaler changes the texcoord-pointer... Brilliant idea
No thanks: your approach has way too many side-effects once you really think its consequences through, is too limited, and needs too many manual tweaks all over the code. I prefer the commands GL offers for that task.
Yours may be the way to go in a HelloWorld program, but not in a real-world application.
Oh, here is a real-world example where the texture-matrix was incredibly useful, and it had nothing to do with "rectangular UI textures":
When I wrote the 3D engine for that game here back in 2009, it had to run on iPhone 1 hardware, and there was a heavy RAM limit. Therefore the UVs of all models and of the terrain were SHORTs. Guess what: without texture-matrix-scaling this wouldn't have been possible, because the texture-matrix multiplication is applied after the arrays are submitted to the GL, i.e. in terms of floats.
But thanks to texture-matrix-scaling the shorts could effectively stand in for a 16-bit float.
Cheers, hope you learned something
Anyway, I think even thellier agrees that we have a bug in the driver here and that it would be nice if it got fixed, so people can decide for themselves whether to use that feature, for whatever reason other people may not see at first sight.
Is this going to be fixed, and if so, when? It would be nice to know, because if it's going to be fixed with the next MorphOS version I could default to this rendering path whenever I detect that new OS version. Currently I default to the ugly workaround, and you can toggle it via a tooltype.