Set Pixel Question

Mar 7, 2012 at 9:33 PM

Why does the set pixel function take uints instead of doubles?

Developer
Mar 8, 2012 at 3:01 PM

Why on earth would the set pixel function for a graphics driver take doubles?

A pixel location on the screen can't be negative, so the pixel location is unsigned. I don't currently know of a graphics card capable of rendering over 4GB of data to the screen, which is what anything larger than a uint would imply. And lastly, you can't address a screen at the sub-pixel level, so there's no need for decimals when setting a pixel.
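To illustrate the point, here's a minimal C# sketch, not the actual Cosmos driver code; the FrameBuffer, Width, and Height names are made up for the example:

```csharp
// Hypothetical sketch only, not the real Cosmos/Orvid driver API.
public class FrameBuffer
{
    private readonly uint[] pixels;
    public uint Width { get; private set; }
    public uint Height { get; private set; }

    public FrameBuffer(uint width, uint height)
    {
        Width = width;
        Height = height;
        pixels = new uint[width * height];
    }

    // Pixel coordinates are whole, non-negative screen positions,
    // so uint is the natural parameter type: no sign check is needed
    // and sub-pixel fractions simply can't be expressed.
    public void SetPixel(uint x, uint y, uint color)
    {
        if (x >= Width || y >= Height)
            return; // outside the visible area
        pixels[y * Width + x] = color;
    }
}
```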

So, I'll ask again, why on earth would the set pixel function for a graphics driver take doubles as pixel locations?

Mar 8, 2012 at 8:16 PM

First of all, the CPU calculates doubles and floats faster than ints and uints, and secondly, to make a circle you multiply x and y by pi.

Developer
Mar 8, 2012 at 8:42 PM

For the first point, that would be the GPU you're thinking of. Standard floating point math on x86 (ignoring SSE3 and above, which Cosmos doesn't even use, so it's a moot point anyway) is slower than integer math, especially in Cosmos, which keeps all values on the regular stack whether they're floating point values or integer values.
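If you want to get a feel for the difference yourself, here's a rough desktop-CLR sketch (an assumption for illustration, not a Cosmos benchmark; numbers will vary with the CPU and JIT, and Cosmos compiles IL very differently):

```csharp
// Rough desktop-CLR comparison of integer vs. double arithmetic.
// Not representative of Cosmos; it only shows how you'd measure the claim.
using System;
using System.Diagnostics;

class IntVsDouble
{
    static void Main()
    {
        const int iterations = 100000000;

        Stopwatch sw = Stopwatch.StartNew();
        uint ui = 1;
        for (int i = 0; i < iterations; i++)
            ui = ui * 3u + 1u;          // pure unsigned integer arithmetic
        sw.Stop();
        Console.WriteLine("uint:   {0} ms (result {1})", sw.ElapsedMilliseconds, ui);

        sw.Restart();
        double d = 1.0;
        for (int i = 0; i < iterations; i++)
            d = d * 3.0 + 1.0;          // pure floating point arithmetic
        sw.Stop();
        Console.WriteLine("double: {0} ms (result {1})", sw.ElapsedMilliseconds, d);
    }
}
```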

Secondly, the result of that multiplication is easy to get by first converting x and y to floating point values, or better yet, loading them as integers onto the FPU, then doing the multiplication and storing the result back to the stack, being sure to truncate the decimals. (See the DrawCircle() method in Orvid.Graphics.Image for an example.)
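Roughly, the idea looks like this. This is a hypothetical sketch reusing the FrameBuffer type from the example above, not the actual Orvid.Graphics.Image code:

```csharp
using System;

// Hypothetical sketch, not the actual Orvid.Graphics.Image.DrawCircle().
public static class CircleSketch
{
    // The trigonometry runs in doubles internally, but the results are
    // truncated to whole pixels before touching the uint-based SetPixel.
    public static void DrawCircle(FrameBuffer fb, uint centerX, uint centerY,
                                  uint radius, uint color)
    {
        double step = 1.0 / radius;             // fine enough to avoid gaps
        for (double angle = 0; angle < 2 * Math.PI; angle += step)
        {
            double fx = centerX + radius * Math.Cos(angle);
            double fy = centerY + radius * Math.Sin(angle);

            // Skip points that would land off the left/top edge and
            // wrap around when cast to uint.
            if (fx < 0 || fy < 0)
                continue;
            fb.SetPixel((uint)fx, (uint)fy, color);
        }
    }
}
```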

Lastly, the floats and doubles would have to be converted to integer values before they could be drawn anyway, which would increase the overhead of set pixel for everything, when that overhead isn't needed at all.
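For comparison, a double-taking SetPixel could only ever be a wrapper like this hypothetical one, paying extra checks and a truncation on every single call just to get back to the uints the frame buffer needs anyway:

```csharp
// Hypothetical wrapper, not real Cosmos code: a double-based SetPixel
// just ends up truncating back to the uints the hardware needs.
public static class DoubleSetPixelSketch
{
    public static void SetPixel(FrameBuffer fb, double x, double y, uint color)
    {
        // Extra work every pixel write would now pay for:
        if (double.IsNaN(x) || double.IsNaN(y) || x < 0 || y < 0)
            return;                             // values a uint could never hold
        fb.SetPixel((uint)x, (uint)y, color);   // truncate and forward
    }
}
```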