
Why does the set pixel function take uints instead of doubles?



Why on earth would the set pixel function for a graphics driver take doubles?
A pixel location on the screen can't be negative, so pixel coordinates are naturally unsigned. I don't currently know of a graphics card capable of rendering a screen larger than a uint can address (over 4 billion pixels on an axis). And lastly,
you can't address a screen at the subpixel level, so there's no need for decimals when setting a pixel.
So, I'll ask again: why on earth would the set pixel function for a graphics driver take doubles as pixel locations?
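To make the question concrete, here is a minimal sketch of the kind of interface being argued for. The names, screen size, and framebuffer layout are all illustrative assumptions, not Cosmos's actual API:

```c
#include <stdint.h>

/* Hypothetical framebuffer; dimensions are illustrative, not Cosmos's. */
#define SCREEN_W 640u
#define SCREEN_H 480u

static uint32_t framebuffer[SCREEN_W * SCREEN_H];

/* With unsigned coordinates, negative positions are impossible by
 * construction, and one comparison per axis clips anything off-screen.
 * No floating point is involved anywhere. */
void SetPixel(uint32_t x, uint32_t y, uint32_t color)
{
    if (x >= SCREEN_W || y >= SCREEN_H)
        return; /* clip instead of writing out of bounds */
    framebuffer[y * SCREEN_W + x] = color;
}
```

Note that the single `>=` check also catches what would have been negative values if a caller did signed arithmetic and the result wrapped around, since a wrapped value is enormous and fails the bounds test.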



First of all, the CPU calculates doubles and floats faster than ints and uints, and secondly, you need them to make a circle by multiplying x and y by pi.



For the first point, that's the GPU you're thinking of. Standard floating point math on x86 (ignoring SSE3 and above, which Cosmos doesn't even use, so it's a moot point anyways) is slower than integer math, especially in Cosmos, which keeps all values
on the regular stack, whether they're floating point values or integer values.
Secondly, the result of that multiplication is easily doable by first converting x and y to floating point values, or better yet, loading them as integers onto the FPU, then doing the multiplication, and storing the result back to the stack, being sure to truncate
the decimals. (See the DrawCircle() method in Orvid.Graphics.Image for an example.)
Lastly, the floats and doubles would have to be converted to integer values before they could be drawn anyway, which would add overhead to set pixel for every caller, when that overhead isn't needed at all.

