I’ve been working up some demos for Project Cooper to show it working well with the Java class libraries and with the Android SDK. Most recently I’ve been trying my hand at getting some OpenGL demos going. I’ve briefly bumped into OpenGL over the years, but have by no means become a dab hand at navigating the mass of calls in the library, and so have been finding the OpenGL documentation invaluable. To help me get something working I’ve been following steps in books and looking at demos found online.
One demo in a book I have involves a spinning cube (a popular type of demo), that has an image used to texture each side of the cube, and also makes the cube transparent. It’s a nice looking demo for sure. I cobbled the code into a Project Cooper application and tested it out and was left very puzzled. You see, the demo ran just fine on the Android emulator. But on my shiny HTC Desire the cube had no bitmap texture on it – the cube was bare!
So the challenge was to find out why a well-performing physical Android device was beaten by the meagre emulator. The solution was found using logging, documentation, the Android source and some general web searching.
The first thing to do was ensure the OpenGL code was emitting as much debug information as possible so I turned on the debug flags:
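With a GLSurfaceView this is a one-liner (a sketch; mGLSurfaceView stands in for however your demo names its view):

```java
// Log every GL call and check glGetError() after each one.
// DEBUG_LOG_GL_CALLS is very verbose, so only enable it while debugging.
mGLSurfaceView.setDebugFlags(
        GLSurfaceView.DEBUG_CHECK_GL_ERROR | GLSurfaceView.DEBUG_LOG_GL_CALLS);
```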
Back to the story…
After checking the log output in DDMS and seeing where the error came up in amongst all the various OpenGL calls I narrowed the issue down to where the bitmap was being loaded and applied as a texture:
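The offending sequence looked something like this (a reconstructed sketch; the resource name is made up for illustration):

```java
// Load the texture bitmap from a drawable resource...
Bitmap bitmap = BitmapFactory.decodeResource(
        context.getResources(), R.drawable.texture); // hypothetical resource id
// ...and hand it to OpenGL as the current 2D texture.
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
```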
The first thing I did was to find some similar OpenGL code in the Android SDK’s APIDemos. They have a TriangleRenderer class, which does a similar texture load in its onSurfaceCreated() method. However, they seem to have avoided the problem entirely by reading the resource raw via an InputStream before involving the BitmapFactory:
InputStream is = mContext.getResources().openRawResource(R.raw.robot);
Bitmap bitmap;
try {
    bitmap = BitmapFactory.decodeStream(is);
} finally {
    try {
        is.close();
    } catch(IOException e) {
        // Ignore.
    }
}
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
Further searching informed me that some devices are unable to use textures whose width and height are not powers of 2. On the emulator, the bitmap from the BitmapFactory is 128×128 and so satisfies the ‘power of 2’ requirement. Clearly, on the device, with the bitmap inflated to 192×192, the requirement is not met.
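The ‘power of 2’ test itself is a simple bit trick, worth noting since it crops up whenever you validate texture sizes (a standalone sketch, not code from the demo):

```java
public class PotCheck {
    // A positive integer is a power of two iff it has exactly one bit set,
    // in which case clearing the lowest set bit leaves zero.
    static boolean isPowerOfTwo(int n) {
        return n > 0 && (n & (n - 1)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isPowerOfTwo(128)); // valid texture edge
        System.out.println(isPowerOfTwo(192)); // 128 scaled up by 1.5 - invalid
    }
}
```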
Apparently the capability to use textures that are not sized as powers of two (NPOT textures) can be determined by checking the OpenGL extensions string for the presence of the text GL_ARB_texture_non_power_of_two. So, something equivalent to:
String extensions = gl.glGetString(GL10.GL_EXTENSIONS);
if (extensions.contains("GL_ARB_texture_non_power_of_two"))
    Log.i(Tag, "NPOT texture extension found :-)");
else
    Log.i(Tag, "NPOT texture extension not found :-(");
OK, so my telephone does not support NPOT textures, but the image in the Android project is 128×128, so why does it get scaled up to 192×192 when loaded via BitmapFactory.decodeResource()? Well, looking at the documentation, a call to BitmapFactory.decodeResource(ctx.Resources, resource) results in a call to <a href="http://d.android.com/reference/android/graphics/BitmapFactory.html#decodeResourceStream(android.content.res.Resources,%20android.util.TypedValue,%20java.io.InputStream,%20android.graphics.Rect,%20android.graphics.BitmapFactory.Options)">BitmapFactory.decodeResourceStream()</a> (as opposed to the earlier BitmapFactory.decodeStream()) with a nil (or null) <a href="http://d.android.com/reference/android/graphics/BitmapFactory.Options.html">BitmapFactory.Options</a> passed in.
Looking at the source for this method, it all starts to become clear. The method description (just above the listing in the source code, and also in the documentation) says:
Decode a new Bitmap from an InputStream. This InputStream was obtained from resources, which we pass to be able to scale the bitmap accordingly.
Here’s a copy of the method, which is called with no options passed in, along with a new, empty TypedValue:
public static Bitmap decodeResourceStream(Resources res, TypedValue value,
        InputStream is, Rect pad, Options opts) {
    if (opts == null) {
        opts = new Options();
    }
    if (opts.inDensity == 0 && value != null) {
        final int density = value.density;
        if (density == TypedValue.DENSITY_DEFAULT) {
            opts.inDensity = DisplayMetrics.DENSITY_DEFAULT;
        } else if (density != TypedValue.DENSITY_NONE) {
            opts.inDensity = density;
        }
    }
    if (opts.inTargetDensity == 0 && res != null) {
        opts.inTargetDensity = res.getDisplayMetrics().densityDpi;
    }
    return decodeStream(is, pad, opts);
}
Well, in case it isn’t patently clear by now, this code is setting things up to scale the image to match the density of my device’s screen. In general this is probably a good thing: if I were placing the image on a regular View I’d want it to look the same on devices with different screen densities, and this auto-scaling helps there. But when loading an image sized with power-of-two edges to make a texture out of it, where those power-of-two edges must be maintained, the auto-scaling is most definitely not helpful.
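To see why 128 became 192: the loader scales by inTargetDensity over inDensity. With the default resource density of 160 dpi (DisplayMetrics.DENSITY_DEFAULT) and a 240 dpi hdpi screen, which is my assumption for the Desire’s reported density, the arithmetic works out exactly (a standalone sketch of the calculation, not the framework’s code):

```java
public class DensityScale {
    // Mimic BitmapFactory's density scaling: size * targetDensity / inDensity,
    // rounded to the nearest pixel.
    static int scaleForDensity(int size, int inDensity, int targetDensity) {
        return (int) (size * targetDensity / (float) inDensity + 0.5f);
    }

    public static void main(String[] args) {
        System.out.println(scaleForDensity(128, 160, 240)); // hdpi device: 192
        System.out.println(scaleForDensity(128, 160, 160)); // mdpi emulator: 128
    }
}
```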
To avoid the problem, a little referral to the documentation leads us to the inScaled field of the BitmapFactory.Options class. This tells us that the field is automatically set when the class is constructed, and that if it is cleared, no scaling takes place during the image load.
Changing the original code to start with this now makes everything happy:
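Concretely, that means constructing an Options object with inScaled cleared and passing it to the decode call (a sketch; the resource name is made up):

```java
// Ask BitmapFactory not to density-scale the bitmap, so the
// 128x128 source image stays 128x128 on every device.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inScaled = false;
Bitmap bitmap = BitmapFactory.decodeResource(
        context.getResources(), R.drawable.texture, opts); // hypothetical id
```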
This one comes from the telephone, which renders about three times as quickly, at around 52 FPS. Now, I hardly need to point out that this screenshot doesn’t look very good.
It seems that taking a screenshot is a linear process of grabbing successive lines from the screen buffer. However, if the view is updating itself quickly enough, it will have re-drawn before the screen capture concludes. Indeed, it looks like 52 redraws per second is way too many to stand much chance of getting a reasonable screen capture.
I’d estimate that maybe the screen updated 25 or so times during that screen capture, which suggests it takes DDMS around half a second to perform the capture on a connected physical Android device!