The MonoGame team has been putting a lot of effort into a cross-platform content pipeline, but given that for the most part we support loading native assets like .png, .mp3 and .wav, why bother? Well, it all boils down to a couple of words: performance and efficiency. Let's look at an example. Graphics are probably the biggest asset a game uses, and they are also a major resource hog. Textures will probably take up most of the room in your deployment package, and most of the memory on your device as well.

Textures

So let's say we have a 256×256 32-bit .png texture we are using in our game. We don't want to bother with all this compiling-to-.xnb rubbish that people do, so we just use the texture as a raw .png file. On disk, .png is very impressive in its size: that image probably only takes up 2-5 KB, keeping your application package size down. Great!

Now let's go through what happens when we load this .png from storage on a device (like an iPhone). First it's loaded from storage into memory and decompressed/unpacked from its compressed .png format into raw bytes. This is done because the GPU on your device doesn't know how to use a .png image directly; it can only use certain types of compression. So we unpack the image into memory, which takes 262,144 bytes (256 × 256 × 4, the × 4 because we have 1 byte per channel for Red, Green, Blue and Alpha). Note that 262 KB is quite a bit bigger than the compressed size. The next thing to do is create a texture from that data, and because your device can't compress on the fly (yet) it has to use that data as is. So in creating the texture we used 262 KB of graphics memory on the GPU. That doesn't sound too bad, but if you are using larger textures, say 1024×1024, then you are using 4 MB of GPU memory for that one texture. Multiply that over the number of textures in your game and you soon run out of texture memory on the GPU. Then the GPU has to swap that data out into system memory (if it supports that) or throw an error when you try to create textures that won't fit into the available memory. So to sum up, using

.pngs = smaller package size & higher memory usage & fewer textures

Now let's look at a texture pre-processed using the content pipeline. Because we know we are targeting iOS, we know the GPUs on those devices support PVRTC texture compression directly. So let's take our sample .png and compress it using PVRTC; what we end up with is a 32 KB file (the size depends on the texture, alpha channel, etc.; at PVRTC's 4 bits per pixel, 256 × 256 × 4 / 8 = 32,768 bytes). Hmm, that is a lot bigger than the .png on disk, but that is not the whole story. The difference is that there is no need to unpack/decompress it, which saves on load time, and we can create a texture from that data directly, so we only use 32 KB of texture memory on the GPU, not 262 KB. That is a massive saving.

compressed textures = larger package size (maybe) & lower memory usage & more textures
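
To make the trade-off concrete, here is a minimal sketch of the two loading paths (the file and asset names are made up for illustration): the raw route decodes a .png that ships with the app, while the pipeline route loads the pre-processed .xnb through the ContentManager.

// Raw route: decode the .png at runtime; the GPU gets the full uncompressed RGBA data.
Texture2D rawTexture;
using (var stream = TitleContainer.OpenStream("Content/sprite.png"))
{
    rawTexture = Texture2D.FromStream(GraphicsDevice, stream);
}

// Pipeline route: sprite was pre-compressed at build time (e.g. PVRTC on iOS) and is used as-is.
Texture2D compressedTexture = Content.Load<Texture2D>("sprite");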

Now, we just looked at iOS, but the same applies to desktop environments. Most desktop GPUs support DXT texture compression, so the content pipeline will produce DXT-compressed textures which can be loaded and used directly. The only platform which is a pain is Android: because Android does not have consistent support for compressed textures at the moment, MonoGame has to decompress DXT on the device and use the uncompressed data. However, even Android will be getting compressed texture support. There is currently a piece of work happening where the pipeline tool will automatically pick a texture format to use: for opaque textures it will use ETC1 (which is supported on all Android devices but doesn't support alpha channels), and for textures with an alpha channel it will use RGBA4444 (dithered). It will also allow the user to pick manually from a wide variety of compression options such as PVRTC, ATITC, DXT/S3TC, ETC1 and RGBA4444. This will give the developer the choice of what to use/support.
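
For reference, the format choice ends up as a processor parameter on the texture entry in the content project (.mgcb) file. The exact set of TextureFormat values depends on your MonoGame version, so treat this fragment as illustrative only:

#begin sprite.png
/importer:TextureImporter
/processor:TextureProcessor
/processorParam:TextureFormat=Compressed
/build:sprite.png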

Audio

Now let's look at audio. All the different platforms support different audio formats, so if you are handling this yourself you will need to manually convert all your files and include the right ones for each platform. Wouldn't a better option be to keep one source file (be it .mp3, .wma, etc.) and convert that to a supported format for the target platform at build time? OK, it makes for longer build times, but at least we know the music will work. MonoGame uses ffmpeg to do the heavy lifting when converting between formats, as it can convert pretty much any type to any other type, which is really cool.
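
Whichever source format you started from, the game code then loads the converted asset the same way on every platform; a minimal sketch with made-up asset names:

// Short, latency-sensitive sounds come in as SoundEffect instances.
SoundEffect explosion = Content.Load<SoundEffect>("explosion");
explosion.Play();

// Longer music tracks come in as Song and play through MediaPlayer.
Song theme = Content.Load<Song>("theme");
MediaPlayer.Play(theme);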

Shaders

This is an area that causes real pain: custom shaders. There are a number of shading languages you can use depending on the platform you are targeting. For OpenGL-based systems that is GLSL, for DirectX-based systems it's HLSL, and there is also Cg from NVIDIA. The Effect system in XNA/MonoGame was designed around the HLSL language. It is based around the .fx format, which allows a developer to write both vertex and pixel shaders in one place. Historically both GLSL and HLSL have had separate vertex and pixel shaders; until recently HLSL compiled and linked these at build time, while GLSL does this at runtime. Now, without a content pipeline or some form of tooling, a developer would need to write two shaders, one in HLSL and one in GLSL. The good news is that the MonoGame MGFX.exe tool can take a shader in the .fx format and have that work on GLSL. It does this by using an open source library called libmojoshader, which does some funky HLSL-to-GLSL instruction conversion to create OpenGL-based shaders, but rather than doing that at runtime we do it at build time, so we don't need to deploy mojoshader with OpenGL-based games. All this saves you the hassle of having to write and maintain two shaders.
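
Once the .fx has been compiled by the pipeline, the same C# code runs against it on both DirectX and OpenGL targets. A rough sketch, where the effect name, parameter name and matrices are placeholders:

// Load the compiled effect (.xnb produced from the .fx file by the pipeline).
Effect effect = Content.Load<Effect>("MyShader");

// Set a parameter and apply each pass before issuing draw calls.
effect.Parameters["WorldViewProjection"].SetValue(world * view * projection);
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // issue your draw calls here
}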

Now, the drawback of MGFX is that it only runs on a Windows box at the time of writing. This is because it needs the DirectX shader tooling to compile the HLSL before passing it to libmojoshader for conversion (for OpenGL platform targets). There is a plan in place to create a version of the .fx file format which supports GLSL directly, so people who want to write custom shaders on a Mac or Linux can do so, but this is still under development, so for now you need to use a Windows box.

Models

For the most part the model support in XNA/MonoGame is pretty good. XNA supports .x and .fbx files for 3D models; MonoGame, thanks to the excellent assimp project, supports a much wider range of formats including .3ds. However, some of these formats might produce some weirdness at render time; only .fbx has been fully tested. Also note that assimp does not support the very old .fbx format files which ship with most of the XNA samples, so you'll need to convert those to the new format manually. One nice trick I found was to open the old .fbx in Visual Studio 2012+ and then save it again under a new name. Oddly, VS seems to know about .fbx files and will save the model in the new format :).

Now, what happens when you use a 3D model is that it is converted by the pipeline into an optimised internal format containing the vertices, texture coordinates and normals. The pipeline will also pull out the textures used in the model and put those through the pipeline too, so you automatically get optimised textures without having to do all of that work yourself.
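
At runtime the processed model, together with its processed textures, comes back through a single ContentManager call. A minimal sketch, with a made-up asset name and assuming you already have world/view/projection matrices:

// Load the model that the pipeline built; its textures were processed alongside it.
Model ship = Content.Load<Model>("ship");

// Simplest possible rendering, using the BasicEffects the pipeline set up on each mesh.
ship.Draw(world, view, projection);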

Summary

So hopefully you've now got a good idea of why you should use the content pipeline in your games. Using raw assets is OK when you are putting together a simple demo or proof of concept, but sooner or later you will need to start optimising your content. My advice would be to use the Pipeline tooling from the outset so you get used to it.

Information on the Pipeline tool can be found here.

In a future post I will cover how to produce custom content pipeline extensions for MonoGame, which will allow you to optimise your own content or customise/extend existing content processors.

Until then Happy Coding.

In one of my previous blog posts we covered how to scale your game for multiple screen resolutions using the matrix parameter of SpriteBatch. Like all good problems, that was just one solution of many, and since then things have moved on a bit. While that technique is still valid, the bits we did about scaling the touch points are no longer really necessary. In this post we'll discover how to use the MonoGame TouchPanel to get scaled inputs for touch and mouse, as well as how to use RenderTargets to scale your game.

What is a Render Target?

OK, so you're new to gaming and you have no idea what this "RenderTarget" thing is. At its simplest, a RenderTarget is just a texture: it lives on the GPU and you can use it just like a texture. The main difference is that you can tell the rendering system to draw directly to the RenderTarget, so instead of your graphics being drawn to the screen they are drawn to the RenderTarget, which you can then use as a texture for something else, like a SpriteBatch. Pretty cool eh 🙂

Scaling your game

While RenderTargets are really useful for things like deferred lighting and other shader-based techniques, in our case we just want to use one to render our game. Supporting lots of different screen sizes is a pain, especially when doing 2D work; if you are supporting both mobile and desktop/console you will need to handle screen resolutions from 320×200 (low-end Android) up to 1080p (HD TV) or bigger. That is a huge range of screen sizes to make your game look good on. Quite a challenge.

So in our case, rather than scaling all the graphics, we are going to render our entire scene to a RenderTarget at a fixed resolution, in this example 1366×768. Having a fixed resolution means we know EXACTLY how big our "game screen" will be, so we can make sure we have nice margins and make the graphics look great at that resolution. Once we have our scene rendered to the RenderTarget we can then scale it so it fits the device screen size. We'll introduce letterboxing support so we can keep the correct aspect ratio, like we did in the previous blogs.

So let's look at some code.

Creating a RenderTarget in MonoGame is really easy; we add the following code to the Initialize method of our game

var pp = graphics.GraphicsDevice.PresentationParameters;
scene = new RenderTarget2D(graphics.GraphicsDevice, 1366, 768, false, SurfaceFormat.Color, DepthFormat.None, pp.MultiSampleCount, RenderTargetUsage.DiscardContents);

The next step is to actually use the RenderTarget. We do this in the Draw method

GraphicsDevice.SetRenderTarget(scene);
// draw your game
GraphicsDevice.SetRenderTarget(null);

As you can see, we tell the GraphicsDevice to use our RenderTarget, we draw the game, then tell the GraphicsDevice to not use any RenderTarget. We can now just use a SpriteBatch as normal to draw the RenderTarget to the screen. The following code will just draw the RenderTarget in the top left of the screen without scaling

spriteBatch.Begin();
spriteBatch.Draw(scene, Vector2.Zero, Color.White);
spriteBatch.End();

However, what we really need to do before we draw the RenderTarget is calculate a destination rectangle which will "scale" the RenderTarget so it fits within the confines of the screen, but also maintains its aspect ratio. This last bit is important because if you don't maintain the aspect ratio your graphics will distort.

So the first thing we need to do is calculate the aspect ratio of the RenderTarget and the screen

float outputAspect = Window.ClientBounds.Width / (float)Window.ClientBounds.Height;
float preferredAspect = 1366 / (float)768;

Next we need to calculate the destination rectangle, adding a "letterboxing" effect: black bars at the top and bottom of the screen (or to the left and right) which fill in the missing area so that we maintain the aspect ratio. It's a bit like watching a widescreen movie on an old TV, where you get the black bars at the top and bottom. The code to do this is as follows

Rectangle dst;
if (outputAspect <= preferredAspect)
{
  // output is taller than the target aspect ratio, bars on top/bottom
  int presentHeight = (int)((Window.ClientBounds.Width / preferredAspect) + 0.5f);
  int barHeight = (Window.ClientBounds.Height - presentHeight) / 2;
  dst = new Rectangle(0, barHeight, Window.ClientBounds.Width, presentHeight);
}
else
{
  // output is wider than the target aspect ratio, bars on left/right
  int presentWidth = (int)((Window.ClientBounds.Height * preferredAspect) + 0.5f);
  int barWidth = (Window.ClientBounds.Width - presentWidth) / 2;
  dst = new Rectangle(barWidth, 0, presentWidth, Window.ClientBounds.Height);
}

You can see from the code that we calculate how much to offset the rectangle from the top and bottom (or left and right) of the screen to give the letterbox effect. This value is stored in barHeight/barWidth and then used as the Top or Left value of the rectangle. presentWidth/presentHeight is the width/height of the destination rectangle, and the other dimension of the rectangle matches the ClientBounds, depending on whether we are letterboxing at the top/bottom or the left/right.

So with the destination rectangle calculated, we can now use the following to draw the RenderTarget

graphics.GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 1.0f, 0);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
spriteBatch.Draw(scene, dst, Color.White);
spriteBatch.End();

Note that we clear the background to black before drawing to get the nice black borders. You can change that colour to anything you like really.

What about scaling the Input?

By using a fixed-size render target we will need to do something about the touch input. It's no good on a 320×200 screen getting a touch location in 320×200 coordinates and passing that into our game world, which thinks it is 1366×768, as it won't be in the right place. Fortunately MonoGame has an excellent solution

TouchPanel.DisplayWidth = 1366;
TouchPanel.DisplayHeight = 768;

By setting the DisplayWidth/DisplayHeight on the TouchPanel, it will AUTOMATICALLY scale the input for you. That is just awesome! But wait for it... it gets even better. You can also easily turn mouse input into touch input, which is handy if you're only interested in left-click events.

TouchPanel.EnableMouseTouchPoint = true;

Now any mouse click on the screen will result in a touch event. This is great if you want to debug/test with a desktop app rather than messing about with mobile devices.
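
Reading the input then works exactly as it did before; a quick sketch of polling the (already scaled) touch points in your Update method:

TouchCollection touches = TouchPanel.GetState();
foreach (TouchLocation touch in touches)
{
    if (touch.State == TouchLocationState.Pressed)
    {
        // touch.Position is already in the virtual 1366x768 coordinate space.
        Vector2 position = touch.Position;
        // hit-test your game objects against position here
    }
}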

Things to Consider

This is all great, but to be honest scaling a 1366×768 texture down to 320×200 will look awful! So you need to be a bit smarter about the RenderTarget size you pick. One solution might be to detect the screen size, scale down the render target by factors of 2 (for example) and use smaller textures. For example, at full resolution (1366×768) you use high-res textures (e.g. @2x on iOS), but on lower resolution devices you use a RenderTarget half the size of your normal one together with normal-sized textures. Obviously you will need to scale your game's physics (maybe) and other aspects to take into account the smaller area you have to deal with, or make use of a 2D matrix camera.
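
A rough sketch of what picking the render target size might look like; the width threshold and the halving factor here are arbitrary illustration values, not something from the original post:

// Choose a virtual resolution based on the real back buffer size (illustrative threshold only).
int screenWidth = graphics.GraphicsDevice.PresentationParameters.BackBufferWidth;
int factor = screenWidth < 683 ? 2 : 1;  // halve the render target on small screens

scene = new RenderTarget2D(graphics.GraphicsDevice, 1366 / factor, 768 / factor, false,
    SurfaceFormat.Color, DepthFormat.None, 0, RenderTargetUsage.DiscardContents);

// Keep the TouchPanel in the same coordinate space as the render target.
TouchPanel.DisplayWidth = 1366 / factor;
TouchPanel.DisplayHeight = 768 / factor;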

The point being that smaller devices don't always have the same capabilities as larger ones, so you need to keep that in mind.

A full “Game1.cs” class with this code is available on gist here.