MonoGame

One of the issues MonoGame Mac (and Linux) developers face is not being able to build shaders. Getting a good HLSL shader compiler to work on a non-Windows platform is tricky. Work is under way to get this done, but it will take some time to get right.

That leaves Mac (and Linux) developers a bit stuck. Well, there is a solution: a custom Content Pipeline processor. This is where the Content Pipeline can show its pure awesomeness.

The Problem

Let's break it down: we have a shader we want to build, but we MUST build it on a Windows box. One way is to do it manually, but doing stuff manually is dull. Rather than opening a Virtual Machine and copying compiled .xnb files about, I wrote a pipeline extension. Its goal is to take the shader code, send it to the Windows box, compile it there and send the result back to be packaged into an .xnb.

The Solution

MonoGame has a tool called 2MGFX. This is the underlying tool which takes HLSL .fx files and compiles them for HLSL or GLSL targets. So what I did was create a service which just shells out to that tool and returns the compiled code (or errors). The processor then uses the existing packaging process to produce the .xnb file, or throws an error. Then I went one step further and hosted the service in Azure, which saves me having to boot my VM each time I want to compile a shader.
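
For reference, here is a rough sketch of what the Windows/Azure side might look like. This is NOT the actual service code; it assumes a Web API controller (the EffectController and Result names are made up) and reuses the Data shape you can see in the processor code further down (Platform, Code), with Result carrying Error and Compiled. The exact 2MGFX arguments and the mapping from a MonoGame TargetPlatform to a 2MGFX profile are glossed over.

using System;
using System.Diagnostics;
using System.IO;
using System.Web.Http;

public class EffectController : ApiController
{
	public Result Post (Data data)
	{
		// write the incoming HLSL source to a temporary .fx file
		var fxFile = Path.GetTempFileName ();
		var outFile = Path.ChangeExtension (fxFile, ".mgfxo");
		File.WriteAllText (fxFile, data.Code);

		// shell out to 2MGFX; platform-to-profile mapping is simplified here
		var psi = new ProcessStartInfo ("2MGFX.exe",
			string.Format ("\"{0}\" \"{1}\" /Profile:{2}", fxFile, outFile, data.Platform)) {
			UseShellExecute = false,
			RedirectStandardError = true
		};
		using (var process = Process.Start (psi)) {
			string errors = process.StandardError.ReadToEnd ();
			process.WaitForExit ();
			if (process.ExitCode != 0)
				return new Result { Error = errors };
		}

		// hand the compiled effect bytes back to the remote processor
		return new Result { Compiled = File.ReadAllBytes (outFile) };
	}
}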

The resulting processor code for this is quite simple. The new class is derived from EffectProcessor. You will see that if we are running the processor on Windows we just fall back to the default EffectProcessor code, which means you can use the same processor on Mac and Windows.

One restriction at this time is that the .fx file needs to be self-contained. In other words you cannot use includes or have code in external files. One thing I could do is plug in the MonoGame Effect pre-processor to pull all of those includes into one file, but that is a job for the future (or a PR 🙂 )

If you want to take a look at all the code you can find it here.

The Code

		public override CompiledEffectContent Process (EffectContent input, ContentProcessorContext context)
		{
			// On Windows we can compile locally, so fall back to the built-in EffectProcessor.
			if (Environment.OSVersion.Platform != PlatformID.Unix) {
				return base.Process (input, context);
			}
			var code = input.EffectCode;
			var platform = context.TargetPlatform;
			// Post the shader source and target platform to the remote compiler service.
			var client = new HttpClient ();
			client.BaseAddress = new Uri (string.Format ("{0}://{1}:{2}/", Protocol, RemoteAddress, RemotePort));
			var response = client.PostAsync ("api/Effect", new StringContent (JsonSerializer (new Data () {
				Platform = platform.ToString (),
				Code = code
			}), Encoding.UTF8, "application/json")).Result;
			if (response.IsSuccessStatusCode) {
				string data = response.Content.ReadAsStringAsync ().Result;
				var result = JsonDeSerializer (data);
				if (!string.IsNullOrEmpty (result.Error)) {
					throw new Exception (result.Error);
				}
				if (result.Compiled == null || result.Compiled.Length == 0)
					throw new Exception ("There was an error compiling the effect");
				// Package the compiled bytes into the .xnb via the normal pipeline machinery.
				return new CompiledEffectContent (result.Compiled);
			} else {
				throw new Exception (response.StatusCode.ToString ());
			}
		}

Pretty simple code, isn't it? At some point I'll see if we can replace the .Result calls with async/await, but I'm not entirely sure how the Pipeline will respond to that.
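
For what it's worth, the HTTP portion could be pulled into an async helper along the lines of the untested sketch below (the synchronous Process override would still have to block on the returned Task, which is exactly the open question above). The Protocol, RemoteAddress, RemotePort, JsonSerializer, JsonDeSerializer and Data names are the ones from the processor; everything else is assumed.

		// Untested sketch: the async version of the remote compile call.
		async Task<byte[]> CompileRemoteAsync (string code, string platform)
		{
			using (var client = new HttpClient ()) {
				client.BaseAddress = new Uri (string.Format ("{0}://{1}:{2}/", Protocol, RemoteAddress, RemotePort));
				var body = new StringContent (JsonSerializer (new Data () { Platform = platform, Code = code }),
					Encoding.UTF8, "application/json");
				var response = await client.PostAsync ("api/Effect", body);
				if (!response.IsSuccessStatusCode)
					throw new Exception (response.StatusCode.ToString ());
				var result = JsonDeSerializer (await response.Content.ReadAsStringAsync ());
				if (!string.IsNullOrEmpty (result.Error))
					throw new Exception (result.Error);
				return result.Compiled;
			}
		}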

Using InfinitespaceStudios.Pipeline

Using this extension could not be easier.

If you want to use the default service

  1. Open your project and find the Packages folder. Right-click and select Add Packages. 
  2. This will open the NuGet search dialog. Search for “InfinitespaceStudios.Pipeline” and add the package.
  3. Once the package has been added, open the Content.mgcb file in the Pipeline Editor.
  4. Select the “Content” node and then find the References property in the property grid. Double-click the References property to bring up the Add References dialog.
  5. Search for the “InfinitespaceStudios.Pipeline.dll” and add it by clicking on the “Add” button. Note this should be located in the “packages\InfinitespaceStudios.Pipeline.X.X.X\Tools” folder. Once that is done, save the Content.mgcb. Close it and re-open it (there is a bug in the Pipeline Tool). Then select the .fx file you want to change.
  6. Select the Processor property and in the drop-down you should see “Remote Effect Processor – Infinitespace Studios”. Select this item.
  7. If you are using the defaults, just save the Content.mgcb, close the Pipeline tool and build and run your app. It should compile without any issues. If there is a problem with the .fx file the error will be reported in the build log.

If you are using a custom Azure site or the local service on a Windows box, you can use the RemoteAddress, RemotePort and Protocol properties to change the location of the server. Valid Protocol values are “http” and “https” if you have a secured service. The RemoteAddress can be a CNAME or IP address.


Conclusion

Hopefully this post has shown you what a cool thing the Pipeline system is. One of my future posts will be about creating a pipeline extension from scratch, so if you are interested watch out for it. In the meantime, if you are a Mac user, get compiling those shaders!

In the past it might seem that Windows users of MonoGame get all the cool stuff, and Mac / Linux users are left out in the cold. To be honest, for a while that was true, but the great news is that more recently that has begun to change. Ongoing community efforts have resulted in both MacOS and Linux installers which will download and install templates into Xamarin Studio and MonoDevelop. They also install the Pipeline tool, which is a GUI you can use to build your content for your game.

All that was great, but again Windows had something that Mac and Linux developers just didn’t have access to: automatic content building. This is where you just include the .mgcb file in your project, set its Build Action to “MonoGameContentReference” and, providing you created the project via one of the templates, it would “just work”. Your .xnb files would appear as if by magic in your output directory without all that messy manual linking of .xnbs.

So how does it work? Well, to fully understand we need to dig into MSBuild a bit 🙂 I know recently I’ve been talking about MSBuild a lot, but that’s because in my day job (@Xamarin) I’m dealing with it A LOT! So it’s topical from my point of view 😉

So if you dig into your csproj which was created in Visual Studio via one of the MonoGame templates, you will see a number of things. The first is a <MonoGamePlatform> element. This element is used later to tell MGCB (the MonoGame Content Builder) which platform it needs to build for. Next up is the <MonoGameContentReference> element, which will contain a link to the .mgcb file. This again is used later to tell MGCB which files to build. Note that you are not just limited to one of these. If you have multiple assets at different resolutions (e.g. @2x stuff for iOS) you can have a separate .mgcb file for those and include that in your project. The system will collect ALL the outputs (just make sure they build into different intermediate directories).

The last piece of this system is the “MonoGame.Content.Builder.targets” file. This is the core of the system, and you should be able to see the Import near the bottom of your .csproj. This .targets file is responsible for going through ALL the MonoGameContentReference items in the csproj and calling MGCB.exe for each of them to build the content. It will also pass

/platform:$(MonoGamePlatform)

to the .exe so that it will build the assets for the correct platform. This is all done in the BeforeBuild MSBuild target, so it will happen before the code is even built, just like the old XNA content references used to do. But this time you don’t need to do any fiddling to get this to work on a command line; it will just work. Now calling an .exe during a build from a .targets file isn’t exactly magic; the magic bit is right here


<Target Name="BuildContent" DependsOnTargets="Prepare;RunContentBuilder"
        Outputs="%(ExtraContent.RecursiveDir)%(ExtraContent.Filename)%(ExtraContent.Extension)">
  <CreateItem Include="$(ParentOutputDir)\%(ExtraContent.RecursiveDir)%(ExtraContent.Filename)%(ExtraContent.Extension)"
              AdditionalMetadata="Link=$(PlatformResourcePrefix)$(ContentRootDirectory)\%(ExtraContent.RecursiveDir)%(ExtraContent.Filename)%(ExtraContent.Extension);CopyToOutputDirectory=PreserveNewest"
              Condition="'%(ExtraContent.Filename)' != ''">
    <Output TaskParameter="Include" ItemName="Content" Condition="'$(MonoGamePlatform)' != 'Android' And '$(MonoGamePlatform)' != 'iOS' And '$(MonoGamePlatform)' != 'MacOSX'" />
    <Output TaskParameter="Include" ItemName="BundleResource" Condition="'$(MonoGamePlatform)' == 'MacOSX' Or '$(MonoGamePlatform)' == 'iOS'" />
    <Output TaskParameter="Include" ItemName="AndroidAsset" Condition="'$(MonoGamePlatform)' == 'Android'" />
  </CreateItem>
</Target>

This part is responsible for adding the resulting .xnb files to the appropriate ItemGroup for the platform we are targeting. So in the case of a desktop build like Windows or Linux we use Content. For iOS and Mac we use BundleResource, and for Android we use AndroidAsset. Because we are doing this just before the Build process, when those target platforms actually build the content later they will pick up the items we added in addition to any other items that the projects themselves included.

Now the really interesting bit is that the code above is not how it originally looked. The problem with the old code was that it didn’t work with xbuild, which is what is used on Mac and Linux, so it just wouldn’t work there. But now the entire .targets file will run quite happily on Mac and Linux, and has in fact been included in the latest unstable installers. So if you want to try it out, go and download the latest development installers and give it a go.

If you have an existing project and you want to upgrade to use the new content pipeline system you will need to do the following

  1. Open your Application .csproj in an Editor.
  2. In the first <PropertyGroup> section add <MonoGamePlatform>$(Platform)</MonoGamePlatform>
    where $(Platform) is the system you are targeting, e.g. Windows, iOS, Android.
  3. Add the following lines right underneath the <MonoGamePlatform /> element <MonoGameInstallDirectory Condition="'$(OS)' != 'Unix' ">$(MSBuildProgramFiles32)</MonoGameInstallDirectory>
    <MonoGameInstallDirectory Condition="'$(OS)' == 'Unix' ">$(MSBuildExtensionsPath)</MonoGameInstallDirectory>
  4. Find the <Import/> element for the CSharp (or FSharp) targets and underneath add <Import Project="$(MSBuildExtensionsPath)\MonoGame\v3.0\MonoGame.Content.Builder.targets" />

Now, providing you have the latest development release, this should all work. So if you have an old project go ahead and give it a try; it’s well worth it 🙂

Xamarin announced something awesome yesterday.

Because we love seeing indie games succeed, Xamarin wants to support indie game developers all over the world in bringing their games to billions of mobile gamers. We want every indie game developer to enjoy the power of C# and Visual Studio, so we have an amazing special offer this December:

Free, community-supported subscriptions of Xamarin.iOS and Xamarin.Android, including our Visual Studio extensions

Indie game developers only need to have published a game in any framework on any platform to qualify.

This is just fantastic news. If you have an app already on one of the many stores you will qualify; this includes Xbox Live! So all you XNA developers out there with a game now have the perfect opportunity to move that game to MonoGame and publish on iOS, Android, Windows 10, MacOS and Linux, or even Apple TV!*

It is worth noting as well that porting your app to a Windows 10 Universal app will also allow your game to work on Xbox One (in the app section).

This offer expires on the 31st of December at 9pm ET, so make sure you apply before the deadline expires.

* Apple TV support was merged into the develop branch a few days ago.

The MonoGame team have been putting a lot of effort into a cross-platform content pipeline, but given that for the most part we support loading native assets like .png, .mp3 and .wav, why bother? Well, it all boils down to a couple of words: performance and efficiency. Let’s look at an example. Graphics are probably the biggest asset a game uses, and they are also a major resource hog. Textures will probably take up most of the room in your deployment and will be taking up most of the memory on your device as well.

Textures

So let’s say we have a 256×256 32-bit .png texture we are using in our game. We don’t want to bother with all this compiling-to-.xnb rubbish that people do, so we just use the texture as a raw .png file. On disk .png is very impressive in its size; that image probably only takes up 2-5 KB, keeping your application package size down. Great!

Now let’s go through what happens when we load this .png from storage on a device (like an iPhone). Firstly it’s loaded from storage into memory and decompressed/unpacked from its compressed png format into raw bytes. This is done because the GPU on your device doesn’t know how to use a png image directly; it can only use certain types of compression. So we unpacked the image into memory; this is 262,144 bytes (256×256×4, the ×4 because we have 1 byte per channel: Red, Green, Blue and Alpha). Note that 262 KB is quite a bit bigger than the compressed size. The next thing to do is create a texture for that data, and because your device can’t compress on the fly (yet) it has to use that data as is. So in creating the texture we used 262 KB of graphics memory on the GPU. That doesn’t sound too bad, but if you are using larger textures, say 1024×1024, then you are using 4 MB of GPU memory for that one texture. Multiply that over the number of textures in your game and you soon run out of texture memory on the GPU. Then the GPU has to swap that data out into system memory (if it supports that) or throw an error when you try to create textures that won’t fit into available memory. So to sum up, using
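
If you want to sanity-check those numbers yourself, the uncompressed size is just width × height × 4 bytes (one byte per RGBA channel). A throwaway helper like this (not from the original post) shows how quickly it grows:

// Uncompressed RGBA texture size: 4 bytes per pixel.
static long UncompressedTextureBytes (int width, int height)
{
	return (long)width * height * 4;
}

// UncompressedTextureBytes (256, 256)   =>   262,144 bytes (the 262 KB above)
// UncompressedTextureBytes (1024, 1024) => 4,194,304 bytes (roughly 4 MB)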

.pngs = smaller package size & higher memory usage & fewer textures

Now let’s look at a texture pre-processed using the content pipeline. Because we know we are targeting iOS, we know the GPUs on those devices support PVRTC texture compression directly. So let’s take our sample .png and compress it using PVRTC; what we end up with is a 32 KB file (size depends on the texture, alpha channel etc). Hmm, that is a lot bigger than the .png on disk, but that is not the whole story. The difference is there is no need to unpack/decompress it, which saves on load time, and we can create a texture from that data directly, so we only use 32 KB of texture memory on the GPU, not 262 KB. That is a massive saving.

compress textures = larger package size (maybe) & lower memory usage & more textures

Now we just looked at iOS, but the same applies to desktop environments. Most desktop GPUs support DXT texture compression, so the content pipeline will produce DXT-compressed textures which can be loaded and used directly. The only platform which is a pain is Android; because Android does not have consistent support for compressed textures, at the moment MonoGame has to decompress DXT on the device and use it uncompressed. However, even Android will be getting compressed texture support. There is currently a piece of work happening where the pipeline tool will automatically pick a texture format to use: for opaque textures it will use ETC1 (which is supported on all Android devices but doesn’t support alpha channels), and for textures with an alpha channel it will use RGBA4444 (dithered), while also allowing the user to pick manually from a wide variety of compression options such as PVRTC, ATITC, DXT/S3TC, ETC1 and RGBA4444. This will give the developer the choice of what to use/support.

Audio

Now let’s look at audio. All the different platforms support different audio formats; if you are handling this yourself you will need to manually convert all your files and include the right ones for each platform. Wouldn’t a better option be to keep one source file (be it .mp3, .wmv etc) and convert it to a supported format for the target platform at build time? OK, it makes for longer build times, but at least we know the music will work. MonoGame uses ffmpeg to do the heavy lifting when converting between formats, as it can pretty much convert any type to any other type, which is really cool.

Shaders

This is an area that causes real pain: custom shaders. There are a number of shading languages you can use depending on the platform you are targeting. For OpenGL-based systems that is GLSL, for DirectX-based systems it’s HLSL, and there is also Cg from NVIDIA. The Effect system in XNA/MonoGame was designed around the HLSL language. It is based around the .fx format, which allows a developer to write both vertex and pixel shaders in one place. Historically both GLSL and HLSL have separate vertex and pixel shaders; HLSL until recently compiled and linked these at build time, while GLSL does this at runtime. Without a content pipeline or some form of tooling, a developer would need to write two shaders, one in HLSL and one in GLSL. The good news is the MonoGame MGFX.exe tool can create a shader from the .fx format and have it work on GLSL systems. It does this by using an open source library called libmojoshader, which does some funky HLSL-to-GLSL instruction conversion to create OpenGL-based shaders; rather than doing that at runtime, we do it at build time so we don’t need to deploy mojoshader with the OpenGL-based games. All this saves you the hassle of having to write and maintain two shaders.

Now the drawback of MGFX is that it only runs on a Windows box at the time of writing. This is because it needs the DirectX shader tooling to compile the HLSL before passing it to libmojoshader for conversion (for OpenGL platform targets). There is a plan in place to create a version of the .fx file format which supports GLSL directly, so people who want to write custom shaders on a Mac or Linux can do so, but this is still under development, so for now you need to use a Windows box.

Models

For the most part the model support in XNA/MonoGame is pretty good. XNA supports .x and .fbx files for 3D models; MonoGame, thanks to the excellent assimp project, supports a much wider range of models including .3ds. However, some of these formats might produce some weirdness at render time; only .fbx has been fully tested. Also note that assimp does not support the very old format .fbx files which ship with most of the XNA samples, so you’ll need to convert those to the new format manually. One nice trick I found was to open the old .fbx in Visual Studio 2012+ and then save it again under a new name. Oddly, VS seems to know about .fbx files and will save the model in the new format :).

Now what happens when you use a 3D model is that it is converted by the pipeline into an optimised internal format which will contain the Vertices, Texture Coordinates and Normals. The pipeline will also pull out the textures used in the model and put those through the pipeline too, so you automatically get optimised textures without having to do all of that stuff yourself.
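
From the game’s point of view nothing special is needed to consume the processed model; it comes back through a single Content.Load call. A minimal usage sketch (the “ship” asset name and the view/projection matrices are placeholders, not from the original post):

// Load the model the pipeline produced; its textures were processed alongside it.
Model ship;

protected override void LoadContent ()
{
	ship = Content.Load<Model> ("ship");
}

protected override void Draw (GameTime gameTime)
{
	GraphicsDevice.Clear (Color.CornflowerBlue);
	// view and projection are assumed camera matrices set up elsewhere.
	ship.Draw (Matrix.Identity, view, projection);
	base.Draw (gameTime);
}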

Summary

So hopefully you’ve got a good idea on why you should use the content pipeline in your games. Using the raw assets is ok when you are putting together a simple demo or proof of concept but sooner or later you will need to start optimising your content. My advice would be to use the Pipeline tooling from the outset so you get used to it.

Information on the Pipeline tool can be found here.

I will be covering in a future post how to produce custom content pipeline extensions for MonoGame which will allow you to optimise your own content or customise/extend existing content processors.

Until then Happy Coding.

So in one of my previous blog posts we covered how to scale your game for multiple screen resolutions using the matrix parameter in the SpriteBatch. Like all good problems, that was just one solution of many, and since then things have moved on a bit. While that technique is still valid, the bits we did about scaling the touch points are no longer really necessary. In this post we’ll discover how to use the MonoGame TouchPanel to get scaled inputs for touch and mouse, as well as how to use RenderTargets to scale your game.

What is a Render Target?

OK, so you’re new to gaming and have no idea what this “RenderTarget” thing is. At its simplest a RenderTarget is just a texture: it lives on the GPU and you can use it just like a texture. The main difference is that you can tell the rendering system to draw directly to the RenderTarget, so instead of your graphics being drawn to the screen they are drawn to the RenderTarget, which you can then use as a texture for something else, like a SpriteBatch. Pretty cool, eh 🙂

Scaling your game

While RenderTargets are really useful for things like deferred lighting and other shader-based techniques, in our case we just want to use one to render our game. Supporting lots of different screen sizes is a pain, especially when doing 2D stuff; if you are supporting both mobile and desktop/console you will need to handle screen resolutions from 320×200 (low-end Android) to 1080p (HD TV) or bigger. That is a huge range of screen sizes to make your game look good on. Quite a challenge.

So in our case, rather than scaling all the graphics, we are going to render our entire scene to a RenderTarget at a fixed resolution, in this example 1366×768. Having a fixed resolution means we know EXACTLY how big our “game screen” will be, so we can make sure we have nice margins and make the graphics look great at that resolution. Once we have our scene rendered to the RenderTarget we can then scale it so that it fits the device screen size. We’ll introduce letterboxing support so we can keep the correct aspect ratio, like we did in the previous blogs.

So let’s look at some code.

Creating a RenderTarget in MonoGame is really easy; we add the following code to the Initialize method of our game:

scene = new RenderTarget2D(graphics.GraphicsDevice, 1366, 768, false, SurfaceFormat.Color, DepthFormat.None, graphics.GraphicsDevice.PresentationParameters.MultiSampleCount, RenderTargetUsage.DiscardContents);

The next step is to actually use the RenderTarget. We do this in the Draw method:

GraphicsDevice.SetRenderTarget(scene);
// draw your game
GraphicsDevice.SetRenderTarget (null);

As you can see, we tell the GraphicsDevice to use our RenderTarget, we draw the game, then tell the GraphicsDevice to not use any RenderTarget. We can now just use a SpriteBatch as normal to draw the RenderTarget to the screen. The following code will just draw the RenderTarget in the top left of the screen without scaling:

spriteBatch.Begin();
spriteBatch.Draw(scene, Vector2.Zero, Color.White);
spriteBatch.End();

However what we really need to do before we draw the RenderTarget is calculate a destination rectangle which will “scale” the RenderTarget so it fits within the confines of the screen, but also maintain its aspect ratio. This last bit is important because if you don’t maintain the aspect ratio your graphics will distort.

So the first thing we need to do is calculate the aspect ratio of the RenderTarget and the screen:

float outputAspect = Window.ClientBounds.Width / (float)Window.ClientBounds.Height;
float preferredAspect = 1366 / (float)768;

Next we need to calculate the destination rectangle, adding a “letter boxing” effect: black bars at the top and bottom of the screen (or to the left and right) which fill in the missing area so that we maintain the aspect ratio. It’s a bit like watching a widescreen movie on an old TV, where you get black bars at the top and bottom. The code to do this is as follows:

Rectangle dst;
if (outputAspect <= preferredAspect)
{
  // output is taller than the target aspect ratio; add bars on top/bottom
  int presentHeight = (int)((Window.ClientBounds.Width / preferredAspect) + 0.5f);
  int barHeight = (Window.ClientBounds.Height - presentHeight) / 2;
  dst = new Rectangle(0, barHeight, Window.ClientBounds.Width, presentHeight);
}
else
{
  // output is wider than the target aspect ratio; add bars on left/right
  int presentWidth = (int)((Window.ClientBounds.Height * preferredAspect) + 0.5f);
  int barWidth = (Window.ClientBounds.Width - presentWidth) / 2;
  dst = new Rectangle(barWidth, 0, presentWidth, Window.ClientBounds.Height);
}

You can see from the code that we calculate how much to offset the rectangle from the top and bottom (or left and right) of the screen to give the letterbox effect. This value is stored in barHeight/barWidth and is then used as the Top or Left value for the rectangle. presentHeight/presentWidth is the height or width of the destination rectangle; the other dimension of the rectangle will match the ClientBounds, depending on whether we are letterboxing at the top/bottom or left/right.

So with the destination rectangle calculated, we can now use the following to draw the RenderTarget:

graphics.GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 1.0f, 0);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
spriteBatch.Draw(scene, dst, Color.White);
spriteBatch.End();

Note we clear the background to black before drawing to get the nice black borders. You can change that colour to anything you like really.
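
Putting all of those pieces together, the Draw method ends up looking roughly like this (a sketch assembled from the snippets above, assuming the scene and spriteBatch fields):

protected override void Draw (GameTime gameTime)
{
	// render the game at the fixed virtual resolution
	GraphicsDevice.SetRenderTarget (scene);
	GraphicsDevice.Clear (Color.CornflowerBlue);
	// ... draw your game here ...
	GraphicsDevice.SetRenderTarget (null);

	// work out the letterboxed destination rectangle (code from above)
	float outputAspect = Window.ClientBounds.Width / (float)Window.ClientBounds.Height;
	float preferredAspect = 1366 / (float)768;
	Rectangle dst;
	if (outputAspect <= preferredAspect) {
		int presentHeight = (int)((Window.ClientBounds.Width / preferredAspect) + 0.5f);
		int barHeight = (Window.ClientBounds.Height - presentHeight) / 2;
		dst = new Rectangle (0, barHeight, Window.ClientBounds.Width, presentHeight);
	} else {
		int presentWidth = (int)((Window.ClientBounds.Height * preferredAspect) + 0.5f);
		int barWidth = (Window.ClientBounds.Width - presentWidth) / 2;
		dst = new Rectangle (barWidth, 0, presentWidth, Window.ClientBounds.Height);
	}

	// scale the render target onto the screen, with black bars where needed
	GraphicsDevice.Clear (ClearOptions.Target, Color.Black, 1.0f, 0);
	spriteBatch.Begin (SpriteSortMode.Immediate, BlendState.Opaque);
	spriteBatch.Draw (scene, dst, Color.White);
	spriteBatch.End ();

	base.Draw (gameTime);
}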

What about scaling the Input?

By using a fixed-size render target we will need to do something about the touch input. It’s no good on a 320×200 screen getting a touch location of 320×200 and passing that into our game world, which we think is 1366×768, as it won’t be in the right place. Fortunately MonoGame has an excellent solution:

TouchPanel.DisplayWidth = 1366;
TouchPanel.DisplayHeight = 768;

By setting the DisplayWidth/DisplayHeight on the TouchPanel it will AUTOMATICALLY scale the input for you. That is just awesome! But wait for it... it gets even better. You can also easily turn mouse input into touch input, which is handy if you’re only interested in left-click events.

TouchPanel.EnableMouseTouchPoint = true;

Now any mouse click on the screen will result in a touch event. This is great if you want to debug/test with a desktop app rather than messing about with mobile devices.
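
Reading the input afterwards is unchanged; the touch locations that come back are already in the 1366×768 virtual space, so an Update sketch looks like this:

// Touch positions arrive pre-scaled to the virtual 1366x768 resolution.
TouchCollection touches = TouchPanel.GetState ();
foreach (TouchLocation touch in touches) {
	if (touch.State == TouchLocationState.Pressed) {
		Vector2 virtualPosition = touch.Position; // already virtual coordinates
		// ... hit-test against your game objects here ...
	}
}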

Things to Consider

This is all great, but to be honest scaling a 1366×768 texture down to 320×200 will look awful! So you need to be a bit smarter about the RenderTarget size you pick. One solution might be to detect the screen size, scale down the render target by factors of 2 (for example) and use smaller textures. For example, at full resolution 1366×768 you use high-res textures (e.g. @2x on iOS), but on lower resolution devices you use a RenderTarget of half the size of your normal one and normal-sized textures. Obviously you will (maybe) need to scale your game’s physics and other aspects to take into account the smaller area you have to deal with, or make use of a 2D matrix camera.
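
As a rough illustration of that idea (the threshold and the halving below are made-up example values, not from the original post):

// Sketch: halve the virtual resolution on small screens to avoid heavy downscaling.
int virtualWidth = 1366, virtualHeight = 768;
if (GraphicsDevice.PresentationParameters.BackBufferHeight < 720) { // arbitrary cut-off
	virtualWidth /= 2;   // 683
	virtualHeight /= 2;  // 384
}
scene = new RenderTarget2D (graphics.GraphicsDevice, virtualWidth, virtualHeight, false,
	SurfaceFormat.Color, DepthFormat.None, 0, RenderTargetUsage.DiscardContents);
TouchPanel.DisplayWidth = virtualWidth;
TouchPanel.DisplayHeight = virtualHeight;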

The point being that smaller devices don’t always have the same capabilities as larger ones, so you need to keep that in mind.

A full “Game1.cs” class with this code is available on gist here.

It’s finally done: the Pipeline tool is now working on OSX (and Linux). Over the Christmas break I worked with another MonoGame contributor (@cra0zy) to finish off the tool and build the Mac OS installer.

The app itself behaves like the one on Windows, so we have a fairly consistent experience. The command line tool MGCB is bundled with the app, so you can use the following as part of a build process:

mono /Applications/Pipeline.app/Contents/MonoBundle/MGCB.exe /@:SomeMGCBFile.mgcb

You can compile Textures, Fonts, 3D Models and Audio using this tool and the requirement to have XNA installed is totally gone. You can also use your own content processors and importers, exactly how you do that will be a topic of a future post 🙂

A few things to keep in mind.

  • There might be a few issues with Audio/Songs at the moment; the MonoGame guys are looking to move over to using ffmpeg to get a better cross-platform experience.
  • Only the newer .fbx file format is supported. If you are trying to use older models from the original XNA samples they will need to be upgraded first. This can be done using either the Autodesk tools or just by opening the .fbx in Visual Studio 2013+.

There is however one thing missing completely: Effect compilation. Because of dependencies on DirectX, the 2MGFX tool cannot be ported to Mac (or Linux) at this time. There is a plan to implement a GLSL-style version of the HLSL .fx format which will allow Mac and Linux (and Windows) developers to use GLSL in their shaders rather than HLSL. This will be designed to work cross-platform from the outset. For now, Effect files will need to be compiled on Windows.

Even with this functionality missing this is still an extremely useful tool; I would highly recommend that anyone using MonoGame on a Mac check it out.

Pipeline Tool on OSX

Don’t forget, MonoGame is a community driven project. The community is extremely friendly and helpful, so don’t be scared to join in.

Don’t be just a consumer, ‘go fork and contribute’.

Xamarin recently published an interview with George Banfill from LinkNode on their Augmented Reality product; you can see the full interview here. Since then I’ve had a number of requests from people wanting to know how to do this using MonoGame and Xamarin.Android. Believe it or not, it’s simpler than you think to get started.

The first thing you need is a class to handle the Camera. Google has done a nice job of giving you a very basic sample of a ‘CameraView’ here. For those of you not wishing to port that over to C# from Java (yuk), here is the code:


public class CameraView : SurfaceView, ISurfaceHolderCallback {
	Camera camera;

	public CameraView (Context context, Camera camera) : base (context)
	{
		this.camera = camera;
		Holder.AddCallback (this);
		// deprecated setting, but required on
		// Android versions prior to 3.0
		Holder.SetType (SurfaceType.PushBuffers);
	}

	public void SurfaceChanged (ISurfaceHolder holder, Android.Graphics.Format format, int width, int height)
	{
		if (Holder.Surface == null) {
			// preview surface does not exist
			return;
		}

		try {
			camera.StopPreview ();
		} catch (Exception e) {
			// ignore: tried to stop a non-existent preview
		}
		try {
			camera.SetPreviewDisplay (Holder);
			camera.StartPreview ();
		} catch (Exception e) {
			Android.Util.Log.Debug ("CameraView", e.ToString ());
		}
	}

	public void SurfaceCreated (ISurfaceHolder holder)
	{
		try {
			camera.SetPreviewDisplay (holder);
			camera.StartPreview ();
		} catch (Exception e) {
			Android.Util.Log.Debug ("CameraView", e.ToString ());
		}
	}

	public void SurfaceDestroyed (ISurfaceHolder holder)
	{
		// nothing to do here
	}
}

It might not be pretty but it does the job; it also doesn’t handle a flipped view, so that will need to be added.

The next step is to figure out how to show MonoGame’s GameWindow and the camera view at the same time. Again, that is quite easy: we can use a FrameLayout, like so.


protected override void OnCreate (Bundle bundle)
{
	base.OnCreate (bundle);
	Game1.Activity = this;
	var g = new Game1 ();
	FrameLayout frameLayout = new FrameLayout(this);
	frameLayout.AddView (g.Window);  
	try {
		camera = Camera.Open ();
		cameraView = new CameraView (this, camera);
		frameLayout.AddView (cameraView);
	} catch (Exception e) {
		// oops no camera
		Android.Util.Log.Debug ("CameraView", e.ToString ());
	}
	SetContentView (frameLayout);
	g.Run ();
}

This is almost the same as the normal MonoGame Android code you get, but instead of setting the ContentView to the game Window directly, we add both the game Window and the CameraView to a FrameLayout and set that as the content view. Note that the order is important: the last item will be on the bottom, so we want to add the game view first so it is over the top of the camera.

Now this won’t work out of the box because there are a couple of other small changes we need. First we need to set the SurfaceFormat of the game Window to Rgba8888, this is because it defaults to a format which does not contain an alpha channel. So if we leave it as is we will not see the camera view underneath the game windows since its opaque. We can change the surface format using

g.Window.SurfaceFormat = Android.Graphics.Format.Rgba8888;

We need to do that BEFORE we add the Window to the frameLayout though. Another thing to note: not all devices support Rgba8888; I’m not sure what you do in that case…

The next thing is we need to change our normal Clear colour in Game1 from the standard Color.CornflowerBlue to Color.Transparent

graphics.GraphicsDevice.Clear (Color.Transparent);

With those changes you should be done. Here is a screenshot; note the Xamarin logo in the top left, which is drawn using a standard SpriteBatch call 🙂 All the code for this project is available here. I’ve not implemented the iOS version yet as I’m not an “iOS guy” really, but I will accept pull requests 🙂

Some of you might have heard that MonoGame now has its own content pipeline tooling, and it works! As a result, installing the XNA 4.0 SDK is no longer required, unless you want to target Xbox 360 of course. For those of you looking for documentation on the new tooling, you can head over to here for information on the Pipeline GUI and here for information on the MGCB tool. But I’ll give you a basic overview of how this all hangs together.

MGCB.exe

This is a command line tool used to create .xnb files. It works on Windows and Mac (and Linux for some content, AFAIK). On Windows, at the time of writing, you will need to download the latest unstable release from here to install the tooling. It installs the tools to

c:\Program Files (x86)\MSBuild\MonoGame\Tools

On a Mac you will need to get the source and compile this tool yourself. I am working on an add-in for Xamarin Studio which will install this tool for you; if I can figure out how to do it I’ll also knock up a .pkg file to install the tooling in the Applications folder too.

Using the tool is very simple: you can either pass all your parameters on the command line, like so

MGCB.exe /outputDir:bin/foo/$(Platform) /intermediateDir:obj/foo/$(Platform) /rebuild /platform:iOS /build:Textures\wood.png

Note for Mac users: you will need to prefix your command with ‘mono’.

The other option is to create a .mgcb response file which contains all the required commands.

Now a .mgcb file has some distinct advantages. Firstly it’s compatible with the Pipeline GUI tooling; secondly it allows you to process a bunch of files at once and still have a nice readable file rather than a HUGE command line. Here is a sample .mgcb file:

# Directories
/outputDir:bin/foo/$(Platform) 
/intermediateDir:obj/foo/$(Platform) 

/rebuild

# Build a texture
/importer:TextureImporter
/processor:TextureProcessor
/processorParam:ColorKeyEnabled=false
/build:Textures\wood.png
/build:Textures\metal.png
/build:Textures\plastic.png

You can pass this to MGCB using

MGCB /platform:Android /@:MyFile.mgcb

This will process the file and build your content; again, on a Mac prefix the command with ‘mono’.

Note that in both cases I passed the /platform parameter and used the $(Platform) macro in the command line and the response file. This allows me to produce different content per platform. A good example of this is with textures: to get the most out of the iOS platform it’s best to produce PVRTC-compressed textures. MGCB knows which texture compression works best on each platform and will optimise your content accordingly; as a result an .xnb built for iOS won’t work on Android. Well, it might, but only if the GPU on the device supports that texture compression. In reality it’s best to compile your content for each platform. That said, for desktop platforms (Windows, Linux, Mac) you can get away with using the same content, as most GPUs on desktop PCs/Macs support DXT texture compression.

Those of you familiar with XNA will have noticed familiar ‘processorParam’ values in the sample response file above. The great news is that all the various processor parameters on the various processors you had in XNA are also available in MonoGame.

Pipeline.exe

This tool is just a GUI over the top of MGCB.exe. Currently it’s only available on Windows, but it is being ported to Mac and Linux. When you create a new project it creates an .mgcb file which is totally compatible with the response file mentioned earlier, so you can hand edit it or use the tooling; it’s up to you. The Pipeline tool is in the early stages of development, but it’s already useful enough to allow you to replace the existing XNA content projects.

I’m not going to go into the details of how to use the Pipeline tool as it’s covered pretty well in the documentation. Like the MGCB tool it is included in the latest unstable installers and can be found in

c:\Program Files (x86)\MSBuild\MonoGame\Tools

It was a conscious decision on the team’s part NOT to go down a tightly integrated MSBuild-style solution for content processing. At the end of the day a standalone console app gives the developer a lot of flexibility in how they want to integrate content processing into their own build processes (some of you might just want to use NAnt, Ruby, make or some other build scripting tooling). That said, there are some .targets files available for those of you who wish to make use of MSBuild.

The other nice thing is that the Pipeline GUI tool has an import function (on Windows) to import an existing XNA .contentproj file into a .mgcb file. So if you want to upgrade your existing projects to use the new tooling there is an easy route.

Custom Content Processors

Now, one of the fantastic things about the XNA content pipeline was the ability to extend it. The great news is that MonoGame supports that too; in fact, the chances are that if you have an existing XNA custom content processor (or importer) and you rebuild it against the MonoGame content pipeline assemblies which are installed as part of the installer, it should “just work”. At some point I’m sure templates for both Visual Studio and Xamarin Studio will be available for those of you wanting to create your own processors.
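
To give a feel for the shape of one, here is a minimal sketch of a custom processor. The class name and display name are made up; the pattern of deriving from an existing processor (or from ContentProcessor<TInput, TOutput>) and overriding Process is the same one the built-in processors use.

using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline.Processors;

// Example only: tweak textures before handing off to the standard TextureProcessor.
[ContentProcessor (DisplayName = "Example Texture Processor")]
public class ExampleTextureProcessor : TextureProcessor
{
	public override TextureContent Process (TextureContent input, ContentProcessorContext context)
	{
		// do any custom work on the input here, then let the base class
		// handle the normal texture conversion and compression.
		return base.Process (input, context);
	}
}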

Things to remember

It is worth remembering that all of the work done on MonoGame is done in spare time and for free! So if there is a feature that doesn’t work or hasn’t been completed yet, please remember that people work on this project because they love doing it; they also have day jobs, families and other commitments.

One good example of this is the content processing on a Mac (and Linux): for the most part it will work fine for Textures, Models, Fonts and (mostly) Audio, but there is no installer. Also, shaders will NOT work; this is mainly because HLSL is just not supported on a Mac. (On a side note, the team are about to embark on a project to support GLSL in the .fx file format, which will allow users on a Mac to write their shaders using GLSL, but at the time of writing if you use custom shaders you will need to compile those on a Windows box.)

People might be tempted to start complaining about how ‘it doesn’t work on a Mac or Linux’, and yes, the support for those platforms is lagging behind the Windows support (mostly due to a lack of contributors on those platforms). But we are working on it, so please be patient, and if it’s a feature you really really really want and no one seems to be working on it, please feel free to ‘go fork and contribute’; just let the team know what you are up to so two people don’t end up working on the same thing.

In my previous post we looked at how to modify the ScreenManager class to support multiple resolutions. This works fine but what we also need is a way to scale the inputs from the Mouse or TouchScreen so that they operate at the same virtual resolution as the game does.

We already put in place the following properties in the ScreenManager

property Matrix InputScale …..
property Vector2 InputTranslate  …..

InputScale is the inverse of the Scale matrix, which will allow us to scale the input from the mouse into the virtual resolution. InputTranslate needs to be used because we sometimes put a letterbox around the gameplay area to center it, and the input system needs to take this into account (otherwise you end up clicking above menu items rather than on them).

So we need to update the InputState.cs class which comes as part of the Game State Management example. The first thing we need to do is add a property to the InputState class for the ScreenManager.

public ScreenManager ScreenManager
{
  get; set;
}

and then update the ScreenManager constructor to set the property.

input.ScreenManager = this;

Now we need to update the InputState.Update method. Find the following line

CurrentMouseState = Mouse.GetState();

We now need to translate and scale the CurrentMouseState field into the correct virtual resolution. We can do that by accessing the ScreenManager property which we just added, so add the following code.

Vector2 _mousePosition = new Vector2(CurrentMouseState.X, CurrentMouseState.Y);
Vector2 p = _mousePosition - ScreenManager.InputTranslate;
p = Vector2.Transform(p, ScreenManager.InputScale);
CurrentMouseState = new MouseState((int)p.X, (int)p.Y, CurrentMouseState.ScrollWheelValue, CurrentMouseState.LeftButton, CurrentMouseState.MiddleButton, CurrentMouseState.RightButton, CurrentMouseState.XButton1, CurrentMouseState.XButton2);

This bit of code transforms the current mouse position and then scales it before creating a new MouseState instance with the new position values, but the same values for everything else. If the ScrollWheelValue is being used that might need scaling too.

The next step is to scale the gestures. In the same Update method there should be the following code:

Gestures.Clear();
while (TouchPanel.IsGestureAvailable) {
  Gestures.Add(TouchPanel.ReadGesture());
}

We need to change this code over to

Gestures.Clear();
while (TouchPanel.IsGestureAvailable) {
  GestureSample g = TouchPanel.ReadGesture();
  Vector2 p1 = Vector2.Transform(g.Position - ScreenManager.InputTranslate, ScreenManager.InputScale);
  Vector2 p2 = Vector2.Transform(g.Position2 - ScreenManager.InputTranslate, ScreenManager.InputScale);
  Vector2 p3 = Vector2.Transform(g.Delta - ScreenManager.InputTranslate, ScreenManager.InputScale);
  Vector2 p4 = Vector2.Transform(g.Delta2 - ScreenManager.InputTranslate, ScreenManager.InputScale);
  g = new GestureSample(g.GestureType, g.Timestamp, p1, p2, p3, p4);
  Gestures.Add(g);
}

We use similar code to translate and scale each position and delta value from the GestureSample, then again create a new GestureSample with the new values.

That should be all you need to do. This will now scale both mouse and gesture inputs into the virtual resolution.

As before, the complete code can be downloaded here, or you can download both files: InputState.cs and ScreenManager.cs.

Update: The MouseGestureType in the InputState is from the CatapultWars sample and can be downloaded here.

When developing for iOS or Windows Phone you don’t really need to take into account different screen resolutions. Yes, the iPhone and iPad do scale things differently, but on the iPad you provide some special content which will allow the system to scale your game and still get it to look nice.

On Android it’s a different environment: there are many different devices, with different capabilities, not only in screen resolution but also CPU, memory and graphics chips. In this particular post I’ll cover how to write your game in such a way that it can handle all the different screen resolutions that the Android eco-system can throw at you.

One of the nice things about XNA is that it has been around a while, and when you develop for Xbox Live you need to take into account screen resolutions because everyone has a different sized television. I came across this blog post which outlines a neat solution for handling this particular problem for 2D games. However, rather than just bolting this code into a MonoGame Android project, I decided to update the ScreenManager class to handle multiple resolutions. For those of you that have not come across the ScreenManager class, it is used in many of the XNA samples to handle transitions of screens within your game. It also helps you break up your game into “Screens”, which makes for more maintainable code.

The plan is to add the following functionality to the ScreenManager

  1. The ability to set a virtual resolution for the game. This is the resolution that your game is designed to run at; the screen manager will then use this information to scale all the graphics and input so that it works nicely on other resolutions.
  2. Expose a Matrix property called Scale which we can use in the SpriteBatch to scale our graphics.
  3. Expose a Matrix property called InputScale, which is the inverse of the Scale matrix, so we can scale the Mouse and Gesture inputs into the virtual resolution.
  4. Expose a Vector2 property called InputTranslate so we can translate our mouse and gesture inputs. This is because as part of the scaling we will make sure the game is always centered, so we will see a border around the game to take into account aspect ratio differences.
  5. Add a Viewport property which will return the virtual viewport for the game rather than use the GraphicsDevice.Viewport

We need to define a few private fields to store the virtual width/height and a reference to the GraphicsDeviceManager.

private int virtualWidth;
private int virtualHeight;
private GraphicsDeviceManager graphicsDeviceManager;
private bool updateMatrix =true;
private Matrix scaleMatrix = Matrix.Identity;

Next we add the new properties to the ScreenManager. We should probably have backing fields for these, as it would save allocating a new Vector2/Viewport/Matrix each time a property is accessed, but for now this will work; we can optimize it later.

public Viewport Viewport {
  get{ return new Viewport(0, 0, virtualWidth, virtualHeight);}
}
 
public Matrix Scale {get;private set;}
 
public Matrix InputScale {
  get { return Matrix.Invert(Scale); }
}
 
public Vector2 InputTranslate {
  get { return new Vector2(GraphicsDevice.Viewport.X, GraphicsDevice.Viewport.Y); }
}

The constructor needs to be modified to include the virtual Width/Height parameters and to resolve the GraphicsDeviceManager from the game.

public ScreenManager(Game game, int virtualWidth, int virtualHeight):base(game)
{
  // set the Virtual environment up
  this.virtualHeight= virtualHeight;
  this.virtualWidth= virtualWidth;
  this.graphicsDeviceManager=(GraphicsDeviceManager)game.Services.GetService(typeof(IGraphicsDeviceManager));
  // we must set EnabledGestures before we can query for them, but
  // we don't assume the game wants to read them.
  TouchPanel.EnabledGestures = GestureType.None;
}

Next is the code to create the Scale matrix. Update the Scale property to look like this. We use the updateMatrix flag to control when to regenerate the scaleMatrix so we don’t have to keep updating it every frame.

private Matrix scaleMatrix = Matrix.Identity;
public Matrix Scale { 
  get {
    if(updateMatrix) {
      CreateScaleMatrix();
      updateMatrix =false;
    }
    return scaleMatrix;
  }
}

Now implement the CreateScaleMatrix method. This method builds a Matrix which we will use to tell the SpriteBatch how to scale the graphics when they finally get drawn.

protected void CreateScaleMatrix() {
  scaleMatrix = Matrix.CreateScale((float)GraphicsDevice.Viewport.Width/ virtualWidth, (float)GraphicsDevice.Viewport.Width/ virtualWidth, 1f);
}

So what we have done so far is code up all the properties we need to make this work. There are a few other methods we need to write. These methods will set up the GraphicsDevice viewport and ensure that we clear the backbuffer with Color.Black so we get that nice letterbox effect.

First thing to do is to update the Draw method of the ScreenManager to call a new method BeginDraw. This method will setup the Viewports and Clear the backbuffer.

public override void Draw(GameTime gameTime) {
  BeginDraw();
  foreach(GameScreen screen in screens)
  {
    if(screen.ScreenState== ScreenState.Hidden)
      continue;
    screen.Draw(gameTime);
  }
}

The BeginDraw method calls a bunch of other methods to setup the Viewports. Here is the code

protected void FullViewport ()
{ 
	Viewport vp = new Viewport (); 
	vp.X = vp.Y = 0; 
	vp.Width = DeviceManager.PreferredBackBufferWidth;
	vp.Height = DeviceManager.PreferredBackBufferHeight;
	GraphicsDevice.Viewport = vp;   
}
 
protected float GetVirtualAspectRatio ()
{
	return(float)virtualWidth / (float)virtualHeight;   
}
 
protected void ResetViewport ()
{
	float targetAspectRatio = GetVirtualAspectRatio ();   
	// figure out the largest area that fits in this resolution at the desired aspect ratio     
	int width = DeviceManager.PreferredBackBufferWidth;   
	int height = (int)(width / targetAspectRatio + .5f);   
	bool changed = false;     
	if (height > DeviceManager.PreferredBackBufferHeight) { 
		height = DeviceManager.PreferredBackBufferHeight;   
		// PillarBox 
		width = (int)(height * targetAspectRatio + .5f);
		changed = true;   
	}     
	// set up the new viewport centered in the backbuffer 
	Viewport viewport = new Viewport ();   
	viewport.X = (DeviceManager.PreferredBackBufferWidth / 2) - (width / 2); 
	viewport.Y = (DeviceManager.PreferredBackBufferHeight / 2) - (height / 2); 
	viewport.Width = width; 
	viewport.Height = height; 
	viewport.MinDepth = 0; 
	viewport.MaxDepth = 1;     	
	if (changed) {
		updateMatrix = true;
	}   
	DeviceManager.GraphicsDevice.Viewport = viewport;   
}
 
protected void BeginDraw ()
{   
	// Start by reseting viewport 
	FullViewport ();   
	// Clear to Black 
	GraphicsDevice.Clear (Color.Black);   
	// Calculate Proper Viewport according to Aspect Ratio 
	ResetViewport ();   
	// and clear that    
	// This way we are gonna have black bars if aspect ratio requires it and     
	// the clear color on the rest 
	GraphicsDevice.Clear (Color.Black);   
}

So the first thing we do is reset the full viewport to the size of the PreferredBackBufferWidth/Height and then clear it. Then we reset the viewport to take into account the aspect ratio of the virtual viewport, calculate the vertical/horizontal offsets to center the new viewport, and then clear that just to be sure. That is all the code changes for the ScreenManager. To use it, all we need to do is add the extra parameters when we create the new ScreenManager, like so:

screenManager = new ScreenManager (this, 800, 480);

You will need to pass in the resolution your game was designed for, in this case 800×480.
Then in all the places where we call SpriteBatch.Begin() we need to pass in the screenManager.Scale matrix like so

spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null, ScreenManager.Scale);

Note that the SpriteBatch has a number of overloaded Begin methods; you will need to adapt your code if you use things like SamplerState, BlendState, etc. Each of the game screens should already have a reference to the ScreenManager if you follow the sample code from Microsoft. Also, if you make use of GraphicsDevice.Viewport in your game to place objects based on screen size (like UI elements), that will need to be changed to use ScreenManager.Viewport instead so they are placed within the virtual viewport. So in the MenuScreen the following call would change from

position.X = GraphicsDevice.Viewport.Width / 2 - menuEntry.GetWidth(this) / 2;

to this

position.X = ScreenManager.Viewport.Width / 2 - menuEntry.GetWidth(this) / 2;

This should be all you need. In the next post we will look at the changes we need to make to the InputState.cs class to get the mouse and gesture inputs scaled as well.
You can download a copy of the modified ScreenManager class here.