In my previous post we looked at how to modify the ScreenManager class to support multiple resolutions. This works fine, but we also need a way to scale the input from the mouse or touch screen so that it operates at the same virtual resolution as the game.

We already put in place the following properties in the ScreenManager

public Matrix InputScale { ... }
public Vector2 InputTranslate { ... }

InputScale is the inverse of the Scale matrix, which will allow us to scale the input from the mouse into the virtual resolution. InputTranslate is needed because we sometimes put a letterbox around the gameplay area to center it, and the input system needs to take this into account (otherwise you end up clicking above menu items rather than on them). For example, if an 800×480 game runs on a 1280×800 screen, the scale factor is 1.6 and the gameplay viewport is 1280×768 with a 16 pixel letterbox at the top and bottom; a tap at (640, 416) must become ((640 − 0) / 1.6, (416 − 16) / 1.6) = (400, 250) in virtual coordinates.

So we need to update the InputState.cs class which comes as part of the Game State Management sample. The first thing we need to do is add a property to the InputState class for the ScreenManager.

public ScreenManager ScreenManager
{
  get; set;
}

and then update the ScreenManager constructor to set the property.

input.ScreenManager = this;

Now we need to update the InputState.Update method. Find the following line

CurrentMouseState = Mouse.GetState();

We now need to translate and scale the CurrentMouseState field into the correct virtual resolution. We can do that by accessing the ScreenManager property which we just added, so add the following code.

Vector2 _mousePosition = new Vector2(CurrentMouseState.X, CurrentMouseState.Y);
Vector2 p = _mousePosition - ScreenManager.InputTranslate;
p = Vector2.Transform(p, ScreenManager.InputScale);
CurrentMouseState = new MouseState((int)p.X, (int)p.Y, CurrentMouseState.ScrollWheelValue,
    CurrentMouseState.LeftButton, CurrentMouseState.MiddleButton, CurrentMouseState.RightButton,
    CurrentMouseState.XButton1, CurrentMouseState.XButton2);

This bit of code translates the current mouse position and then scales it, before creating a new MouseState instance with the new position values but the same values for everything else. If the ScrollWheelValue is being used, that might need scaling too.

Next step is to scale the gestures. In the same Update method there should be the following code

Gestures.Clear();
while (TouchPanel.IsGestureAvailable) {
  Gestures.Add(TouchPanel.ReadGesture());
}

We need to change this code over to

Gestures.Clear();
while (TouchPanel.IsGestureAvailable) {
  GestureSample g = TouchPanel.ReadGesture();
  Vector2 p1 = Vector2.Transform(g.Position - ScreenManager.InputTranslate, ScreenManager.InputScale);
  Vector2 p2 = Vector2.Transform(g.Position2 - ScreenManager.InputTranslate, ScreenManager.InputScale);
  // the deltas are relative movements, so they only need scaling, not translating
  Vector2 d1 = Vector2.Transform(g.Delta, ScreenManager.InputScale);
  Vector2 d2 = Vector2.Transform(g.Delta2, ScreenManager.InputScale);
  g = new GestureSample(g.GestureType, g.Timestamp, p1, p2, d1, d2);
  Gestures.Add(g);
}

We use similar code to translate and scale each position from the GestureSample (the deltas are relative values, so they are only scaled, not translated), then again create a new GestureSample with the new values.

That should be all you need to do. This will now scale both mouse and gesture inputs into the virtual resolution.

As before the complete code can be downloaded here, or you can download both files: InputState.cs and ScreenManager.cs

Update: The MouseGestureType in the InputState is from the CatapultWars sample and can be downloaded here

In my last post we looked at using ETC1 compressed textures on the Xamarin Android platform. In that case we just used the texture and some fancy shader magic to fake transparency. In this article we’ll look at how we can split the alpha channel out to a separate file which we load at run time, so we don’t have to rely on the colour key.

One of the things that can be a pain is having to pre-process your content to generate compressed textures outside of your normal development process. It would be nice for the artist to give us a .png file which we add to the project, and as part of the build and packaging process we get the required compressed textures in the .apk. XNA did something similar with its content pipeline, where all the content was processed during the build into formats optimised for the target platform (Xbox, Windows etc); MonoGame has a similar content pipeline, as do many other game development tools. Pre-processing your content is really important, because you don’t want to be doing any kind of heavy processing on the device itself. While phones are getting more powerful every year, they still can’t match high end PCs or consoles. In this article we’ll look at hooking into the power of msbuild and xbuild (Xamarin’s cross platform version of msbuild) to implement a very simple content processor.

So what we want to do is this: be able to add a .png to the Assets folder in our project and have some magic happen which turns that .png file into an .etc1 compressed texture, saves the alpha channel of the .png file to a .alpha file, and makes those files appear in the .apk. To do this we are going to need a few things:

  1. A Custom MSBuild Task to split out and convert the files
  2. A .targets file in which we can hook into the Xamarin.Android build process at the right points to call our custom task.
  3. A way of detecting where the etc1tool is installed on the target system.

We’ll start with the .targets file. First we need to know where in the Xamarin.Android build process we need to do our fancy bait and switch of the assets. It turns out, after looking into the Xamarin.Common.CSharp.targets file, that the perfect place to hook in is between the UpdateAndroidAssets target and the UpdateAndroidInterfaceProxies target. At the point where these targets run there is already a list of the assets in the project stored in the @(_AndroidAssetsDest) item group, which is perfect for what we need. Getting the location of the etc1tool is also a breeze, because again Xamarin have done the hard work for us: there is an $(AndroidSdkDirectory) property onto which we just need to append tools/etc1tool in order to run the tool. So that’s 2) and 3) kinda sorted. Let’s look at the code for the custom Task.

	// usings needed: System, System.Collections.Generic, System.Drawing,
	// System.IO, Microsoft.Build.Framework, Microsoft.Build.Utilities
	public class CompressTextures : Task
	{
		[Required]
		public ITaskItem[] InputFiles { get; set; }
 
		[Required]
		public string AndroidSdkDir { get; set; }
 
		[Output]
		public ITaskItem[] OutputFiles { get; set; }
 
		public override bool Execute ()
		{
			Log.LogMessage (MessageImportance.Low, "  CompressTextures Task");
 
			List<ITaskItem> items = new List<ITaskItem> ();
			var etc1tool = new Etc1Tool ();
			etc1tool.AndroidSdkDir = AndroidSdkDir;
 
			foreach (var item in InputFiles) {
				if (item.ItemSpec.Contains(".png")) {
					var etc1file = item.ItemSpec.Replace (".png", ".etc1");
					var alphafile = item.ItemSpec.Replace (".png", ".alpha");
					byte[] data = null;
 
					using (var bitmap = (Bitmap)Bitmap.FromFile (item.ItemSpec)) {
						data = new byte[bitmap.Width * bitmap.Height];
						for (int y = 0; y < bitmap.Height; y++) {
							for (int x = 0; x < bitmap.Width; x++) {
								var color = bitmap.GetPixel (x, y);
								data [(y * bitmap.Width) + x] = color.A;
							}
						}
					}
 
					if (data != null)
						File.WriteAllBytes (alphafile, data);
 
					etc1tool.Source = item.ItemSpec;
					etc1tool.Destination = etc1file;
					etc1tool.Execute ();
 
					items.Add (new TaskItem (etc1file));
					items.Add (new TaskItem (alphafile));
 
					if (File.Exists (item.ItemSpec)) {
						try {
							File.Delete (item.ItemSpec);
						} catch (IOException ex) {
							// the file may be read-only
							Log.LogErrorFromException (ex);
						}
					}
 
				} else {
					items.Add (item);
				}
 
			}
			OutputFiles = items.ToArray ();
			return !Log.HasLoggedErrors;
		}
 
		public class Etc1Tool {
 
			public string Source { get; set; }
 
			public string Destination { get; set; }
 
			public string AndroidSdkDir { get; set; }
 
			public void Execute() {
 
				var tool = Path.Combine (AndroidSdkDir, "tools/etc1tool");
 
				var process = new System.Diagnostics.Process ();
				process.StartInfo.FileName = tool;
				process.StartInfo.Arguments = string.Format (" {0} --encode -o {1}", Source, Destination);
				process.StartInfo.CreateNoWindow = true;
				process.Start ();
				process.WaitForExit ();
			}
		}
	}

I’m not going to go into all the ins and outs of writing msbuild tasks, that is what google and bing are for :). But if you look at the code we have two [Required] properties, AndroidSdkDir and InputFiles. InputFiles is going to be the list of files we get from @(_AndroidAssetsDest), and AndroidSdkDir is obviously the $(AndroidSdkDirectory) property. We also have an OutputFiles property which we populate with our new files once we have converted them. The code in the Execute method itself should be fairly easy to follow. For each of the files we extract the alpha channel and save it to a .alpha file, then call out to the etc1tool to compress the .png file to an .etc1 file. Note we also delete the original file so it does not get included in the final .apk. Don’t worry, this is a file in the obj/<Configuration>/assets directory, not the original file we added to the project :). Now we could make this more robust and make it conditional so it doesn’t compress every .png in the assets list, but for now this will do the trick. So with the task code done, the .targets file looks like this.

<?xml version="1.0" encoding="UTF-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="InfiniteSpace.Framework.Build.Tasks.CompressTextures" AssemblyFile="InfiniteSpace.Framework.Build.Tasks.dll"/>
 
  <Target Name="_CompressAssets" AfterTargets="UpdateAndroidAssets" 
      BeforeTargets="UpdateAndroidInterfaceProxies">
     <CompressTextures InputFiles="@(_AndroidAssetsDest)" AndroidSdkDir="$(AndroidSdkDirectory)">
        <Output TaskParameter="OutputFiles" ItemName="_CompressedTextures"/>
     </CompressTextures>
     <Touch Files="@(_CompressedTextures)" />
  </Target>
</Project>

Again this should be fairly easy to follow. The important bits are the AfterTargets and BeforeTargets values; this is where we hook into the build process. The next step is to include this .targets file in our project, which we do by adding the following line just under the Import statement for Xamarin.Android.CSharp.targets (or Novell.MonoDroid.CSharp.targets)

<Import Project="$PATH$/InfiniteSpace.Framework.Build.Tasks/Infinitespace.Common.targets" />

Now the $PATH$ bit depends on where you put the build tasks. I just added the project to my solution and used “../InfiniteSpace.Framework.Build.Tasks/Infinitespace.Common.targets”, then made a small tweak in the .targets file so it loaded the assembly from the debug folder

AssemblyFile="./bin/$(Configuration)/InfiniteSpace.Framework.Build.Tasks.dll"

This worked for me in Xamarin Studio on the Mac, and it sort of worked in Visual Studio on Windows. However, in both IDEs, if you want to change the Task code you need to close and re-load the IDE, since the assembly gets loaded during the build process and cannot be overwritten after that.

So with the msbuild task hooked in, you should now be able to add a .png file to your Assets folder and have the build produce a .etc1 and a .alpha file for it in your .apk. After that you can just load the .etc1 and .alpha files as you would any other asset. The code for this blog entry includes a sample project so you can see exactly how to load the files and use them for alpha.
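As a quick taster, here is a rough sketch of what loading the .alpha file might look like at run time, reusing the GL/All OpenTK style from the ETC1 loading code in my last post. LoadAlphaTexture is a hypothetical helper, not code from the sample project, and it assumes the width and height of the original image are known:

	// Sketch: load a raw .alpha file (one byte per pixel, as written by the
	// CompressTextures task) into a single channel GL_ALPHA texture.
	static int LoadAlphaTexture (Activity activity, string filename, int width, int height)
	{
		byte[] data;
		using (var s = activity.Assets.Open (filename))
		using (var ms = new System.IO.MemoryStream ()) {
			s.CopyTo (ms);
			data = ms.ToArray ();
		}

		int tid = GL.GenTexture ();
		GL.BindTexture (All.Texture2D, tid);
		GL.TexParameter (All.Texture2D, All.TextureMagFilter, (int)All.Linear);
		GL.TexParameter (All.Texture2D, All.TextureMinFilter, (int)All.Linear);
		GL.TexParameter (All.Texture2D, All.TextureWrapS, (int)All.ClampToEdge);
		GL.TexParameter (All.Texture2D, All.TextureWrapT, (int)All.ClampToEdge);
		// rows are tightly packed, one byte per pixel
		GL.PixelStore (All.UnpackAlignment, 1);
		GL.TexImage2D (All.Texture2D, 0, (int)All.Alpha, width, height, 0,
			All.Alpha, All.UnsignedByte, data);
		return tid;
	}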

As mentioned already the CompressTextures task could be improved. Some ideas might be

  • Add the ability to compress to other formats (PVRTC, ATITC, S3TC)
  • Add the ability to read additional metadata from the TaskItem to control whether it needs to be compressed or not (see the sketch after this list)
  • Add support for resizing to a power of two (POW2); ETC1 only supports POW2 textures I think, and PVRTC certainly does
  • Add support for a colour key, which wouldn’t save the .alpha file
  • Add support for compressing the alpha channel to etc1
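For the second idea, a first cut could be as simple as checking a piece of item metadata inside the foreach loop in CompressTextures. SkipCompression here is a hypothetical metadata name, not something Xamarin defines:

				// Hypothetical opt-out, set on the asset in the .csproj:
				//   <AndroidAsset Include="Assets\ui.png">
				//     <SkipCompression>true</SkipCompression>
				//   </AndroidAsset>
				var skip = item.GetMetadata ("SkipCompression");
				if (string.Equals (skip, "true", StringComparison.OrdinalIgnoreCase)) {
					items.Add (item);   // pass the .png through untouched
					continue;
				}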

I’ll leave these ideas with you for the moment; I might look at implementing them myself at some point in the future. I know I mentioned looking at PVRTC, ATITC and S3TC texture support in my last article, and I assure you I will get to that soon. In the meantime have fun playing with the code and I hope you find it useful.

The code for this blog entry can be downloaded from here.

When developing for iOS or Windows Phone you don’t really need to take into account different screen resolutions. Yes, the iPhone and iPad do scale things differently, but on the iPad you provide some special content which will allow the system to scale your game and still have it look nice.

On Android it’s a different environment; there are many different devices with different capabilities, not only in screen resolution but also CPU, memory and graphics chips. In this particular post I’ll cover how to write your game in such a way that it can handle all the different screen resolutions that the Android eco-system can throw at you.

One of the nice things about XNA is that it has been around a while, and when you develop for Xbox Live you need to take into account screen resolutions because everyone has a different sized television. I came across this blog post which outlines a neat solution to this particular problem for 2D games. However, rather than just bolting this code into a MonoGame android project, I decided to update the ScreenManager class to handle multiple resolutions. For those of you that have not come across the ScreenManager class, it is used in many of the XNA samples to handle transitions of screens within your game. It also helps you break up your game into “Screens”, which makes for more maintainable code.

The plan is to add the following functionality to the ScreenManager

  1. The ability to set a virtual resolution for the game. This is the resolution that your game is designed to run at; the screen manager will then use this information to scale all the graphics and input so that it works nicely at other resolutions.
  2. Expose a Matrix property called Scale which we can use in the SpriteBatch to scale our graphics.
  3. Expose a Matrix property called InputScale, which is the inverse of the Scale matrix, so we can scale the Mouse and Gesture inputs into the virtual resolution.
  4. Expose a Vector2 property called InputTranslate so we can translate our mouse and gesture inputs. This is needed because as part of the scaling we will make sure the game is always centered, so we will see a border around the game to account for aspect ratio differences.
  5. Add a Viewport property which will return the virtual viewport for the game rather than using the GraphicsDevice.Viewport.

We need to define a few private fields to store the virtual width/height and a reference to the GraphicsDeviceManager.

private int virtualWidth;
private int virtualHeight;
private GraphicsDeviceManager graphicsDeviceManager;
private bool updateMatrix = true;
private Matrix scaleMatrix = Matrix.Identity;

Next we add the new properties to the ScreenManager. We should probably have local fields for these, as it would save allocating a new Vector2/Viewport/Matrix each time a property is accessed, but for now this will work; we can optimize it later.

public Viewport Viewport {
  get { return new Viewport(0, 0, virtualWidth, virtualHeight); }
}
 
public Matrix Scale { get; private set; }
 
public Matrix InputScale {
  get { return Matrix.Invert(Scale); }
}
 
public Vector2 InputTranslate {
  get { return new Vector2(GraphicsDevice.Viewport.X, GraphicsDevice.Viewport.Y); }
}
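If you do want to avoid those per-access allocations, a cached InputScale might look like this sketch, which mirrors the updateMatrix invalidation pattern used for Scale below; you would set updateInputScale back to true wherever updateMatrix is set:

private Matrix inputScaleMatrix;
private bool updateInputScale = true;
 
public Matrix InputScale {
  get {
    // only invert the scale matrix when it has actually changed
    if (updateInputScale) {
      inputScaleMatrix = Matrix.Invert(Scale);
      updateInputScale = false;
    }
    return inputScaleMatrix;
  }
}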

The constructor needs to be modified to include the virtual width/height parameters and to resolve the GraphicsDeviceManager from the game.

public ScreenManager(Game game, int virtualWidth, int virtualHeight) : base(game)
{
  // set the virtual environment up
  this.virtualHeight = virtualHeight;
  this.virtualWidth = virtualWidth;
  this.graphicsDeviceManager = (GraphicsDeviceManager)game.Services.GetService(typeof(IGraphicsDeviceManager));
  // we must set EnabledGestures before we can query for them, but
  // we don't assume the game wants to read them.
  TouchPanel.EnabledGestures = GestureType.None;
}

Next is the code to create the scale matrix. Update the Scale property to look like this. We use the updateMatrix flag to control when to re-generate the scaleMatrix, so we don’t have to keep updating it every frame.

private Matrix scaleMatrix = Matrix.Identity;
public Matrix Scale {
  get {
    if (updateMatrix) {
      CreateScaleMatrix();
      updateMatrix = false;
    }
    return scaleMatrix;
  }
}

Now implement the CreateScaleMatrix method. This method creates the Matrix we will use to tell the SpriteBatch how to scale the graphics when they finally get drawn.

protected void CreateScaleMatrix() {
  scaleMatrix = Matrix.CreateScale(
      (float)GraphicsDevice.Viewport.Width / virtualWidth,
      (float)GraphicsDevice.Viewport.Height / virtualHeight,
      1f);
}

So far we have coded up all the properties we need to make this work. There are a few other methods we need to write. These methods will set up the GraphicsDevice viewport and ensure that we clear the backbuffer with Color.Black, so we get that nice letterbox effect.

The first thing to do is update the Draw method of the ScreenManager to call a new method, BeginDraw, which will set up the viewports and clear the backbuffer.

public override void Draw(GameTime gameTime) {
  BeginDraw();
  foreach (GameScreen screen in screens)
  {
    if (screen.ScreenState == ScreenState.Hidden)
      continue;
    screen.Draw(gameTime);
  }
}

The BeginDraw method calls a bunch of other methods to set up the viewports. Here is the code.

protected void FullViewport ()
{
	Viewport vp = new Viewport ();
	vp.X = vp.Y = 0;
	vp.Width = graphicsDeviceManager.PreferredBackBufferWidth;
	vp.Height = graphicsDeviceManager.PreferredBackBufferHeight;
	GraphicsDevice.Viewport = vp;
}
 
protected float GetVirtualAspectRatio ()
{
	return (float)virtualWidth / (float)virtualHeight;
}
 
protected void ResetViewport ()
{
	float targetAspectRatio = GetVirtualAspectRatio ();
	// figure out the largest area that fits in this resolution at the desired aspect ratio
	int width = graphicsDeviceManager.PreferredBackBufferWidth;
	int height = (int)(width / targetAspectRatio + .5f);
	bool changed = false;
	if (height > graphicsDeviceManager.PreferredBackBufferHeight) {
		// too tall for the backbuffer, so pillarbox instead
		height = graphicsDeviceManager.PreferredBackBufferHeight;
		width = (int)(height * targetAspectRatio + .5f);
		changed = true;
	}
	// set up the new viewport centered in the backbuffer
	Viewport viewport = new Viewport ();
	viewport.X = (graphicsDeviceManager.PreferredBackBufferWidth / 2) - (width / 2);
	viewport.Y = (graphicsDeviceManager.PreferredBackBufferHeight / 2) - (height / 2);
	viewport.Width = width;
	viewport.Height = height;
	viewport.MinDepth = 0;
	viewport.MaxDepth = 1;
	if (changed) {
		updateMatrix = true;
	}
	GraphicsDevice.Viewport = viewport;
}
 
protected void BeginDraw ()
{
	// start by resetting the viewport to the full backbuffer
	FullViewport ();
	// clear to black
	GraphicsDevice.Clear (Color.Black);
	// calculate the proper viewport according to the aspect ratio
	ResetViewport ();
	// and clear that too; this way we get black bars if the aspect
	// ratio requires them, and the clear colour on the rest
	GraphicsDevice.Clear (Color.Black);
}

So the first thing we do is reset the full viewport to the size of the PreferredBackBufferWidth/Height and then clear it. Then we reset the viewport to take into account the aspect ratio of the virtual viewport, calculate the vertical/horizontal offsets to center the new viewport, and clear that too just to be sure. That is all the code changes for the ScreenManager. To use it, all we need to do is pass the extra parameters when we create the new ScreenManager, like so

screenManager = new ScreenManager (this, 800, 480);

You will need to pass in the resolution your game was designed for, in this case 800×480.
Then in all the places where we call SpriteBatch.Begin() we need to pass in the screenManager.Scale matrix like so

spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null, ScreenManager.Scale);
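If you are using one of the longer Begin overloads, the matrix is simply the last argument. For example, with explicit states (a sketch using common XNA 4 defaults, not values taken from the sample):

// explicit states; only the final matrix argument is new
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
    SamplerState.LinearClamp, DepthStencilState.None,
    RasterizerState.CullCounterClockwise, null, ScreenManager.Scale);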

The SpriteBatch has a number of overloaded Begin methods, so you will need to adapt your code depending on which SamplerState, BlendState, etc. you use. Each of the game screens should already have a reference to the ScreenManager if you follow the sample code from Microsoft. Also, if you make use of GraphicsDevice.Viewport in your game to place objects based on screen size (like UI elements), that will need to be changed to use the ScreenManager.Viewport instead, so they are placed within the virtual viewport. So in the MenuScreen the following call would change from

position.X = GraphicsDevice.Viewport.Width / 2 - menuEntry.GetWidth(this) / 2;

to this

position.X = ScreenManager.Viewport.Width / 2 - menuEntry.GetWidth(this) / 2;

This should be all you need. In the next post we will look at the changes we need to make to the InputState.cs class to get the mouse and gesture inputs scaled as well.
You can download a copy of the modified ScreenManager class here.

So …. supporting compressed textures on Android is a right pain. There are a number of different graphics chips in play and they all support different texture compression formats, which is not very helpful. The only common format supported by all devices is ETC1 (ETC2 is GLES 3.0 only); the problem with this format is that it doesn’t support an alpha channel. Most game developers out there will know that the alpha channel is kinda important for games. So how can we make use of ETC1 and get an alpha channel at the same time?

There are a number of different methods I’ve found on the net, most of which have to do with storing the alpha channel separately and combining the two in the fragment shader; this article seems to contain the most accepted solutions. But personally I hate the idea of having two textures, or messing about with texture coordinates to sample the data from different parts of one. If only there was another way… well, let’s turn back the clock a bit and use a technique we relied on before such a thing as an alpha channel even existed.

I’m talking about using a ColorKey. This is where you define a single colour in your image as the transparent colour; it’s used a lot in movies these days for green screen work. We used this technique a lot back in the day when video hardware didn’t even know what an alpha channel was: you just skip over the bits of the image that match the colour key and hey presto, you get a transparent image :).

So let’s take a look at this image.
f_spot

It has a nice alpha channel. But if we replace the alpha with a constant colour like so

f_spot_rgb

We can then re-write our fragment shader to detect this colour and set the alpha channel explicitly, without having to mess about with other bitmaps or changing texture coordinates. So our fragment shader simply becomes

uniform lowp sampler2D u_Texture;
varying mediump vec2 v_TexCoordinate;
 
void main()
{
  vec4 colorkey = vec4(1.0,0.0,0.96470588,0.0);
  float cutoff = 0.2;
  vec4 colour = texture2D(u_Texture, v_TexCoordinate);
  if ((abs(colour.r - colorkey.r) <= cutoff) &&
      (abs(colour.g - colorkey.g) <= cutoff) &&
      (abs(colour.b - colorkey.b) <= cutoff)) {
       colour.a = 0.0;
  }
  gl_FragColor = colour;
}

In this case I hardcoded the colour I’m using for the ColorKey but this could easily be passed in as a uniform if you wanted the flexibility of being able to change it.
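On the C# side, setting it might look something like this (a sketch assuming an ES 2.0 program handle, with u_ColorKey as the hypothetical uniform you would declare in the shader):

// u_ColorKey would be declared in the shader as: uniform lowp vec4 u_ColorKey;
GL.UseProgram (programHandle);
int colorKeyLocation = GL.GetUniformLocation (programHandle, "u_ColorKey");
GL.Uniform4 (colorKeyLocation, 1.0f, 0.0f, 0.96470588f, 0.0f);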

With this in place we can now use ETC1 textures across all Android devices and get an alpha channel. While it’s not a full alpha channel (with semi transparency), it will probably be enough for most games. You can generate your compressed textures using the etc1tool provided with the Android SDK. It’s located in the tools folder of the SDK and you can just call

  etc1tool infile --encode -o outfile

You can then include the resulting outfile in your Assets folder, set its build action to ‘AndroidAsset’, and use the following code to load the texture

 static int LoadTextureFromAssets (Activity activity, string filename)
{
 
  using (var s = activity.Assets.Open (filename)) {
    using (var t = Android.Opengl.ETC1Util.CreateTexture (s)) {
 
      int tid = GL.GenTexture ();
      GL.ActiveTexture (All.Texture0);
      GL.BindTexture (All.Texture2D, tid);
      // setup texture parameters
      GL.TexParameter (All.Texture2D, All.TextureMagFilter, (int)All.Linear);
      GL.TexParameter (All.Texture2D, All.TextureMinFilter, (int)All.Nearest);
      GL.TexParameter (All.Texture2D, All.TextureWrapS, (int)All.ClampToEdge);
      GL.TexParameter (All.Texture2D, All.TextureWrapT, (int)All.ClampToEdge);
      Android.Opengl.ETC1Util.LoadTexture ((int)All.Texture2D, 0, 0, (int)All.Rgb, (int)All.UnsignedShort565, t);
 
      return tid;
    }
  }
}

Note you’ll need to add using clauses for the various OpenTK namespaces used in the code.
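For example (assuming the Xamarin OpenTK bindings; pick the ES11 or ES20 namespace to match the GL version you target):

using Android.App;            // Activity
using Android.Opengl;         // ETC1Util
using OpenTK.Graphics.ES11;   // GL, All (or OpenTK.Graphics.ES20)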

Now there is a problem with this technique: because of the way ETC1 works you will more than likely get some compression artefacts in the resulting image. In my case I ended up with a purple/pink line around the image I was rendering, so perhaps that colour isn’t the best choice here.

Screenshot_2014-05-19-09-55-01

So I tried again, this time with a black colour key, which might help reduce the compression artefacts around the edges of the image. I had to make some changes to the shader to make it a bit more generic and to handle a black colour key. The resulting shader turned out as follows.

uniform lowp sampler2D u_Texture;
varying mediump vec2 v_TexCoordinate;
 
void main()
{
  float cutoff = 0.28;
  vec4 colour = texture2D(u_Texture, v_TexCoordinate);
  if ((colour.r <= cutoff) &&
      (colour.g <= cutoff) &&
      (colour.b <= cutoff)) {
       colour.a = colour.r;
  }
  gl_FragColor = colour;
}

You can see we are just using an RGB cutoff value to detect the black in the image and turn that into transparency. Note that I’m using the red channel to set the alpha value rather than just using 0.0; hopefully this will help with the blending. This produced the following result.

Screenshot_2014-05-19-09-56-31

There is a slight black edge around the image, but it is probably something you can get away with. The only issue is that you can’t use black in your image 🙁 or if you do, it has to be above the colour cutoff in the shader; that will require some testing to see what values you can get away with.

Well, hopefully this will be useful for someone. Next time around I’ll be going into how to support texture compression formats like PVRTC, ATITC and S3TC on your Android devices. These formats cover most of the devices out there (but not all) and support a full alpha channel, but if you are after full compatibility then ETC1 is probably the way to go.

In the mean time the code from this article is available on github.

So you have a great Android project you are working on. It might be a game or an app, it doesn’t matter; the key thing is you have a ton of Assets you want to include. Not Resources, i.e. the stuff you find in the Resources folder like layout/values etc, but Assets: we are talking graphics, sound, music, you name it.

If you have more than a few of these assets you are probably already suffering from fairly lengthy build times. Packaging those assets up into the .apk does take some time; unfortunately it’s just a side effect of how stuff works on Android. The problem is these long build times make debugging the app a real issue. It’s like going back to the days when you could get a coffee while your app built. There is a way around it… C# to the rescue 🙂

What you can do is write an extension method for Android.Content.Res.AssetManager which will conditionally read the Assets from the .apk or get the data from an external file. So we upload all the assets in a zip file to the device and then just run the app with no assets in it whatsoever. We get quick build and debug times, and we only need to update that zip file if the assets change or stuff gets added.

So I put together this.

public static class AssetMgrExt {
 
	static Java.Util.Zip.ZipFile zip = null;
 
	public static void Initialize(string data) {
		zip = new Java.Util.Zip.ZipFile (data);
	}
 
	public static void Close() {
		zip.Close ();
		zip = null;
	}
 
	public static System.IO.Stream OpenExt(this Android.Content.Res.AssetManager mgr, string filename) {
#if !DEBUG
		return mgr.Open(filename);
#else
		if (zip != null) {
			var entry = zip.GetEntry(filename);
			if (entry == null)
				throw new Exception(string.Format("Could not find {0} in external zip", filename));
			try {
				using (var s = zip.GetInputStream(entry)) {
					System.IO.MemoryStream ms = new System.IO.MemoryStream();
					s.CopyTo(ms);
					ms.Position = 0;
					return ms;
				}
			} finally {
				entry.Dispose();
			}
 
		}
		else  {
			throw new InvalidOperationException("Call Initialize first!!");
		}
#endif
	}
}

Add it to your project. Call AssetMgrExt.Initialize(“/mnt/sdcard/Downloads/blah.zip”) at the start of your app and AssetMgrExt.Close() at the end, then replace all the calls to Assets.Open with Assets.OpenExt to get the data out (don’t forget to free up that memory stream 🙂 ). If you want to get an AssetFileDescriptor (i.e. you use Assets.OpenFd) you can write a similar extension and mess about with ParcelFileDescriptors etc., but for my own purposes this worked out great.
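In practice that looks something like this (the zip path and asset name are just illustrative):

// somewhere in your startup code (e.g. the Activity's OnCreate)
AssetMgrExt.Initialize ("/mnt/sdcard/Downloads/blah.zip");
 
// anywhere you previously called Assets.Open
using (var stream = Assets.OpenExt ("sounds/explosion.wav")) {
	// ... read the asset from the stream ...
}
 
// on shutdown
AssetMgrExt.Close ();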

What I tend to do now is never add assets to the project unless it’s a small project. Instead I use this extension, or something similar, during the debugging/development process. For release builds I take a release .apk and run a post build task on it to add the assets, that way I don’t have to mess about with different projects or msbuild conditionals. I can just build the .apk in release mode and add the assets before signing, using

aapt add your.apk assets/someasset.foo

That is it for now. Happy coding!

So in my last post I explained a challenge set down by my good friend @TheCacktus: write a game in 4 weeks. If you want to check out the rules of this challenge, read the previous post, which has them all. Over the last 2 weeks I have been desperately finding the spare time to design and write a game.

title

The first week was the Design week, so I didn’t do any coding or start graphics or sound or anything, just designed the game. Given it’s my first go at this (I’ve historically worked on game engines/APIs) I figured I’d start off slowly. Two weeks ago I was planning to sit myself down in a coffee shop and figure out what kind of game to write: action, RPG, platformer… But life took an unexpected turn and I ended up going round a mate’s house for a curry. First day lost.

I was a bit more successful the following week; a few evenings after work I managed to nail down the idea: basically a 2-4 player top down shooter using aircraft. The player is able to pick from 3 different aircraft: the legendary BAe Harrier, or AV-8 for the Americans out there (and yes, sorry to say that that wonderful jet the US Marine Corps uses is British designed); the F-35B, the Harrier’s future replacement; and the AH-64 Apache gunship. I picked these aircraft for a number of reasons, principally that they all have some ability to fly backwards. Obviously the Apache won’t be as fast as the two jets going forward, but it can go just as fast backwards as it can forwards, and I thought this might be interesting to try out.

As with all game designs, things may or may not work out. For weapons I decided that all aircraft will have a cannon with 300 rounds. That is the default weapon; the game will drop an ammo pickup every few minutes so people can top up. There is also a 4-round air-to-air missile pickup which might drop. This gives the player 4 missiles which do more damage than a cannon bullet, but they only ‘lock on’ to a target you are facing, so it’s no good firing them in the other direction.

Each aircraft has a set of 10 flare packs as well; these can be used to avoid the missiles, but the plan is it will take some skill and might not always work. The win condition for the game is last man standing. While I didn’t think of it at the time and it’s not in the game, I could use stats like number of rounds fired, number of hits, number of collisions etc. to issue awards at the end of each match. This idea occurred to me during the coding week, but as I was following the rules I can’t add it till after the test/review phase.

lobby

I think I spent about 6-8 hours in that first week working on the design and figuring out what I wanted to do. In hindsight I could probably have spent a bit more time on it.

Now for the coding; this was really hard. I had a busy week at work and with family, so I only had the evenings and the weekend to code up the game. I think the total time I spent on this was in the region of 24-28 hours, which isn’t bad; it averages out at about 4 hours each evening.

I made a couple of important decisions before starting to code. I had a few suggestions to use engines like Unity, Marmalade or Moai for the game, but in the end I decided to stick with XNA. The great thing about XNA is it has a whole bunch of samples you can use to base your games on. I started off by downloading the GameStateManagement sample; this sample provides you with a framework for transitioning between Menu, Options and Gameplay screens etc. You could waste hours writing that stuff yourself; you are better off using existing code. There was a temptation to base the code on the excellent NetRumble sample, but I decided against that because it was a full game already and I wanted to write the ‘game’ code myself.

That said, I did borrow the CollisionManager and weapon system from NetRumble; it was fairly extendable and the weapon system made implementing my own weapons easy. One of the hard parts was implementing the IR missiles: getting one to turn towards an aircraft was fairly straightforward, but implementing a limited line of sight was a bit more problematic to get right. I’m still not entirely happy with the results, as the missile tends to turn before it’s even left the aircraft rather than waiting a while before it starts to lock on. But then again I didn’t put that in the spec, so I didn’t change it.

gameplay

I had some weird bugs during testing which boiled down to me not clearing the CollisionManager between games, so I ended up with ghost objects in the game that players would collide with. Other than that it went pretty smoothly; I think the key was to keep the concept simple to start with. Reusing existing code and samples helped a lot; I don’t think I would have done it in the time otherwise.

For graphics, being somewhat artistically challenged, I decided to make the most of the tools I had available. I used a Windows Store app called Fresh Paint. This is a cool artistic app that is great on a Surface RT/Pro (the Pro much more so, because you can use the pen). It has a cool feature of being able to take images and do a watercolour version of them. So I simply did that on all the graphics I had; it might look like I painted all the menu screens, but I didn’t ;). I think it turned out very well considering I spent about 2-3 hours of the total coding time on graphics.

The sound effects were courtesy of my voice and Audacity. It’s amazing what you can do with your own voice; I’m surprised more game developers don’t do it, if I’m honest.

So the next week is the testing/review phase. I’ve sent the game out to a few friends to test and give feedback; hopefully they will have time to play it, otherwise I’ll have to ditch it all and start again. After this week we have one more week of polish and implementing the feedback before I have to release the app, though I’m not entirely sure where to release it yet. I might try porting it to OUYA in that last week and see how that works.

Anyway for those of you wanting to test it out you can get the zip file of the release here.

Happy coding.

gameover

It’s been a while since my last blog post; a lot has happened since then. Firstly, for all of you looking for the MonoGame tutorials, I’m afraid they are gone :(. The backup database I had was corrupt and the hacker completely ruined the live db. So I’m sorry for those of you coming here looking for those articles.

While I’m on the subject of MonoGame, I’m afraid to say that I am unable to continue to contribute to MonoGame. I can’t go into details on why this is, but I’m utterly heartbroken that I can’t work on it anymore :'(. The people on the core MonoGame team are really top notch developers, and there is a great community building up around it; I have really enjoyed the last 2 years working on the project. It’s a fantastic project, but they need people willing to get involved and help fix bugs and add features. So if you can, get involved.

But there is a silver lining in every cloud: I now have some spare time which I need to do something with. Sitting in front of the TV works for a while, but your brain starts rotting without something interesting to do :). I was chatting with my good friend @TheCacktus from Songbird Creations, and he set me a challenge… write a game in 4 weeks. Sounds like a simple enough task, doesn’t it? But there are some rules involved (he’s a bit of a git sometimes 😉 ). The 4 weeks are divided up into 4 sections, Design, Develop, PlayTest, Tweak, and you can only spend a week on each (that includes any time spent doing your day job, eating, sleeping, family time etc). If you go over a week for any section, you have to scrap the entire project and start again (this last bit I think is supposed to force you to focus and stick to your schedule).

So the sections..

  1. Design – This phase does not involve code at all; you have 7 days to design and spec out a game you can write in 7 days. All on paper, or a google doc. So no code, no graphics, don’t even think about going near an IDE; graphics can only be on paper unless you are using free graphics. You need to spec out the game mechanics, controls, any menu transitions, scoring, target platform(s) etc, all within those 7 days. Remember this is actual time, not cumulative time, so if you have to work late at your day job that week, or it’s your girlfriend’s birthday, tough, you lose that time.
  2. Develop – Now you get to code up your spec. Use whatever tooling you want, MonoGame, Unity, Marmalade, etc; do your graphics and sound (if any) and code, code, code. At this point you cannot change your spec; if something looks like it won’t work, too bad. You have to resist the temptation to ‘tweak’ the design from the first week. Again, you only have a week, despite what life throws at you.
  3. PlayTest – This is your time to take a week off. Get a friend (preferably one that will give you honest feedback) to play your game. Get their feedback and make a note of all of their comments. Do not change the code this week, you are not allowed to! This week is only for testing; you can’t fix bugs, so try to catch them in the Develop week.
  4. Tweak – Finally you can get back to the code and start to polish it a bit. Implement the feedback you got from your testers, or ignore it if you think the game is good enough. Fix bugs.

At the end of the 4 weeks the theory is you will have a simple game, at which point you can decide to publish it or tweak it a bit more. But the point is you’ve written a game. I’m going to give this a go in the next couple of weeks; I’ll blog my progress and we’ll see how I get on.

I am back!

Thanks to a nice guy who decided to totally screw up my blog, I have had to wipe and reinstall everything. I will be trying to re-post some of the content that was lost during this episode, but I’m afraid some of it is gone forever.

I would like to apologise to the MonoGame community as some of the articles were quite useful.

To the hacker in question: thanks for pointing out a security hole in software that I didn’t write or maintain. May I suggest that you hack the actual WordPress site next time, as they are in a position to fix these holes you so expertly used to ruin a free site run by someone in their spare time.

You have not only wasted my time but damaged/hindered many game development projects which used the information on my blog to write their software.

I do not wish you any ill will; however, I would like to point out that karma isn’t so forgiving, so the next time you stub your little toe on the table, think of this site and all the people you inconvenienced 🙂