Archives

All posts for the month of May, 2014

In my previous post we looked at how to modify the ScreenManager class to support multiple resolutions. This works fine, but we also need a way to scale the input from the mouse or touch screen so that it operates in the same virtual resolution as the game.

We already put in place the following properties in the ScreenManager

property Matrix InputScale …..
property Vector2 InputTranslate  …..

InputScale is the inverse of the Scale matrix, which allows us to scale input from the mouse into the virtual resolution. InputTranslate is needed because we sometimes put a letterbox around the gameplay area to center it, and the input system needs to take that offset into account (otherwise you end up clicking above menu items rather than on them).

So we need to update the InputState.cs class, which comes as part of the Game State Management sample. The first thing to do is add a ScreenManager property to the InputState class.

public ScreenManager ScreenManager
{
  get; set;
}

and then update the ScreenManager constructor to set the property.

input.ScreenManager = this;
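For context, here is roughly where that line ends up. This is only a sketch; input is the InputState field that the Game State Management sample's ScreenManager already creates:

public ScreenManager(Game game, int virtualWidth, int virtualHeight) : base(game)
{
  // ... existing setup from the previous post ...

  // give the input system access to InputScale and InputTranslate
  input.ScreenManager = this;
}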

Now we need to update the InputState.Update method. Find the following line

CurrentMouseState = Mouse.GetState();

We now need to translate and scale the CurrentMouseState field into the correct virtual resolution. We can do that by accessing the ScreenManager property we just added, so add the following code:

Vector2 mousePosition = new Vector2(CurrentMouseState.X, CurrentMouseState.Y);
Vector2 p = mousePosition - ScreenManager.InputTranslate;
p = Vector2.Transform(p, ScreenManager.InputScale);
CurrentMouseState = new MouseState((int)p.X, (int)p.Y,
    CurrentMouseState.ScrollWheelValue, CurrentMouseState.LeftButton,
    CurrentMouseState.MiddleButton, CurrentMouseState.RightButton,
    CurrentMouseState.XButton1, CurrentMouseState.XButton2);

This bit of code translates the current mouse position, scales it, and then creates a new MouseState instance with the new position values but the same values for everything else. If you are using ScrollWheelValue, that might need scaling too.

The next step is to scale the gestures. In the same Update method there should be the following code:

Gestures.Clear();
while (TouchPanel.IsGestureAvailable) {
  Gestures.Add(TouchPanel.ReadGesture());
}

We need to change this code over to

Gestures.Clear();
while (TouchPanel.IsGestureAvailable) {
  GestureSample g = TouchPanel.ReadGesture();
  Vector2 p1 = Vector2.Transform(g.Position - ScreenManager.InputTranslate, ScreenManager.InputScale);
  Vector2 p2 = Vector2.Transform(g.Position2 - ScreenManager.InputTranslate, ScreenManager.InputScale);
  // the deltas are relative movements, so they only need scaling, not translating
  Vector2 d1 = Vector2.Transform(g.Delta, ScreenManager.InputScale);
  Vector2 d2 = Vector2.Transform(g.Delta2, ScreenManager.InputScale);
  g = new GestureSample(g.GestureType, g.Timestamp, p1, p2, d1, d2);
  Gestures.Add(g);
}

We use similar code to translate and scale each position from the GestureSample (the deltas are relative movements, so they are only scaled), then create a new GestureSample with the new values.

That should be all you need to do. This will now scale both mouse and gesture inputs into the virtual resolution.
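As a quick illustration (my example, not part of the sample): a screen that lays out a button in virtual 800×480 coordinates can now hit-test taps directly, because gesture positions arrive already mapped into that space.

// menuRectangle is defined in virtual coordinates, e.g.
// new Rectangle(300, 200, 200, 80) on an 800x480 design resolution
foreach (GestureSample gesture in input.Gestures)
{
  if (gesture.GestureType == GestureType.Tap &&
      menuRectangle.Contains((int)gesture.Position.X, (int)gesture.Position.Y))
  {
    OnMenuSelected(); // hypothetical handler
  }
}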

As before, the complete code can be downloaded here, or you can download both files: InputState.cs and ScreenManager.cs.

Update: The MouseGestureType used in the InputState is from the CatapultWars sample and can be downloaded here.

In my last post we looked at using ETC1 compressed textures on the Xamarin.Android platform. In that case we just used the texture and some fancy shader magic to fake transparency. In this article we'll look at how to split the alpha channel out into a separate file which we load at run time, so we don't have to rely on a colour key.

One of the things that can be a pain is having to pre-process your content to generate compressed textures etc. outside of your normal development process. It would be nice for the artist to hand over a .png file which we just add to the project, and have the build and packaging process put the required compressed textures into the .apk. XNA did something similar with its content pipeline, where all content was processed during the build into formats optimised for the target platform (Xbox, Windows etc.); MonoGame has a similar content pipeline, as do many other game development tools. Pre-processing your content is really important, because you don't want to be doing any kind of heavy processing on the device itself; while phones are getting more powerful every year, they still can't match high-end PCs or consoles. In this article we'll look at hooking into the power of msbuild and xbuild (Xamarin's cross-platform version of msbuild) to implement a very simple content processor.

So what we want to do is this: add a .png to the Assets folder in our project and have some magic happen which turns that .png into an ETC1 compressed texture, saves the alpha channel of the .png to a .alpha file, and has both files appear in the .apk. To do this we are going to need a few things:

  1. A Custom MSBuild Task to split out and convert the files
  2. A .targets file in which we can hook into the Xamarin.Android build process at the right points to call our custom task.
  3. A way of detecting where the etc1tool is installed on the target system.

We’ll start with the .targets file. First we need to know where in the Xamarin.Android build process to do our fancy bait and switch of the assets. It turns out, after looking into the Xamarin.Common.CSharp.targets file, that the perfect place to hook in is between the UpdateAndroidAssets target and the UpdateAndroidInterfaceProxies target. At the point where these targets run there is already a list of the assets in the project, stored in the @(_AndroidAssetsDest) item group, which is perfect for what we need. Getting the location of the etc1tool is also a breeze because, again, Xamarin have done the hard work for us: there is an $(AndroidSdkDirectory) property onto which we just need to append tools/etc1tool in order to run the tool. So that's 2) and 3) kinda sorted. Let's look at the code for the custom task.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class CompressTextures : Task
{
	[Required]
	public ITaskItem[] InputFiles { get; set; }

	[Required]
	public string AndroidSdkDir { get; set; }

	[Output]
	public ITaskItem[] OutputFiles { get; set; }

	public override bool Execute ()
	{
		Log.LogMessage (MessageImportance.Low, "  CompressTextures Task");

		List<ITaskItem> items = new List<ITaskItem> ();
		var etc1tool = new Etc1Tool ();
		etc1tool.AndroidSdkDir = AndroidSdkDir;

		foreach (var item in InputFiles) {
			if (item.ItemSpec.Contains (".png")) {
				var etc1file = item.ItemSpec.Replace (".png", ".etc1");
				var alphafile = item.ItemSpec.Replace (".png", ".alpha");
				byte[] data = null;

				// extract the alpha channel, one byte per pixel
				using (var bitmap = (Bitmap)Bitmap.FromFile (item.ItemSpec)) {
					data = new byte[bitmap.Width * bitmap.Height];
					for (int y = 0; y < bitmap.Height; y++) {
						for (int x = 0; x < bitmap.Width; x++) {
							var color = bitmap.GetPixel (x, y);
							data [(y * bitmap.Width) + x] = color.A;
						}
					}
				}

				if (data != null)
					File.WriteAllBytes (alphafile, data);

				// compress the rgb data to etc1
				etc1tool.Source = item.ItemSpec;
				etc1tool.Destination = etc1file;
				etc1tool.Execute ();

				items.Add (new TaskItem (etc1file));
				items.Add (new TaskItem (alphafile));

				// delete the original so it doesn't end up in the .apk
				if (File.Exists (item.ItemSpec)) {
					try {
						File.Delete (item.ItemSpec);
					} catch (IOException ex) {
						// read only error??
						Log.LogErrorFromException (ex);
					}
				}
			} else {
				items.Add (item);
			}
		}
		OutputFiles = items.ToArray ();
		return !Log.HasLoggedErrors;
	}

	public class Etc1Tool
	{
		public string Source { get; set; }

		public string Destination { get; set; }

		public string AndroidSdkDir { get; set; }

		public void Execute ()
		{
			var tool = Path.Combine (AndroidSdkDir, "tools/etc1tool");

			var process = new System.Diagnostics.Process ();
			process.StartInfo.FileName = tool;
			process.StartInfo.Arguments = string.Format (" {0} --encode -o {1}", Source, Destination);
			process.StartInfo.CreateNoWindow = true;
			process.Start ();
			process.WaitForExit ();
		}
	}
}

I’m not going to go into all the ins and outs of writing msbuild tasks; that is what google and bing are for :). But if you look at the code, we have two [Required] properties: InputFiles and AndroidSdkDir. InputFiles will be the list of files we get from @(_AndroidAssetsDest), and AndroidSdkDir is obviously the $(AndroidSdkDirectory) property. We also have an OutputFiles property, which we populate with the new files once we have converted them.

The code in the Execute method itself should be fairly easy to follow. For each of the .png files we extract the alpha channel and save it to a .alpha file, then call out to the etc1tool to compress the .png into an .etc1 file. Note we also delete the original file so it does not get included in the final .apk; don't worry, this is a file in the obj/<Configuration>/assets directory, not the original file we added to the project :). Now, we could make this more robust and conditional so it doesn't compress every .png in the assets list, but for now this will do the trick. With the task code done, the .targets file looks like this.

<?xml version="1.0" encoding="UTF-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="InfiniteSpace.Framework.Build.Tasks.CompressTextures" AssemblyFile="InfiniteSpace.Framework.Build.Tasks.dll"/>
 
  <Target Name="_CompressAssets" AfterTargets="UpdateAndroidAssets" 
      BeforeTargets="UpdateAndroidInterfaceProxies">
     <CompressTextures InputFiles="@(_AndroidAssetsDest)" AndroidSdkDir="$(AndroidSdkDirectory)">
        <Output TaskParameter="OutputFiles" ItemName="_CompressedTextures"/>
     </CompressTextures>
     <Touch Files="@(_CompressedTextures)" />
  </Target>
</Project>

Again this should be fairly easy to follow. The important bits are the AfterTargets and BeforeTargets values; this is where we hook into the build process. The next step is to include this .targets file in our project, which we do by adding the following line just under the Import statement for Xamarin.Android.CSharp.targets (or Novell.MonoDroid.CSharp.targets):

<Import Project="$PATH$/InfiniteSpace.Framework.Build.Tasks/Infinitespace.Common.targets" />

Now, the $PATH$ bit depends on where you put the build tasks. I just added the project to my solution and used “../InfiniteSpace.Framework.Build.Tasks/Infinitespace.Common.targets”, then made a small tweak in the .targets file so it loads the assembly from the output folder:

AssemblyFile="./bin/$(Configuration)/InfiniteSpace.Framework.Build.Tasks.dll"

This worked for me in Xamarin Studio on the Mac, and it sort of worked in Visual Studio on Windows. In both IDEs, however, if you want to change the task code you need to close and re-load the IDE, since the assembly gets loaded during the build process and cannot be overwritten after that.

With the msbuild task hooked in, you should now be able to add a .png file to your Assets folder and get a .etc1 and a .alpha file for it in your .apk. After that you can load the .etc1 and .alpha files as you would any other asset. The code for this blog entry includes a sample project showing exactly how to load the files and use them for alpha.
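To give an idea of the loading side, here is a sketch of how the raw .alpha bytes might become an alpha-only texture, in the style of the ETC1 loader from the earlier article. The helper name is made up, and because the .alpha file has no header the caller has to supply the width and height:

static int LoadAlphaTextureFromAssets (Activity activity, string filename, int width, int height)
{
	byte[] data;
	using (var s = activity.Assets.Open (filename))
	using (var ms = new System.IO.MemoryStream ()) {
		s.CopyTo (ms);
		data = ms.ToArray ();
	}

	int tid = GL.GenTexture ();
	GL.BindTexture (All.Texture2D, tid);
	GL.TexParameter (All.Texture2D, All.TextureMagFilter, (int)All.Linear);
	GL.TexParameter (All.Texture2D, All.TextureMinFilter, (int)All.Linear);
	// rows are tightly packed, one byte per pixel
	GL.PixelStore (All.UnpackAlignment, 1);
	GL.TexImage2D (All.Texture2D, 0, (int)All.Alpha, width, height, 0,
		All.Alpha, All.UnsignedByte, data);
	return tid;
}

In the fragment shader you would then sample this texture with the same coordinates as the colour texture and multiply the two together.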

As mentioned already the CompressTextures task could be improved. Some ideas might be

  • Add the ability to compress other formats (PVRTC, ATITC, S3TC)
  • Add the ability to read additional metadata from the TaskItem to control whether it needs to be compressed or not (sketched after this list)
  • Add support for resizing to a power of two (POW2); I think ETC1 only supports POW2 textures, and PVRTC certainly does.
  • Add support for a colour key, which would avoid saving the .alpha file.
  • Add support for compressing the alpha channel to etc1.
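As a sketch of the second idea (the TexCompress metadata name is made up for illustration, though ITaskItem.GetMetadata is standard MSBuild API), the loop in Execute could pass through any item that opts out:

// in the .csproj you would set, for example:
//   <AndroidAsset Include="Assets\ui_atlas.png">
//     <TexCompress>false</TexCompress>
//   </AndroidAsset>
foreach (var item in InputFiles) {
	var flag = item.GetMetadata ("TexCompress");
	if (string.Equals (flag, "false", StringComparison.OrdinalIgnoreCase)) {
		items.Add (item); // pass the .png through untouched
		continue;
	}
	// ... existing compression code ...
}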

I’ll leave these ideas with you for the moment; I might look at implementing them myself at some point in the future. I know I mentioned looking at PVRTC, ATITC and S3TC texture support in my last article, and I assure you I will get to that soon. In the meantime, have fun playing with the code and I hope you find it useful.

The code for this blog entry can be downloaded from here.

When developing for iOS or Windows Phone you don’t really need to take different screen resolutions into account. Yes, the iPhone and iPad scale things differently, but on the iPad you provide some special content which allows the system to scale your game and still have it look nice.

On Android it’s a different story: there are many different devices, with different capabilities not only in screen resolution but also CPU, memory and graphics chips. In this particular post I’ll cover how to write your game in such a way that it can handle all the different screen resolutions the Android eco-system can throw at it.

One of the nice things about XNA is that it has been around a while, and when you develop for Xbox Live you have to take screen resolutions into account because everyone has a different sized television. I came across this blog post, which outlines a neat solution to this particular problem for 2D games. Rather than just bolting that code into a MonoGame Android project, though, I decided to update the ScreenManager class to handle multiple resolutions. For those of you who have not come across the ScreenManager class, it is used in many of the XNA samples to handle transitions between screens in your game. It also helps you break your game up into “Screens”, which makes for more maintainable code.

The plan is to add the following functionality to the ScreenManager

  1. The ability to set a virtual resolution for the game. This is the resolution your game is designed to run at; the ScreenManager will use this information to scale all the graphics and input so they work nicely at other resolutions.
  2. Expose a Matrix property called Scale which we can use in SpriteBatch to scale our graphics.
  3. Expose a Matrix property called InputScale, which is the inverse of the Scale matrix, so we can scale mouse and gesture input into the virtual resolution.
  4. Expose a Vector2 property called InputTranslate so we can translate our mouse and gesture input. This is needed because the scaling keeps the game centered, so there will be a border around the game to account for aspect ratio differences.
  5. Add a Viewport property which returns the virtual viewport for the game, rather than using GraphicsDevice.Viewport.

We need to define a few private fields to store the virtual width/height and a reference to the GraphicsDeviceManager.

private int virtualWidth;
private int virtualHeight;
private GraphicsDeviceManager graphicsDeviceManager;
private bool updateMatrix = true;
private Matrix scaleMatrix = Matrix.Identity;

Next we add the new properties to the ScreenManager. We should probably have backing fields for these, since that would save allocating a new Vector2/Viewport/Matrix each time a property is accessed, but for now this will work; we can optimise it later.

public Viewport Viewport {
  get { return new Viewport(0, 0, virtualWidth, virtualHeight); }
}
 
public Matrix Scale { get; private set; }
 
public Matrix InputScale {
  get { return Matrix.Invert(Scale); }
}
 
public Vector2 InputTranslate {
  get { return new Vector2(GraphicsDevice.Viewport.X, GraphicsDevice.Viewport.Y); }
}
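As an example of that optimisation (my sketch, not part of the original class), the inverse could be cached and only rebuilt when the scale matrix changes:

private Matrix inputScaleMatrix = Matrix.Identity;
private bool updateInputScale = true;

public Matrix InputScale {
  get {
    if (updateInputScale) {
      inputScaleMatrix = Matrix.Invert(Scale);
      updateInputScale = false;
    }
    return inputScaleMatrix;
  }
}

Anywhere updateMatrix is set to true would also set updateInputScale to true, so the cached inverse gets rebuilt.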

The constructor needs to be modified to take the virtual width/height parameters and to resolve the GraphicsDeviceManager from the game.

public ScreenManager(Game game, int virtualWidth, int virtualHeight) : base(game)
{
  // set up the virtual environment
  this.virtualHeight = virtualHeight;
  this.virtualWidth = virtualWidth;
  this.graphicsDeviceManager = (GraphicsDeviceManager)game.Services.GetService(typeof(IGraphicsDeviceManager));
  // we must set EnabledGestures before we can query for them, but
  // we don't assume the game wants to read them.
  TouchPanel.EnabledGestures = GestureType.None;
}

Next is the code to create the scale matrix. Update the Scale property to look like this. We use the updateMatrix flag to control when to regenerate scaleMatrix, so we don’t recalculate it every frame.

public Matrix Scale {
  get {
    if (updateMatrix) {
      CreateScaleMatrix();
      updateMatrix = false;
    }
    return scaleMatrix;
  }
}

Now implement the CreateScaleMatrix method. It populates scaleMatrix, which we will use to tell the SpriteBatch how to scale the graphics when they finally get drawn.

protected void CreateScaleMatrix() {
  scaleMatrix = Matrix.CreateScale(
    (float)GraphicsDevice.Viewport.Width / virtualWidth,
    (float)GraphicsDevice.Viewport.Height / virtualHeight,
    1f);
}

So what we have done so far is code up all the properties we need to make this work. There are a few other methods we need to write. These methods will set up the GraphicsDevice viewport and ensure that we clear the backbuffer with Color.Black, so we get that nice letterbox effect.

The first thing to do is update the Draw method of the ScreenManager to call a new method, BeginDraw, which will set up the viewports and clear the backbuffer.

public override void Draw(GameTime gameTime) {
  BeginDraw();
  foreach (GameScreen screen in screens)
  {
    if (screen.ScreenState == ScreenState.Hidden)
      continue;
    screen.Draw(gameTime);
  }
}

The BeginDraw method calls a handful of helper methods to set up the viewports. Here is the code:

protected void FullViewport ()
{
	Viewport vp = new Viewport ();
	vp.X = vp.Y = 0;
	vp.Width = graphicsDeviceManager.PreferredBackBufferWidth;
	vp.Height = graphicsDeviceManager.PreferredBackBufferHeight;
	GraphicsDevice.Viewport = vp;
}
 
protected float GetVirtualAspectRatio ()
{
	return (float)virtualWidth / (float)virtualHeight;
}
 
protected void ResetViewport ()
{
	float targetAspectRatio = GetVirtualAspectRatio ();
	// figure out the largest area that fits in this resolution at the desired aspect ratio
	int width = graphicsDeviceManager.PreferredBackBufferWidth;
	int height = (int)(width / targetAspectRatio + .5f);
	bool changed = false;
	if (height > graphicsDeviceManager.PreferredBackBufferHeight) {
		// PillarBox
		height = graphicsDeviceManager.PreferredBackBufferHeight;
		width = (int)(height * targetAspectRatio + .5f);
		changed = true;
	}
	// set up the new viewport centered in the backbuffer
	Viewport viewport = new Viewport ();
	viewport.X = (graphicsDeviceManager.PreferredBackBufferWidth / 2) - (width / 2);
	viewport.Y = (graphicsDeviceManager.PreferredBackBufferHeight / 2) - (height / 2);
	viewport.Width = width;
	viewport.Height = height;
	viewport.MinDepth = 0;
	viewport.MaxDepth = 1;
	if (changed) {
		updateMatrix = true;
	}
	GraphicsDevice.Viewport = viewport;
}
 
protected void BeginDraw ()
{
	// start by resetting the viewport to the full backbuffer
	FullViewport ();
	// clear to black
	GraphicsDevice.Clear (Color.Black);
	// calculate the proper viewport according to the aspect ratio
	ResetViewport ();
	// and clear that. This way we get black bars if the aspect ratio requires it, and
	// the clear colour on the rest
	GraphicsDevice.Clear (Color.Black);
}

So the first thing we do is reset the full viewport to the size of the PreferredBackBufferWidth/Height and clear it. We then reset the viewport to take the aspect ratio of the virtual resolution into account, calculate the vertical/horizontal offsets needed to center the new viewport, and clear that too, just to be sure. For example, an 800×480 virtual resolution on a 1280×720 backbuffer gives a target aspect ratio of 800/480 ≈ 1.67, so the viewport works out at 1200×720 with a 40 pixel pillarbox on each side, and a scale factor of 1200/800 = 1.5. That is all the code changes for the ScreenManager. To use it, all we need to do is add the extra parameters when we create the new ScreenManager, like so:

screenManager = new ScreenManager(this, 800, 480);

You will need to pass in the resolution your game was designed for, in this case 800×480.
Then in all the places where we call SpriteBatch.Begin() we need to pass in the screenManager.Scale matrix, like so:

spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null, ScreenManager.Scale);

Note that SpriteBatch has a number of overloaded Begin methods; you will need to adapt your code if you use things like SamplerState, BlendState, etc. Each of the game screens should already have a reference to the ScreenManager if you follow the sample code from Microsoft. Also, if you make use of GraphicsDevice.Viewport in your game to place objects based on screen size (like UI elements), that will need to change to use ScreenManager.Viewport instead, so they are placed within the virtual viewport. So in the MenuScreen the following call would change from

position.X = GraphicsDevice.Viewport.Width / 2 - menuEntry.GetWidth(this) / 2;

to this

position.X = ScreenManager.Viewport.Width / 2 - menuEntry.GetWidth(this) / 2;

This should be all you need. In the next post we will look at the changes we need to make to the InputState.cs class to get the mouse and gesture inputs scaled as well.
You can download a copy of the modified ScreenManager class here.

So… supporting compressed textures on Android is a right pain. There are a number of different graphics chips in play and they all support different texture compression formats; not very helpful. The only common format supported by all devices is ETC1 (ETC2 is GLES 3.0 only), and the problem with that format is it doesn’t support an alpha channel. Most game developers out there will know that the alpha channel is kinda important for games. So how can we make use of ETC1 and get an alpha channel at the same time?

There are a number of different methods I’ve found on the net, most of which have to do with storing the alpha channel separately and combining the two in the fragment shader; this article seems to contain the most accepted solutions. But personally I hate the idea of having two textures, or messing about with texture coordinates to sample the data from different parts of one. If only there was another way… well, let’s turn back the clock a bit and use a technique we relied on before such a thing as an alpha channel even existed.

I’m talking about using a colour key: you define a single colour in your image as the transparent colour. It’s used a lot in movies these days for green screen work, and we used to use it all the time back when video hardware didn’t even know what an alpha channel was. You just skip over the bits of the image that match the colour key and hey presto, you get a transparent image :).

So let’s take a look at this image.

[image: f_spot]

It has a nice alpha channel. But if we replace the alpha with a constant colour, like so:

[image: f_spot_rgb]

we can then re-write our fragment shader to detect this colour and set the alpha channel explicitly, without having to mess about with other bitmaps or change texture coordinates. So our fragment shader simply becomes:

uniform lowp sampler2D u_Texture;
varying mediump vec2 v_TexCoordinate;
 
void main()
{
  vec4 colorkey = vec4(1.0,0.0,0.96470588,0.0);
  float cutoff = 0.2;
  vec4 colour = texture2D(u_Texture, v_TexCoordinate);
  if ((abs(colour.r - colorkey.r) <= cutoff) &&
      (abs(colour.g - colorkey.g) <= cutoff) &&
      (abs(colour.b - colorkey.b) <= cutoff)) {
       colour.a = 0.0;
  }
  gl_FragColor = colour;
}

In this case I hardcoded the colour I’m using for the colour key, but it could easily be passed in as a uniform if you wanted the flexibility of being able to change it.
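If you did want that flexibility, the shader change is a one-liner and the value can be set from code. A sketch (u_ColorKey is a name I’ve made up; program is your linked shader program handle):

// in the shader, replace the hardcoded constant with:
//   uniform lowp vec4 u_ColorKey;
// then from the OpenTK side, before drawing:
int colorKeyLocation = GL.GetUniformLocation (program, "u_ColorKey");
GL.Uniform4 (colorKeyLocation, 1.0f, 0.0f, 0.96470588f, 0.0f);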

With this in place we can now use ETC1 textures across all Android devices and get an alpha channel. While it’s not a full alpha channel (with semi-transparency), it will probably be enough for most games. You can generate your compressed textures using the etc1tool provided with the Android SDK; it’s located in the tools folder, and you can just call:

  etc1tool infile --encode -o outfile

You can then include the resulting outfile in your Assets folder, set its build action to ‘AndroidAsset’, and use the following code to load the texture:

static int LoadTextureFromAssets (Activity activity, string filename)
{
  using (var s = activity.Assets.Open (filename)) {
    using (var t = Android.Opengl.ETC1Util.CreateTexture (s)) {
      int tid = GL.GenTexture ();
      GL.ActiveTexture (All.Texture0);
      GL.BindTexture (All.Texture2D, tid);
      // setup texture parameters
      GL.TexParameter (All.Texture2D, All.TextureMagFilter, (int)All.Linear);
      GL.TexParameter (All.Texture2D, All.TextureMinFilter, (int)All.Nearest);
      GL.TexParameter (All.Texture2D, All.TextureWrapS, (int)All.ClampToEdge);
      GL.TexParameter (All.Texture2D, All.TextureWrapT, (int)All.ClampToEdge);
      // upload the etc1 data; RGB565 is the fallback format/type pair used
      // if the device turns out not to support ETC1
      Android.Opengl.ETC1Util.LoadTexture ((int)All.Texture2D, 0, 0, (int)All.Rgb, (int)All.UnsignedShort565, t);
      return tid;
    }
  }
}

Note you’ll need to add using clauses for the various OpenTK namespaces used in the code (GL and All most likely come from OpenTK.Graphics.ES20, given we’re using shaders), plus Android.App for Activity.

Now there is a problem with this technique: because of the way ETC1 works, you will more than likely get some compression artefacts in the resulting image. In my case I ended up with a purple/pink line around the image I was rendering, so perhaps that colour isn’t the best choice here.

[image: Screenshot_2014-05-19-09-55-01]

So I tried again, this time with a black colour key, which might help reduce the compression artefacts around the edges of the image. I had to make some changes to the shader to make it a bit more generic and to handle a black colour key. The resulting shader turned out as follows.

uniform lowp sampler2D u_Texture;
varying mediump vec2 v_TexCoordinate;
 
void main()
{
  float cutoff = 0.28;
  vec4 colour = texture2D(u_Texture, v_TexCoordinate);
  if ((colour.r <= cutoff) &&
      (colour.g <= cutoff) &&
      (colour.b <= cutoff)) {
       colour.a = colour.r;
  }
  gl_FragColor = colour;
}

You can see we are just using an RGB cutoff value to detect the black in the image and turn it into transparency. Note that I’m using the red channel to set the alpha rather than just using 0.0; hopefully this will help with the blending. This produced the following result.

[image: Screenshot_2014-05-19-09-56-31]

There is a slight black edge around the image, but it is probably something you can get away with. The only issue is that you can’t use black in your image 🙁 or if you do, it has to be above the cutoff value in the shader; that will require some testing to see what values you can get away with.

Well, hopefully this will be useful for someone. Next time I’ll be going into how to support texture compression formats like PVRTC, ATITC and S3TC on your Android devices. These formats should cover most of the devices out there (but not all) and support a full alpha channel, but if you are after full compatibility then ETC1 is probably the way to go.

In the meantime, the code from this article is available on github.