Xamarin

Xamarin announced something awesome yesterday.

Because we love seeing indie games succeed, Xamarin wants to support indie game developers all over the world in bringing their games to billions of mobile gamers. We want every indie game developer to enjoy the power of C# and Visual Studio, so we have an amazing special offer this December:

Free, community-supported subscriptions of Xamarin.iOS and Xamarin.Android, including our Visual Studio extensions

Indie game developers only need to have published a game in any framework on any platform to qualify.

This is just fantastic news. If you have an app already on one of the many stores you will qualify, and this includes Xbox Live! So all you XNA developers out there with a game now have the perfect opportunity to move that game to MonoGame and publish on iOS, Android, Windows 10, MacOS and Linux, or even Apple TV!*

It is worth noting as well that porting your app to a Windows 10 Universal app will also allow your game to work on Xbox One (in the app section).

This offer expires on the 31st of December at 9pm ET, so make sure you apply before the deadline.

* Apple TV support was merged into the develop branch a few days ago.

One of the many requests I see from customers of Xamarin.Android is the ability to “run a script” after the package has been built and signed, or to run some process before the build starts. A lot of people end up trying to use the Custom Commands which are available in the IDE to do this work, and shy away from getting down and dirty with MSBuild. The thing is, sometimes MSBuild is the only way to get certain things done.

Most people already know about the “SignAndroidPackage” target which is available on Xamarin.Android projects. If not, basically you can use this target to create a signed package; most people would use this on a Continuous Integration (CI) server. You can use it like so

  msbuild MyProject.csproj /t:SignAndroidPackage

Note that this target is ONLY available on the .csproj, NOT the .sln, but that is ok because MSBuild does its job of resolving project references and it should build fine. You can also pass in additional parameters to define your keystore etc; more information can be found in the excellent documentation on the subject.
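For example, here is a sketch of a Release build signed with your own keystore using the documented AndroidSigning* properties (the keystore name, alias and passwords are placeholders):

  msbuild MyProject.csproj /t:SignAndroidPackage /p:Configuration=Release /p:AndroidKeyStore=true /p:AndroidSigningKeyStore=MyApp.keystore /p:AndroidSigningKeyAlias=myalias /p:AndroidSigningStorePass=MyPassword /p:AndroidSigningKeyPass=MyPassword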

Now we want to run a script after this has been done. Custom Commands can’t be used as they run at the wrong time in the build process, so what do we do? This is where MSBuild targets come in. What we need to do is add a custom MSBuild target which hooks into the build process after the SignAndroidPackage target has run.

First thing to do is open the .csproj of your application in your favourite editor and find the last </Project> element right at the bottom of the file. Next we define a new target just above the </Project> element like so

<Target Name="AfterAndroidSignPackage" AfterTargets="SignAndroidPackage">
   <Message Text="AfterSignAndroidPackage Target Ran" />
</Target>

Save the .csproj and then run the command

  msbuild MyProject.csproj /t:SignAndroidPackage

and you should see the text “AfterSignAndroidPackage Target Ran” in the build output. If not, add a /v:d argument to the command line, which switches the MSBuild output to verbose mode. The key thing here is that the Target defines the “AfterTargets” attribute, which tells MSBuild when to run this target: in this case, after SignAndroidPackage. There is also a “BeforeTargets” attribute which will run a target.. you guessed it.. before the target(s) listed in the attribute.
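For example, a minimal sketch of a target that runs before signing (the message is just a stand-in for real work):

<Target Name="BeforeAndroidSignPackage" BeforeTargets="SignAndroidPackage">
   <Message Text="About to run SignAndroidPackage" />
</Target>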

Now we have a target that will run after the package has been signed. The next thing to do is to get it to run our script; for this we can use the MSBuild Exec task. One thing to note is that it’s more than likely that some of your developers are running on Mac as well as Windows, so we need to make sure that any command we run will work on both systems.

Fortunately we have a way of conditionally executing targets using the Condition attribute. This means we can do something like

<Exec Condition=" '$(OS)' != 'Unix' " Command="dir" />
<Exec Condition=" '$(OS)' == 'Unix' " Command="ls" />

This allows us to run separate commands based on the OS we are running on (in this case a directory listing). As you can see we can easily extend this to run a batch file or shell script (see the sketch after the final target below). So the final target would be

<Target Name="AfterAndroidSignPackage" AfterTargets="SignAndroidPackage">
  <Exec Condition=" '$(OS)' != 'Unix' " Command="dir" />
  <Exec Condition=" '$(OS)' == 'Unix' " Command="ls" />
</Target>
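To run an actual script rather than a directory listing, you would just swap the commands over; a sketch where postbuild.bat and postbuild.sh are hypothetical scripts sitting next to the .csproj:

<Target Name="AfterAndroidSignPackage" AfterTargets="SignAndroidPackage">
  <Exec Condition=" '$(OS)' != 'Unix' " Command="postbuild.bat" WorkingDirectory="$(MSBuildProjectDirectory)" />
  <Exec Condition=" '$(OS)' == 'Unix' " Command="sh postbuild.sh" WorkingDirectory="$(MSBuildProjectDirectory)" />
</Target>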

MSBuild targets can also define Inputs and Outputs, which allow the build engine to decide if it needs to run the target at all. That is something you can read up on if you are interested in learning more, but for now, if you just want to “run a script after the apk is built”, this is all you need.
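If you do want a taste of Inputs and Outputs, here is a hedged sketch: with them declared, MSBuild skips the target whenever the stamp file is newer than the .apk (both file names are hypothetical, adjust the paths for your project):

<Target Name="AfterAndroidSignPackage" AfterTargets="SignAndroidPackage"
        Inputs="bin\$(Configuration)\MyApp.apk"
        Outputs="bin\$(Configuration)\postbuild.stamp">
  <Exec Condition=" '$(OS)' != 'Unix' " Command="postbuild.bat" />
  <Exec Condition=" '$(OS)' == 'Unix' " Command="sh postbuild.sh" />
  <Touch Files="bin\$(Configuration)\postbuild.stamp" AlwaysCreate="true" />
</Target>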

Xamarin recently published an interview with George Banfill from LinkNode on their Augmented Reality product; you can see the full interview here. Since then I’ve had a number of requests from people wanting to know how to do this using MonoGame and Xamarin.Android. Believe it or not, it’s simpler than you think to get started.

First thing you need is a class to handle the camera. Google has done a nice job of giving you a very basic sample of a ‘CameraView’ here. For those of you not wishing to port that over to C# from Java (yuk), here is the code


public class CameraView : SurfaceView, ISurfaceHolderCallback {
	Camera camera;

	public CameraView (Context context, Camera camera) : base (context)
	{
		this.camera = camera;
		Holder.AddCallback (this);
		// deprecated setting, but required on
		// Android versions prior to 3.0
		Holder.SetType (SurfaceType.PushBuffers);
	}

	public void SurfaceChanged (ISurfaceHolder holder, Android.Graphics.Format format, int width, int height)
	{
		if (Holder.Surface == null) {
			// preview surface does not exist
			return;
		}

		try {
			camera.StopPreview ();
		} catch (Exception) {
			// ignore: the preview may not have been running yet
		}
		try {
			camera.SetPreviewDisplay (Holder);
			camera.StartPreview ();
		} catch (Exception e) {
			Android.Util.Log.Debug ("CameraView", e.ToString ());
		}
	}

	public void SurfaceCreated (ISurfaceHolder holder)
	{
		try {
			camera.SetPreviewDisplay (holder);
			camera.StartPreview ();
		} catch (Exception e) {
			Android.Util.Log.Debug ("CameraView", e.ToString ());
		}
	}

	public void SurfaceDestroyed (ISurfaceHolder holder)
	{
		// the camera is released by the owning Activity
	}
}

It might not be pretty, but it does the job. Note that it doesn’t handle a rotated or flipped preview, so that will need to be added.
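If you do hit the rotated-preview problem, a minimal sketch (assuming a portrait-locked activity and the back-facing camera; other combinations need the full rotation maths from the Android docs) is to tell the camera how the display is orientated, for example in SurfaceCreated:

// 90 degrees is the usual value for a back-facing camera
// on a portrait-locked activity
camera.SetDisplayOrientation (90);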

The next step is to figure out how to show MonoGame’s GameWindow and the camera view at the same time. Again, that is quite easy: we can use a FrameLayout like so.


protected override void OnCreate (Bundle bundle)
{
	base.OnCreate (bundle);
	Game1.Activity = this;
	var g = new Game1 ();
	FrameLayout frameLayout = new FrameLayout(this);
	frameLayout.AddView (g.Window);  
	try {
		camera = Camera.Open ();
		cameraView = new CameraView (this, camera);
		frameLayout.AddView (cameraView);
	} catch (Exception e) {
		// oops no camera
		Android.Util.Log.Debug ("CameraView", e.ToString ());
	}
	SetContentView (frameLayout);
	g.Run ();
}

This is almost the same as the normal MonoGame Android code you get, but instead of setting the ContentView to the game window directly, we add both the game window and the CameraView to a FrameLayout and set that as the content view. Note that the order is important: the last item will be on the bottom, so we want to add the game view first so it is over the top of the camera.

Now this won’t work out of the box because there are a couple of other small changes we need. First we need to set the SurfaceFormat of the game window to Rgba8888; this is because it defaults to a format which does not contain an alpha channel, so if we leave it as is we will not see the camera view underneath the game window since it’s opaque. We can change the surface format using

g.Window.SurfaceFormat = Android.Graphics.Format.Rgba8888;

We need to do that BEFORE we add the window to the frameLayout though. Another thing to note: not all devices support Rgba8888; I’m not sure what you do in that case…

The next thing is we need to change our normal clear colour in Game1 from the standard Color.CornflowerBlue to Color.Transparent.

graphics.GraphicsDevice.Clear (Color.Transparent);

With those changes you should be done. Here is a screenshot; note the Xamarin logo in the top left, which is drawn using a standard SpriteBatch call 🙂 All the code for this project is available here. I’ve not implemented the iOS version yet as I’m not really an “iOS guy”, but I will accept pull requests 🙂
[Screenshot: the game rendering over the live camera preview]

So as you may have noticed, I’ve been playing about a lot with OpenGL and Android recently, and I started to see some very weird behaviour. The normal activity life cycle states that when the lock screen is enabled, the activity will be Paused, then Stopped. This seemed to be the behaviour I saw on the devices I was testing on, until I tried a Nexus 4. On lock it would Pause, Stop, then Destroy.. WTF! And it would only do this on the Nexus 4.

This caused me a big problem because the app I was putting together was a game, so if the screen locked in the middle of a game it wouldn’t just pick up from where it left off, it would restart the entire app and lose your progress… Not happy.. So I spent ages looking at my code to try to figure out what was going on, and I stumbled across the following property while messing about in the OnDestroy method of the Activity.

this.ChangingConfigurations

Hmm, what was that property about? Well, it turns out this property tells you which configurations changed. When I looked at it in my OnDestroy method, I saw it had a value of ScreenSize… Hmm. A bit more research led me to this from the Android docs.

Caution: Beginning with Android 3.2 (API level 13), the “screen size” also changes when the device switches between portrait and landscape orientation. Thus, if you want to prevent runtime restarts due to orientation change when developing for API level 13 or higher (as declared by the minSdkVersion and targetSdkVersion attributes), you must include the "screenSize" value in addition to the "orientation" value. That is, you must declare android:configChanges="orientation|screenSize". However, if your application targets API level 12 or lower, then your activity always handles this configuration change itself (this configuration change does not restart your activity, even when running on an Android 3.2 or higher device).

So now it all makes sense.. When the Nexus 4 (Android 4.3) was screen locking, it was changing the orientation to Portrait (helpful.. not), which I was handling, but it was also raising a screen size config change, which I was not.. hence the OS destroying my activity. Normally when you create a new OpenGL based app in Xamarin.Android you get the following attributes added automagically to your activity.

[Activity (Label = "Some app",
#if __ANDROID_11__
  HardwareAccelerated=false,
#endif
  ConfigurationChanges = ConfigChanges.Orientation | ConfigChanges.Keyboard | ConfigChanges.KeyboardHidden,
  MainLauncher = true)]

These tell Android that you want to handle these changes and not the OS, so the OS doesn’t restart your app when this stuff happens. Because of this change in Android 3.2+, we need to add the ScreenSize enumeration as well, like so

[Activity (Label = "Some app",
#if __ANDROID_11__
  HardwareAccelerated=false,
#endif
  ConfigurationChanges = ConfigChanges.Orientation | ConfigChanges.Keyboard | ConfigChanges.KeyboardHidden | ConfigChanges.ScreenSize,
  MainLauncher = true)]

Now our app can be paused and resumed as normal when you lock the screen, without it restarting. Yay!
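As an aside, once those flags are declared the OS delivers the change to the activity instead of restarting it, and you can react to it by overriding OnConfigurationChanged. A minimal sketch of what that might look like in the game’s activity:

public override void OnConfigurationChanged (Android.Content.Res.Configuration newConfig)
{
	base.OnConfigurationChanged (newConfig);
	// the activity is NOT destroyed; respond to the new orientation or
	// screen size here, e.g. by recalculating your viewport
	Android.Util.Log.Debug ("Game", "Config changed: " + newConfig.Orientation);
}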

In my previous post we looked at how to modify the ScreenManager class to support multiple resolutions. This works fine, but we also need a way to scale the inputs from the mouse or touch screen so that they operate at the same virtual resolution as the game does.

We already put in place the following properties in the ScreenManager

public Matrix InputScale { get { ….. } }
public Vector2 InputTranslate { get { ….. } }

InputScale is the inverse of the Scale matrix, which will allow us to scale the input from the mouse into the virtual resolution. InputTranslate needs to be used because we sometimes put a letterbox around the gameplay area to center it, and the input system needs to take this into account (otherwise you end up clicking above menu items rather than on them).

So we need to update the InputState.cs class which comes as part of the Game State Management sample. First thing we need to do is add a property for the ScreenManager to the InputState class.

public ScreenManager ScreenManager
{
  get; set;
}

and then update the ScreenManager constructor to set the property.

input.ScreenManager = this;

Now we need to update the InputState.Update method. Find the following line

CurrentMouseState = Mouse.GetState();

We now need to translate and scale the CurrentMouseState field into the correct virtual resolution. We can do that by accessing the ScreenManager property which we just added, so add the following code.

Vector2 _mousePosition = new Vector2(CurrentMouseState.X, CurrentMouseState.Y);
Vector2 p = _mousePosition - ScreenManager.InputTranslate;
p = Vector2.Transform(p, ScreenManager.InputScale);
CurrentMouseState = new MouseState((int)p.X, (int)p.Y,
  CurrentMouseState.ScrollWheelValue, CurrentMouseState.LeftButton,
  CurrentMouseState.MiddleButton, CurrentMouseState.RightButton,
  CurrentMouseState.XButton1, CurrentMouseState.XButton2);

This bit of code transforms the current mouse position and then scales it before creating a new MouseState instance with the new position values, but the same values for everything else. If the ScrollWheelValue is being used that might need scaling too.

The next step is to scale the gestures. In the same Update method there should be the following code

Gestures.Clear();
while (TouchPanel.IsGestureAvailable) {
  Gestures.Add(TouchPanel.ReadGesture());
}

We need to change this code over to

Gestures.Clear();
while (TouchPanel.IsGestureAvailable) {
  GestureSample g = TouchPanel.ReadGesture();
  Vector2 p1 = Vector2.Transform(g.Position - ScreenManager.InputTranslate, ScreenManager.InputScale);
  Vector2 p2 = Vector2.Transform(g.Position2 - ScreenManager.InputTranslate, ScreenManager.InputScale);
  // deltas are relative movements, so they only need scaling, not translating
  Vector2 d1 = Vector2.Transform(g.Delta, ScreenManager.InputScale);
  Vector2 d2 = Vector2.Transform(g.Delta2, ScreenManager.InputScale);
  g = new GestureSample(g.GestureType, g.Timestamp, p1, p2, d1, d2);
  Gestures.Add(g);
}

We use similar code to translate and scale each position from the GestureSample (the deltas only need scaling, since they are relative movements), then again create a new GestureSample with the new values.

That should be all you need to do. This will now scale both mouse and gesture inputs into the virtual resolution.
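If your InputState also reads raw touch data via TouchPanel.GetState (the Game State Management sample stores it in a TouchState field of type TouchCollection), the same transform can be applied per touch. A sketch, assuming that field exists:

var touches = new List<TouchLocation> ();
foreach (TouchLocation t in TouchPanel.GetState ()) {
  // translate and scale each touch position into the virtual resolution
  Vector2 p = Vector2.Transform (t.Position - ScreenManager.InputTranslate, ScreenManager.InputScale);
  touches.Add (new TouchLocation (t.Id, t.State, p));
}
TouchState = new TouchCollection (touches.ToArray ());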

As before, the complete code can be downloaded here, or you can download both files: InputState.cs ScreenManager.cs

Update: The MouseGestureType in the InputState is from the CatapultWars sample and can be downloaded here

In my last post we looked at using ETC1 compressed textures on the Xamarin.Android platform. In that case we just used the texture and some fancy shader magic to fake transparency. In this article we’ll look at how we can split the alpha channel out to a separate file which we load at run time, so we don’t have to rely on the colour key.

One of the things that can be a pain is having to pre-process your content to generate compressed textures etc outside of your normal development process. In this case it would be nice for the artist to give us a .png file; we add it to the project, and as part of the build and packaging process we get the required compressed textures in the .apk. XNA did something similar with its content pipeline, where all the content was processed during the build into formats optimised for the target platform (Xbox, Windows etc); MonoGame also has a similar content pipeline, as do many other game development tools. Pre-processing your content is really important, because you don’t want to be doing any kind of heavy processing on the device itself. While phones are getting more powerful every year, they still can’t match high end PCs or consoles. In this article we’ll look at hooking into the power of msbuild and xbuild (Xamarin’s cross platform version of msbuild) to implement a very simple content processor.

So what we want to do is this: be able to add a .png to the Assets folder in our project and have some magic happen which turns that .png file into an .etc1 compressed texture, saves the alpha channel of the .png file to a .alpha file, and has those files appear in the .apk. To do this we are going to need a few things

  1. A Custom MSBuild Task to split out and convert the files
  2. A .targets file in which we can hook into the Xamarin.Android build process at the right points to call our custom task.
  3. A way of detecting where the etc1tool is installed on the target system.

We’ll start with the .targets file. First thing we need to know is where in the Xamarin.Android build process we need to do our fancy bait and switch of the assets. Turns out, after looking into the Xamarin.Common.CSharp.targets file, the perfect place to hook in is between the UpdateAndroidAssets target and the UpdateAndroidInterfaceProxies target. At the point where these targets run there is already a list of the assets in the project stored in the @(_AndroidAssetsDest) item group, which is perfect for what we need. Getting the location of the etc1tool is also a bit of a breeze, because again Xamarin have done the hard work for us: there is an $(AndroidSdkDirectory) property onto which we just need to append tools/etc1tool in order to run the tool. So that’s 2) and 3) kinda sorted. Let’s look at the code for the custom Task.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class CompressTextures : Task
{
	[Required]
	public ITaskItem[] InputFiles { get; set; }

	[Required]
	public string AndroidSdkDir { get; set; }

	[Output]
	public ITaskItem[] OutputFiles { get; set; }

	public override bool Execute ()
	{
		Log.LogMessage (MessageImportance.Low, "  CompressTextures Task");

		List<ITaskItem> items = new List<ITaskItem> ();
		var etc1tool = new Etc1Tool ();
		etc1tool.AndroidSdkDir = AndroidSdkDir;

		foreach (var item in InputFiles) {
			if (item.ItemSpec.Contains (".png")) {
				var etc1file = item.ItemSpec.Replace (".png", ".etc1");
				var alphafile = item.ItemSpec.Replace (".png", ".alpha");
				byte[] data = null;

				// extract the alpha channel, one byte per pixel
				using (var bitmap = (Bitmap)Bitmap.FromFile (item.ItemSpec)) {
					data = new byte[bitmap.Width * bitmap.Height];
					for (int y = 0; y < bitmap.Height; y++) {
						for (int x = 0; x < bitmap.Width; x++) {
							var color = bitmap.GetPixel (x, y);
							data [(y * bitmap.Width) + x] = color.A;
						}
					}
				}

				if (data != null)
					File.WriteAllBytes (alphafile, data);

				// compress the colour data to ETC1
				etc1tool.Source = item.ItemSpec;
				etc1tool.Destination = etc1file;
				etc1tool.Execute ();

				items.Add (new TaskItem (etc1file));
				items.Add (new TaskItem (alphafile));

				// remove the original .png so it is not packaged
				if (File.Exists (item.ItemSpec)) {
					try {
						File.Delete (item.ItemSpec);
					} catch (IOException ex) {
						// read only error??
						Log.LogErrorFromException (ex);
					}
				}

			} else {
				items.Add (item);
			}

		}
		OutputFiles = items.ToArray ();
		return !Log.HasLoggedErrors;
	}

	public class Etc1Tool {

		public string Source { get; set; }

		public string Destination { get; set; }

		public string AndroidSdkDir { get; set; }

		public void Execute() {

			var tool = Path.Combine (AndroidSdkDir, "tools/etc1tool");

			var process = new System.Diagnostics.Process ();
			process.StartInfo.FileName = tool;
			process.StartInfo.Arguments = string.Format (" {0} --encode -o {1}", Source, Destination);
			process.StartInfo.CreateNoWindow = true;
			process.Start ();
			process.WaitForExit ();
		}
	}
}

I’m not going to go into all the ins and outs of writing msbuild tasks, that is what google and bing are for :). But if you look at the code we have two [Required] properties, the AndroidSdkDir and the InputFiles. The InputFiles are going to be the list of files we get from @(_AndroidAssetsDest), and the AndroidSdkDir is obviously the $(AndroidSdkDirectory) property. We also have an OutputFiles property which we use to populate a list with our new files once we have converted them. The code in the Execute method itself should be fairly easy to follow. For each of the .png files we extract the alpha channel and save that to a .alpha file, then call out to the etc1tool to compress the .png file to an .etc1 file. Note we also delete the original file so it does not get included in the final .apk. Don’t worry, this is a file in the obj/<Configuration>/assets directory, not the original file we added to the project :). Now we could make this more robust and make it conditional so it doesn’t compress every .png in the assets list, but for now this will do the trick. So with the task code done, the .targets file now looks like this.

<?xml version="1.0" encoding="UTF-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="InfiniteSpace.Framework.Build.Tasks.CompressTextures" AssemblyFile="InfiniteSpace.Framework.Build.Tasks.dll"/>
 
  <Target Name="_CompressAssets" AfterTargets="UpdateAndroidAssets" 
      BeforeTargets="UpdateAndroidInterfaceProxies">
     <CompressTextures InputFiles="@(_AndroidAssetsDest)" AndroidSdkDir="$(AndroidSdkDirectory)">
        <Output TaskParameter="OutputFiles" ItemName="_CompressedTextures"/>
     </CompressTextures>
     <Touch Files="@(_CompressedTextures)" />
  </Target>
</Project>

Again this should be fairly easy to follow. The important bits are the AfterTargets and BeforeTargets values; this is where we hook into the build process. The next step is to include this .targets file in our project. We do this by adding the following line just under the Import statement for Xamarin.Android.CSharp.targets (or Novell.MonoDroid.CSharp.targets)

<Import Project="$PATH$/InfiniteSpace.Framework.Build.Tasks/Infinitespace.Common.targets" />

Now the $PATH$ bit depends on where you put the build tasks. For me, I just added the project to my solution and used “../InfiniteSpace.Framework.Build.Tasks/Infinitespace.Common.targets”, then did a small tweak in the .targets file so it loaded the assembly from the debug folder

AssemblyFile="./bin/$(Configuration)/InfiniteSpace.Framework.Build.Tasks.dll"

This worked for me in Xamarin Studio on the Mac, and it sort of worked in Visual Studio on Windows. However, in both IDEs, if you want to change the Task code you need to close and re-load the IDE, since the assembly gets loaded during the build process and cannot be overwritten after that.

So with the msbuild task stuff hooked in, you should now be able to add a .png file to your Assets folder and have it produce an .etc1 and a .alpha file in your .apk. After that you can just load the .etc1 and .alpha files as you would any other resource. The code for this blog entry includes a sample project so you can see exactly how to load the files and use them for alpha.
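For reference, here is a hedged sketch of what loading the .alpha file might look like; the file is just one raw alpha byte per pixel, so you need to know the texture’s width and height up front, and the exact GL.TexImage2D overload may vary with your OpenTK version (the LoadAlphaTexture name is mine):

static int LoadAlphaTexture (Activity activity, string filename, int width, int height)
{
  byte[] data;
  using (var s = activity.Assets.Open (filename))
  using (var ms = new MemoryStream ()) {
    s.CopyTo (ms);
    data = ms.ToArray ();
  }
  int tid = GL.GenTexture ();
  GL.BindTexture (All.Texture2D, tid);
  GL.TexParameter (All.Texture2D, All.TextureMagFilter, (int)All.Linear);
  GL.TexParameter (All.Texture2D, All.TextureMinFilter, (int)All.Nearest);
  // single channel texture, one alpha byte per pixel
  GL.TexImage2D (All.Texture2D, 0, (int)All.Alpha, width, height, 0,
    All.Alpha, All.UnsignedByte, data);
  return tid;
}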

As mentioned already the CompressTextures task could be improved. Some ideas might be

  • Add the ability to compress other formats (PVRTC, ATITC, S3TC)
  • Add the ability to read additional metadata from the TaskItem to control whether a file should be compressed at all (see the sketch just after this list)
  • Add support for resizing to a power of two (POW2); ETC1 only supports POW2 textures I think.. PVRTC certainly only supports POW2
  • Add support for a colour key, which wouldn’t save the .alpha file
  • Add support for compressing the alpha channel to etc1
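For the metadata idea, ITaskItem.GetMetadata makes this straightforward; a hedged sketch for the top of the Execute loop, where the TextureCompression metadata name is made up for illustration:

// skip items that opt out via
// <AndroidAsset Include="ui.png"><TextureCompression>None</TextureCompression></AndroidAsset>
if (item.GetMetadata ("TextureCompression") == "None") {
	items.Add (item);
	continue;
}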

I’ll leave these ideas with you for the moment; I might look at implementing them myself at some point in the future. I know I mentioned looking at PVRTC, ATITC and S3TC texture support in my last article, and I assure you I will get to that soon. In the meantime, have fun playing with the code and I hope you find it useful.

The code for this blog entry can be downloaded from here.

When developing for iOS or Windows Phone you don’t really need to take into account different screen resolutions. Yes, the iPhone and iPad do scale things differently, but on the iPad you provide some special content which will allow the system to scale your game and still have it look nice.

On Android it’s a different environment: there are many different devices, with different capabilities, not only in screen resolution but also CPU, memory and graphics chips. In this particular post I’ll cover how to write your game in such a way that it can handle all the different screen resolutions that the Android eco-system can throw at you.

One of the nice things about XNA is that it has been around a while, and when you develop for Xbox Live you need to take into account screen resolutions because everyone has a different sized television. I came across this blog post which outlines a neat solution for handling this particular problem for 2D games. However, rather than just bolting this code into a MonoGame Android project, I decided to update the ScreenManager class to handle multiple resolutions. For those of you that have not come across the ScreenManager class, it is used in many of the XNA samples to handle transitions of screens within your game. It also helps you break up your game into “Screens”, which makes for more maintainable code.

The plan is to add the following functionality to the ScreenManager

  1. The ability to set a virtual resolution for the game. This is the resolution that your game is designed to run at; the screen manager will then use this information to scale all the graphics and input so that it works nicely at other resolutions.
  2. Expose a Matrix property called Scale which we can use in the SpriteBatch to scale our graphics.
  3. Expose a Matrix property called InputScale, which is the inverse of the Scale matrix, so we can scale the Mouse and Gesture inputs into the virtual resolution.
  4. Expose a Vector2 property called InputTranslate so we can translate our mouse and gesture inputs. This is because as part of the scaling we will make sure the game is always centered, so we will see a border around the game to take into account aspect ratio differences.
  5. Add a Viewport property which will return the virtual viewport for the game rather than use the GraphicsDevice.Viewport.

We need to define a few private fields to store the virtual width/height and a reference to the GraphicsDeviceManager.

private int virtualWidth;
private int virtualHeight;
private GraphicsDeviceManager graphicsDeviceManager;
private bool updateMatrix = true;
private Matrix scaleMatrix = Matrix.Identity;

Next we add the new properties to the ScreenManager. We should probably have local fields for these, as it would save allocating a new Vector2/Viewport/Matrix each time the property is accessed, but for now this will work; we can optimize it later.

public Viewport Viewport {
  get { return new Viewport(0, 0, virtualWidth, virtualHeight); }
}

public Matrix Scale { get; private set; }

public Matrix InputScale {
  get { return Matrix.Invert(Scale); }
}

public Vector2 InputTranslate {
  get { return new Vector2(GraphicsDevice.Viewport.X, GraphicsDevice.Viewport.Y); }
}

The constructor needs to be modified to include the virtual width/height parameters and to resolve the GraphicsDeviceManager from the game.

public ScreenManager(Game game, int virtualWidth, int virtualHeight) : base(game)
{
  // set the virtual environment up
  this.virtualHeight = virtualHeight;
  this.virtualWidth = virtualWidth;
  this.graphicsDeviceManager = (GraphicsDeviceManager)game.Services.GetService(typeof(IGraphicsDeviceManager));
  // we must set EnabledGestures before we can query for them, but
  // we don't assume the game wants to read them.
  TouchPanel.EnabledGestures = GestureType.None;
}

Next is the code to create the scale matrix. Update the Scale property to look like this. We use the updateMatrix flag to control when to re-generate the scaleMatrix, so we don’t have to keep updating it every frame.

private Matrix scaleMatrix = Matrix.Identity;
public Matrix Scale {
  get {
    if (updateMatrix) {
      CreateScaleMatrix();
      updateMatrix = false;
    }
    return scaleMatrix;
  }
}

Now implement the CreateScaleMatrix method. This method calculates a Matrix which we will use to tell the SpriteBatch how to scale the graphics when they finally get drawn. For example, running an 800×480 game on a 1280×768 display gives a scale factor of 1.6 in each direction.

protected void CreateScaleMatrix() {
  scaleMatrix = Matrix.CreateScale(
    (float)GraphicsDevice.Viewport.Width / virtualWidth,
    (float)GraphicsDevice.Viewport.Height / virtualHeight, 1f);
}

So what we have done so far is code up all the properties we need to make this work. There are a few other methods we need to write. These methods will set up the GraphicsDevice viewport and ensure that we clear the backbuffer with Color.Black so we get that nice letterbox effect.

First thing to do is to update the Draw method of the ScreenManager to call a new method, BeginDraw. This method will set up the viewports and clear the backbuffer.

public override void Draw(GameTime gameTime) {
  BeginDraw();
  foreach (GameScreen screen in screens)
  {
    if (screen.ScreenState == ScreenState.Hidden)
      continue;
    screen.Draw(gameTime);
  }
}

The BeginDraw method calls a bunch of other methods to set up the viewports. Here is the code

protected void FullViewport ()
{
	Viewport vp = new Viewport ();
	vp.X = vp.Y = 0;
	vp.Width = graphicsDeviceManager.PreferredBackBufferWidth;
	vp.Height = graphicsDeviceManager.PreferredBackBufferHeight;
	GraphicsDevice.Viewport = vp;
}

protected float GetVirtualAspectRatio ()
{
	return (float)virtualWidth / (float)virtualHeight;
}

protected void ResetViewport ()
{
	float targetAspectRatio = GetVirtualAspectRatio ();
	// figure out the largest area that fits in this resolution at the desired aspect ratio
	int width = graphicsDeviceManager.PreferredBackBufferWidth;
	int height = (int)(width / targetAspectRatio + .5f);
	bool changed = false;
	if (height > graphicsDeviceManager.PreferredBackBufferHeight) {
		height = graphicsDeviceManager.PreferredBackBufferHeight;
		// PillarBox
		width = (int)(height * targetAspectRatio + .5f);
		changed = true;
	}
	// set up the new viewport centered in the backbuffer
	Viewport viewport = new Viewport ();
	viewport.X = (graphicsDeviceManager.PreferredBackBufferWidth / 2) - (width / 2);
	viewport.Y = (graphicsDeviceManager.PreferredBackBufferHeight / 2) - (height / 2);
	viewport.Width = width;
	viewport.Height = height;
	viewport.MinDepth = 0;
	viewport.MaxDepth = 1;
	if (changed) {
		updateMatrix = true;
	}
	GraphicsDevice.Viewport = viewport;
}

protected void BeginDraw ()
{
	// Start by resetting the viewport to the full backbuffer
	FullViewport ();
	// Clear to Black
	GraphicsDevice.Clear (Color.Black);
	// Calculate the proper viewport according to the aspect ratio
	ResetViewport ();
	// and clear that
	// This way we are gonna have black bars if the aspect ratio requires it and
	// the clear color on the rest
	GraphicsDevice.Clear (Color.Black);
}

So first thing we do is reset the full viewport to the size of the PreferredBackBufferWidth/Height and then clear it. Then we reset the viewport to take into account the aspect ratio of the virtual viewport, calculate the vertical/horizontal offsets to center the new viewport, and then clear that just to be sure. That is all the code changes for the ScreenManager. To use it, all we need to do is add the extra parameters when we create the new ScreenManager, like so

screenManager = new ScreenManager (this, 800, 480);

You will need to pass in the resolution your game was designed for, in this case 800×480.
Then in all the places where we call SpriteBatch.Begin() we need to pass in the screenManager.Scale matrix like so

spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null, ScreenManager.Scale);

Note that the SpriteBatch has a number of overloaded Begin methods; you will need to adapt your code if you use things like SamplerState, BlendState, etc. Each of the game screens should already have a reference to the ScreenManager if you follow the sample code from Microsoft. Also, if you make use of GraphicsDevice.Viewport in your game to place objects based on screen size (like UI elements), that will need to be changed to use the ScreenManager.Viewport instead so they are placed within the virtual viewport. So in the MenuScreen the following call would change from

position.X = GraphicsDevice.Viewport.Width / 2 - menuEntry.GetWidth(this) / 2;

to this

position.X = ScreenManager.Viewport.Width / 2 - menuEntry.GetWidth(this) / 2;

This should be all you need. In the next post we will look at the changes we need to make to the InputState.cs class to get the mouse and gesture inputs scaled as well.
You can download a copy of the modified ScreenManager class here.

So …. supporting compressed textures on Android is a right pain. There are a number of different graphics chips in play, and they all support different texture compression formats; not very helpful. The only common format that is supported by all devices is the ETC1 format (ETC2 is GLES 3.0 only); the problem with this format is it doesn’t support an alpha channel. Most game developers out there will know that the alpha channel is kinda important for games. So how can we make use of ETC1 and get an alpha channel at the same time?

There are a number of different methods that I’ve found on the net, mostly to do with storing the alpha channel separately and combining them in the fragment shader; this article seems to contain the most accepted solutions. But personally I hate the idea of having to have two textures, or messing about with texture coordinates to sample the data from different parts. If only there was another way… well, let’s turn back the clock a bit and use a technique we used before such a thing as an alpha channel even existed.

I’m talking about using a ColorKey. This is where you define a single colour in your image as the transparent colour; it’s used a lot in movies these days for green screen work. We used this technique a lot back in the day when video hardware didn’t even know what an alpha channel was: you just skip over the bits of the image that match the colour key and hey presto, you get a transparent image :).

So let’s take a look at this image.
[Image: f_spot — the sprite with its original alpha channel]

It has a nice alpha channel. But if we replace the alpha with a constant colour like so

[Image: f_spot_rgb — the same sprite with the alpha replaced by a solid colour]

we can then re-write our fragment shader to just detect this colour and set the alpha channel explicitly, without having to mess about with other bitmaps or change texture coordinates. So our fragment shader simply becomes

uniform lowp sampler2D u_Texture;
varying mediump vec2 v_TexCoordinate;

void main()
{
  vec4 colorkey = vec4(1.0,0.0,0.96470588,0.0);
  float cutoff = 0.2;
  vec4 colour = texture2D(u_Texture, v_TexCoordinate);
  if ((abs(colour.r - colorkey.r) <= cutoff) &&
      (abs(colour.g - colorkey.g) <= cutoff) &&
      (abs(colour.b - colorkey.b) <= cutoff)) {
       colour.a = 0.0;
  }
  gl_FragColor = colour;
}

In this case I hardcoded the colour I’m using for the ColorKey, but this could easily be passed in as a uniform if you wanted the flexibility of being able to change it.
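If you did want it as a uniform, the shader would declare something like uniform lowp vec4 u_ColorKey; and the C# side sets it after the program is linked. A sketch using the OpenTK ES 2.0 bindings ('program' being your linked shader program handle):

int colorKeyLocation = GL.GetUniformLocation (program, "u_ColorKey");
GL.UseProgram (program);
// the same purple/pink key colour that was hardcoded in the shader
GL.Uniform4 (colorKeyLocation, 1.0f, 0.0f, 0.96470588f, 0.0f);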

With this in place we can now use ETC1 textures across all Android devices and get an alpha channel. While it’s not a full alpha channel (with semi transparency) it will probably be enough for most games. You can generate your compressed textures using the ‘etc1tool’ provided with the Android SDK. It’s located in the tools folder of the SDK and you can just call

  etc1tool infile --encode -o outfile

You can then include the resulting outfile in your Assets folder and set its build action to ‘AndroidAsset’, then use the following code to load the texture

static int LoadTextureFromAssets (Activity activity, string filename)
{
  using (var s = activity.Assets.Open (filename)) {
    using (var t = Android.Opengl.ETC1Util.CreateTexture (s)) {
      int tid = GL.GenTexture ();
      GL.ActiveTexture (All.Texture0);
      GL.BindTexture (All.Texture2D, tid);
      // setup texture parameters
      GL.TexParameter (All.Texture2D, All.TextureMagFilter, (int)All.Linear);
      GL.TexParameter (All.Texture2D, All.TextureMinFilter, (int)All.Nearest);
      GL.TexParameter (All.Texture2D, All.TextureWrapS, (int)All.ClampToEdge);
      GL.TexParameter (All.Texture2D, All.TextureWrapT, (int)All.ClampToEdge);
      Android.Opengl.ETC1Util.LoadTexture ((int)All.Texture2D, 0, 0, (int)All.Rgb, (int)All.UnsignedShort565, t);
      return tid;
    }
  }
}

Note you’ll need to add using clauses for the various OpenTK namespaces used in the code.
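For the code above that means something like this (assuming you are on the ES 1.1 bindings; swap in OpenTK.Graphics.ES20 if you target GL ES 2.0):

using Android.App;
using OpenTK.Graphics.ES11; // provides GL and the All enum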

Now, there is a problem with this technique: because of the way ETC1 works you will more than likely get some compression artefacts on the resulting image. In my case I ended up with a purple/pink line around the image I was rendering. So perhaps that colour isn’t the best choice in this case.

[Screenshot: the colour-keyed sprite showing a purple/pink fringe around its edges]

So I tried again, this time with a black colour key, which might help reduce the compression artefacts around the edges of the image. But I had to make some changes to the shader to make it a bit more generic and to handle a black colour key. The resulting shader turned out to be as follows.

uniform lowp sampler2D u_Texture;
varying mediump vec2 v_TexCoordinate;

void main()
{
  float cutoff = 0.28;
  vec4 colour = texture2D(u_Texture, v_TexCoordinate);
  if ((colour.r <= cutoff) &&
      (colour.g <= cutoff) &&
      (colour.b <= cutoff)) {
       colour.a = colour.r;
  }
  gl_FragColor = colour;
}

You can see we are just using an RGB cutoff value to detect the black in the image and turn that into transparency. Note that I’m using the red channel to set the transparency rather than just using 0.0; hopefully this will help with the blending. This produced the following result.

[Screenshot: the black colour key version, with only a slight dark edge around the sprite]

There is a slight black edge around the image, but it is probably something you can get away with. The only issue with this is you can’t use black in your image 🙁 or if you do, it has to be above the colour cutoff in the shader; that will require some testing to see what values you can get away with.

Well, hopefully this will be useful for someone. Next time around I’ll be going into how to support texture compression formats like PVRTC, ATITC and S3TC on your Android devices. These formats should cover most of the devices out there (but not all) and support a full alpha channel, but if you are after full compatibility then ETC1 is probably the way to go.

In the meantime, the code from this article is available on github.