McMyAdmin 2.1 and Minecraft 1.3 Vanilla Servers

July 30, 2012 at 11:29 AM by PhonicUK

A few changes in Vanilla Minecraft 1.3 are going to necessitate changes to existing functionality in McMyAdmin. These are only temporary measures until the modding API is available.

Prior to 1.3, the Minecraft server logged all attempts to execute commands to the console. This allowed McMyAdmin to intercept commands being run so it could perform actions requested by the user.

As of 1.3, the server only logs commands for opped users, which means that McMyAdmin can no longer delegate commands to non-op users or implement its own commands this way. It also shows a 'Command Not Recognised' message when a non-standard command is used.

The workaround is that in 1.3, McMyAdmin commands will be usable as !command instead of /command - this does have the unfortunate side-effect that everyone on the server can see you using commands. Again this is only temporary until the modding API is available and McMyAdmin can handle its commands as normal.

Note that this only affects Vanilla servers; those using CraftBukkit are unaffected.

Posted in: McMyAdmin


Mono - Checking if your application is bundled with mkbundle or similar.

April 4, 2012 at 10:40 PM by PhonicUK

'mkbundle' is a utility that ships with Mono that allows you to embed the Mono runtime into your application so you are left with just a single executable file that doesn't require Mono to be installed on the target system. You can either embed just certain parts of Mono or the entire thing.

There are cases where you may want to know at runtime whether you are running as part of a bundle or not; thankfully, this is extremely simple:

IsBundled = (typeof(int).Assembly.Location == "mscorlib.dll");

When the application is not bundled, the assembly location for the standard objects will be something like /usr/lib/mono/2.0/mscorlib.dll - but when it is bundled then it's just mscorlib.dll since that file is embedded in the current executable.
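As a fuller sketch, the check can be captured once at startup and branched on; the messages and class name here are my own, not part of any Mono API:

```csharp
using System;

class BundleCheck
{
    // mkbundle-embedded assemblies report just a bare file name as their
    // location, rather than a full path like /usr/lib/mono/2.0/mscorlib.dll.
    static readonly bool IsBundled =
        typeof(int).Assembly.Location == "mscorlib.dll";

    static void Main()
    {
        Console.WriteLine(IsBundled
            ? "Running from an mkbundle executable"
            : "Running on an installed runtime");
    }
}
```

One use for this is deciding where to look for companion files: a bundled executable should resolve paths relative to itself rather than a Mono install prefix.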

Posted in: C# Development | Linux | Mono Development


Internet Explorer 9 - Better, but not good enough.

April 3, 2012 at 2:22 PM by PhonicUK

The impression Microsoft seem to be trying to give with their new "The browser you loved to hate" adverts basically comes across like this:

"Hey guys! Look we know we used to suck, but look how good we are now! Look we can do HTML5 and CSS3 and all the awesome and cool stuff you like!"

If some recent statistics are accurate, it would seem to have worked, at least for now. I would wonder how many of those users are simply people 'trying it out' only to go back to another browser later.

The problem Microsoft have is that it is not enough simply to be 'as good as' the other browsers - you must be better than them if you want to win market share, and they're not even keeping up with some of the basic functionality. It's 'nearly as good as' if you're a tad generous.

It's not enough to simply support the newest technologies. While that will win back a few friends among developers who gave it flak for being too far behind to develop for, the end user experience is still a major issue.

I'm just going to outline some of the features IE most needs that are standard in other browsers:

1. A mechanism for extending browser functionality that does not rely on executing native code and instead operates in a scripted, sandboxed environment. 

You can write Chrome and Firefox extensions that provide meaningful and useful functionality purely in JavaScript. Not only does this mean extensions are cross-platform (not that Microsoft cares about this aspect) and that some components can be shared between different browsers (which they would benefit from), but having extensions isolated in this manner is also a major security benefit over native code, which has much greater access to the system.

2. Friendly browser configuration.

IE's configuration page doesn't appear to have changed... well, ever. It's clunky, difficult to navigate, and some of the most important settings are buried in a long, unhelpful list in the 'Advanced' tab. On a slow mobile connection and want to stop your browser downloading images to save bandwidth and time? Have fun finding that setting.

3. Tab 'Pinning'

For a lot of people, their browser isn't just an application. It's an operating environment all on its own.

The ability to quickly and easily access frequently used web applications without using significant screen real-estate is very important to the power user. I can 'pin' my 3 Gmail tabs and other frequently used web applications while consuming only as much space in the tab bar as one regular tab. This is especially important for things like Gmail that are left open constantly.

4. Themes and appearance customization.

IE9 goes very much for the 'one size fits all' approach to its appearance. You'd be forgiven for dismissing this point as purely cosmetic, but aside from benefiting users who really need to alter the browser's appearance (both Chrome and Firefox have themes available with extra-large, high-contrast buttons for users with mobility or visual impairments), it improves the experience for everyone else who wants to customize the application that they likely spend more time using than any other.

IE9 would need to implement all of the above to simply 'catch up' with the other major browsers. And to go back to my original point, it needs to do more than that. 

So the above points aside, the top features I'd like to see in Internet Explorer that other browsers don't have before I'd consider recommending it to everyone:

1. Prevent 3rd party applications from installing browser plugins/extensions without the user's consent.

A user's web browser is a major vector for spyware/adware/malware to expose itself to the user, often in the form of toolbars or ads inserted into pages.

A mechanism whereby a 3rd party extension is not loaded unless the user explicitly grants permission (via something like a UAC prompt, so an application can't grant permission to itself) would help close this exposure vector. Better still, a public rating and commenting system would give users some idea of how a plugin actually behaves before allowing it to run.

2. Better bookmark management and searching.

Chrome comes close to ideal bookmark management. I can select large numbers of pages and drop them all in a folder. But it's missing one thing: content searching.

Letting me search through just the titles of the bookmarks isn't massively useful in itself, but the ability to find all my bookmarks where the pages contain a phrase or a set of keywords (and then group them into a folder) would be massively useful.

3. Graphical history and bookmark browsing.

Mobile browsers have been doing this for a while, why don't desktop browsers?

If I remember that a page I bookmarked had a blue background with white text (or some other visually distinct feature) - being able to look through my recent bookmarks or history with thumbnails of the pages would make it far faster than just looking at the names and titles.

Chrome does this for your 'most visited' page but nowhere else. Storage space is vast enough that 4K worth of thumbnail for each bookmark, and for the last week or so of browser history, isn't going to leave many people clamouring for disk space. Especially if it's sensible about it: making the amount of data stored configurable and not taking snapshots when disk space is low.

If Microsoft can implement all of the above (and more) then I'd possibly consider using it myself and advising the general public to do the same. In the meantime? Meh.

Posted in: Web Development | Software


McMyAdmin 2 Development Update

March 31, 2012 at 3:40 PM by PhonicUK

When I released the first McMyAdmin 2 beta, the plan was that I'd have additional betas with fewer restrictions as things stabilized. However, due to time constraints (and the beta demonstrating that it didn't collapse in any unexpected ways), I decided to release no more public betas and instead jump straight to the 'retail' release. This also reduced the strain on GSPs from users trying to run beta versions of MCMA (the first beta had an 8-player limit to discourage use on production servers).

Unfortunately this has meant skipping a couple of features for the time being (namely Multiworld and the File Manager) to reach a sensible release time. But those features will be added fairly quickly after the main release.

So as McMyAdmin 2 draws closer to completion, I thought I'd give everyone a rundown of the remaining things that need doing before the retail release:

  • Adding new users to McMyAdmin (currently requires modifying a file to add users)
  • Hiding features from users that they don't have permission to use (at the moment they can see them, but will get access denied errors when they try to use them)
  • Testing MCMA_Compat r20A formally (New unit tests need writing etc)
  • First-start wizard (Like MCMA1 has)
  • glibc 2.5 builds
  • Pentesting (Checking how secure MCMA2 is)
  • Stress testing (MCMA1 flakes after around 400 simultaneous online users, MCMA2 should cope with > 1000)
  • More API documentation
  • Final deployment testing
  • RTM builds (for service providers)
The McMyAdmin Enterprise partners will have access to the main release slightly before it is publicly available, so they can start working on their migration path before users start demanding V2 en masse.
 
Release date wise, I'd love to give a fixed date and call it that. But I couldn't honestly say that I'd be able to stick to it. I could do the glibc 2.5 builds and find that something breaks completely and have to spend more time figuring out what. While the list isn't very long, there are still a lot of unknowns.
 
I think it would be accurate to say though that I can release before the end of April. Hopefully within the first week or two of it (and we're at the end of March now). Those who know me well will regard this as pretty unprecedented because I almost never give any indication of even vague release times unless I think there's a reasonable chance of actually hitting it.
 
While doing a final retail release means a longer delay than if I'd just released betas as smaller increments with just minimal changes to the retail release, I think the end result will be worthwhile for everyone.
 
And just to make it clear for everyone - Professional Licences purchased for McMyAdmin 1 will be usable on McMyAdmin 2 at no extra cost. No upgrade fee, nothing. You buy the software, not a specific version of it.

Posted in: McMyAdmin


.enterPressed() for jQuery

March 17, 2012 at 5:58 PM by PhonicUK

A simple and reusable way to trap when enter is pressed on an element...

$.fn.enterPressed = function (callback) {
	return this.keypress(function (event) {
		// jQuery normalises key codes into event.which; 13 is Enter.
		if (event.which === 13) {
			event.preventDefault();
			// Invoke the callback with the element as 'this',
			// so $(this) works inside it.
			callback.call(this);
		}
	});
};

$("#myElement").enterPressed(function(){alert($(this).val());});

Simple enough, isn't it?

Posted in: Web Development


On Windows "8" Server

March 13, 2012 at 8:24 PM by PhonicUK

I recently decided to try out the new Windows Server "8" Beta. I can only assume from Microsoft's insistence on putting quote marks around the 8 that they intend to change the name at some point, and that the 8 is purely a nod to the fact it uses the same Metro interface used in Windows 8.

I'm running it in VMware ESXi 5 (with the patch from November; there's a good guide over at virtuallyGhetto on getting it installed) with 2 virtual CPU cores and 8GB of RAM. Despite objections from the installer, you can in fact use the VMware guest OS tools for 2008 R2 quite happily (just run the installer in compatibility mode).

At a first thought, the idea of having the Metro UI on a server operating system sounds incredibly daft. But in practice, it makes very little odds one way or the other.

If you're like me, you use Server 2008/7 primarily by hitting the start key on your keyboard, typing the first few letters of the application you want to run or the setting you want to change, and selecting the item you want from the list of results. In practice this doesn't change much with 8, it just looks different.

The only practical difference is that Settings are shown separately from Apps, and it's not quite clever enough to automatically show you the list of settings when there are results there but no matching apps.

That minor detail aside, it's not a significant impact. The solid colour background means it behaves very nicely over Remote Desktop.

One of the things that did mildly irk me is the lack of the Desktop Experience feature. On previous Windows Server versions this allowed you to enable Aero, which was nice if you either wanted to use the server OS as a development machine / workstation, or to give remote clients the Aero effects when using Remote Desktop services. 

On the subject of the user interface though, it remains characteristically minimal but clearly leans more towards the style found in the rest of Windows 8. The option for a 'classic' user interface (which, while missing from Windows 7, remains present in Server 2008 R2) is nowhere to be seen, and by all appearances has been completely replaced with this new style with very thick borders...

Amongst the welcome changes is a new server manager, which has a host of features making it much more convenient to manage multiple servers.

You can add servers either via Active Directory or just by hostname/IP. It's very simple and convenient, so you'll find no objections here.

I was surprised to find that the IIS manager remains exactly the same. It's the same version of IIS found in 2008 R2 making it look somewhat out of place compared to the new server manager.

 

The new task manager is entirely pleasant, no complaints here. Anyone who's used the 'current' task manager on anything with 8 or more logical processors will attest to how useless it becomes, short of taking over your entire screen, when you want to examine the load on individual cores/processors. The new one is nice for a quick glance at what's going on without having to use the comparatively heavy Resource Monitor (which, like IIS, remains unchanged at the moment).

I'm sure the changes in user interface will prompt the usual truckload of complaints, but I'm hard pressed to find anything actually 'wrong' with the changes. I was expecting to end up on IIS 9 but maybe that's for the final release. If the Beta is at all representative of the direction the final release is moving in then I'm reasonably confident that the result will be decent.

Posted in: Servers

Tags: , , , , , ,

Binding outgoing .Net web service calls to a specific IP

March 13, 2012 at 5:51 PM by PhonicUK

If you're accessing a remote .Net web service where the remote server checks the IP address of the incoming request, either for authentication or some other reason, you can quickly run into some interesting problems on servers with multiple IP addresses, either on different physical interfaces or multiple addresses routed to the same NIC.

So there are some cases where you need to change which of a machine's local IP addresses a request will be made from.

While doing this with .Net web services isn't massively complicated, it is tricky to find any documentation on how to do it.

In this example, I have a remote .Net web service called StatsReporter. I've added the reference and have the generated code. Now I need to extend it:

using System;
using System.Net;

namespace MyProject.StatsReporter
{
    public partial class StatsReporter : System.Web.Services.Protocols.SoapHttpClientProtocol
    {
        protected override WebRequest GetWebRequest(Uri uri)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
            request.ServicePoint.BindIPEndPointDelegate = BindIPEndPointCallback;
            request.Proxy = null;
            return request;
        }

        public IPEndPoint BindIPEndPointCallback(ServicePoint servicePoint, IPEndPoint remoteEndPoint, int retryCount)
        {
            IPAddress bindAddress = IPAddress.Any;

            try
            {
                bindAddress = IPAddress.Parse(GetBindIPAddress());
            }
            catch
            {
                // Fall back to IPAddress.Any if the address can't be parsed.
            }

            // Port 0 lets the OS pick an ephemeral source port.
            return new IPEndPoint(bindAddress, 0);
        }
    }
}

The original generated code is implemented as a partial class, so the above code can be added to a new file. You'll need to implement GetBindIPAddress() yourself to return the IP that you actually want to use. But aside from that it works.

So what's going on? All you're doing is replacing the WebRequest object that gets created when a service call is made with your own: one that has its BindIPEndPointDelegate set to a callback returning the IP address to bind to.

In this example I deliberately set the proxy to null to avoid having the service go and 'look' for a proxy server during the first call (which can cause a significant delay) - if your code is going to operate in an environment where non-transparent proxies are being used, you'll want to remove that line.
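For what it's worth, the same delegate can be attached to any HttpWebRequest directly, not just to a generated SOAP proxy. A minimal sketch (the local address here is a placeholder from the documentation range, not a real one):

```csharp
using System;
using System.Net;

class DirectBindExample
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/");

        // Bind the outgoing socket to a specific local address; port 0
        // lets the OS choose an ephemeral source port. 192.0.2.10 is a
        // placeholder - use one of your machine's actual addresses.
        request.ServicePoint.BindIPEndPointDelegate =
            (servicePoint, remoteEndPoint, retryCount) =>
                new IPEndPoint(IPAddress.Parse("192.0.2.10"), 0);

        Console.WriteLine(request.ServicePoint.BindIPEndPointDelegate != null);
    }
}
```

Note that the delegate lives on the ServicePoint, which is shared per remote host, so it affects every request made to that host from the process.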

Posted in: C# Development


When 140 characters just isn't enough...

March 13, 2012 at 5:28 PM by PhonicUK

Every now and then I find myself wishing I had somewhere to publish thoughts that are either too long for a Twitter post, or too useful to be just a comment somewhere on Reddit.

And so the blog was born.
