


Date: Sunday, 22 Jun 2014 03:18
As many of you know, I now work for Google on the Play Games Team. We provide APIs for game developers, implementing useful features like leaderboards and achievements, so the developer doesn't have to. While many Android developers are using our services, adoption on the web could be better, so let's take a look at how to integrate Google Play Games Services achievements into a web game.

Getting ready

Before writing any code, you will need to:
  1. Become a Google developer and add the game to the Developer Console (I've done this with my Snake game).
  2. Link the game to at least one web URL.
  3. Create a couple of basic achievements for your application. If you have an incremental achievement, check the optional box (I recommend testing one incremental and one non-incremental achievement).
  4. Set your Google+ email up as a trusted tester (or publish your game so anyone can log in).

How to do it…

To use play services, players must first authenticate with G+[2]. For the simplest method, include the following in the header of the page:
<meta name="google-signin-clientid" content="YOUR_CLIENT_ID_HERE" />
<meta name="google-signin-cookiepolicy" content="single_host_origin" />
<meta name="google-signin-approvalprompt" content="auto" />
<meta name="google-signin-callback" content="YOUR_JS_CALLBACK_FUNCTION" />
<meta name="google-signin-scope" content="https://www.googleapis.com/auth/games" />
And the G+ button:
<span id="signinButton"><span class="g-signin"></span></span>
And the necessary G+ JavaScript:
<script type="text/javascript">
    (function() {
        var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true;
        po.src = 'https://apis.google.com/js/client:plusone.js';
        var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s);
    })();
</script>
This will load the G+ script and render the sign-in button; when the user clicks it, the player is authenticated for the Games services. After authentication, G+ calls the JavaScript function specified by google-signin-callback:
function signinCallback(authResult) {
    if (authResult['status']['signed_in']) {
      document.getElementById('signinButton').style.display = 'none';
      // Safe to use API here (gapi.client.games).
    } else {
      // Update the app to reflect a signed out user
      // Possible error values:
      //   "user_signed_out" - User is signed-out
      //   "access_denied" - User denied access to your app
      //   "immediate_failed" - Could not automatically log in the user
      console.log('Sign-in state: ' + authResult['error']);
    }
}
Once authenticated you can begin to use the Games API[3]. To fetch and store a list of the achievement definitions:
var oAchievementDefinitions = {};
gapi.client.games.achievementDefinitions.list().execute(function(oResponse) {
    var aItems = oResponse.items || [];
    for (var i = 0, j = aItems.length; i < j; i++) {
        oAchievementDefinitions[aItems[i].id] = aItems[i];
    }
});
Before calling APIs on behalf of the player, fetch the player object for the authenticated user:
gapi.client.games.players.get({playerId: 'me'}).execute(function(oPlayer) {
  // Use the player to make API calls here, oPlayer.playerId
});
To fetch a player's achievement progress:
gapi.client.games.achievements.list(
        {forceRefresh: true, playerId: oPlayer.playerId})
    .execute(function(oResponse) {
        // oResponse.items will have the state and player progress for each definition
    });
To complete an achievement:
gapi.client.games.achievements.unlock({achievementId: 'ID_OF_ACHIEVEMENT'})
    .execute(function(oResponse) {
        if (oResponse.newlyUnlocked) {
            // First time the achievement unlocked, show something in the UI
        }
    });
To increment an achievement:
gapi.client.games.achievements.increment(
        {achievementId: 'ID_OF_ACHIEVEMENT', stepsToIncrement: 1})
    .execute(function(oResponse) {
        if (oResponse.newlyUnlocked) {
            // First time the achievement unlocked, show something in the UI
        }
    });

How it works…

This is a dense article and most everything is shown through the examples above, so I will keep the writeup brief. The first step is to become a developer on Google[1] and use the console to create the artifacts for your game. It's pretty straightforward, but will cost a one-time fee of $20. The two tricky parts of setup are linking an app to a game and, if you do not publish the game to make it public, setting up trusted testers.

For the purposes of a web game, linking an app means entering the URL for the game. Each URL has its own entry and therefore its own client ID (which goes in the header meta information). You can have any number of linked apps, so have at least one for production and one for development. If you have trouble authenticating with G+, make sure the URLs are correct and that you are using the right client ID.

Trusted testers are the email addresses of people who can authenticate against your game, even if the game isn't published yet. These emails should be the G+ accounts of testers, and they allow developers to authenticate before the app is publicly available. Unfortunately, since Google Play Games Services is a Google product, it currently only uses G+ for authentication. I think this is limiting, but hopefully a better authentication story can be found in the future.

Once the user authenticates, you need to create/fetch a player ID for them from the Games service (shown above). You will have a limited amount of information about the player on the player object, but more importantly you can now start calling Games services APIs for your game on the player's behalf.

The Games services API endpoints are a lot like database tables with definitions and instances. For achievements there is the list of achievement definitions (achievementDefinitions.list()), which are unique per game, and a list of achievement instances (achievements.list()), which are unique per player. The achievement instances reference the achievement definition ID and must be mapped together in your code to properly indicate the state of an achievement for the current player. Incrementing, setting, revealing, or unlocking an achievement are all calls that the developer makes against the API to indicate the player's current progress in the game.

To see a working web implementation, see version 3 of my Snake game. I have shown how to work with achievements, but leaderboards and turn-based multiplayer work much the same. Additionally, once you adopt our services, you have already done the hard work and will be ready for the new features we are frequently adding to the API. Currently, all APIs are REST, and the only optimization built into them is URL caching through headers. I have been playing with automatic client-side storage of GET requests, but there is a lot of room for improvement. It would also be cool to build a more developer-friendly wrapper around the REST API to increase adoption. If anybody takes this task up, feel free to reach out to me for reviews and help.
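The definition-to-instance mapping described above can be sketched in plain JavaScript. The sample objects and field names below are illustrative stand-ins for the REST payloads, not exact API shapes:

```javascript
// Join achievement definitions (one set per game) with achievement
// instances (one set per player) by definition ID. The sample data
// below is invented for illustration.
function mapAchievements(aDefinitions, aPlayerAchievements) {
  var oById = {};
  aDefinitions.forEach(function(oDef) {
    // Until the player's list says otherwise, treat it as locked.
    oById[oDef.id] = {definition: oDef, state: 'LOCKED'};
  });
  aPlayerAchievements.forEach(function(oAch) {
    var oEntry = oById[oAch.id];
    if (oEntry) {
      oEntry.state = oAch.achievementState;
      oEntry.currentSteps = oAch.currentSteps;
    }
  });
  return oById;
}

var oMapped = mapAchievements(
    [{id: 'a1', name: 'First Bite'}, {id: 'a2', name: 'Glutton'}],
    [{id: 'a1', achievementState: 'UNLOCKED'},
     {id: 'a2', achievementState: 'REVEALED', currentSteps: 3}]);
// oMapped.a1.state is 'UNLOCKED'; oMapped.a2.currentSteps is 3
```

In a real integration, the first array would come from achievementDefinitions.list() and the second from achievements.list() for the current player.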

References

  1. Google Developer Console
  2. G+ Client Setup
  3. Play Games API Reference
Date: Friday, 20 Jun 2014 21:45
I apologize that the site has been down for a couple of days. It was a perfect storm of events. For starters, Amazon told me about two weeks ago that my server was going to be terminated and that I should do something about it (they do this now and again, as they upgrade equipment). And like any good developer, I was procrastinating until the last minute (Wednesday this week). Then on Tuesday, I received some terrible news about a family member dying and headed out to see her before she passed. Naturally, while I was away, the date to terminate the server came and went, and Amazon shut down the server. I received a notification about it early Thursday morning (thanks Pingdom), but didn't have time to fix things until this morning. Again, I apologize for the outage and hope to have a new article out later today or tomorrow at the latest.
Date: Thursday, 05 Jun 2014 23:38
One of the less understood but powerful features of browser events is their phases. According to the W3C DOM Level 2 events spec there are three phases[1]: CAPTURING_PHASE=1, AT_TARGET=2, and BUBBLING_PHASE=3. Most browsers also implement a fourth phase[2]: NONE=0.

Getting ready

Just a quick note: everything discussed in this article applies to modern browsers (all browsers except IE < 9). Prior to IE 9, Internet Explorer used its own event system instead of conforming to the W3C spec. Additionally, while we show some JavaScript for attaching listeners and stopping events, it is important to know that JavaScript is a separate system from browser events.

To understand the capture and bubble phases, assume we have two elements, element2 inside of element1. We assign a click event to element1 and click on element2. The DOM would look something like:
-----------------------------------
| element1                        |
|   -------------------------     |
|   |element2               |     |
|   -------------------------     |
|                                 |
-----------------------------------
Now the capture phase is when the browser searches the DOM starting at the root node (usually DefaultView), until it reaches the node where the event was triggered. The capture phase would look something like:
----------------|  |---------------
| element1      |  |              | <-- event assigned node
|   ------------|  |---------     |
|   |element2    \/         |     | <-- triggering node
|   -------------------------     |
|                                 |
-----------------------------------
Now the bubble phase is when the browser searches the DOM starting at the node where the event was triggered, until it reaches the root node. The bubble phase would look something like:
---------------- /\ ---------------
| element1      |  |              | <-- event assigned node
|   ------------|  |---------     |
|   |element2   |  |        |     | <-- triggering node
|   -------------------------     |
|                                 |
-----------------------------------
Putting both phases together for the W3C model, you get:
---------------|  |-- /\ ----------
| element1     |  |  |  |         | <-- event assigned node
|   -----------|  |--|  |---------|
|   |element2   \/   |  |         | <-- triggering node
|   ------------------------------|
|                                 |
-----------------------------------
The target phase happens between the capture and bubble phases.
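As a quick reference, the numeric eventPhase values can be mapped back to their names with a small helper (a sketch; in a real listener you would pass event.eventPhase):

```javascript
// eventPhase values from the spec, plus the widely implemented NONE=0.
var EVENT_PHASES = {
  0: 'NONE',
  1: 'CAPTURING_PHASE',
  2: 'AT_TARGET',
  3: 'BUBBLING_PHASE'
};

// Returns the phase name for a numeric eventPhase value.
function fnPhaseName(nPhase) {
  return EVENT_PHASES[nPhase] || 'UNKNOWN';
}
```

Inside a listener, logging fnPhaseName(event.eventPhase) shows which phase the callback fired in.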

How to do it…

When attaching an event, the browser defaults to the bubble phase (if the event supports bubble):
document.getElementById('element1').addEventListener('click', function(event) {
    // handle bubble event
});
To attach an event to the capture phase, set the third, optional argument of addEventListener to true:
document.getElementById('element1').addEventListener('click', function(event) {
    // handle capture event
}, true);
Besides event phases, some events also have default actions, such as follow the href of an anchor tag on click. These default actions may occur before or after the DOM event phase cycle and may be prevented using (even ones that occur before the cycle):
document.getElementById('element1').addEventListener('click', function(event) {
    event.preventDefault();
});
Lastly, to stop the event system from completing the phase cycle, call:
document.getElementById('element1').addEventListener('click', function(event) {
    event.stopPropagation();
});
This will end the current phase and all subsequent phases. Here is a simple demo that attaches two events (one to the bubble phase and the other to the capture phase), and prints the target, currentTarget, and phase to the console:

See the Pen Fun With Event Phases by Matt Snider (@mattsnider) on CodePen.

How it works…

Before discussing the event cycle, let's summarize the steps that happen when the user or browser triggers an event:
  1. Event interface instance created
  2. Put event onto the queue (developer events skip this step)
  3. Event loop processes the event
  4. DOM path to triggering element set
  5. Default action (if applicable)
  6. Capture phase (can be skipped, CAPTURING_PHASE=1)
  7. Target phase (can be skipped, AT_TARGET=2)
  8. Bubble phase (can be skipped and if applicable, BUBBLING_PHASE=3)
  9. Default action (if applicable)
The event is triggered, its interface instance is created, and it is added to the event queue. There is a single event loop per DOM that pulls the next event off the queue. The browser then calculates the path from the root element to the triggering element. The event may trigger a default action either next or after the event phase cycle, and these actions may be prevented during the event phase cycle using event.preventDefault() (not all events may be prevented, such as unload). The developer can check booleans to see whether the event is cancelable using event.cancelable, or whether it was cancelled using event.defaultPrevented. Then the browser runs the capture, target, and bubble phases, triggering any necessary callbacks. The phase cycle may be short-circuited at any time by calling event.stopPropagation().

There are also events that don't bubble (such as blur, focus, or scroll); the developer can check this by looking at the boolean event.bubbles. Events that don't bubble only execute the first two phases.

Calling either event.stopPropagation() or event.stopImmediatePropagation() will cause the remainder of the capture, target, and bubble phases to stop, triggering the immediate execution of the default action (if one exists for the event). So, when called during the capture phase, the rest of that phase and the other two phases will be skipped, but when called during the bubble phase, just the remaining elements in the bubble phase are skipped. The difference between the two is that event.stopPropagation() stops the event from reaching listeners on any further elements, while event.stopImmediatePropagation() also stops the remaining listeners on the current element from executing.

The event interface passed into callback functions has two useful DOM pointers, event.target and event.currentTarget. event.target is the element on which the event was triggered, and event.currentTarget is the element whose listener is currently executing as the event moves through the phases. When event.target === event.currentTarget you are in the AT_TARGET=2 phase; otherwise you are in either the CAPTURING_PHASE=1 or BUBBLING_PHASE=3 phase.

If a default action triggers before the event phase cycle and is then prevented during the cycle, the user may see odd behavior. For example, a checkbox triggers a default action that checks the box before the event phase cycle, and if the developer prevents this behavior during the cycle, the box becomes unchecked again. There is nothing the developer can do to prevent this, so it is important to know which events have default actions and when those actions execute. Lastly, the default action of some events is to trigger other events, such as hitting the enter key while in an input element inside a form, which submits the form, but that is a discussion for another article.
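To make the ordering concrete, here is a toy dispatcher that walks a plain parent chain the way the browser walks the DOM path. It is a simulation for illustration only, not real DOM code:

```javascript
// Toy dispatcher showing the capture -> target -> bubble order and
// stopPropagation(), using plain objects instead of DOM nodes.
function fnDispatch(oTarget) {
  var aPath = [], oNode, i;
  for (oNode = oTarget; oNode; oNode = oNode.parent) {
    aPath.unshift(oNode); // root first, target last
  }
  var aLog = [];
  var bStopped = false;
  var oEvent = {
    target: oTarget,
    stopPropagation: function() { bStopped = true; }
  };
  // Capture phase: root down to (but not including) the target.
  for (i = 0; i < aPath.length - 1 && !bStopped; i++) {
    aLog.push('capture:' + aPath[i].id);
    if (aPath[i].onCapture) { aPath[i].onCapture(oEvent); }
  }
  // Target phase.
  if (!bStopped) {
    aLog.push('target:' + oTarget.id);
    if (oTarget.onBubble) { oTarget.onBubble(oEvent); }
  }
  // Bubble phase: parent of the target back up to the root.
  for (i = aPath.length - 2; i >= 0 && !bStopped; i--) {
    aLog.push('bubble:' + aPath[i].id);
    if (aPath[i].onBubble) { aPath[i].onBubble(oEvent); }
  }
  return aLog;
}

var oElement1 = {id: 'element1'};
var oElement2 = {id: 'element2', parent: oElement1};
var aOrder = fnDispatch(oElement2);
// aOrder is ['capture:element1', 'target:element2', 'bubble:element1']
```

Assigning an onCapture handler to oElement1 that calls stopPropagation() would leave only 'capture:element1' in the log, mirroring how a real capture-phase listener can short-circuit the cycle.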

References

  1. W3C Events Interface Spec
  2. MDN - event.eventPhase
  3. JavaScript Event order
Date: Monday, 19 May 2014 23:55
I was reviewing the browser event stack the other day and was reminded of a rarely used feature of addEventListener: it can automatically bind the execution context object, without requiring a call to bind or the use of a library. It is worth sharing, in case you weren't already aware of it.

How to do it…

Typically, when attaching an event, we write:
var myObj = {
    handleEvent: function (evt) {
        // 'this' will be scoped to the element, instead of myObj.
    }
};
document.getElementById('<MY_ID>').addEventListener(
    'click', myObj.handleEvent);
However, addEventListener also supports passing an object as the second argument, as long as the object defines handleEvent:
var myObj = {
    handleEvent: function (evt) {
        // 'this' will be scoped to myObj.
    }
};
document.getElementById('<MY_ID>').addEventListener('click', myObj);
The event is removed in the same fashion:
document.getElementById('<MY_ID>').removeEventListener('click', myObj);
For a full-fledged implementation, I would do something like this:
var myObj = {
    keyPressEvent: function (evt) {
        // handle key only events
    },
    clickEvent: function (evt) {
        // handle click only events
    },
    handleEvent: function (evt) {
        switch (evt.type) {
            case 'click':
                this.clickEvent(evt);
                break;
            case 'keypress':
                this.keyPressEvent(evt);
                break;
        }
    },
    init: function() {
        var el = document.getElementById('<MY_ID>');
        el.addEventListener('click', myObj);
        el.addEventListener('keypress', myObj);
    }
};
myObj.init();

How it works…

The spec for addEventListener allows developers to pass, as the second argument, either a function or an object with a function named handleEvent attached to it. The rest of the signature is identical between the two implementations. If an object is passed, then the execution context of the callback function will be set to the object, instead of the triggering element, which is usually the desired behavior. The biggest advantage here is writing less, clearer code that does exactly what you want.

This is supported by all modern browsers and has been around since addEventListener was added. Unfortunately, the first modern browser produced by Microsoft was IE9, so if you have to support IE < 9, you'll need a polyfill[1]. One other advantage of this style of event management is that the implementation of the event-specific callbacks can change or be overwritten without the need to remove and/or reattach event listeners.
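The context binding can be demonstrated without a browser: the browser invokes the listener object as oListener.handleEvent(event), a plain method call, which is why this points at the object. A minimal simulation:

```javascript
// Simulates the browser calling handleEvent() on a listener object.
var oCounter = {
  nClicks: 0,
  handleEvent: function(oEvent) {
    // 'this' is oCounter because handleEvent is invoked as a method.
    if (oEvent.type === 'click') {
      this.nClicks++;
    }
  }
};

// Stand-in for the browser dispatching two events to the object.
oCounter.handleEvent({type: 'click'});
oCounter.handleEvent({type: 'keypress'});
// oCounter.nClicks is now 1
```

With a plain function listener, this inside the callback would instead be the element the listener was attached to, which is why the object form usually saves a bind() call.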

References

  1. addEventListener, handleEvent and passing objects - has a decent IE polyfill
  2. An alternative way to addEventListener
Date: Wednesday, 07 May 2014 04:44
I have not been doing much web development lately, so it's been difficult to come up with interesting topics. However, I have been doing a lot of Android development, and since many engineers have to work across disciplines, I think an Android article will be relevant. This article will discuss how to run unit tests against your Android code, directly on an Android device or emulator.

Getting ready

You will need to install the Android SDK. The SDK includes ADB (Android Debug Bridge), which is used to access the Android device and run tests. Lastly, you need an Android device in developer mode (Settings -> About -> click on Build number repeatedly until developer mode is enabled) or to have set up an emulator. Examples in this article will use an emulator. I named my emulator android_4.4.2.

How to do it…

Start the emulator:
cd <pathToAndroidSdk/tools>
./emulator -avd android_4.4.2 -shell-serial tcp::4444,server,nowait
You can check the name of your emulator by running:
$ adb devices
List of devices attached
emulator-5554	device
Connect to the device for fun and profit:
adb -s emulator-5554 shell
# futz around
exit
Install your latest APK onto the device:
adb -s emulator-5554 install -r <path to your APK>
To run tests on the device:
adb -s <emulator name> shell am instrument -w -e class <test class> <test package>/<test runner>
# Here is an example from GmsCore
adb -s emulator-5554 shell am instrument -w -e class com.google.android.gms.games.broker.PlayerAgentTest com.google.android.gms.test/android.support.test.runner.AndroidJUnitRunner
To connect a debugger, use:
adb -s <emulator name> shell am instrument -w -e debug true -e class <test class> <test package>/<test runner>
# Here is an example from GmsCore
adb -s emulator-5554 shell am instrument -w -e debug true -e class com.google.android.gms.games.broker.PlayerAgentTest com.google.android.gms.test/android.support.test.runner.AndroidJUnitRunner

How it works…

Following these instructions, you can run any tests from the command line on your Android device or emulator. The adb service will connect to the device and run the tests directly on it, giving you the most realistic environment for executing test code. You do need to ensure that the latest APKs for your application and test code are copied to the device before running the tests, or you may see unexpected discrepancies. For instructions on how best to write tests, please read Android Testing, especially the Activity Testing tutorial.
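Since the instrument command has several moving parts, it can help to assemble it in one place. Below is a hypothetical convenience wrapper (all names are placeholders) that echoes the command instead of running it, so it can be inspected as a dry run before wiring in a real device and package:

```shell
#!/bin/sh
# Hypothetical helper that assembles the instrument command shown
# above from its parts. It echoes the command (a dry run) rather than
# executing it; the argument values here are placeholders.
build_instrument_cmd() {
  device="$1"
  test_class="$2"
  test_pkg="$3"
  runner="$4"
  echo "adb -s $device shell am instrument -w -e class $test_class $test_pkg/$runner"
}

build_instrument_cmd emulator-5554 \
    com.example.game.PlayerTest \
    com.example.game.test \
    android.support.test.runner.AndroidJUnitRunner
```

Dropping the leading echo (or piping the output to sh) would execute the assembled command for real.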
Date: Saturday, 12 Apr 2014 16:40
In Git, it is often useful to merge one specific commit from one branch into another. Frequently this happens when Using Git Interactive Rebase With Feature Branches: as you develop in the branch, you realize that one or more of your commits should be added to master right away, but not all the CLs. Enter git cherry-pick <changelist hash> for the win! The cherry-pick command allows you to merge one CL from a branch, or even straight from the reflog, into another branch.

Getting ready

For this article, let's assume you have a commit history that looks like:
fc64d55 -- d0e6ac9 -- 1edbe95 -- 24c685b -- 2f0514a  [master]
                         \
                          a70d41c -- b45af4a -- 38967f5 [branch feature1]
We want to merge an important change, b45af4a, into master.

How to do it…

The code to cherry pick the change into master is:
git checkout master
git cherry-pick b45af4a
Now rebase master on to the feature1 branch:
git checkout feature1
git rebase master
You may have to resolve conflicts, but notice that git is smart enough not to reapply the cherry picked changelist:
Applying: a70d41c
Applying: 38967f5

How it works…

It is important to note that cherry-pick behaves just like a merge. If there are merge conflicts that git cannot resolve, then it will ask you to manually resolve them. Consequently, the new changelist in master will not necessarily be identical to the cherry-picked CL, and it will definitely have a new hash. Fortunately, git tracks cherry-picked CLs, so when you rebase back onto the cherry-picked branch, it won't duplicate the CLs and will restructure your history according to master. Also, while cherry-pick is great for merging one or two commits, it is not ideal for many commits.
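The whole flow can be tried safely in a scratch repository; the branch, file, and commit names below are placeholders:

```shell
#!/bin/sh
# Walk through the cherry-pick flow above in a throwaway repo.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email you@example.com
git config user.name you
main=$(git symbolic-ref --short HEAD)  # master or main, per git version

echo base > file.txt
git add file.txt
git commit -q -m 'base commit'

git checkout -q -b feature1
echo fix >> file.txt
git commit -q -a -m 'important fix'

git checkout -q "$main"
git cherry-pick feature1 >/dev/null   # tip of feature1 is the CL we want
git log --oneline -n 1                # shows the copied 'important fix' CL
```

Following up with git checkout feature1 and git rebase "$main" in the same repo demonstrates that the cherry-picked CL is not reapplied during the rebase.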
Date: Friday, 28 Mar 2014 23:06
Last week I was busy at GDC and have not had time to put together a detailed article, so in the spirit of GDC, I thought I would share the latest iteration of my HTML5 gaming engine (still very rough). There has been a lot of progress around the Game class to support stages (or levels) and a score board to track the player’s score. The stages are demoed by a new version of the Snake game where collecting five powerups will cause a new and slightly more difficult map to render. This showcases a fully functioning Snake game: a movable snake, score incremented and snake grown by picking up powerups, and levels changing over time. Please try the Snake game demo and let me know your thoughts or any issues you experience. Eventually, I will revisit Snake to add enemies and eventually some type of multiplayer, but my next step for the platform will be to try out one or two other games to see how it holds up. I’ll share them with you when they are ready. You can read more about the original release at Introducing Gaming Engine - Snake Demo v1.
Date: Wednesday, 05 Mar 2014 05:01
Today’s article will be short and will cover a bash topic that frustrates me to no end. Please don’t use ; to separate commands, when you mean &&. There is an important difference between the two and many developers never realize that they want to be using && in their scripts.

How to do it…

Here is a common oneliner that you might use to compile a package:
./configure ; make ; make install
In this example, we intend to configure the build, make it, and then install it (in that order). There is no reason to run make if the configuration step failed, and no reason to run make install if either of the previous steps failed. However, that is exactly what the above code will do. What you really mean to write is:
./configure && make && make install
Now each command will only execute if the previous command was successful.

How it works…

When the ; operator is used as a separator between commands, bash will execute each command in order, but it doesn't care about the result of the previous command. When the && operator is used as a separator between commands, bash will execute each command in order, but will continue only when the previous command returns zero (errors are non-zero). For that reason, anytime command a is expected to execute successfully before running command b, && should be used instead of ;. The reason this gets under my skin is that almost every time I see commands chained like this, the developer intended that command a execute successfully before running command b, and every now and then something really bad happens if b runs and a did not. Please be conscientious of your fellow developers, who may have to consume your scripts, and make sure they do what you intend by using &&.

There’s more…

There is another separation operator ||, which will execute each command in order, but will continue only when the previous command returns a non-zero value. I like to use this operator with testing commands, where command a will fail and now command b needs to be run to fix things, or maybe command a was a platform test and command b needs to be run on other platforms.
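The difference between the three separators is easy to see with a command that always fails:

```shell
#!/bin/sh
# 'false' always returns a non-zero exit status, so whether the
# second command runs depends entirely on the separator used.
a=$(false ; echo ran)    # ';'  ignores the failure, echo runs
b=$(false && echo ran)   # '&&' stops after the failure, echo skipped
c=$(false || echo ran)   # '||' runs echo only because false failed
echo "semicolon='$a' and='$b' or='$c'"
```

Running this prints semicolon='ran' and='' or='ran', confirming that only ; and || let the second command execute after a failure.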
Date: Friday, 21 Feb 2014 03:58
This article showcases a useful caching strategy for static data that is fetched via AJAX. We will use jQuery to set up a promise and cache the data in localStorage for subsequent page loads or data loads.

Getting ready

You will need a modern web browser supporting localStorage and JSON. A basic understanding of promises[2] is also helpful.

How to do it…

Here is the code:
(function($) {
  var oKeyDeferredMap = {};
  
  function fnReadData(sKey) {
    var sValue = window.localStorage.getItem(sKey);
    return sValue ? JSON.parse(sValue) : sValue;
  }
  
  function fnWriteData(sKey, oData) {
    var sValue = JSON.stringify(oData);
    window.localStorage.setItem(sKey, sValue);
  }
  
  $.cachedAjaxPromise = function(sUrl, oAjaxOptions) {
    var oDeferred = oKeyDeferredMap[sUrl];
    var sValue;
    
    if (!oDeferred) {
      oDeferred = new jQuery.Deferred();
      oKeyDeferredMap[sUrl] = oDeferred;
      sValue = fnReadData(sUrl);
      
      if (sValue) {
        oDeferred.resolve(sValue);
      }
      
      if (!oAjaxOptions) {
        oAjaxOptions = {};
      }
      
      $.extend(oAjaxOptions, {
        error: function(oXHR, sTextStatus, sErrorThrown) {
          console.log('request failed: ' + sErrorThrown);
          oDeferred.resolve(null);
        },
        success: function(oData) {
          // making assumption that data is JSON
          fnWriteData(sUrl, oData);
          oDeferred.resolve(oData);
        }
      });
      
      $.ajax(sUrl, oAjaxOptions);
    }
    
    return oDeferred.promise();
  };
}(jQuery));
Here is how to use the code:
oPromise = $.cachedAjaxPromise('/someURL').done(function(o) {
  console.dir(o);
});
Here is a codepen demoing the code:

See the Pen HtJcD by Matt Snider (@mattsnider) on CodePen.

How it works…

The code snippet adds a new function to jQuery, cachedAjaxPromise, which should be passed a URL returning static JSON; it returns a promise that will be resolved with the JSON object. The function checks local storage for a value stored under the URL as its key. If a value exists, the promise is resolved with it. If the value doesn't exist, it is fetched from the server, cached in localStorage, and the promise is resolved with the data (or with null, if the AJAX request fails). All cached values are marshalled to and from strings using the JSON infrastructure. Lastly, the jQuery deferred object is also cached, to prevent duplicate AJAX requests or calls to localStorage when promises for the same URL are created.

To use cachedAjaxPromise, provide a URL and chain a done or then function. The success callback provided will be passed the JSON data. If you look at the example pen, you will see that the first time the pen is loaded (or after you clear localStorage) it takes about two seconds (a simulated AJAX request), but subsequent page loads resolve the data in milliseconds (from localStorage). You may pass a second argument as the configuration options for $.ajax, but the success and error functions will be overwritten for use with the promise system.

This is a very simple example that depends on modern web browser technology. If you need to support legacy browsers, then you may need to polyfill the JSON[3] library and localStorage[4].
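The caching logic is easier to see stripped of jQuery. The sketch below keeps the same shape, a memo map in front of a marshalled store in front of the network, but uses an in-memory object in place of localStorage and a hypothetical fnLoad in place of $.ajax:

```javascript
// In-memory re-creation of the caching pattern above. oStore stands
// in for localStorage and fnLoad is a hypothetical stand-in for the
// network fetch; neither is a real API.
var oStore = {};        // localStorage stand-in
var oValueCache = {};   // mirrors oKeyDeferredMap

function fnReadData(sKey) {
  var sValue = oStore[sKey];
  return sValue ? JSON.parse(sValue) : null;
}

function fnWriteData(sKey, oData) {
  oStore[sKey] = JSON.stringify(oData);
}

var nLoads = 0;
function fnLoad(sUrl) { // pretend network fetch
  nLoads++;
  return {url: sUrl, loaded: true};
}

function fnCachedFetch(sUrl) {
  if (!oValueCache[sUrl]) {
    var oCached = fnReadData(sUrl);  // first consult the store
    if (!oCached) {
      oCached = fnLoad(sUrl);        // cache miss: hit the "network"
      fnWriteData(sUrl, oCached);    // persist for next time
    }
    oValueCache[sUrl] = oCached;     // memoize to skip future reads
  }
  return oValueCache[sUrl];
}
```

Calling fnCachedFetch('/someURL') twice only invokes fnLoad once; clearing oValueCache but not oStore still avoids a second load, because the marshalled copy is read back from the store, which is exactly how the jQuery version survives page reloads.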

References

  1. jQuery Deferred
  2. Promises in Wicked Detail
  3. jQuery JSON Polyfill
  4. PersistJs - a localStorage Polyfill
Date: Tuesday, 18 Feb 2014 07:24
I must apologize for the dearth of articles these last couple of weeks. As some of you know, I recently joined Google and there is a lot of technology to come up to speed on. Specifically, I have joined the Google Play Games team, which is a core service of the Android operating system. We provide APIs to help game developers build better games with less work, and I am super excited to be joining the team. Anyway, the Google training weeks are over and my schedule is returning to normal, so I expect to get back to regularly posting articles this week.
Date: Monday, 03 Feb 2014 06:58
Today we’ll cover how to connect to GitHub and EC2 through a draconian proxy allowing only ports 80 and 443. GitHub uses SSH, so like EC2 it can be connected to using SSH tunnelling. This article is based on a blog post by tachang[1], which needed some additional explanation and changes to work behind my proxy. I will be explaining how to connect on a Unix-based machine, but these settings should also work on Windows (see tachang’s article for Windows setup[1]).

Getting ready

You will need to install corkscrew[2] on your machine for tunneling SSH through the proxy, and git (if you don’t have it already). You will also need superuser access on your own machine and any EC2 instance that you want to connect to.

How to do it…

Once corkscrew is installed, simply edit or create ~/.ssh/config with the following:
ProxyCommand /usr/local/bin/corkscrew proxy.<yourCompany>.com 8080 %h %p

Host github.com
    User git
    Port 443
    Hostname ssh.github.com
    IdentityFile "/Users/msnider/.ssh/rsa_id"
    IdentitiesOnly yes
    TCPKeepAlive yes

Host <ec2PublicDNSForYourServer>
    Port 443
    User ubuntu
    IdentityFile "/Users/msnider/.ssh/rsa_id"
    IdentitiesOnly yes
    TCPKeepAlive yes
Changes to ~/.ssh/config happen immediately, so at this point we can check to see if github connectivity has been restored:
>ssh github.com
Hi mattsnider! You've successfully authenticated, but GitHub does not provide shell access.
Connection to ssh.github.com closed.
When outside of the proxy, SSH to your EC2 instance and update the sshd_config file (mine was located at /etc/ssh/sshd_config on Ubuntu) to also listen on port 443:
# Package generated configuration file
# See the sshd_config(5) manpage for details

# What ports, IPs and protocols we listen for
Port 22
Port 443
...
Then restart the SSH service (or reboot the machine) so that the changes take effect. Also, log in to the EC2 console and update the security group for the server to allow inbound connections on port 443. Then check whether SSH connectivity to your server has been restored:
>ssh ubuntu@<ec2PublicDNSForYourServer>
If you are not behind the proxy you can force SSH to use port 443 for testing:
>ssh -p 443 ubuntu@<ec2PublicDNSForYourServer>

How it works…

Looking at the ~/.ssh/config, the ProxyCommand tells SSH to tunnel through a proxy when connecting to any host. Change the location of corkscrew to where yours is installed, replace the proxy server domain with yours (don’t put http:// in front of the name), and change the proxy port 8080 to the one you connect to your proxy on. The %h and %p are special variables that SSH populates dynamically with the target host and port, respectively.

Next the file defines all the hosts to connect to. We want to be able to connect to github.com and the EC2 public DNS for the instance. Under each Host definition, the Hostname, Port, User, and IdentityFile settings are important. The Hostname is the server DNS to connect to when it differs from the Host; notice that the one under github.com has been changed to ssh.github.com (the GitHub server that allows port 443). The Port is the port to connect on, and since your proxy blocks everything but 80 and 443, you need to use one of those two. The User should be changed to the user you SSH into the server as. And IdentityFile is the location of the private identity file used when SSHing to the server. I used RSA keygen[3] and have the same key on my server and GitHub, but you can use any number of identity files and other cipher formats supported by SSH.

Lastly, for SSHing to your EC2 server, you need to modify the sshd_config file on the server. This file configures the SSH service, which needs to be told to also listen on port 443. The SSH service can listen on as many ports as you want, so simply add the port 443 line under the port 22 line. I tried to just restart the SSH service, but Ubuntu wouldn’t let me, since I was logged in over SSH, so I ended up restarting the machine. Since you cannot connect to this server while behind the proxy, you will need to make this change outside the office.
Also, port 443 is typically reserved for secure HTTP connections, so adding this port may conflict with an existing HTTPS service. The best way around this is to have a second server (without HTTP services running) that you SSH into when at work, and connect through that machine to port 22 on the machine running the web server.
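As a sketch of that second-server setup, a newer OpenSSH (7.3+) can express the hop directly in ~/.ssh/config with ProxyJump (the host names below are placeholders of mine, not from the original post):

```
# hypothetical jump-host setup: reach the web server's port 22
# by first hopping through the second server on port 443
Host <webServerPrivateDNS>
    User ubuntu
    ProxyJump ubuntu@<jumpServerPublicDNS>:443
```

On older OpenSSH versions, the same effect can be had with a ProxyCommand that runs ssh -W through the jump host.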

References

  1. USING GITHUB THROUGH DRACONIAN PROXIES (WINDOWS AND UNIX)
  2. Corkscrew
  3. SSH Keygen
Date: Friday, 24 Jan 2014 08:08
In my not so copious spare time over the past few months, I’ve been working on a game engine to power two-dimensional board-based games. The engine has a long way to go, but I have reached the first demo milestone and wanted to share it with you. Here is a basic version of the Snake game written using the game engine. It illustrates a working main thread, responsiveness to keyboard commands, interaction between a user-controlled character and the game environment, and a simple map-building engine. I chose the Snake game because it is relatively simple, but has some unusual characteristics (growing character size, collision detection, power-ups) that forced me to consider more complex behaviors in the game engine.

The idea behind the engine is to allow developers to build a two-dimensional board-based game using a combination of HTML/CSS/JavaScript. For most games, I expect a user to control a sprite and move it around the board, but it should eventually be versatile enough to even support multiplayer board games. See the source files on GitHub for more information.

Obviously, there is still a lot missing from the engine, so I won’t go into its details yet, but the engine and the Snake game are at a point where you can begin to play around with them. The next demo will have a finished version of Snake; then I’ll use the engine to build a second game, a third game, and so on. By the completion of the third game, I believe the engine will be ready to be consumed by other game developers looking to build simple browser games.
Date: Sunday, 19 Jan 2014 00:58
In my experience, it is rare to assign only a change event to a text input, as any callback executed for the change event should usually also be called on a key event, but with a slight delay (think of how an autocomplete shows results as you type). This is a common pattern, and I was surprised not to immediately find a jQuery plugin implementing it, so I decided to write one myself.

How to do it…

The jQuery plugin is available at http://plugins.jquery.com/changeOrDelayedKeyListener/. Here is the basic way to use it (just like other jQuery events):
$('someSelector').changeOrDelayedKey(function(e) {
    // your event callback
});
You can pass a data object as the first argument, like other jQuery events:
$('someSelector').changeOrDelayedKey({/* a data object*/}, function(e) {
    // your event callback
    // e.data is the data object
});
There are two optional arguments for specifying the delay and the key event to use (keydown is used by default):
$('someSelector').changeOrDelayedKey(function(e) {
    // your event callback
}, 400, 'keyup');
And here is the code:
$.fn.changeOrDelayedKey = function(fn, iKeyDelay, sKeyEvent) {
	var iTimeoutId,
		oEventData;

	// second signature used, update the variables
	if (!$.isFunction(fn)) {
		oEventData = arguments[0];
		fn = arguments[1];
		iKeyDelay = arguments[2];
		sKeyEvent = arguments[3];
	}

	if (!iKeyDelay || 0 > iKeyDelay) {
		iKeyDelay = 500;
	}

	if (!sKeyEvent || !this[sKeyEvent]) {
		sKeyEvent = 'keydown';
	}

	// non-delayed event callback, should clear any timeouts, then
	// call the original callback function
	function fnExecCallback() {
		clearTimeout(iTimeoutId);
		fn.apply(this, arguments);
	}

	// delayed event callback, should call the non-delayed callback
	// after a short interval
	function fnDelayCallback() {
		var that = this,
			args = arguments;
		clearTimeout(iTimeoutId);
		iTimeoutId = setTimeout(function() {
			fnExecCallback.apply(that, args);
		}, iKeyDelay);
	}

	if (oEventData) {
		this.change(oEventData, fnExecCallback);
		this[sKeyEvent](oEventData, fnDelayCallback);
	}
	else {
		this.change(fnExecCallback);
		this[sKeyEvent](fnDelayCallback);
	}

	return this;
};
The following example shows the event callback (the delay has been increased to 1000ms so it is easier to trigger the change event):

See the Pen jQuery on Change or Delayed Key Event Listener by Matt Snider (@mattsnider) on CodePen.

How it works…

The changeOrDelayedKey function has two signatures (this is common with jQuery functions). The first signature is a required callback function, an optional integer in milliseconds to delay before firing the callback on key events, and an optional string name of the key event to target. By default the key delay is 500 milliseconds and the keydown event is used. The second signature has a data object, followed by the callback function, and then both optional arguments. And like most jQuery functions, changeOrDelayedKey is chainable.

Looking under the hood, the implementation is fairly simple. The changeOrDelayedKey function first figures out which signature was used, assigning the correct values to local variables, and sets any defaults. There are two inner functions: fnExecCallback simply clears any existing timeout and calls the original callback function, while fnDelayCallback waits the delay time before calling fnExecCallback. Some implementations may avoid the extra fnExecCallback, but I want to make sure that the internal timeout is always cleared before delegating to the provided callback function. Lastly, fnExecCallback is assigned to the change event and fnDelayCallback is assigned to the desired key event.
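The clear-then-reschedule core of the plugin can be sketched without jQuery. This is a standalone illustration, not part of the plugin; the names and the injectable timer arguments are mine, added so the logic can be exercised outside a browser:

```javascript
// A framework-free sketch of the same debounce pattern. schedule/cancel
// default to the real timer functions but are injectable for testing.
function makeDelayedCaller(fn, iDelay, schedule, cancel) {
	schedule = schedule || function (f, ms) { return setTimeout(f, ms); };
	cancel = cancel || function (id) { clearTimeout(id); };
	var timeoutId;

	return {
		// like fnExecCallback: drop any pending call, then fire immediately
		execNow: function () {
			cancel(timeoutId);
			fn.apply(this, arguments);
		},
		// like fnDelayCallback: restart the delay timer on every call
		execDelayed: function () {
			var that = this,
				args = arguments;
			cancel(timeoutId);
			timeoutId = schedule(function () {
				fn.apply(that, args);
			}, iDelay);
		}
	};
}
```

A change handler would call execNow, while the key handler calls execDelayed, giving the same behavior as the plugin.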

There’s more…

There are two caveats when using the changeOrDelayedKey function. The first is that it doesn’t work well with the jQuery off function for unsubscribing from events. You can use off to remove all change and keydown events, but you can’t remove just the callback function that you provided, because the actual callback passed into the change and keydown listeners is an anonymous function. I don’t foresee a big demand for this feature, so I haven’t included it. The second is that I haven’t wired it into the jQuery on function. The signature of on adds more complexity than I wanted to support unless absolutely necessary. And since listeners can be attached directly, I don’t yet see a need to support on.
Date: Tuesday, 07 Jan 2014 03:55
Continuing to evaluate efficient sorting algorithms, today we’ll look at merge sort. Merge sort[1] is a comparison sort using a divide and conquer algorithm, developed by John von Neumann[2] in 1945. It recursively divides the list into smaller sublists of length one, then repeatedly merges the sublists in order until there is only one sublist left. It has a worst-case runtime of O(n log n), making it more efficient in the worst case than Quicksort.

How to do it…

Merge sort needs two functions, the first is a function to merge two ordered arrays together:
function fnMerge(aLeft, aRight){
	var aResult = [],
		iLeft = 0,
	    iRight = 0;

	// iterate until one array is empty
	while (iLeft < aLeft.length && iRight < aRight.length) {
		// push the next ordered value on the resulting array
		// and increment its counter
		if (aLeft[iLeft] < aRight[iRight]) {
			aResult.push(aLeft[iLeft++]);
		}
		else {
			aResult.push(aRight[iRight++]);
		}
	}

	// join any remaining values from the two arrays into the results array
	return aResult.concat(aLeft.slice(iLeft)).concat(aRight.slice(iRight));
}
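As a quick sanity check, the merge step can be exercised on its own (this standalone snippet repeats fnMerge from above so it runs by itself):

```javascript
// Standalone copy of fnMerge, merging two already sorted arrays.
function fnMerge(aLeft, aRight) {
	var aResult = [],
		iLeft = 0,
		iRight = 0;

	// take the smaller head element until one array is exhausted
	while (iLeft < aLeft.length && iRight < aRight.length) {
		if (aLeft[iLeft] < aRight[iRight]) {
			aResult.push(aLeft[iLeft++]);
		} else {
			aResult.push(aRight[iRight++]);
		}
	}

	// append whatever is left over (one of the slices will be empty)
	return aResult.concat(aLeft.slice(iLeft)).concat(aRight.slice(iRight));
}

fnMerge([1, 4, 7], [2, 3, 9]); // → [1, 2, 3, 4, 7, 9]
```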
The second function is the public merge sort function that recursively splits the array into smaller halves before calling the merge function:
function fnMergeSort(aList) {
	// zero or 1 length arrays are already sorted
	if (2 > aList.length) {
		return aList;
	}

	var iMiddle = Math.floor(aList.length / 2);
	return fnMerge(
        fnMergeSort(aList.slice(0, iMiddle)),
        fnMergeSort(aList.slice(iMiddle)));
}
This visual is a little harder to follow, but it shows each step of the splitting and merging functions:
[Interactive demo: restart merge sort with a new array]

If we take a 10,000 element array and sort it, here is how merge sort compares to the native array sort (results print to the console):

[Interactive demo: merge sort comparison tests]

How it works…

The merge sort algorithm recursively breaks the array into smaller subarrays by splitting the arrays in half, until all subarrays are of length one or zero. By their nature (having only one or zero elements), these subarrays are sorted. The algorithm then recursively merges the subarrays together in order. As each pair of subarrays is combined, the new subarray is ordered, resulting eventually in one final sorted array. Since the same steps need to be taken regardless of how sorted the original array was, merge sort has an average, best, and worst case runtime of O(n log n).

The merging function accepts two sorted subarrays and merges them together into a new sorted array. We solve this by iterating until one array is fully merged, using index trackers on each array, and inserting the smaller value into the merged array on each iteration. Since both lists are in order, we know that when the iteration stops, all uncompared elements in the remaining array are greater than the already-merged elements, so we can just concatenate the remaining elements at the end[3]. Rather than using an if statement to decide which subarray to concatenate, we concatenate the remainder of both arrays: the fully merged array contributes zero elements, and the other contributes all of its unmerged elements.

The merge sort function accepts a single array and splits it into two roughly even subarrays (if the array has an odd length, one subarray will be one element larger than the other). Those subarrays are each passed to the merge sort function, and the results of the recursive merge sorts are joined using the merge function, eventually returning a single sorted array. The recursive calls stop when the subarrays are all of length one or zero.

There’s more…

If it is important that your merge sort function modify the passed-in array, you can use the following technique to splice the sorted elements into the original array[3]:
function fnMergeSort(aList) {
    // zero or 1 length arrays are already sorted
    if (2 > aList.length) {
        return aList;
    }

    var iMidIndex = Math.floor(aList.length / 2);
    var aParams = fnMerge(
        fnMergeSort(aList.slice(0, iMidIndex)),
        fnMergeSort(aList.slice(iMidIndex)));

    // put arguments one and two of the splice function into the
    // front of aParams, remaining arguments are elements to splice in
    aParams.unshift(0, aList.length);
    aList.splice.apply(aList, aParams);

    // return the array so the recursive calls above have subarrays to merge
    return aList;
}
Here we’ve used the splice function to replace all the elements in the provided array with the sorted elements, so that the originally provided list is modified instead of a new sorted array being returned. The splice function has the signature startIndex, lengthOfSplice, firstElementToSpliceIn, …, nthElementToSpliceIn. As such, we unshift the start index and the splice length onto the front of the array of sorted elements, so that we can use apply to properly pass everything into splice.
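The unshift-then-apply trick can be seen in isolation (the array contents here are just illustrative):

```javascript
// Replace every element of `a` in place, using apply to spread the
// arguments into splice.
var a = [9, 9, 9];
var aParams = [1, 2, 3];       // pretend these are the sorted elements
aParams.unshift(0, a.length);  // aParams is now [0, 3, 1, 2, 3]
a.splice.apply(a, aParams);    // same as calling a.splice(0, 3, 1, 2, 3)
// a is now [1, 2, 3]
```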

References

  1. Merge sort Wikipedia article
  2. John von Neumann
  3. Computer science in JavaScript: Merge sort
Quicksort
Date: Wednesday, 11 Dec 2013 07:04
We’ve looked at a variety of inefficient sorting algorithms; today we’ll look at Quicksort (aka partition exchange sort) as a first foray into faster and more frequently used sorting algorithms. Quicksort[1] is a comparison sort using a divide and conquer algorithm, developed by Tony Hoare[2] in 1960. It recursively divides the list into smaller lists around a pivot value and sorts them, which means much smaller data sets when actually sorting. It has a worst-case runtime of O(n²), but an average and more common runtime of O(n log n).

How to do it…

We first need an algorithm to determine the pivot value. There are many algorithms, but we’ll use a recommended technique, taking the median of the first value, last value, and middle value in an array:
function getPivot(arr) {
	var iValFirst = arr[0],
		iLast = arr.length - 1,
		iValLast = arr[iLast],
		iMid = Math.floor(arr.length / 2),
		iValMid = arr[iMid];

	// first value is lowest, so the median is the smaller of mid/last
	if (iValFirst < iValLast && iValFirst < iValMid) {
		return iValMid < iValLast ? iMid : iLast;
	}
	// mid value is lowest, so the median is the smaller of first/last
	else if (iValMid < iValFirst && iValMid < iValLast) {
		return iValFirst < iValLast ? 0 : iLast;
	}
	// last value is lowest, so the median is the smaller of first/mid
	else {
		return iValFirst < iValMid ? 0 : iMid;
	}
}
Here is the quicksort algorithm in JavaScript:
function quicksort (arr) {
	var aLess = [],
        aGreater = [],
        i,
        j,
        iPivot,
        iPivotVal;

	// array of length zero or one is already sorted
	if (arr.length <= 1) {
		return arr;
	}

	iPivot = getPivot(arr);
	iPivotVal = arr[iPivot];

	// the function to process the value and compare it to
	// the pivot value and put into the correct array
	function compVal(iVal) {
		(iVal <= iPivotVal ? aLess : aGreater).push(iVal);
	}

	// compare values before the pivot
	for (i = 0, j = iPivot; i < j; i++) {
		compVal(arr[i]);
	}

	// compare values after the pivot
	for (i = iPivot + 1, j = arr.length; i < j; i++) {
		compVal(arr[i]);
	}

	// create a new list from sorted sublists around the pivot value
	return quicksort(aLess).concat([iPivotVal], quicksort(aGreater));
}
It is a little hard to follow, but this demo chooses a pivot (blue), then puts the values into the two sub-arrays (red), before merging the arrays around the pivot, placing the pivot in its final position (black):
[Interactive demo: restart Quicksort with a new array]

If we take a 10,000 element array and sort it, here is how quicksort compares to the native array sort (results print to the console):

[Interactive demo: Quicksort comparison tests]

How it works…

The Quicksort algorithm breaks the array into two parts around a pivot value, pushing smaller values into one array and larger values into a second array. These two arrays are then quicksorted recursively, before being concatenated around the pivot value. Generally, this strategy will quickly sort the array, but the efficiency of the algorithm depends on how well the pivot value is chosen. The more equal in size the smaller- and larger-value arrays are, the more efficient the algorithm.

There are many strategies for choosing a pivot value, such as picking a dedicated position (first or middle are common) or randomly choosing a position. However, none of these do anything to increase the likelihood that the two arrays of values are about the same size. The recommended strategy is to take the median of the first value, middle value, and last value in the array, which is more likely to be a value near the middle of the sorted order. These positions are also better than randomly choosing three, because they sort already-sorted arrays faster.

For a random distribution, quicksort tends to perform better than most sorting algorithms, because it quickly divides the array into smaller arrays that tend to sort quickly. However, it will perform quadratically on an array with mostly duplicate values, as the pivot does not help much and the split arrays are unbalanced. If you compare it to the built-in sorting algorithm (above), you’ll notice only a few milliseconds of difference between the two on a large array.

Lastly, Quicksort is not very efficient on small data sets, since its constant overhead (slicing and merging arrays) is more expensive than that of other sorting algorithms. One of the best optimizations is to determine a threshold k below which the smaller arrays are sorted using insertion sort instead. The k value is best determined by experimenting on your data set.
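That threshold optimization might be sketched like this. The function names, the k value of 10, and the simple middle-element pivot are illustrative choices of mine, not taken from the article:

```javascript
var K_THRESHOLD = 10; // tune by experimenting on your data set

// plain insertion sort; cheap on tiny arrays
function fnInsertionSort(arr) {
	for (var i = 1; i < arr.length; i++) {
		var iVal = arr[i],
			j = i - 1;
		// shift larger values right until iVal's slot is found
		while (j >= 0 && arr[j] > iVal) {
			arr[j + 1] = arr[j];
			j--;
		}
		arr[j + 1] = iVal;
	}
	return arr;
}

// quicksort that falls back to insertion sort below the threshold
function fnHybridSort(arr) {
	if (arr.length <= K_THRESHOLD) {
		return fnInsertionSort(arr);
	}

	var iPivotVal = arr[Math.floor(arr.length / 2)],
		aLess = [],
		aEqual = [],
		aGreater = [];

	// partition around the pivot value
	arr.forEach(function (iVal) {
		if (iVal < iPivotVal) {
			aLess.push(iVal);
		} else if (iVal > iPivotVal) {
			aGreater.push(iVal);
		} else {
			aEqual.push(iVal);
		}
	});

	return fnHybridSort(aLess).concat(aEqual, fnHybridSort(aGreater));
}
```

Grouping pivot duplicates into aEqual also softens the quadratic behavior on mostly duplicate arrays mentioned above.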

References

  1. Quicksort Wikipedia article
  2. Tony Hoare
Date: Saturday, 23 Nov 2013 23:26
This technique has been around for a while, but it’s powerful and worth sharing. Using the filter CSS property you can apply visual effects to your elements, including the grayscale effect we’ll discuss here. For my CV I wanted my image muted most of the time, but to pop when it becomes the focus of the viewer (i.e. they mouse over it), so I used a filter to apply grayscale by default and remove it on hover. This article explains the technique.

How to do it…

To start, choose markup or an image to grayscale. I will use the following image for this demo: [photo] Next, define your CSS:
.grayscale {
    filter: url(../files/filters.svg#grayscale); /* Firefox 3.5+ */
    filter: gray; /* IE6-9 */
    -webkit-filter: grayscale(1); /* Google Chrome, Safari 6+ & Opera 15+ */
}

.grayscale-hover:hover {
    filter: none;
    -webkit-filter: grayscale(0);
}
For Firefox, we also need to define the SVG filter file:
<svg xmlns="http://www.w3.org/2000/svg">
 <filter id="grayscale">
  <feColorMatrix type="matrix" values="0.3333 0.3333 0.3333 0 0 0.3333 0.3333 0.3333 0 0 0.3333 0.3333 0.3333 0 0 0 0 0 1 0"/>
 </filter>
</svg>
For Firefox, instead of a separate file, you can also use a data URI:
filter: url("data:image/svg+xml;utf8,<svg xmlns=\'http://www.w3.org/2000/svg\'><filter id=\'grayscale\'><feColorMatrix type=\'matrix\' values=\'0.3333 0.3333 0.3333 0 0 0.3333 0.3333 0.3333 0 0 0.3333 0.3333 0.3333 0 0 0 0 0 1 0\'/></filter></svg>#grayscale");
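Putting it together, apply the two classes from the CSS above to the element (the file name here is a placeholder):

```html
<!-- grayscaled by default; the second class restores color on hover -->
<img class="grayscale grayscale-hover" src="photo.jpg" alt="My photo"/>
```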
Here is the image with the new CSS applied: [photo]

How it works…

The filter property[1] is still experimental and is only implemented fully in WebKit browsers with the -webkit- prefix. Firefox supports only the url(…) value, which can reference a filters.svg file defining any number of filters using the SVG filter element[2]. IE 6-9 implemented a non-standard filter property.

For browsers properly supporting the filter property, the following values may be used: blur, brightness, contrast, drop-shadow, grayscale, hue-rotate, invert, opacity, saturate, sepia, and url[3]. So for Chrome, Safari, and Opera, we use the -webkit-filter property with the grayscale(N) value, where N ranges from 0 to 1, indicating how much to gray the content. For IE 6-9 we use the non-standard gray value, and none to remove the graying. Also, IE 6 does not support the hover pseudo-class on non-anchor elements, so deal with that how you will (I chose to ignore it). For Firefox we define what grayscale means using the SVG feColorMatrix filter, and again none to remove the filtering.

Unfortunately, in IE 10 Microsoft decided to remove the legacy filters without implementing the working standard, so this technique does not work in IE 10+. My take is to just ignore IE 10+ users, because supporting them is not worth the effort, but there are three workarounds: using an inline SVG element[4], applying grayscale to the image using a photo editor and swapping images on hover, or changing the compatibility mode back to IE 9. None of them are great solutions, but I think making two images is probably the simplest. If you choose to change the compatibility mode, here is the markup (keep in mind you may lose IE 10+ features by using this):
<meta http-equiv="X-UA-Compatible" content="IE=9">

References

  1. filter – css | MDN
  2. SVG filter element
  3. Filter Effects 1.0
  4. Grayscale JSFiddle by Karl Horky
Date: Wednesday, 30 Oct 2013 21:54
Lately, I have been interviewing many engineers for a CSS contractor position, and am thoroughly disheartened by the number of candidates who put “CSS expert” on their resume but don’t even know the basics of CSS. This article discusses the ten questions I usually ask, including the answers and why I ask each question. My hope is not just to give the answers, but to educate as well.

Questions

Each question has a point value associated with it, and the points sum to 12. When I ask a question, I rate the candidate’s answer. They receive positive points for correct or partially correct answers (up to the question’s point value), zero points for weak but not blatantly wrong answers, and negative points for incorrect answers (down to the negative of the question’s point value). At the end of the interview I add up all the points. If the score is less than 8, I will pass; for 8-10 I will consider; and 10+ is a good score.
How do you set up your CSS to minimize cross-browser issues? (1 point)

This question explores the candidate’s understanding of the CSS development ecosystem, how up to date on CSS technology they are, and whether they have any bad habits.

The best answer is to apply a set of normalization CSS[1], ensuring the standard elements have consistent default behavior across all browsers. For bonus points, they may mention that it is a good idea to create a baseline CSS file that sets up the default element and class behavior for their project.

Alternatively, they may mention using a CSS framework, which takes care of cross-browser issues. It’s best if the candidate understands what the framework is doing to normalize, but I will let this kind of answer slide. They may also use * to target all elements and reset padding, margins, etc., but this is over-broad and inefficient.

Sometimes candidates will say they use browser specific CSS files and various methods for targeting each browser (usually using the user agent). It is never good to use the user agent, and this should always be a last resort. This answer is not acceptable. It is okay to occasionally use IE conditional comments, if the candidate understands they should be used sparingly and only if normalization does not work.

Create a CSS rule that removes the default underline applied to anchor elements, and then create a rule restoring the underline on hover (followup: are there issues with using hover on non-anchor elements or ask about other pseudo classes). (1 point)

This question will show the candidate understands the basics of CSS and has some knowledge of a commonly used property. It also shows they have used and basically understand pseudo-classes.

        a {
            text-decoration: none;
        }
        a:hover {
            text-decoration: underline;
        }
        

Sometimes candidates will understand how to target anchors and apply the hover pseudo-class, but not know the text-decoration property and use some bogus property instead. I won’t give zero points if they use the wrong property, but correct selectors.

How would you center a block-level element inside its parent container? (1 point)

This question will show the candidate understands how to center block-level elements, which is commonly used to center a website on a page. This question tends to trip people up and separates good CSS engineers from poor ones.

        
        <div>CENTER ME</div>

The candidate needs to know the margin property and a width property are required. The element needs a width smaller than the parent container for this to work (otherwise, it will fill the available width by default and apply no left/right margin). I may give a bonus point if they know the cross-browser CSS to center a page (IE < 6):

        body {
            text-align: center;
        }
        div {
            margin: 0 auto;
            text-align: left;
            width: 980px;
        }
        

Usually, candidates answer this wrong by trying to use the text-align or position properties, or JavaScript. All of these answers get negative points.

What is the difference between visibility:hidden and display:none? (1 point)

Both of these properties will be used frequently in DOM scripting and it is important the candidate understands how both behave, so they use them correctly.

Using display:none causes the element to render as if it and its content do not exist on the page; removing display:none or applying another display value causes the page to reflow. By contrast, visibility:hidden makes the element invisible, but the element and its content still take up space in the normal flow.

What are three different ways to write the color property to set it to blue (followup: which way do you prefer and why)? (1 point)

This checks if the candidate understands the three ways to apply color rules, and the followup ensures they know the correct way to do it.

The following CSS shows five different techniques for applying color (but the three key answers are: by name, hex, and rgb). Candidates should prefer rgb or hex values, because they have more control over which flavor of the color they really want and are not dependent on the browser defaults (browser support of named colors is no longer an issue). I may give a bonus point if they know the hex shorthand and rgba without prompting.

        div {
            color:blue;
            color:#0000FF;
            color:#00F;
            color:rgb(0,0,255);
            color:rgba(0,0,255,1);
        }
        

I will not give any positive points if they don’t listen and apply a color other than blue; it indicates the candidate is a poor listener.

Write a selector that applies 5px padding to all input elements of type text. (1 point)

Shows the candidate has experience with more advanced selectors. I sometimes write a form with a text and button input to illustrate this question.

        input[type=text] {
            padding: 5px;
        }
        
What color will the text inside the div be? (2 points)
        <div class="foo" id="bar" style="">Color?</div>
        <style>
            div {color: red;}
            #bar {color: green;}
            .foo {color: blue;}
        </style>
        

Does the candidate understand how specificity[2] rules apply to a single element?

The id attribute is more specific than a class, which is more specific than an element, so the text will be green.

Any answer talking about order of attributes or order of rules in the stylesheet automatically gets zero or less points, as order of attributes has nothing to do with specificity and order in stylesheet only applies to rules with the same specificity.

If an element has an inline style making it red, how can you override it in your CSS file (followup: when should you use this technique)? (1 point)
        <div class="foo" id="bar" style="color:blue">Make Me Red</div>
        <style>
            div {color: red;}
            #bar {color: green;}
            .foo {color: blue;}
        </style>
        

Does the candidate understand that inline styles are the most specific, but that they can be overridden using the !important declaration?

Adding the !important declaration will supersede the specificity rules, except versus other properties with the !important declaration. However, it prevents other CSS rules from modifying the property, so !important should be used rarely, if at all.

            div {color: red!important;}
        
Which of the following selectors is the most specific (what color is the text and why)? (2 points)
        <div class="bar">
            <span class="foo">
                Which color am I and why?
            </span>
        </div>
        
        div .foo {color: red;}
        .bar span {color: green;}
        span.foo {color: blue;}
        

This is a more advanced specificity[2] question; it ensures the candidate understands specificity and knows that the last rule applies when rules have the same specificity.

This is kind of a trick question, because all three rules have exactly the same specificity so the last rule applies and the color is blue. All three rules have one element and one class in the selector.

Sometimes the candidate will get the right answer for the wrong reason. It is important they understand each rule has exactly the same specificity and the last rule applies for that reason only.

Which of the following produces more maintainable and scalable CSS and why? (1 point)

A)

        <ul class="myList">
            <li></li>
            <li></li>
            …
        </ul>
        
        .myList {
            /* list styles here */
        }
        .myList li {
            /* list item styles here */
        }
        

B)

        <ul class="myList">
            <li class="myList-item"></li>
            <li class="myList-item"></li>
            …
        </ul>
        
        .myList {
            /* list styles here */
        }
        .myList-item {
            /* list item styles here */
        }
        

Has the candidate worked on a large CSS project before, and do they understand SMACSS[3] principles?

While A may produce more concise markup, B is better on large, shared projects for the following reasons:

  1. The CSS parser and renderer process rules from right to left, so longer selectors require more tree traversals, causing the page to render more slowly.
  2. Using all classes separates the design from the markup, so the markup can be changed to different elements later or used in other places, without having to change the CSS.
  3. There are no accidental style inheritances to prevent, as happens most often when nesting lists or tables.

I may give zero points, instead of negative points, if they choose A but can come up with some good reasons, such as liking to namespace their styles or using a CSS preprocessor.

Conclusion

These are the questions I like to ask CSS-focused candidates and I think they demonstrate if the candidate has good understanding of CSS and related principles. Additionally, this article should be used as a training tool for anyone needing to come up to speed on CSS. I have included links below for concepts that were beyond the scope of this article. Please let me know if you need more explanation or if any of my answers are confusing.

References

  1. Normalize CSS
  2. Specificity
  3. Scalable and Modular Architecture for CSS
Date: Saturday, 05 Oct 2013 01:05
In an earlier article we covered Variable Hoisting in JavaScript. At the time, I did not expect to face a hoisting-related bug so quickly. And given the circumstances of the bug, I expect it to be one of the most common hoisting problems, so I wanted to share it.

How to do it…

Let’s jump right into the setup. You have a function that is defined in one JavaScript file (this file is expected to load first, but may not). Here is a simple, cross-browser-safe logger function:
function logger() {
    if (window.console) {
        console.log.apply(console, arguments);
    }
}
In another JavaScript file (expected to load later, asynchronously), the existence of the first function is checked. This second file creates a temporary dummy function if the original function does not exist:
if (!logger) {
    function logger () {};
}
The problem can also be triggered directly in a single file:
<script>
function test () {
    alert('failed to hoist');
}

if (! test) {
    function test() {
        alert('hoisted');
    }
}

test();
</script>
Everything loads in the correct order, but for some reason the logger function that runs is the empty second one. What gives?

How it works…

Here is what happens. The first file loads as expected and defines the logger function. Then the second file loads; the if statement body does not execute, but the logger function declaration inside the if statement is nonetheless hoisted to the top of the current scope. In this example the execution context is the window, so the function declaration is hoisted to the top of the window scope. And since function declarations are hoisted fully defined, the empty logger overrides the original function. This only becomes a major problem because we are in the global scope, where hoisting problems are not contained.

There are many ways to prevent this, but we’ll cover two. The first (and my favorite) is to write all scripts inside an enclosing function, which contains any hoisting to the enclosing function’s execution context:
(function() {
    if (!window.logger) {
        window.logger = function() {};
    }
}());
The second option is to assign the function as a property of the window object, as we did inside the enclosing function above:
if (!window.logger) {
    window.logger = function() {};
}
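The mechanism behind all of this is easy to observe in isolation: function declarations are hoisted complete with their bodies, while var declarations are hoisted without their values. A minimal sketch:

```javascript
// Both identifiers below are hoisted to the top of this scope, but
// only the function declaration carries its body with it.
console.log(typeof hoistedFn);  // "function"
console.log(typeof hoistedVar); // "undefined" -- declaration hoisted, value not

function hoistedFn() { return 'declared'; }
var hoistedVar = function() { return 'expression'; };
```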
The first approach is just good practice for writing any script, as it forces you to do the right thing automatically and prevents accidental hoisting. Don’t let global hoisting issues bite you; never work directly in the global execution context.
Date: Tuesday, 17 Sep 2013 07:24
This is a proof-of-concept widget that I demoed for work. The desire is to update some text, according to a regex and replacement, when an input field changes. This will allow developers to show a message and/or format the input value, so users understand they do not need to enter meta characters and can see the result of their input (think phone or identification numbers). I built a simple jQuery plugin that anyone can use, and here is a demo app with the code.

How to do it…

I'm not going to show a sample of the JavaScript code, because it is not terribly complicated, but I will discuss how it works in the next section. Here is an example of the markup:
<div>
  <label>Test US Phone</label>
  <input id="id_test_us_phone" data-placeholder="0" data-rxmatcher="(\d{3})(\d{3})(\d{4})" data-replacement="$1-$2-$3" maxlength="10" type="text" value="" />
  <span class="regexInputNote"></span>
</div>
If you include the plugin, then you can activate this markup using:
$('#id_test_us_phone').regexInputNote();
All hardcoded attributes can be changed by passing options to the regexInputNote. Here are the defaults:
$('#id_test_us_phone').regexInputNote({
  dataAttrMatcherRegex: 'rxmatcher',
  dataAttrPlaceholder: 'placeholder',
  dataAttrReplacement: 'replacement',
  invalidText: 'Invalid input',
  replacementFunction: null,
  targetClass: sPluginMarker
});

How it works…

When the script runs, it searches the siblings following the input for an element with the class defined as targetClass (default is regexInputNote). This indicates the element that will have its content replaced by the script.

The script uses the dataAttrMatcherRegex (defaults to data-rxmatcher) to create a regex that evaluates anything typed into the input element, and then it uses the dataAttrReplacement (defaults to data-replacement) as the replacement string. So if you type 1234 into the input, it will insert the equivalent of '1234'.replace(<value of data-rxmatcher>, <value of data-replacement>) into the sibling element.

There is one caveat: because we are using regex replacement, the input string must always match the regular expression for the replacement to work. Normally, 1234 would not match a phone number regular expression (since it does not have 7 or 10 digits), so I introduced the dataAttrPlaceholder (defaults to data-placeholder), which is the value used to pad the user input to the necessary length before running the regex replacement. This requires you to also specify a maxlength attribute, so we know how long the string should be and what length the regular expression is expecting. Continuing with the phone example, if 1234 is entered, then the output would be 123-400-0000, showing the input formatted as a phone number.

Finally, if the transformation is sufficiently complex, it may not be possible with a regular expression. For those cases, instead of using the attributes, the developer can simply set a function as the replacementFunction option. This function should accept the text input as its only argument and return whatever value should be written to the sibling element.

If you look at the JavaScript code, you will see that it is pretty straightforward. When the plugin initializes, it reads the data attributes and finds the sibling element before subscribing key and change listeners to the input. The callback handler just applies the regular expression or replacement function to the input and inserts the result into the next sibling.
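The pad-then-replace step described above can be sketched in isolation (padAndFormat is an illustrative name, not the plugin's actual internal function):

```javascript
// Hypothetical sketch of the plugin's core step: pad the raw input
// with the placeholder character up to maxLength, so the regex
// (which expects a full-length value) always matches, then replace.
function padAndFormat(value, maxLength, placeholder, pattern, replacement) {
    var padded = value;
    while (padded.length < maxLength) {
        padded += placeholder;
    }
    return padded.replace(pattern, replacement);
}

// Continuing the phone example: typing "1234" pads to "1234000000".
padAndFormat('1234', 10, '0', /(\d{3})(\d{3})(\d{4})/, '$1-$2-$3');
// "123-400-0000"
```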
Date: Thursday, 29 Aug 2013 21:27
Hopefully, your job does not require supporting corporate customers whose IT departments do not keep their companies’ browsers up-to-date, and therefore you do not need to support older versions of IE. If, however, like me, you need to support older IEs, then your company’s designers have probably asked you to support rounded corners in IE. The three most common techniques for rounded corners are to use JavaScript[1] or an HTC[2] file to produce VML (Vector Markup Language[5]) on demand, or a JavaScript solution that inserts positioned divs to emulate rounded corners[3]. However, all these solutions are slow (they search the DOM for rounded elements on load, or round on demand), kind of hacky, and require recalculations as rounded elements are repositioned.

The core of the first two strategies is to dynamically inject VML into the document using scripts, but there is no reason you have to use scripts to do this. You can render the VML directly into your HTML and ship the rounded corners with the HTML. This allows the browser to render the rounded corners in the page flow as the page is parsed, for faster page loads and proper document flow. Today’s article describes how to include rounded corners directly in your HTML, for support in IE and all other browsers, and is based on work originally done by Jonathan Snook[4].

How to do it…

Enable VML on the body of your page (namespaced as v):
<body>
<xml:namespace implementation="#default#VML" ns="urn:schemas-microsoft-com:vml" prefix="v">
<!-- page body will go here -->
</xml:namespace>
</body>
Use the rounded rectangle VML tag to create a rounded rectangle:
<v:roundrect arcsize=".04" fillcolor="#000">
Lorem ipsum dolor sit amet, consectetuer adipiscing 
</v:roundrect>
Enable the VML via CSS for IE and other browsers:
v\:roundrect {
  color: #FFF;
  display: block;
  background-color: #000;
  -moz-border-radius: 10px;
  -webkit-border-radius: 10px;
  border-radius: 10px;
  padding: 20px;
  height: 100%;

  /* IE-specific */
  behavior: url(#default#VML);
  /background-color: transparent;
}
Unfortunately, this doesn’t work in IE 8 standards mode, so we have to degrade IE 8 to behave like IE 7:
<!--[if IE 8]>
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
<![endif]-->

How it works…

For starters, we are using an xml namespace declaration to pull in the VML[5] implementation, which this example wraps around the whole body, so that roundrect can be used anywhere on the page. However, you need only wrap the parts of the page that contain roundrect elements.

The rounded rectangle (roundrect) VML element has a number of properties, including the arcsize and fillcolor used here. The arcsize controls the size of the arc on the corners and is, unfortunately, relative to the box size. The fillcolor controls the background color of the element and cannot be controlled via CSS in IE <9. This means that you need to duplicate the background-color defined in the CSS as the fillcolor attribute for older IEs. Likewise, you define border-radius in the CSS for modern browsers, but the arcsize attribute in VML for older IEs. This duplication is unfortunate, and it breaks the rule of separating design from markup, but it is a necessary evil of this solution.

We have defined some CSS informing modern browsers to render the unknown v:roundrect element as a block element with rounded corners. The latter part of the CSS declaration tells IE to use the default VML behavior for rendering (as defined by Microsoft). Modern browsers will treat the v:roundrect element like other elements, so you can apply additional attributes like id, class, and data- for further control.

Out of the box, the VML solution works great in IE 6 and 7, but Microsoft changed the VML implementation in IE 8, so it breaks in standards mode. To work around this, we tell IE 8 to render as IE 7 using the X-UA-Compatible meta tag. Most enterprise companies are already doing this, so they do not have to maintain different CSS for IE 7 and 8.

In summary, the pros of this solution are that you have rounded corners in IE 6-8 that render with the page like other elements and need no special activation scripts. The drawbacks are that you need to duplicate some design in the markup and render IE 8 as IE 7. 
Overall, I would recommend gracefully degrading to non-rounded corners in older IEs, as this is not a perfect solution, but if you must support rounded corners, consider putting the VML directly into your markup instead of relying on a script.

References

  1. CSS3 Pie
  2. Cross Browser Border Radius Rounded Corners (HTC access)
  3. jQuery Rounded Corners
  4. Rounded Corners Experiment IE
  5. VML Proposal