Automated Testing for WordPress, Part 4: Practical Considerations

In part 3, we saw how to write and run Behat tests. In this final post, I'll discuss some specific implementation problems we encountered and how we were able to solve them.

Authentication

Out of the box, Behat isolates scenarios from each other, i.e. each scenario gets a fresh instance of the context class with a fresh browser session. While this is generally a good thing, it was a little inconvenient in our case, since our site requires users to be authenticated. With complete scenario isolation, we'd have to go through the login process for every scenario, which would be time-consuming and needlessly repetitive.

Tip: The solution I came up with was to store a hash table of authentication cookies in a static class variable keyed by user name. (It has to be static because, as noted above, each scenario gets a fresh context class instance.) That way, you only have to log in as a given user once during your test suite (which also serves as a test of the login page) and the user's authentication tokens can be re-used for subsequent scenarios requiring authentication as that user:

// in FeatureContext.php

// Store session tokens for reauthentication
private static $credentialStore = array();

// ... other code ...

/**
* Authenticates a user with password from configuration.
*
* @Given /^I am logged in as "([^"]*)"$/
*/
public function iAmLoggedInAs($username) {
  if ($authCookie = $this->getCredentials($username)) {
	echo "Reusing authentication tokens for $username";
	$this->setAuthCookie($authCookie);
  }
  else {
	$password = $this->fetchPassword($username);
	$this->doLogin($username, $password);
  }
}

public function doLogin ($username, $password) {

	// Visit login page, enter credentials, submit login form...

	// ... after successful login:

	$this->storeCredentials($username);
}

public function storeCredentials ($username) {
	if ($credential = $this->getAuthCookie()) {
		self::$credentialStore[$username] = $credential;
	}
}

public function getCredentials ($username) {
	return array_key_exists($username, self::$credentialStore) ? self::$credentialStore[$username] : null;
}

public function getAuthCookie () {
	// Return key=>value pair for session authentication token
}

public function setAuthCookie ($authCookie) {
	$this->getSession()->setCookie($authCookie['name'], $authCookie['value']);
}

public function fetchPassword ($username) {
	// Return user's password, looked up from configuration
}

The only slight snafu was that WordPress doesn't use a consistent name for its authentication cookies. (It's usually "wordpress_logged_in_<big hash value>".) "No problem," I thought. "I'll just grab all of the cookies and iterate over them until I find one starting with 'wordpress_logged_in_'." Except that the Mink API only allows getting a single cookie by name; there's no way to fetch all of the cookies.

Further complicating matters, each driver has its own way of storing cookies and its own data model to represent them. Some are objects, some are associative arrays, sometimes the values are URL-encoded, sometimes not, etc. (FYI: The value should not be URL-encoded when setting the cookie using the Mink interface's setCookie() method.)

I ended up having to figure out how each driver I wanted to use stores its cookies and implement custom code to retrieve all the cookies for each driver, normalizing the cookie data representation:

public function getAllCookies ($driver) {
	$cookies = array();		

	if ($driver instanceof Behat\Mink\Driver\BrowserKitDriver) {
		$cookies = $this->getBrowserKitCookies($driver);
	}
	else if ($driver instanceof Behat\Mink\Driver\Selenium2Driver) {
		$cookies = $this->getSeleniumCookies($driver);
	}
	else if ($driver instanceof Behat\Mink\Driver\ZombieDriver) {
		$cookies = $this->getZombieCookies($driver);
	}
	else if ($driver instanceof Zumba\Mink\Driver\PhantomJSDriver) {
		$cookies = $this->getPhantomJSCookies($driver);
	}

	return $cookies;
}

private function getBrowserKitCookies ($driver) {
	$cookies = array();

	$cookieJar = $driver->getClient()->getCookieJar();

	foreach ($cookieJar->all() as $cookie) {
		$cookies[] = array(
			"name"=>$cookie->getName(),
			"value"=>$cookie->getValue()
		);		
	}

	return $cookies;
}

private function getSeleniumCookies ($driver) {
	$wdSession = $driver->getWebDriverSession();

	return $this->urlDecodeCookies($wdSession->getAllCookies());
}

private function urlDecodeCookies ($cookiesIn) {
	$cookies = array();
	foreach ($cookiesIn as $cookie) {
		$cookie['value'] = urldecode($cookie['value']);
		$cookies[] = $cookie;
	}

	return $cookies;
}

private function getZombieCookies ($driver) {	

	$cookies = json_decode($driver->getServer()->evalJS("JSON.stringify(browser.cookies);"), true);			
	return $this->urlDecodeCookies($cookies);
}

private function getPhantomJSCookies ($driver) {
	$cookies = array();

	foreach ($driver->getBrowser()->cookies() as $cookie) {
		$cookies[] = array(
			"name"=>$cookie->getName(),
			"value"=>$cookie->getValue()
		);		
	}

	return $cookies;
}

Another complication was that, when setting a cookie value, the Phantom JS driver internally uses the page's current URL to determine the cookie's domain. Ordinarily this wouldn't pose a problem, but at the start of the session, when I'm trying to set the authentication cookie, the page's current URL is "about:blank", which causes the Mink API's setCookie() method to fail. I had to implement custom code to get the domain from the test suite's base_url parameter and pass it to the Phantom JS driver's internal cookie setting method directly:

public function setWPAuthCookie ($authCookie) {
	$session = $this->getSession();
	$driver = $session->getDriver();

	if ($driver instanceof Zumba\Mink\Driver\PhantomJSDriver) {
		$url = parse_url($this->getMinkParameter("base_url"));
		$authCookie['domain'] = $url['host'];
		$driver->getBrowser()->setCookie($authCookie);
	}
	else {		
		$session->setCookie($authCookie['name'], $authCookie['value']);
	}
}

(The other drivers don't explicitly set a cookie domain and the Mink API doesn't provide any way of setting it, as far as I can tell.)

AJAX

The site we were testing features reports with HTML "select" controls that are sometimes dynamically populated by AJAX requests. For example, in a report allowing results to be filtered by school, the specific schools you can select might be constrained by the currently selected district. Each time the district selection changes, the school options are populated via AJAX with that district's schools. The problem this poses for automated browser testing is ensuring the AJAX request is complete before attempting to select a dynamically populated option.

The Mink API provides a wait() method, which pauses execution until some JavaScript expression you provide becomes "truthy" or a timeout you specify is reached. (Naturally, this assumes you're using a JavaScript-capable driver.) With this method, I was able to wait until the desired option appeared in the select element, which I determined using jQuery (which the site uses):

public function waitForElement ($ancestor, $descendant, $seconds=5) {
	// Quote the selectors so they're valid JavaScript string literals
	$js = sprintf("jQuery(%s).find(%s).length", json_encode($ancestor), json_encode($descendant));
	if (!$this->getSession()->wait(1000 * intval($seconds), $js)) {
		throw new Exception("$descendant failed to appear in $ancestor after $seconds seconds.");
	}
}

This worked well enough for schools, which all have distinctive names, but most of the schools in our test data use the same names for classrooms (e.g. "101", "102", etc.). If I was selecting a classroom by name (which I preferred to do, since this is what a human user would do), waiting for classroom "101" to appear in the classroom select wouldn't necessarily work, since the previously selected school would most likely have a classroom with the same name. What I really needed was some general way of determining if any AJAX requests were still in progress so I could wait for them to finish.

Tip: jQuery.active to the rescue: As it turns out, jQuery exposes a property called "active" (or jQuery.ajax.active in some versions of jQuery), which tracks the number of currently active AJAX requests. When all AJAX requests are complete, it's zero. It's not officially documented, but it's there, and it works:

public function waitForAjax ($seconds=5) {
	if(!$this->getSession()->wait((1000 * $seconds), "jQuery.active == 0")) {
		throw new Exception("Ajax calls still pending after $seconds seconds.");			
	}
}	

If the site you're testing doesn't use jQuery, you'd have to find some equivalent in whatever library you're using. I wouldn't introduce jQuery just for this and I don't think there's any browser-native way of doing this.

Of course, the Right Thing™ would be to have some visual indicator that an AJAX request is in progress and check for the visibility of that indicator. Lesson learned!

Conclusion

Automated testing can certainly improve the reliability of your code, but it does require some work to set up and use. Hopefully, our experiences using Behat with WordPress will help you to get started and overcome some of the pitfalls we encountered.

Automated Testing for WordPress, Part 3: Writing and Running the Tests

In part 2, I discussed installing Behat and setting up the environment. In this post, I'll discuss how the tests actually work.

Test Definitions

A typical Behat workspace looks something like this:

bin/
  behat
  behat.bat
features/
  bootstrap/
    FeatureContext.php
  test1.feature
  test2.feature
  (...more feature test files...)
vendor/
  (...lots of directories for dependencies...)
  autoload.php
behat.yml
composer.json
composer.lock

In Part 2, we discussed the configuration directives defined in behat.yml under Configuring Behat.

The tests themselves are defined in the *.feature files in the features directory using a language called "Gherkin". Here's what a Behat feature test looks like:

@javascript @administrator @reports @sample-school
Feature: Administrator Sample School Report

  Background:
    Given I am logged in as "administrator"
    And I am on the "sample-school" report

  @1
  Scenario: 2015, by district
    When I select "2015" from "School Year"
    And I select "District" from "Aggregate by"
    And I check "1" under "Grade"
    And I check "2" under "Grade"
    And I check "3" under "Grade"
    And I press "Update"
    Then the report data should match the data in the "administrator-sample-school-1.json" file.

  @2
  Scenario: 2015, by school
    When I select "2015" from "School Year"
    And I select "School" from "Aggregate by"
    And I select "Test District 1" from "District"
    And I press "Update"
    Then the report data should match the data in the "administrator-sample-school-2.json" file.

Each file describes a feature of the application to be tested and defines one or more "scenarios". Each scenario has a number of "steps" identified with certain keywords: "Given" establishes the context for the scenario, "When" describes the actions taken, and "Then" asserts the expected results. In our case, we just wanted to compare the current report output to previously captured data in a file, but you can really do anything you want, since as we'll see, you'll be writing the code to implement the tests.

Each of these steps can be extended with an arbitrary number of "And" or "But" steps. Behat doesn't actually distinguish between these keywords. They're just there to provide semantics for human readers. You can also define a "Background" to establish a shared context for all of a feature's scenarios. The background steps will be executed before each scenario.

Features and scenarios can be assigned tags prefixed with the "@" symbol. This allows you to run some subset of your tests with Behat's --tags parameter. Scenarios inherit tags from their feature. The @javascript tag is a special one telling Behat that a feature or scenario requires JavaScript.

You can read more about Gherkin here: http://behat.org/en/latest/user_guide/gherkin.html.

Test Implementation

The code that implements the tests is defined in a "feature context" class. By default, this is called FeatureContext and lives in features/bootstrap/FeatureContext.php. (You can also define custom suite-specific context classes in behat.yml.) Context classes should implement the Behat\Behat\Context\Context interface.

For each step defined in your testing scenarios, Behat looks for a corresponding context class method by matching the step text against a pattern you define in a PHP annotation for the method. PHP annotations, if you're not familiar with them, are specially formatted comment blocks, which are like "DocBlocks" except that they "[influence] the way the application behaves".

When a matching method is found, Behat calls it, passing in any arguments it parsed out of the step text. So, for example, this scenario step:

Given I am on the "sample-school" report

would match this context class method:

/**
 * @Given /^I am on the "([^"]+)" report$/
 */
public function iAmOnTheReport ($report) {
	if (!$this->onReport($report)) {
		$this->visitPath(sprintf($this->getParameter("reportsPath"), urlencode($report)));
	}
}

with the captured sub-pattern match passed as the $report argument. Behat also has its own pattern matching scheme, but regular expressions work too, which I prefer for familiarity and clarity.

The "Given" keyword doesn't have to match, so this method would also match When I am on the "sample-school" report, or "And…" or "But…", and even "Then…", although semantically "Then" is usually used for assertions.

If no match is found, Behat throws an exception and will prompt you to add the missing method. It can even stub out the method and write it into your context class source file for you, if you want.
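For example, for an unmatched step, Behat prints (and can optionally append to your context class) a stub along these lines (the step text here is invented for illustration; note the PendingException, which marks the step as not yet implemented):

```php
// requires: use Behat\Behat\Tester\Exception\PendingException;

/**
 * @When /^I do something not yet defined$/
 */
public function iDoSomethingNotYetDefined () {
	throw new PendingException();
}
```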

! Gotcha: Although "And" and "But" are valid *.feature file step keywords, they are not recognized for matching context class source file method annotations. Only @Given, @When and @Then are recognized as tags for annotation patterns. So for example, @And /^I am on the "([^"]+)" report$/ in a context class file method annotation won't match the text And I am on the "sample-school" report in a feature file. However, @Given /^I am on the "([^"]+)" report$/ (or @When... or @Then...) will match that text. Also note that the tag isn't part of the pattern. It's only there to identify the rest of the line on which it appears as a pattern to be matched against step text.

Tip: By default, arguments are passed in whatever order they're encountered in the step text. However, you can use "named capturing groups" in your regex patterns to match arguments by name.

Here's an example from the Behat Mink Extension's MinkContext class:

/**
 * Checks, that element with specified CSS contains specified text
 * Example: Then I should see "Batman" in the "heroes_list" element
 * Example: And I should see "Batman" in the "heroes_list" element
 *
 * @Then /^(?:|I )should see "(?P<text>(?:[^"]|\\")*)" in the "(?P<element>[^"]*)" element$/
 */
public function assertElementContainsText($element, $text)
{
    $this->assertSession()->elementTextContains('css', $element, $this->fixStepArgument($text));
}

The arguments $element and $text are passed in the reverse order they appear in the pattern. (Note also the use of the non-capturing parens and pipe to effectively make "I " optional.) If the step definition is an assertion, throwing an exception fails the test.
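For instance, our own report-comparison step followed this pattern. Here's a simplified, hypothetical sketch; the helper method and data file layout are invented for illustration:

```php
/**
 * @Then /^the report data should match the data in the "([^"]*)" file\.$/
 */
public function theReportTableDataShouldMatch ($filename) {
	// Compare previously captured JSON data to the current report output
	$expected = json_decode(file_get_contents("data/$filename"), true);
	$actual = $this->extractReportTableData(); // hypothetical helper that scrapes the report table

	if ($actual !== $expected) {
		throw new Exception("Report data does not match $filename.");
	}
}
```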

Tip: Subclass Behat\MinkExtension\Context\MinkContext in your feature context class, and you'll inherit the above method plus lots of other useful assertions.
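A minimal sketch of such a context class:

```php
use Behat\MinkExtension\Context\MinkContext;

class FeatureContext extends MinkContext {
	// Inherits ready-made step definitions such as:
	//   When I go to "/some/page"
	//   When I fill in "field" with "value"
	//   Then I should see "some text"
}
```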

You can read more about writing step definitions here: http://behat.org/en/latest/user_guide/context/definitions.html

Running Tests

Tests are run from the root directory with the command bin/behat (or on Windows, bin\behat).

Note: In Part 2, we suggested specifying bin/ for Composer's bin-dir parameter. If you skipped that part, your bin-dir will be vendor/bin making your Behat command vendor/bin/behat.

If you're using the Selenium driver, you'll have to start the Selenium server process first by running java -jar selenium-server-standalone-<version>.jar from wherever you downloaded Selenium. (I find it easiest to do this in two separate shell windows, but if you're savvy enough you can also send the first process to the background.)

By default, all the tests in features are run, but you can also specify some subset of tests to run with command line parameters. For example, the --tags parameter runs only features or scenarios with the specified tags: bin/behat --tags reports.

The default output looks something like this:

>bin/behat --tags sample-school

@javascript @administrator @reports @sample-school
Feature: Administrator Sample School Report

  Background:                               # features/test1.feature:4
    Given I am logged in as "administrator" # FeatureContext::iAmLoggedInAs()
      Logging in as administrator
    And I am on the "sample-school" report  # FeatureContext::iAmOnTheReport()

  @1
  Scenario: 2015, by district # features/test1.feature:9
    When I select "2015" from "School Year"
          # FeatureContext::selectOption()
    And I select "District" from "Aggregate by"
          # FeatureContext::selectOption()
    And I check "1" under "Grade"
          # FeatureContext::iCheckOptionUnder()
    And I check "2" under "Grade"
          # FeatureContext::iCheckOptionUnder()
    And I check "3" under "Grade"
          # FeatureContext::iCheckOptionUnder()
    And I press "Update"
          # FeatureContext::pressButton()
    Then the report data should match the data in the "administrator-sample-school-1.json" file. 
          # FeatureContext::theReportTableDataShouldMatch()

  @2
  Scenario: 2015, by school  # features/test1.feature:19
    Given I am logged in as "administrator"
          # FeatureContext::iAmLoggedInAs()
      Reusing authentication tokens for administrator
    When I select "2015" from "School Year"
          # FeatureContext::selectOption()
    And I select "School" from "Aggregate by"
          # FeatureContext::selectOption()
    And I select "Test District 1" from "District"
          # FeatureContext::selectOption()   
    And I press "Update"
          # FeatureContext::pressButton()
    Then the report data should match the data in the "administrator-sample-school-2.json" file.
          # FeatureContext::theReportTableDataShouldMatch()

2 scenarios (2 passed)
16 steps (16 passed)
1m8.16s (9.22Mb)

>

It echoes out the contents of each feature file it's executing, showing the file and line number where each scenario is defined and the matching method for each step, and then prints a summary of the results. If your shell supports it, there's color coding for passing and failing tests. Also, any output from your code is printed on the command line.

There are also a number of command line parameters to control the output. You can read documentation of the command line interface here: http://docs.behat.org/en/v2.5/guides/6.cli.html. Note, however, that this documentation is not for the latest version of Behat. (The documentation for the latest version doesn't seem to be as descriptive.) Run bin/behat -h to see the command line options for whatever version you've installed.

Next: Practical Considerations

Automated Testing for WordPress, Part 2: Setting up the Environment

In part 1, I set the context for doing automated “behavior-based” testing of a WordPress application we developed and the available tools we discovered. In this post, I’ll talk about installing those tools (Behat, Selenium) and setting up the environment to start testing.

Installing Behat and its Dependencies

I found it best to install Behat using Composer, PHP's main package management tool. Although there is a stand-alone "phar" ("PHP archive") version, Behat has a lot of dependencies, so using Composer was definitely the way to go. Also, using Composer makes it really easy to add other libraries or drivers you might want to use, or to remove ones you've decided not to use.

Once you've installed Composer (and assuming you've put it on your system path), run the following in the root directory of your project to install Behat, the Mink extension, and the Selenium2 browser driver:

composer require behat/behat behat/mink-extension behat/mink-selenium2-driver

Tip: If you have an existing project using Composer to which you want to add Behat tests, you'll probably want to run the above with the --dev flag, so that Behat and friends will be listed as development dependencies of your project.

Composer will download the specified packages and all of their dependencies into the "vendor" folder under your project root (creating it if it doesn't exist). It will also create a composer.json file for you (or update your existing one) listing the packages you required with minimum version constraints, and create/update a composer.lock file with the exact versions of everything it downloaded. (It may take a while to install. You'll end up installing about 30 packages with thousands of source files under your "vendor" directory once this is all done.)

Composer will also let you know if there are any required extensions that are missing from your PHP environment (e.g. mbstring, curl) or if you need a different version of PHP altogether.

Tip: You can start with the following minimal composer.json in your project root directory to put the executable directory at your project root instead of under "vendor":

{
  "config": {
    "bin-dir":"bin/"
  }
}

That way, you can run Behat with bin/behat instead of vendor/bin/behat.

Setting up Selenium

To set up Selenium:

  • Make sure a Java runtime is installed (Selenium is a Java application).
  • Download the stand-alone Selenium server (selenium-server-standalone-<version>.jar).
  • Download the native driver binary for each browser you want to control (e.g. the Chrome driver for Google Chrome).

You can put the browser driver files in the same directory as Selenium to make them discoverable. (There's also an environment variable you can define for the paths, but it's easiest to put them in the same directory.)

Configuring Behat

Behat's configuration directives are specified using YAML in the behat.yml file in the root directory. A minimal behat.yml file looks something like this:

default:
  extensions:
    Behat\MinkExtension:
      base_url: http://test.example.com
      sessions:
        selenium:
          selenium2: ~
      browser_name: 'chrome'
  suites:
    default:
      contexts:
        - FeatureContext:
            parameters:
              parameter1name: parameter1value
              parameter2name: parameter2value
Here we've set up a single default profile using the Mink extension, for which we've defined the base_url of the site to test and a single named session ("selenium") using the Selenium2 driver. Note that I've specified "chrome" for the browser_name parameter since I couldn't find a working driver for Firefox, Mink's default browser.

Our profile also has a single default test suite with a single context class called FeatureContext for which we've defined some parameters to be passed to its constructor as key/value pairs.
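Assuming Behat's behavior of matching YAML keys under the context entry to constructor argument names, the context class might receive those parameters like this (a sketch; the getParameter() helper is our own convention, not part of Behat):

```php
use Behat\MinkExtension\Context\MinkContext;

class FeatureContext extends MinkContext {
	private $parameters;

	// Behat passes the "parameters" key from behat.yml as the $parameters argument
	public function __construct (array $parameters = array()) {
		$this->parameters = $parameters;
	}

	protected function getParameter ($name) {
		return isset($this->parameters[$name]) ? $this->parameters[$name] : null;
	}
}
```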

You can read more about configuring Behat here, although we'll learn a lot more about how Behat works in the next part of the series.

! Gotcha: The syntax for the behat.yml file has changed somewhat from Behat 2 to Behat 3, the current version, so examples you find on the web for Behat 2 might not work in Behat 3. The version 2 documentation is more detailed than that for the latest version, so you can get an idea of what the various fields mean, even if the syntax is different.

Next: Writing and Running the Tests.

Automated Testing for WordPress, Part 1

We’ve been using WordPress for some time as a general website development platform, but until now haven’t done too much in the way of automated testing. When we decided to re-implement some code we had written to generate reports on one website, we wanted to be certain the new implementation would return the same data as the old implementation. Since the website had quite a few reports with lots of parameters for each one, trying to test manually wasn’t feasible. The tests had to be automated. Also, the report implementation featured UI components using JavaScript and AJAX, so we needed a testing framework that would allow us to verify not only the report data, but also the high-level UI functionality, i.e. to simulate what an actual user would do.

For this task, we discovered Behat, a PHP “behavior-driven development” (BDD) testing framework. Behavior-driven development is similar to test-driven development, except that the tests are of high-level user interactions as opposed to individual units of code. (In our case, we had already developed the application, so we weren’t truly doing test or behavior “driven” development.)

What we found most useful about Behat for our purposes was its ability to automate browser interactions. Behat achieves this using Mink, a PHP browser emulation library. Using Mink, we were able to automate collection of report data using the same GUI interactions as a human user, and then replay those interactions in Behat to verify both the report output and the UI functionality.

Browser Drivers

As we said, Behat emulates browser behavior using Mink through its “Mink Extension”. Mink accomplishes this using various “browser drivers”. Browser drivers can be “headless”, meaning they don’t have a GUI, or can launch an actual browser using a “browser controller”, or can even use on-line browser emulation tools like BrowserStack. The crucial thing is that Mink provides a uniform API for all of them, so the same code can be used with multiple drivers.

There are a number of Mink browser drivers available:

  • Goutte: If you don’t need JavaScript, you can use one of Mink’s headless browser drivers like the “Goutte” driver, which is implemented entirely in PHP so requires no external binaries and saves you the overhead of having to launch a GUI browser.
  • Selenium: If you need JavaScript support, you can use the “Selenium2” driver, which uses Selenium, a Java application, to launch an actual GUI browser, which in turn requires a native binary driver to control each browser you want to run.
  • Headless JavaScript Drivers: While Mink’s “headless” browser drivers generally don’t support JavaScript, there are a couple that do: the “Zombie” driver, which uses the Node.js-based Zombie.js headless browser, and the “PhantomJS” driver, which uses the PhantomJS headless WebKit browser.

Since our reports required JavaScript, we had to choose from among the JavaScript-enabled drivers. I liked the idea of using a headless driver, since I figured it would be faster to run, and could be run in an environment where a GUI browser isn’t available (i.e. on a server). I tried using the Zombie driver, but wasn’t able to get it working on Windows. The Phantom JS driver worked ok, but it didn’t seem to be much faster than Selenium, so in the end I opted for Selenium.

! Gotcha: The latest Windows version of the Selenium Gecko driver for Firefox (0.14.0, as of this writing) seems to be broken. The Google Chrome driver works fine.

Next: Setting up the environment.

The effectiveness of Boston College’s City Connects program

City Connects was recently mentioned in a New York Times article describing its effectiveness. Quoting from the article:

“City Connects, which operates in 79 elementary schools mainly in the Northeast, has erased two-thirds of the achievement gap in math and half the achievement gap in English, compared with the Massachusetts statewide average. Students were substantially less likely to be chronically absent or held back, and the high school dropout rate was cut nearly in half. Other nationwide models, such as Communities in Schools, have succeeded in substantially reducing dropouts and raising graduation rates.

City Connects costs less than $800 per student annually — about 6 percent on top of the typical cost to educate one. An analysis of the program carried out by the Center for Benefit-Cost Studies in Education at Columbia found that it generates a return of at least $3 for every dollar spent. “Providing the program to 100 students over six years would cost society $457,000 but yield $1,385,000 in social benefits” — higher incomes, lower incarceration rates, better health and less reliance on welfare, according to the analysis. If City Connects were a company, Warren Buffett would snatch it up.”

Read the full article here: http://nyti.ms/2aqa4AB

Boston College “City Connects” Fidelity Monitoring System

As our project portfolio shows, we’ve developed many long-term collaborative partnerships with clients that span several years and multiple projects. We gain an in-depth understanding of our clients’ organizations, their key stakeholders, their data/content and systems that allows us to serve as a trusted advisor and partner when new challenges and opportunities arise.

A recent example was with the “City Connects” program at Boston College’s Lynch School of Education. As the program has expanded, the program team recognized a need to monitor “fidelity of implementation,” or the degree to which delivery of the intervention adheres to the program model. City Connects developed a Fidelity Monitoring System in order to compile and report information on fidelity in specific schools and across districts, manually collecting (from surveys and checklists) data and ‘scores’ across a range of indicators that provide evidence of adherence to the City Connects model.

The program team saw an opportunity to streamline the fidelity system and improve efficiency by using the existing performance support system that we had developed (the Student Support Information System – SSIS) for all fidelity system data entry and reporting. We worked closely with the City Connects team to:

  • augment SSIS to allow the collection of fidelity system data which had previously been external to SSIS
  • provide automated scoring against fidelity system metrics
  • enhance reporting at different aggregations (e.g. school, district)
  • provide a system for year to year modification and versioning of the fidelity system model and associated surveys that preserved the ability to make year to year comparisons

We developed intuitive interfaces to improve the workflow both for those who implement the system and for School Site Coordinators and Program Managers who complete the fidelity system surveys and checklists. For example, the survey editing interface allows administrators to add, edit and remove survey questions, and sequence them by dragging and dropping them into the desired order. The editing form adjusts dynamically based on user input. Surveys are presented to respondents in a modal “pop-up” dialog to clearly differentiate survey completion from other site tasks, helping to organize their workflow.

These improvements meant that the City Connects research and evaluation team can easily extend and modify the fidelity system model and surveys year to year, preserving archival data and allowing extended year to year comparisons. Also, incorporating the fidelity system data collection and reporting requirements into the SSIS has greatly simplified the completion process for the survey and checklist respondents.

Together, this improves the team’s understanding of the impact of the program across schools and districts through rigorous and continuous evaluation, directly supporting its mission of improving student outcomes.

Introducing the gnlms plugin: A WordPress shell implementation of the SCORM runtime environment

When we developed the SCORM compliant Express Connect course for Summit Healthcare, Summit also required an environment to host it — managing access, registration and tracking user performance. At the time, available free LMS systems (e.g. Moodle) were stand-alone and included extraneous functionality that was not needed. For our client’s purposes, we needed a SCORM compliant runtime environment (RTE) for the course, and a flexible content management system for the surrounding infrastructure.

WordPress is the #1 CMS in use today, so we decided to create a plugin that provides the needed “LMS functions” as a simple add-on to its existing first-class content management functions.

Coder’s corner:

Toward that end, we developed shell LMS code to interact with the course and save the course’s SCORM data in WordPress’s MySQL database as a serialized JSON string. The nice thing about this approach is that the data is already in a browser-native format: you just retrieve the JSON from the database, parse it, and you’re ready to go. Serialization is easily done client-side with the JSON.stringify() method (or an equivalent jQuery plugin). Also, starting with version 5.2, PHP supports JSON parsing and serializing with its json_decode() and json_encode() functions, so you can easily examine the SCORM data on the server side to do whatever is required there.
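As a minimal sketch of that round trip (the cmi.* element names follow the SCORM data model, but the specific values here are hypothetical), the course’s runtime data can be serialized to a JSON string for storage and parsed back into a native object on retrieval:

```javascript
// Hypothetical SCORM data model values as reported by a course.
// Element names (cmi.*) follow the SCORM runtime data model.
var scormData = {
  "cmi.core.lesson_status": "completed",
  "cmi.core.score.raw": "92",
  "cmi.suspend_data": "page=12;section=3"
};

// Serialize client-side before sending to the server for storage.
var payload = JSON.stringify(scormData);

// Later, after retrieving the string from the database,
// parse it back into a browser-native object.
var restored = JSON.parse(payload);

console.log(restored["cmi.core.lesson_status"]); // "completed"
```

On the server, PHP’s json_decode() applied to the same stored string yields the equivalent structure for reporting or validation.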

With the basic course data saving and retrieving in place, we added code to track user-course interactions (e.g. registering for, starting, accessing and completing a course), and added reports for administration. We also implemented a system to automatically send email notifications of various user/course events to site administrators.

We’ve extracted the essentials into a stand-alone WordPress plugin, and added the capability to upload course files in ZIP archives through the WordPress administrative back end.

The plugin code is available on GitHub: https://github.com/GnaritasInc/gnlms

We’ve found this code useful for a number of projects; however, it is just a shell and does not implement a fully SCORM-compliant RTE. In the next few months, we plan to implement SCORM type checking and a number of other functions that will enable the plugin to pass the SCORM RTE self-test.

Everything You Wanted to Know About Pension Plans But Were Afraid to Ask

Retirement planning and pensions are becoming increasingly important issues for many people and organizations, and so access to comprehensive and accurate pension data is critical for policy makers, analysts, journalists and many others. Gnaritas recently completed a major development project to enhance and expand the “Public Plans Data” (PPD) interactive website and database, which provides an overview of all public sector pension plans through aggregated data and statistics at the national, state and plan levels.

The PPD was created through a collaboration of Boston College’s Center for Retirement Research (CRR), the Center for State and Local Government Excellence (SLGE) and the National Association of State Retirement Administrators (NASRA), and Gnaritas worked to ensure that the goals of the content and data providers, and the data users, were fully addressed by a highly interactive and intuitive approach to the design and functionality of the site and database content.

On the site, users can:

  • Browse and download the full data set, which currently contains plan-level data from 2001 through 2013 for 150 pension plans. This sample covers 90 percent of public pension membership and assets nationwide.
  • Generate and embed popular charts on their own website or blog
  • Search and download a wide range of resources such as issue briefs, working papers and research published by the CRR, SLGE and NASRA
  • Refer to ‘quick facts’ at the national, state and plan level, such as assets, costs, fiscal health and investments
  • Download Comprehensive Annual Financial Reports (CAFRs) and Actuarial Valuations (AVs) for the plans in the PPD sample

The data is used for a variety of purposes, so to maximize the PPD’s flexibility we developed an interactive data browser that allows users to select specific variables and produce customized tables suited to their particular requirements.

We also developed full documentation, covering the wide range of variables and filters available for reviewing the data set, and provided direct data access for programmers via an XML-based API, allowing users to connect directly to the PPD database and receive updates as they are made.

Non-programmers can also make use of real-time data feeds: by simply copying a few lines of code into their blog or website, they can embed visualizations of PPD data, further extending the site’s reach as a comprehensive and current resource.

Our approach is always to fully understand the needs of all user groups and then partner collaboratively with our clients to develop the final product, and the PPD was a great example of how complex, continually changing data can be made not only accessible, but engaging to a wide range of audiences.

Summit Healthcare: “Express Connect” e-learning course

A new case study is posted describing exciting work with Summit Healthcare:

Summit Healthcare: “Express Connect” e-learning course

Interview with Joanna Maunder featured in EyeWorld

An article in EyeWorld magazine features content from an interview with Joanna Maunder — Director of Educational Content here at Gnaritas.

http://www.eyeworld.org/article-glimpse-into-education-program-development