Magazine article Talent Development

New Learning Analytics for a New Workplace: Learning and Development Organizations Historically Have Borrowed Models for Measuring Learning from an Increasingly Archaic Education System. The Learning Profession Today Is No Different: It Continues to Focus on Metrics That Provide a Binary Assessment of Learning in the Form of Pass-Fail, Complete-Incomplete, and Started-In Progress

Article excerpt

Today our analytics are becoming irrelevant and misleading as learning becomes more fluid. The traditional "push model" derived from a regulatory, compliance-driven industry is giving way to a learner-centric "pulling world" in which mere training completion has little meaning. We need to rethink learning analytics with value, rather than learning itself, as the key benchmark. Our analytics must be aligned with the business's metrics, and we must demonstrate value through the synthesis of a variety of business systems.

What do our analytics aspire to measure?

We typically gather three types of measurements in today's learning environment: learning, satisfaction, and impact. The tools used to obtain the first of these reduce to some form of binary assessment of student performance on tests designed against a set of learning outcomes. The resulting analytics show how individuals and groups of learners have scored. Other metrics include the amount of time spent on a course, the number of attempts taken on a test, the kinds of modules accessed, and a host of other peripheral data that inform the one all-important statistic: having learned versus not having learned.

The second type of data learning professionals tend to collect is a measurement of learner satisfaction, typically obtained via a smile sheet. Often this tool measures the quality of course design more than whether the course produced the desired outcomes. It is also the basis for our judgment about whether a course has been valuable to learners.

Once we have our base metrics of student performance, we then aspire to measure whether a specific initiative had the desired result on the business for which the program was designed (as suggested by the upper levels of the Kirkpatrick model). It is fair to say that most organizations do not even try to create the tools necessary for this final type of measurement, arguing correctly that far too many factors can affect a business, many of which are too difficult to isolate for objective measurement. Even so, "impact on the business" may well be the best indicator that what we wanted from the training has actually taken place.

So what's broken?

In a blog post written last year titled "Fundamental Design of Learning Activities," Aaron Silvers provides a vision for learning activities based on the notion of experiential design. At the heart of his post is the idea that a learning activity doesn't create a universal experience for everybody, nor can a designer predict how people will experience the design. Two employees' experiences of the same learning activity may be different, and so the resulting learning is based on the individual.

Consider the notion that learning never happens in the moment of the experience itself; instead, it only happens after the experience during an "aha moment." We certainly never experience (see, hear, touch, taste, or feel) the learning itself, and we don't feel the change of learning. We simply find that at some point after our experience, we have changed.

Learning 2.0 practitioners have been arguing for some time that the metrics previously used for formal learning are insufficient for capturing any data from informal initiatives such as online chats. Some would argue (and I used to be one) that there is no relevant data to suggest that informal learning has any effect on a business, and they would be right. That conclusion, however, means only that the instruments we use to measure how training affects an organization are insufficient, not that there is no value. A learning culture that thrives on the fluidity of content, in which learning is stimulated by a series of shared experiences and "learning 2.0" flourishes, requires different criteria for success.

Additionally, in today's corporate learning landscape, Google and its competitors have forever changed our expectations of how to acquire knowledge and skills. …
