Rapid ID and Rapid Dev? Yes! But…

Thanks (yet again) to Twitter, I came across a post by Karen from Langevin Learning Services (@karencar_ID) who shared an offering from the folks at Bottom Line Performance in Indianapolis (@BLPIndy). The substance of this post was whether or not Rapid Design and Rapid Development were possible. BLP’s inspiration was another entry from the widely read Rapid e-Learning Blog.

My immediate reaction was, “Yes, but, there are some caveats.”

(For the sake of disclosure: from early 2009 to mid-2011, I served as the platform and process whiz for the SH!FT rapid e-learning ecosystem, so I have some first-hand knowledge of the journey from ‘traditional’ learning design to facilitating rapid output. I also want to note this is not a critique of the article, just some observations based on my own experiences in the ‘Rapid’ world, offered as food for thought.)

Tools like Articulate and SH!FT are really labour-saving devices for learning asset development. Where complex programs like Flash require specialized expertise to generate finished output, rapid development tools take a lot of the “guesswork” out of some of the repeatable efforts, like simple text and graphic/media positioning.

The other element of the process is design. Those coming from the instructor-led world are familiar with ADDIE, or may have some expertise in other instructional design models. The challenge with these linear models is their waterfall/gated method for moving from one phase to another. BLP also hits the nail squarely on the head with this observation:

Review cycles. If we can go from design to alpha with no script reviews, then we can be very rapid. If the client wants three review cycles on scripts/storyboards before we move into programming, we are in trouble in terms of rapid.

While everything said by the Rapid e-Learning Blog and the BLP entry is correct, I believe there are a couple of unspoken assumptions that must hold for rapid development to work the way it’s laid out above.

Assumption 1: Content and Objectives

If you’re passing on information about a new corporate policy, or just aiming for awareness, content can be designed and developed rapidly. If you’re building simple assets that share or showcase SME expertise, they can also be done relatively quickly. If you start digging deeper into Bloom’s Taxonomy for your objectives (e.g. beyond “describe”, “explain”, “identify”, etc.), then things get more challenging. If the objective of the content is to equip the participant with decision-making skills, or to provide more exploratory learning, then the design process (and subsequent development) becomes much harder to accomplish in a “rapid” fashion with most of the current “rapid development” tools.

Assumption 2: Interactivity

Over the past couple of years, I spent a lot of my time in IMI Level 1 and Level 2 content design, particularly during my time with the SH!FT platform. When the learner’s control over their environment is reasonably low, the design becomes simpler because there’s almost no consideration for branching activities, conditions, loop points, and detailed feedback. Even things like software simulations and interactions become incredibly simple to produce: as long as a solid workflow exists and the various tasks are broken down clearly, the designer can manage the creation of the simulation in relatively short order.

Assumption 3: Concurrent Activity (e.g. Design with Development)

While frameworks like ADDIE have some small elements of concurrent activity (e.g. two designers could be working on stuff at the same time during the design phase), each phase is really distinct and most folks in the L&D community usually prefer (not without reason) to get things correct at one stage before moving on to the next stage. In the days of instructor-led training, this worked very well. However, with the advent of multimedia development as part of the technology-based training, the variables on the development side became significantly more complicated than the well-established practices of building manuals, reference guides, and the like.

One of the culprits of this new wrinkle was the use of the paper-based storyboard. There exists a chasm between the medium used for detailed planning & scripting and the actual output. With a linear design process, a team of designers and reviewers could spend months hammering out a storyboard and then find – once the design of the user interface and content began – that the output didn’t match their individual visions. This chasm was often manifested in a collective “Meh” as the reaction on first sight. By moving to models like Rapid Prototyping (e.g. Tripp & Bichelmeyer) or Successive Approximation, design and development can proceed concurrently with incremental changes/edits addressed along the way. The result is that much of the time spent in design is managed more effectively and the solution evolves in lock-step with the design.

Assumption 4: Stakeholder Culture

This is likely the biggest variable in any design/development effort. If we can assume that all of the key stakeholders are engaged in the process and understand their roles, then the review process can become more manageable. The key to success is getting buy-in from reviewers (particularly sign-off authorities) to commit to the time required for the reviews. Otherwise, as BLP correctly notes (and as I have experienced in painful fashion), this is where things can get very bogged down, essentially negating all the time saved in the rapid design & development process.

A solid project management approach can mitigate this risk to the project, but people are human…so it often remains ‘the great unknown’ in design & development processes.

To sum up, there are tools and processes out there to help make “rapid” a real possibility, but in a number of respects, we’re still playing “catch-up” on a few things if we want to extend those design & development gains beyond the simplistic Level 1 & 2 offerings. There’s also a requirement to re-think how we approach and manage projects as a whole and communicate this approach to stakeholders so they know what’s at stake, but also what is to be gained by following the “rapid” path. The folks at Bottom Line Performance raise some excellent points and I agree that Rapid Design and Rapid Development are both possible. However, the tools are only one piece of the puzzle; the underlying processes and approaches deserve equal consideration.


About Mark L. Sheppard

learning geek, lifelong learner, terminally curious, recovering blogger and Ed Tech explorer.

Posted on September 6, 2012, in commentary, Instructional Design, opinion, PLN, rapid development, reblog, Twitter. 4 Comments.

  1. Great article and follow up observations from Karen’s post. Assumption #4 is probably the beast we have yet to tame. A well-thought out project plan, timeline, and the initial commitment and excitement from all stakeholders tends to quickly diminish once the “work” begins.

    As you pointed out, things tend to bog down from there. Everyone has good intentions, but experienced e-learning designers and developers who know what needs to be done often bump up against an organization’s culture, attitudes, and stakeholders who are balancing more than one project.

    I may use Rapid Authoring Tools, but I have not developed a Rapid course yet mostly due to Assumption #4.

    Thanks Mark!

    • Many thanks for the commentary and the kind words, Kevin. I agree that #4 is the proverbial ‘elephant in the room’ when it comes to instructional design and about as easy to influence at times. Sharon’s recent Twitter comment about “Awareness” content might be the key to success. For the small, discrete pieces of learning that everyone needs access to, rapid tools and processes may be the way to go and you can probably get away with a high-level plan for implementation (particularly if you’re re-purposing existing flat content). It may also be the low-cost/no-cost method for rapidly populating a corporate Learning Library with “all the stuff employees should know” as well as being a good hook for On-Boarding activities.

  2. There are a couple of critical considerations folks don’t tend to think about in solution alignment, design and development.

    Concern 1: Release dosage / scope. It’s really difficult to rapidly develop a gargantuan treatment. It’s exponentially more difficult the larger the scope of effort. This is concern number one I’d have with efforts flagged as “rapid”. Mediating expectations with smaller output doses helps to make things truly rapid. Smaller input considerations should result in faster outputs. To me, this is the largest challenge to going rapid. Large scope efforts make this type of approach difficult or impossible. Bite off small chunks. Break large chunks down to smaller chunks. It works for project management. It works for project execution.

    Concern 2: Energy management. The energy of the team and the stakeholders changes relative to the amount of time that has passed since the beginning of the effort. Design / development teams can only control their own production. Schedules drag on when 1) feedback isn’t timely and / or 2) feedback makes outputs into a constantly moving target. These connections and relationships take energy and effort. If nobody feels like driving forward at full speed, there’s a serious problem somewhere in the works.

    Concurrent development, small release doses, and mindful energy management aren’t just useful for time-constrained efforts; they could also be immensely useful for larger ones. Likewise for successive approximation and prototyping.

    Rapid tools seem to color the conversation around rapid outputs. I get that “removing the need for a programmer” is a value proposition, but I think the exclusive consideration that a tool is the panacea that enables rapid outputs is wrong-headed.

    Small dosing, managing energy, and making sure all of the participants in the process are in the same basket until they earn their way off of the balloon speeds up the process and reduces risk. We should be doing things this way as often as possible not just because it might end up being faster. It’s probably going to give you a better product in the end.

    • Steve:

      Many thanks for your thoughtful and detailed comment; it’s really a blog post unto itself. The size/scope and energy management pieces are equally critical and something we should all consider. Your parting comment on the tools also resonates with me. When I was teaching the SH!FT platform, I spent a fair amount of time orienting users to the importance of a new model as having equal (or greater) importance than the tool itself. Great tools with lousy processes is definitely a recipe for frustration.
