Monday, December 5, 2011

JHMI Internship: Phase 4 - Week 8 (starting 11/14/11)

     This week was dedicated primarily to further work on the new MRI project I described last week. As mentioned previously, the concept I am considering revolves around a guided tour. I got the idea from a template I found through the Articulate community forums, although the template is a bit limited in scope and will require significant additions.

     Anyway, I met with my onsite supervisor early in the week to discuss my ideas and got the concept approved. However, my supervisor also brought to my attention that some of the things I wanted to do may not be possible in Articulate. For example, since the course is broken into five main content areas, I wanted to give users the freedom to navigate the course in any order they choose. Allowing users some sense of control over their learning has been shown to support motivation. Additionally, this would make the course feel more dynamic, since the experience could vary from user to user based on their selections (although all the same content would be covered).
 
     The problem with this is that if users are free to jump from one content area to another in no particular order, there is no simple way to track which ones they've completed, and hence no simple way to ensure they've covered all the content areas by the end of the course. This makes me grow wearier of Articulate and its limitations and start thinking about alternatives that may allow more flexibility, such as Adobe Captivate and Trivantis Lectora. However, as time is somewhat limited, changing the technology solution is not really feasible right now, but it is something I will definitely keep in mind for future projects.
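
     For what it's worth, the tracking logic itself is simple; the limitation is that Articulate doesn't expose it. Here is a minimal sketch of what a more flexible, custom web-based course would need, assuming a generic course runtime (the area names are placeholders, not the actual five MRI content areas):

```typescript
// Minimal sketch of free-order completion tracking, assuming a custom
// web-based course rather than Articulate's built-in linear navigation.
// Area names are placeholders for the five MRI content areas.

const CONTENT_AREAS = ["Area 1", "Area 2", "Area 3", "Area 4", "Area 5"] as const;

type ContentArea = (typeof CONTENT_AREAS)[number];

class CourseTracker {
  private visited = new Set<ContentArea>();

  // Record that the learner finished a content area; order doesn't matter.
  markComplete(area: ContentArea): void {
    this.visited.add(area);
  }

  // The course is complete once every area has been visited,
  // regardless of the path the learner chose.
  isCourseComplete(): boolean {
    return CONTENT_AREAS.every((area) => this.visited.has(area));
  }

  // Areas still outstanding, e.g. to highlight on a menu screen.
  remaining(): ContentArea[] {
    return CONTENT_AREAS.filter((area) => !this.visited.has(area));
  }
}
```

     A menu screen could then call isCourseComplete() to unlock the course exit, which is essentially the behavior I couldn't get out of Articulate.
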
     Following our meeting, my supervisor asked me to start working on a design document for the course. As I began work on the document and accompanying storyboard, I noticed that the slides we had been given by the MRI group needed serious reorganization, so I first spent a good amount of time putting all the subject matter in order around the five main content areas. I then developed a storyboard using a flowchart in Microsoft Visio (similar to the Language of Caring project). I particularly like using flowcharts for storyboarding because they're fairly easy to create and easy for others (e.g., SMEs, my supervisor) to understand.

     Besides the MRI project, this week was also notable because my supervisor sponsored a guest lecture as part of the ISD Club she coordinates for the company. The ISD Club is basically a group of instructional designers working throughout the different entities who get together every so often (monthly or bi-monthly) to sit in on a guest lecture and discuss issues of interest. For this particular meeting, the guest speaker was Joe "Captivate" Ganci, an Adobe Certified Expert in Captivate (a course authoring tool) and a member of the Adobe eLearning Advisory Board (learn more about Joe here: http://www.joeganci.com/).
 
     The title of his presentation was "The Top 10 Blunders in Developing e-Learning". Joe made several good points, but a couple resonated with me in particular. The first was "Tip #4: Not estimating the work correctly". This tip deals with the fact that scheduling is a common problem in ID projects: work is often underestimated, which can lead to issues later in the ISD process. To do the best job estimating, Joe advised being mindful of several factors, including quality assurance, money vs. time, and choosing the proper team members (e.g., instructional and graphic designers, field testers, etc.). This point caught my interest because, as I mentioned last week, I am beginning to feel a bit of a time crunch: my internship is coming to a close and I still haven't completed all my tasks. Granted, this is my first such project and I am now simultaneously trying to work on a second one, so it has been a bit of a challenge. However, as with all things, I think it's something that will become easier and more familiar with time.
 
     The second point that struck a chord with me was "Tip #10: Not evaluating the results". Joe stressed that in a lot of organizations it is very typical for training to be released and never evaluated for effectiveness (usually because of time/resource constraints). He argued that this is a mistake and that some effort should go into evaluation in order to measure results and see whether and how the training can be improved (or even scrapped altogether). As a framework for evaluation, Joe discussed Kirkpatrick's Evaluation Model (KEM). The model consists of four levels: Reaction (of the student to the training), Learning (what the student learned, typically measured by an assessment), Transfer (of skills into the student's work-related tasks), and Results (for the organization, due to worker performance following training).
 
     I think KEM is a pretty straightforward model to follow, and it can provide insight into areas that trainers may not otherwise think about. I am also particularly interested in this issue because, as previously mentioned, I would like to do some level of evaluation for my project. Using KEM, I think I will try to devise a survey for the SME (and any other interested parties) to fill out. With the limited time I have left, whether I will actually be able to make any changes to my modules based on the feedback is another story.
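
     To make this concrete, here is a rough sketch of the kind of survey structure I have in mind, keyed to the four KEM levels. The question wording and the helper function are my own placeholders, not anything from Joe's talk:

```typescript
// Hypothetical sketch: an SME feedback survey keyed to the four
// Kirkpatrick levels. All question wording is placeholder text.

type KirkpatrickLevel = "Reaction" | "Learning" | "Transfer" | "Results";

interface SurveyQuestion {
  level: KirkpatrickLevel;
  prompt: string;
  rating?: number; // 1-5 Likert response, filled in by the respondent
}

const smeSurvey: SurveyQuestion[] = [
  { level: "Reaction", prompt: "The module held my attention and was easy to navigate." },
  { level: "Learning", prompt: "The assessments reflect the stated learning objectives." },
  { level: "Transfer", prompt: "Staff could apply these procedures in day-to-day work." },
  { level: "Results", prompt: "The training is likely to improve performance over time." },
];

// Average ratings per level to see where the module needs the most work.
function averageByLevel(responses: SurveyQuestion[]): Map<KirkpatrickLevel, number> {
  const sums = new Map<KirkpatrickLevel, { total: number; count: number }>();
  for (const q of responses) {
    if (q.rating === undefined) continue;
    const entry = sums.get(q.level) ?? { total: 0, count: 0 };
    entry.total += q.rating;
    entry.count += 1;
    sums.set(q.level, entry);
  }
  const averages = new Map<KirkpatrickLevel, number>();
  sums.forEach(({ total, count }, level) => averages.set(level, total / count));
  return averages;
}
```

     Even a handful of questions like these would give me level-by-level feedback rather than one undifferentiated impression.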
