Monday, December 5, 2011

JHMI Internship: Looking Forward (to Phase 5)

     As this is my final journal entry for this internship, which officially ends in a couple of weeks, I just wanted to give a brief overview of the items I have left to complete during Phase 5.  Probably the most outstanding item I still have a chance of getting done is the completion of the course evaluation survey by the SME.  If time permits, I also hope to revise the modules and incorporate any requested changes into the final technology product I will submit (a major deliverable for this internship).
 
     Otherwise, as mentioned previously, the other remaining items are probably all out of reach within the timeframe left.  This primarily involves getting all the Language of Caring modules deployed onto the LMS, which cannot happen until all the required information (i.e., the survey link and point-of-contact information) has been defined.  Another feature I unfortunately didn’t have time to research was the creation of supporting materials through mobile learning.
 
     That being said, besides the final technology product I need to submit, I am looking forward to the final reflection paper, which is the other major deliverable for this internship.  In the paper, I hope to review the Language of Caring project in further detail while reflecting on what I learned over the course of this internship experience in general.
 
     As an additional note, although the internship will officially end in a couple weeks, I will continue working in order to see the MRI training through to completion.  Although the Language of Caring modules provided me with good beginner experience on an ISD project, I think the experience of designing and developing a completely online course such as the MRI Safety training will be invaluable.  Not to mention, it will give me another project to add to my growing portfolio.

JHMI Internship: Phase 4 - Week 10 (starting 11/28/11)

     The first highlight of this week was attending the training review meeting with the MRI client/SME.  This was special for me because my onsite supervisor let me present my solution to the group.  Using my design document as a guide, I first explained how I reorganized the slides and broke the course up into 5 main areas.  I then went on to describe the concept of the guided tour and how the objectives and assessments were based on the content areas.  The client/SME responded well to my proposal and was confident that the training solution would not only meet, but exceed their expectations.  A tentative date for a functional prototype was set for January 10, 2012.
 
     Needless to say, this favorable feedback was a real confidence booster for me.  I now plan to go through each of the content areas one by one and develop the necessary slides/interactions.  One thing that may present a challenge, though, is that I proposed including some animations.  I had found several videos online depicting the animations I was interested in, but it appears that obtaining permission to include them may be tricky.  Therefore, I need to find a way to recreate the animations using the images available to me.  I don’t feel that the end product will look as professional, but I will do my best.  This actually raises a point I’ve come to learn during this internship: in addition to having knowledge of ISD processes/models, having some graphic design experience is extremely helpful in the field as well.
     
     The second highlight for me this week was finally being able to deploy the first module of the Language of Caring course onto the LMS.  Like the publishing process, the deployment process was pretty straightforward: it required checking a number of different settings, adding the necessary descriptions, and uploading the packaged course at the end.  As my supervisor walked me through the process, we only touched on a small portion of the capabilities available in the LMS, so I’m hoping future iterations will give me exposure to some more advanced features.  This is because I’ve noticed from past employment searches that LMS experience is a highly sought-after skill.  Furthermore, my supervisor explained that throughout her career she’s worked with several LMSs, and that they’re all similar enough that by learning one, you’ll be able to pick up others with ease.
 
     Once I had the module successfully deployed, I proceeded to develop a course evaluation survey.  While searching for examples on the internet, I came across a really handy tool called TrainingCheck.com, which is designed specifically for generating training evaluations.  What’s also nice about this tool is that it provides a huge resource bank of questions to choose from, all based on Kirkpatrick’s Evaluation Model (KEM), which I got some exposure to a couple weeks back during the ISD Club presentation.  As mentioned before, KEM consists of 4 levels: Reaction, Learning, Behavior (or Transfer), and Results.

     The questions I decided to focus on dealt with the first (Reaction) and third (Behavior/Transfer) levels.  Since the second level (Learning) is measured through assessment results and the modules don’t include any assessments, I decided not to include any related questions.  Additionally, the fourth level deals with how employee performance following the training affects the business as a whole (e.g. return on investment, or ROI).  As the training module has yet to be implemented, trying to predict how it may affect the business seemed a bit beyond scope.  I ended up drafting a 15-question survey (10 level 1 and 5 level 3 questions) based on 5-point Likert scale ratings, as well as a couple of open-ended questions for additional feedback.  Once I have the survey reviewed by my supervisor, I will forward it to the SME to complete.
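     Just to pin down the structure for myself, here is a rough sketch of the survey as data (in TypeScript).  This is only an illustration: the question text is invented, and only the level/scale breakdown reflects my actual draft.

// Hypothetical sketch of the evaluation survey structure; question text
// is invented, only the counts and levels mirror my draft.
type KirkpatrickLevel = 1 | 2 | 3 | 4;  // Reaction, Learning, Behavior, Results

interface SurveyQuestion {
  level: KirkpatrickLevel;
  kind: "likert5" | "openEnded";  // 5-point Likert rating or free text
  text: string;
}

const survey: SurveyQuestion[] = [
  // 10 level-1 (Reaction) Likert items, e.g.:
  { level: 1, kind: "likert5", text: "The module held my attention." },
  // 5 level-3 (Behavior/Transfer) Likert items, e.g.:
  { level: 3, kind: "likert5", text: "I expect to use these skills in my daily work." },
  // ...plus a couple of open-ended items for additional feedback:
  { level: 1, kind: "openEnded", text: "What would you improve about the module?" },
];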

     Just as Joe mentioned in his presentation a couple weeks back, completing this evaluation was really beneficial because it allowed me to think about various issues (especially related to the user experience) that never crossed my mind during the design phase.  Because the modules I developed are only supplements to f2f training (and not full-fledged courses), some of these issues may not be so relevant now.  However, they are definitely things I will consider in the future.

JHMI Internship: Phase 4 - Week 9 (starting 11/21/11)

     As this was a shortened week due to the Thanksgiving holiday, time was of the essence.  My first goal was to try to get all my Language of Caring modules published and deployed onto the LMS.  However, after discussing it with my onsite supervisor, it didn’t make sense to go through the process of publishing and deploying all 9 modules because not all the required information was available at the time.  Namely, the survey link (for employees to fill out upon completion of each module) and the point-of-contact information have yet to be defined.  Instead, my supervisor suggested that I publish/deploy only the first module for the SME to review and give their feedback.  Since all 9 modules follow the same format, any requested changes can easily be duplicated across the remaining modules.
 
     The publishing process was pretty straightforward, as Articulate automates it based on the settings one chooses.  The setting of utmost importance during the process is the SCORM (Shareable Content Object Reference Model) setting.  SCORM is a technical specification for packaging and deploying eLearning courses.  It is the SCORM wrapper that makes courses universal and able to run in any LMS that is SCORM compliant.  There are currently three versions of SCORM: 1.1 (the first and most buggy), 1.2 (the most widely adopted, with numerous 1.1 fixes) and 2004 (aka 1.3, the newest but not yet widely adopted).  The company LMS works best with SCORM 1.2.
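     To get my head around what SCORM compliance actually means at run time, here is a minimal sketch (in TypeScript, and not taken from the actual Articulate output) of the handshake a SCORM 1.2 course performs with an LMS.  The LMS exposes a JavaScript object named API in a parent window, and the course locates it and calls the standard LMS* functions; the findAPI helper is my own illustration of the conventional frame-walking lookup.

// Minimal sketch of the SCORM 1.2 run-time handshake (illustrative only).
interface Scorm12API {
  LMSInitialize(arg: ""): string;              // returns "true" or "false"
  LMSGetValue(element: string): string;        // read a cmi data-model element
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: ""): string;                  // persist pending data
  LMSFinish(arg: ""): string;                  // end the session
}

// SCORM 1.2 convention: walk up the frame hierarchy to find the
// LMS-provided "API" object.
function findAPI(win: Window): Scorm12API | null {
  let w: Window = win;
  for (let i = 0; i < 10 && !(w as any).API; i++) {
    if (w.parent === w) break;
    w = w.parent;
  }
  return ((w as any).API as Scorm12API) ?? null;
}

const api = findAPI(window);
if (api && api.LMSInitialize("") === "true") {
  // Report learner status through the cmi.core data model.
  api.LMSSetValue("cmi.core.lesson_status", "completed");
  api.LMSCommit("");
  api.LMSFinish("");
}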
 
     This experience was beneficial because I got a better understanding of SCORM (a term I had heard many times in the past but never really understood) and of the publishing process in general, so I will be able to do it on my own in the future.  Unfortunately, there wasn’t enough time left to deploy the course onto the LMS, so that will have to wait until next week.

     In addition to going through the publishing process, I also had to complete the first draft of the MRI design document I had been working on, in preparation for a meeting next week with the client/SME.  The remaining sections of the document dealt with defining the learning objectives (both terminal and enabling) and the matching assessments.  As the course will have 5 main content areas, I decided to have an enabling objective for each one, with an assessment to accompany it.
 
     I’m a bit excited about this project because not only will developing such a course concept (i.e. a guided tour) be a new learning experience for me, but so will designing the matching assessments.  A couple of the assessments I have planned include things like sorting images (e.g. deciding whether they pose an MRI danger or not) and branching scenarios (e.g. if patient X has such and such symptoms, what would be the best course of action to take?).  I hope the client/SME will approve everything.
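     To think through the branching idea, I sketched how a scenario could be represented as data (again in TypeScript).  Everything here is hypothetical: the node names, choices, and feedback are invented placeholders, not actual MRI content.

// Hypothetical branching-scenario model; all content is invented.
interface ScenarioNode {
  id: string;
  prompt: string;                               // situation shown to the learner
  choices: { label: string; next: string }[];   // each choice branches to a node
}

const scenario: Record<string, ScenarioNode> = {
  start: {
    id: "start",
    prompt: "Patient X reports symptom Y before the scan. What do you do?",
    choices: [
      { label: "Proceed with the scan", next: "unsafe" },
      { label: "Escalate to the technologist", next: "safe" },
    ],
  },
  unsafe: { id: "unsafe", prompt: "Feedback: this choice is unsafe because...", choices: [] },
  safe: { id: "safe", prompt: "Feedback: correct, escalation is the safest option.", choices: [] },
};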

JHMI Internship: Phase 4 - Week 8 (starting 11/14/11)

     This week was dedicated primarily to working further on the new MRI project I described last week.  As mentioned previously, the concept I am considering revolves around a guided tour.  I got the idea from a template I found through the Articulate community forums, although it is a bit limited in scope and will require significant additions. 

     Anyway, I met with my onsite supervisor early in the week to discuss my ideas and got the concept approved.  However, my supervisor also brought to my attention that some of the things I wanted to do may not be possible in Articulate.  For example, as the course is broken down into 5 main content areas, I wanted to give users the freedom to navigate the course in any order they choose.  Allowing users some sense of control over their learning has been shown to help with motivation.  Additionally, this would give the course the appearance of being more dynamic, as the experience could vary from user to user based on their selections (although all the same content would be covered).
 
     The problem with this is that if users are given the freedom to jump from one content area to another in no particular order, there is no simple way to track which ones they’ve completed.  Hence, ensuring they’ve completed all the content areas by the end of the course is difficult.  This makes me grow warier of Articulate and its limitations, and start thinking about alternatives which may allow more flexibility, such as Adobe Captivate and Trivantis Lectora.  However, as time is somewhat limited, changing the technology solution is not really feasible right now, but it is something I will definitely keep in mind for future projects.
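     One workaround I’ve been mulling over (not something we’ve implemented, and I haven’t verified how it would fit with Articulate’s published output) is to have the course itself record which areas a learner has visited in SCORM 1.2’s cmi.suspend_data field, and only mark the lesson complete once all 5 have been seen.  A rough sketch, reusing the Scorm12API interface from the handshake sketch above; the area names are placeholders:

// Hypothetical free-order completion tracking via SCORM 1.2 suspend_data.
const AREAS = ["area-1", "area-2", "area-3", "area-4", "area-5"];  // placeholders

// suspend_data persists arbitrary course state (up to 4096 chars in SCORM 1.2).
function loadVisited(api: Scorm12API): Set<string> {
  const raw = api.LMSGetValue("cmi.suspend_data");
  return new Set(raw ? raw.split(",") : []);
}

function markVisited(api: Scorm12API, visited: Set<string>, area: string): void {
  visited.add(area);
  api.LMSSetValue("cmi.suspend_data", Array.from(visited).join(","));
  if (AREAS.every(a => visited.has(a))) {
    // All 5 content areas seen, regardless of the order chosen.
    api.LMSSetValue("cmi.core.lesson_status", "completed");
  }
  api.LMSCommit("");
}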
     Following our meeting, my supervisor asked me to start working on a design document for the course.  As I began to work on the document and accompanying storyboard, I noticed that the slides we had been given by the MRI group were in need of serious reorganization, so I spent a good amount of time on that first to get all the subject matter in order based on the 5 main content areas.  I then proceeded to develop a storyboard using a flowchart in Microsoft Visio (similar to the Language of Caring project).  I particularly like using flowcharts for storyboarding because they’re fairly easy to create and easy for others (e.g. SMEs, my supervisor) to understand.

     Besides the MRI project, this week was also special because my supervisor was sponsoring a special lecture as part of the ISD Club she coordinates for the company.  The ISD Club is basically a group of all the instructional designers working throughout the different entities who get together every so often (monthly or bi-monthly) to sit in on a guest lecture and discuss issues of interest.  For this particular meeting, the guest speaker was Joe “Captivate” Ganci, a Certified Expert in Adobe Captivate (a course authoring tool) and a member of the Adobe eLearning Advisory Board (learn more about Joe here: http://www.joeganci.com/).
 
     The title of his presentation was "The Top 10 Blunders in Developing e-Learning".  Joe made several good points, but a couple in particular resonated with me.  The first of these was “Tip #4: Not estimating the work correctly”.  This tip deals with the fact that scheduling is a common problem during an ID project.  Joe discussed how work is often underestimated, which can lead to issues later on in the ISD process.  To do the best job estimating, Joe advised taking several factors into account, including quality assurance, money vs. time trade-offs, and choosing the proper team members (e.g. instructional and graphic designers, field testers, etc.).  This point caught my interest because, as I mentioned last week, I am beginning to feel a bit of a time crunch as my internship comes to a close and I still haven’t completed all my tasks.  Granted, this is my first such project and I am now simultaneously trying to work on a second one, so it has been a bit of a challenge.  However, as with all things, I think it’s something that will become easier and more familiar with time.
 
     The second point that struck a chord with me was “Tip #10: Not evaluating the results”.  Joe stressed that in a lot of organizations it’s very typical for training to be released and never evaluated for effectiveness (usually because of time/resource constraints).  However, Joe called this a mistake: some effort should be made to evaluate and measure results, to see whether and how the training can be improved (or even scrapped altogether).  As a framework for evaluation, Joe talked about Kirkpatrick’s Evaluation Model (KEM).  The model consists of 4 levels: Reaction (of the student to the training), Learning (by the student, measured through assessment), Transfer (of skills by the student into work-related tasks), and Results (for the company due to worker performance following training).
 
     I think KEM is a pretty straightforward model to follow and can help provide insight into various areas that trainers may not think about.  I am also particularly interested in this issue because, as previously mentioned, I would like to do some level of evaluation for my own project.  Using KEM, I think I will try to devise a survey for the SME (and any other interested parties) to fill out.  With the limited time I have left, whether I will actually be able to make any changes to my modules based on the feedback is another story.