Online Synchronous vs. Asynchronous Software Training Through the Behavioral Modeling Approach
1938). The BM approach was first applied in the training of interpersonal communication and management skills (Decker & Nathan, 1985). Gist, Schwoerer and Rosen (1989) further applied the training to the context of information technology.
BM may be readily employed in face-to-face instruction, but it cannot be easily replicated in online asynchronous instruction, which lacks the interactive immediacy needed for optimally effective instructor demonstration and correction. The limited richness of the media used in online synchronous instruction is another constraint, and such instruction may have less reinforcing effect on learning outcomes than F2F instruction. For example, in a live training class the instructor can demonstrate a software process and immediately ask the students to repeat the activity under the instructor's close supervision. In an online asynchronous situation, where there is no live instructor, the demonstration loses the benefit of that immediate feedback. By the same token, in an online synchronous situation, bandwidth constraints and compromised reciprocity may undermine the reinforcing effect of the demonstration. In both online environments, this effect can be further weakened by the absence of "learning by doing," another key element of F2F BM training (McGehee & Tullar, 1978). There is therefore a strong possibility that the BM approach cannot be fully replicated in either the online synchronous or asynchronous situation and will not be as effective a method in online training as in the traditional environment.
Knowledge Transfer
Knowledge transfer is the application of acquired skills and knowledge in different situations. Unless this transfer occurs, learning has little value. The applied situations may be similar or novel relative to the learning situation. Depending on the situation, knowledge transfer can take place in different forms. In general, there are four types of knowledge transfer.
Positive Transfer vs. Negative Transfer
Positive transfer of learning means that learning in one situation stimulates and helps learning in another situation. Negative transfer of learning hinders the application of learning in one situation to other situations. A positive learning experience can be enhanced via analogy, informed instruction (Paris, Cross & Lipson, 1984), tutorials (Morris, Shaw & Perney, 1990) and so forth. Learning effectiveness can be improved by triggering positive transfer and mitigating negative learning experiences.
Near Transfer vs. Far Transfer
Salomon and Perkins (1988) argued that transfer of learning can occur to differing degrees. The effectiveness of near-transfer learning depends on the learner's ability to solve problems similar to those encountered in the learning context. For instance, learning how to add two-digit numbers allows learners to add three-digit numbers. Near-transfer learning occurs between two similar situations and at a lower level; this level of learning is therefore more easily acquired and applied. In contrast, applying acquired skills and knowledge across two dissimilar and sometimes novel situations is much harder to achieve. For instance, a table tennis player can apply some skills of table tennis to playing tennis, but although the two sports look similar on the surface, the techniques for controlling ping-pong balls and tennis balls are very different. Such transfer is much harder to acquire and retain, and is therefore defined as far-transfer learning. Near transfer and far transfer of knowledge appear to be the most widely used measures of learning outcomes in the field of information technology, since learners must utilize the knowledge learned in a computing environment.
Specific Transfer and General Transfer
Depending on learning content, there are two different types of learning transfer: specific transfer and general transfer (Bruner, 1996). The former refers to the extension and association of habits and skills. The latter refers to the transfer of principles and attitudes that can be used to deepen the understanding of basic concepts.
Lateral Transfer and Vertical Transfer
Gagne (1992) asserted that the transfer of learning includes lateral and vertical transfer. Lateral transfer applies one domain of knowledge to another domain; it does not follow step-by-step instruction and is considered provocative learning. Vertical transfer means that a higher level of learning is created by integrating acquired skills and experiences with new situations. Vertical transfer of learning is analytical and sequential.
Other Knowledge Transfer Theories
Theories related to knowledge transfer are not limited to the ones mentioned above. For instance, the theory of identical elements asserts that the more identical elements different learning situations contain, the more efficient the transfer of learning (Thorndike, 1949). Baldwin and Ford (1988) proposed a general training theory that classifies the factors affecting transfer of training into three categories: (1) training inputs, (2) training outputs and (3) conditions of transfer. Situated learning theory argues that individuals are affected by the learning environment when trying to solve practical problems; the interaction between learners and the environment is therefore an important factor to take into account when measuring the transfer of learning. Finally, the theory of formal discipline argues that knowledge transfer skills can be acquired by training the learner's mental faculties, such as thinking, judgment, classification, imagination and creation.
The objective of this study was to investigate the impact of online and offline learning environments on the transfer of learning. These situational changes rationalize the adoption of situated learning theory. To accomplish this objective, we trained end-users to use the Microsoft SQL Server 2000 software. We therefore adopted the near-transfer and far-transfer measures of learning outcomes for our information technology related experiment.
Hypotheses
Hypotheses were formulated to investigate whether the BM approach is as effective in online synchronous and asynchronous environments as in the traditional face-to-face environment. We measured learning outcomes by trainees' performance on near-transfer and far-transfer tasks, as well as by overall satisfaction levels. The study also considered the time factor; hence, training and performance measurement were conducted over five weeks.
Knowledge Near-Transfer (KNT) Tasks
H1: End-users trained using F2F behavior modeling perform near-transfer information system tasks better than those trained in asynchronous behavior modeling.
H2: End-users trained in F2F behavior modeling perform near-transfer information system tasks better than those trained in synchronous behavior modeling.
H3: End-users trained in synchronous behavior modeling perform near-transfer information system tasks better than those trained in asynchronous behavior modeling.
Knowledge Far-Transfer (KFT) Tasks
H4: End-users trained in F2F behavior modeling perform far-transfer information system tasks better than those trained in asynchronous behavior modeling.
H5: End-users trained in F2F behavior modeling perform far-transfer information system tasks better than those trained in synchronous behavior modeling.
H6: End-users trained in synchronous behavior modeling perform far-transfer information system tasks better than those trained in asynchronous behavior modeling.
Overall Satisfaction
H7: End-users trained in synchronous behavior modeling have a higher overall satisfaction level than those trained in asynchronous behavior modeling.
Research Design
This study applied Simon, Grover, Teng and Whitcomb's (1996) well-constructed software training theory to experimentally test behavior modeling training in three learning environments: F2F, online asynchronous and online synchronous. In doing so, it should be possible to detect the effects of the single independent variable (training environment) on training outcomes. The experiment was conducted in a field setting, which enabled the study to garner greater external validity than a laboratory experiment would. A field experiment methodology has the merits of “testing theory” and “obtaining answers to practical questions” (Kerlinger & Lee, 2000). The exploratory nature of the study requires that the variables under investigation (e.g., training environments and subject areas of study) be manipulated.
Subjects Control
The setting for the field experiment was Tamkang University in Taiwan. The experiment was prompted by the need of 96 college sophomores, all Management Information Systems (MIS) majors, to learn the Microsoft SQL Server 2000 software in a database processing course. The schedule agreed on with the faculty at Tamkang University was to run the experiment as one hour of training each week for four weeks. The author's graduate assistant, Ms. Lin, helped administer the experiment and collect the data. The subject pool had a mean age of 22 years. Subjects who participated in the structured experiment had little database-related experience. Their intellectual levels were relatively similar because all subjects scored within the same range on a national entrance exam. The national entrance exam system has been used in Taiwan for more than 40 years and is considered a relatively reliable test. Subjects' individual backgrounds should therefore not have influenced learning outcomes.
For the purposes of this study, subjects were chosen if they lacked a theoretical and procedural understanding of the particular subject area being tested. Participants were given a pre-training questionnaire covering important study units of Microsoft SQL Server 2000. Two domain experts conducted a Delphi study to finalize the study units and questionnaires, improving content validity. The subjects voluntarily reported whether they knew those study units and described their database-related experience. Based on their answers, a correlation test of database and target-system usage experience showed no significant differences among the three experimental groups. Subjects of the study may be considered representative of novice end-users, and many studies (Ahrens & Sankar, 1993; Santhanam & Sein, 1994) support using students as experimental subjects to represent the general population. Hence, all subjects' questionnaires were used for further data analysis. This segmentation was used to mitigate the effects of computer literacy and experience on the findings, thereby improving the internal validity of the study.
Training Treatments
Face-to-face BM (FBM) is instructor-centered training, while online asynchronous BM (ABM) and synchronous BM (SBM) are learner-centered training. Course materials used in the online learning environments were created to properly reflect the key elements of the behavior modeling approach. AniCam simulation software was used to record the demonstration of instruction. A hyperlink structure was used to help users assimilate conceptual and procedural knowledge nonlinearly.
Feedback activities of the behavior modeling approach in the online asynchronous environment are supported with e-mail and hyperlinks. SBM differs from ABM in providing feedback via real-time discussion forums. The training materials integrate the key elements of the behavior modeling approach: (1) control of the three different learning environments, (2) demonstration by the instructor, and (3) continuous feedback (verbal feedback in the F2F and online synchronous environments; e-mail feedback in the online asynchronous environment). The three training environments were designed to maximize the effect size of their differences (Figure 1).
Training Procedures
The experimental study lasted four weeks, with a 50-minute training session each week for each class. Figure 2 shows the experimental procedures used at each time period. The X's, Y's and Z's represent online asynchronous BM training, online synchronous BM training and F2F BM training, respectively, and the subscript next to each letter indicates the ith observation or training session. Before the experimental treatments were executed (the pretest period O1), the instructor asked the subjects to complete a short questionnaire soliciting demographic information, database software-related experience and attitudes towards learning in the subject's assigned online learning environment (pretest). Approximately one-third of the subject pool received the same experimental treatment for four straight weeks (Week1 to Week4); assignment to treatment was randomized at the class level. Randomizing the execution of O4 and O5 between Week2 and Week3 for Groups A and B helps avoid possible confounding from the interactive effects of the pretests O1 and O3. This randomization further ensures that differences in the learning outcomes of O6 are not due to the sensitization of participants after the pretest, or to the interaction of that sensitization with O4 and O5 (Kerlinger & Lee, 2000).
Before or after each training session, subjects were asked to complete database design tasks using MS SQL commands to assess their prior knowledge of the trained subjects and their immediate learning outcomes, involving both near-transfer and far-transfer knowledge. In week five, students were evaluated again for attitude changes towards the e-learning sessions and for performance on near- and far-transfer tasks (post-test). The final exam concluded the five-week training sessions.
Training materials were designed to integrate the key elements of the three training environments, as illustrated in Figure 3. Course materials used in the online asynchronous training sessions were stored on the school's server so that students could learn at their own pace after each training session was completed. At the end of the experiment, students were asked about their affect toward their learning environments.
Outcomes Measurement
Regardless of the teaching environment, computer training is intended to instill in users a level of competency in using the system and to improve their satisfaction with the system. A user's competency in using a system is contingent upon the
Figure 1. Differences of behavior modeling approach in three learning modes

Online learning environments:

Asynchronous BM (ABM)
• Scripted demonstration of step-by-step instructions
• Deductive/inductive complementary learning
• Trainees choose one of two relevant examples to practice
• Without online reference sources
• Trainee control

Synchronous BM (SBM)
• Webcam-delivered demonstration of step-by-step instructions
• Deductive/inductive complementary learning
• Instructor chooses examples that are relevant to trainees' majors
• Without online reference sources
• Trainer/trainee partially control

Off-line learning environment:

Face-to-Face BM (FBM)
• Demonstration by a live instructor to learn step-by-step
• Deductive/inductive complementary learning
• Live instructor chooses examples that are relevant to trainees' majors
• Without online reference sources
• Trainer control
Figure 2. Experimental procedures
GROUP |
Pretest |
Week1 |
Week2 |
Week3 |
Week4 |
Post-test |
|
|
|
|
|
|
|
Group A |
O1 |
O2 X1 O3 |
X2 |
O4 X3 O5 |
X4 |
O6 |
|
|
|
|
|
|
|
Group B |
O1 |
O2 Y1 O3 |
O4 Y2 O5 |
Y3 |
Y4 |
O6 |
|
|
|
|
|
|
|
Group C |
O1 |
O2 Z1 O3 |
O4 Z2 O5 |
Z3 |
Z4 |
O6 |
|
|
|
|
|
|
|
Oi = Questionnaire and Tests
Xi = ABM (Online Asynchronous BM Training)
Yi = SBM (Online Synchronous BM Training)
Zi = FBM (F2F BM Training)
user's knowledge absorption capacity. Ramsden (1988) finds that effective teaching needs to place students in situations where they are encouraged to think more deeply and holistically. Kirkpatrick (1967) also suggests that learning effectiveness needs to be evaluated through students' reactions, learning and knowledge transfer. The levels of knowledge absorbed by students, Bayman and Mayer (1988) suggest, may include syntactic, semantic, schematic and strategic knowledge. Mennecke, Crossland and Killingsworth (2000) believe that experts in a particular knowledge domain possess more strategic and semantic knowledge than novices. Knowledge levels, as Simon, Grover, Teng and Whitcomb (1996) suggest, can be categorized as near-transfer, far-transfer or problem solving. Near-transfer knowledge is necessary for understanding software commands and procedures; this type of knowledge is important for a trainee to be able to use software in a step-by-step fashion. Far-transfer knowledge ensures that a trainee can combine two or more near-transfer tasks to solve more complicated problems.
Both the use of software and information systems and the satisfaction levels of using them are useful surrogates for measuring the effectiveness of an information system (Ives, Olson & Baroudi, 1983). The end-user satisfaction level has been widely adopted as an important factor contributing to the success of end-user software training. Since the study was designed to replicate Simon, Grover, Teng and Whitcomb's (1996) research in a dissimilar environment, near-transfer knowledge, far-transfer knowledge and end-user overall satisfaction levels were adopted in this study to
Figure 3. Delivery mechanisms of behavior modeling approaches

Course materials:
• FBM (F2F behavior modeling): Instructor demonstrates the use of the software along with PowerPoint slides; covered three study subjects within forty-five minutes each week.
• ABM (asynchronous behavior modeling): Course materials covered by FBM were pre-recorded and stored on a server; no instructor was present to assist the learning process; students learned at their own pace and completed their study within forty-five minutes.
• SBM (synchronous behavior modeling): Instructor was present but broadcast streaming video from a broadcast room; instructor conducted real-time discussion with students on a BBS station.

Information systems tools:
• FBM: Instructor, PowerPoint, and PC
• ABM: AniCam, PowerPoint, and Acrobat Reader
• SBM: AniCam, PowerPoint, Stream Author v.2.5 (authoring tool), and Acrobat Reader

Target system (all three): SQL Server 2000 Personal Edition

Pretest questionnaire (all three): Learning experience and style questionnaires

The first week (all three): First training session; first learning outcomes test

The second week: Second training session (all three); second learning outcomes test (FBM and SBM groups)

The third week: Third training session (all three); second learning outcomes test (ABM group)

The fourth week (all three): Comprehensive test (third learning outcomes test)

Post-test questionnaire (all three): Measure end-user satisfaction
measure training outcomes. The Cronbach's alpha reliability of Simon et al.'s (1996) satisfaction instrument is 0.98. Users answered 12 test items related to their satisfaction with the online system on a five-point Likert scale.
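For readers who want to check reliability on their own data, Cronbach's alpha for a multi-item Likert instrument can be computed directly from the item-score matrix. The sketch below uses synthetic data, not the instrument's actual responses, and `cronbach_alpha` is an illustrative helper rather than part of the study:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items (12 in the study)
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic example: 96 respondents, 12 five-point Likert items that share
# a common "satisfaction" component (hypothetical data, not the study's)
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(96, 1))        # latent satisfaction level
scores = np.clip(base + rng.integers(-1, 2, size=(96, 12)), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Because the 12 synthetic items share a common component, the resulting alpha is high, mirroring the 0.98 reported for the real instrument.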
Data Analysis
Table 1 shows the means and standard deviations for the scores at each treatment period. Table 2 shows the F and p values of the dependent variables (near-transfer and far-transfer task performances, and overall satisfaction) across treatment groups and at different times. Pretest scores (O1, O2 and O4) in the respective weeks were used to distinguish students with prior experience and knowledge of the studied topics. After learning in a weekly session, students took a post-test; their scores (O3 and O5) were used to compare KNT effectiveness across training sessions. Scores of O6 measure KFT effectiveness and end-user satisfaction levels. A cursory examination of the means (Table 1) indicates that no pattern can be identified for near-transfer performance from Week1 to Week5. Subjects in ABM performed better than those in FBM, followed by SBM, at Week1, while at Week2 and Week3 the order changed to FBM > ABM > SBM and SBM > FBM > ABM, re-
Table 1. Descriptive statistics: means (standard deviations)

|                 | ABM (N=40)    | SBM (N=26)    | FBM (N=30)    | Overall (N=96) |
| KNT (Week 1)    | 27.63 (5.77)  | 25.38 (7.06)  | 26.00 (7.24)  | 26.51 (6.61)   |
| KNT (Week 2)    | 28.50 (3.43)  | 28.46 (3.09)  | 29.83 (0.91)  | 28.91 (2.83)   |
| KNT (Week 3)    | 56.05 (14.06) | 64.27 (17.52) | 62.27 (12.71) | 60.22 (14.98)  |
| KNT (Week 5)    | 71.70 (16.21) | 70.50 (19.78) | 67.00 (17.27) | 69.91 (17.49)  |
| KFT (Post-test) | 9.63 (1.33)   | 9.42 (1.63)   | 8.83 (2.15)   | 9.32 (1.72)    |
| OS (Post-test)  | 38.10 (6.87)  | 39.38 (9.21)  |               | 38.61 (7.83)   |
Table 2. Performance on different learning outcomes over five weeks

|                 | F     | p-value | Power |
| KNT (Week 1)    | 1.035 | 0.359   | 0.337 |
| KNT (Week 2)    | 2.415 | 0.095*  | 0.605 |
| KNT (Week 3)    | 2.891 | 0.061*  | 0.677 |
| KNT (Post-test) | 0.634 | 0.532   | 0.246 |
| KFT (Post-test) | 1.913 | 0.153   | 0.517 |
| OS (Post-test)  | 0.420 | 0.519   | 0.169 |
Table 3. Results for training methods

| Variable        | Hypothesis    | Result in correct direction? | Significant p-value? (n.s. = not significant) |
| Week1           | H1: FBM > ABM | F | n.s. (p=0.312) |
|                 | H2: FBM > SBM | T | n.s. (p=0.729) |
|                 | H3: SBM > ABM | F | n.s. (p=0.182) |
| Week2           | H1: FBM > ABM | T | p=0.051        |
|                 | H2: FBM > SBM | T | p=0.069        |
|                 | H3: SBM > ABM | T | n.s. (p=0.956) |
| Week3           | H1: FBM > ABM | T | p=0.083        |
|                 | H2: FBM > SBM | F | n.s. (p=0.612) |
|                 | H3: SBM > ABM | T | p=0.029        |
| Week4           | H1: FBM > ABM | F | n.s. (p=0.271) |
|                 | H2: FBM > SBM | F | n.s. (p=0.459) |
|                 | H3: SBM > ABM | F | n.s. (p=0.787) |
| Post Test (KFT) | H4: FBM > ABM | F | p=0.057        |
|                 | H5: FBM > SBM | F | n.s. (p=0.2)   |
|                 | H6: SBM > ABM | F | n.s. (p=0.639) |
| Post Test (OS)  | H7: SBM > ABM | F | n.s. (p=0.579) |
spectively. The findings do not follow the consistent pattern predicted by Hypotheses H1 and H2. For KFT tasks, subjects in ABM performed better than those in SBM, followed by FBM; this reverses the pattern predicted by Hypotheses H4 through H6. The measured overall satisfaction levels somewhat follow the pattern predicted by Hypothesis H7.
We took a closer look at the mean differences at the 0.05 significance level. The study used one-way ANCOVA to analyze the effects of the behavior modeling approach on learning outcomes over time. Levene's test (Levene, 1960) was used to examine the homogeneity of variance across the three groups. Its F-statistic for KNT was 3.04 (p=0.053) at Week1, 13.01 (p=0.000) at Week2, 1.71 (p=0.187) at Week3, and 0.47 (p=0.627) at the post-test. The F-statistics of Levene's test for KFT and OS at the post-test were 7.64 (p=0.001) and 1.75 (p=0.191). With the exception of KNT at Week2 and KFT at the post-test, all dependent variables met the p > 0.05 criterion for homogeneity of variances. The heteroscedasticity of these two exceptions suggests that their statistical test results may not be valid; the following discussion therefore sets them aside and focuses on KNT and OS. For effects that show significance, the study adopts the Scheffé post-hoc test to analyze the data. In addition, Pearson correlation analysis was used to assess carry-over effects across training sessions.
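The homogeneity-of-variance screening described above can be illustrated with SciPy. The group parameters below are synthetic stand-ins loosely echoing Table 1's Week-3 means and standard deviations; the study's raw data are not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical KNT scores for the three groups (synthetic stand-ins)
abm = rng.normal(56, 14, 40)
sbm = rng.normal(64, 18, 26)
fbm = rng.normal(62, 13, 30)

# Levene's test: H0 says the three groups have equal variances; a small
# p-value would cast doubt on the ANOVA/ANCOVA homogeneity assumption
stat, p = stats.levene(abm, sbm, fbm, center="median")
print(f"Levene W = {stat:.2f}, p = {p:.3f}")
```

The `center="median"` variant (Brown-Forsythe) is the more robust default choice; `center="mean"` matches Levene's original formulation.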
ANCOVA was performed using the general linear model approach; the results are presented in Table 3. The treatment effects are significant for KNT (Week2) and KNT (Week3), with F-statistics of 2.415 (p=0.095) and 2.891 (p=0.061), confirming a univariate treatment effect of learning environment on the dependent variable KNT. However, the treatment effects are not salient for the other dependent variables, KFT and OS. This lack of effect may be due to small effect sizes.
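The omnibus F-tests reported in Table 2 have the form of a one-way ANOVA F-statistic. A minimal sketch with synthetic group scores follows; note that the study's ANCOVA additionally adjusts for pretest covariates, which plain `f_oneway` does not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical scores for the three training environments (synthetic
# stand-ins, not the study's raw data)
groups = {
    "ABM": rng.normal(28.5, 3.4, 40),
    "SBM": rng.normal(28.5, 3.1, 26),
    "FBM": rng.normal(29.8, 0.9, 30),
}

# Omnibus one-way ANOVA: do mean scores differ across environments?
f_stat, p_value = stats.f_oneway(*groups.values())
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups.values()) - len(groups)
print(f"F({df_between}, {df_within}) = {f_stat:.3f}, p = {p_value:.3f}")
```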
Fisher's least significant difference (LSD) test was used for pairwise comparisons on KNT (Week2) and KNT (Week3). For KNT (Week2), the LSD results indicate that subjects in FBM performed better than those in ABM (p=0.051) and SBM (p=0.069), supporting Hypotheses 1 and 2. Hypothesis 3 is not supported because the mean difference between ABM and SBM is not significant. For KNT (Week3), the LSD results indicate that (1) subjects in FBM performed better than those in ABM (p=0.083), and (2) subjects in SBM performed better than those in ABM (p=0.029), supporting Hypotheses 1 and 3. It is worth noting that for the post-test KFT tasks, the H4 comparison approached significance (p=0.057) but in the reversed direction, indicating that ABM was a more effective method than FBM at improving knowledge far transfer.
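Assuming the post-hoc comparisons above are Fisher's LSD tests, they amount to pairwise t-tests that share the ANOVA's pooled error variance. A sketch with synthetic scores (not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical Week-3 KNT scores (synthetic stand-ins, not the study's data)
groups = {
    "ABM": rng.normal(56, 14, 40),
    "SBM": rng.normal(64, 18, 26),
    "FBM": rng.normal(62, 13, 30),
}

# Fisher's LSD: pairwise t-tests that reuse the pooled within-group
# variance (MSE) and error degrees of freedom from the one-way ANOVA
n_total = sum(len(g) for g in groups.values())
df_error = n_total - len(groups)
mse = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / df_error

names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = groups[names[i]], groups[names[j]]
        se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
        t = (a.mean() - b.mean()) / se
        p = 2 * stats.t.sf(abs(t), df_error)  # two-sided p-value
        print(f"{names[i]} vs {names[j]}: t = {t:+.2f}, p = {p:.3f}")
```

Because LSD does not correct for multiple comparisons, it is conventionally applied only after a significant omnibus F-test, as the study does.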
Four out of nine hypothesis tests in total are supported. Although not all hypothesized relationships are fully supported, the results obtained are interesting. The most intriguing result is that, although there is a statistically justified reason for preferring FBM to ABM or SBM for software training, the pattern of results does not persist in the long run. FBM produced better outcomes than ABM and SBM at Week2, and better outcomes than ABM at Week3, for KNT. Although never at a statistically significant level, subjects in ABM performed better than those in SBM, followed by those in FBM, for KNT (Post-test) and KFT (Post-test). One interpretation is that either ABM or SBM training is no worse than FBM training across all dependent variables. The pattern of results for FBM suggests that trainers might choose ABM or SBM, which should be less costly alternatives to FBM, without making any significant sacrifices in either learning or trainee reaction outcomes.
Another result of interest is that, with respect to the three training methods, the pattern of results suggests that FBM might be best for KNT in the short term. Of the nine hypothesis tests concerning relationships among these methods, four are in the expected direction, and significantly so. This indicates that using ABM or SBM may be a better, and certainly no worse, software training strategy in the long term.
Implications for Research
This article studied the impact of training duration on performance and trainee reactions. Trainees were exposed to the same training method with different degrees of social presence for different durations. The findings indicate that training duration and social presence have little impact on learning outcomes. Nonetheless, the findings raise additional questions for research.
It may be more important to investigate the impact of the information richness (Fulk, 1993) features of online training media on training outcomes. Future studies might vary the information richness features of training media or their combination with social presence features (e.g., instructor feedback versus discussion boards, e-mail responses or playback features). Information richness may be a more influential factor affecting the performance of training approaches.
It may also be useful to replicate the experimental comparison of the FBM, ABM and SBM methods of software training with different software and subjects. Since in the long term the different treatments have similar impacts on learning outcomes, it may be practical to demonstrate the cost-based advantage of ABM over SBM, and of SBM over FBM, for software training in practical settings.
Another way to improve the reliability of the study is to manipulate some useful blocking variables. A series of comparative studies could be conducted to assess the impact of individualism as a cultural characteristic, computer self-efficacy, task complexity (simple tasks vs. fuzzy tasks), professional backgrounds and the ratio of training duration to the quantity of information to be processed, among others.
Learning style may be an important factor to consider in the online learning environment. According to social learning theory, learners interact with the learning environment to change their behavior. Learning style is situational and can vary across learning environments. Therefore, it is possible that the combination of training methods, learning styles and social presence information richness (SPIR) attributes jointly determines learning outcomes. This is not the case for the BM approach in the F2F environment, but the self-paced online learning environment may alter that assertion. Hence, it may be necessary to conduct longitudinal studies of the influence of learning style on learning performance and trainee reactions.
Implications for Practice
The largest implication for practice is that ABM and SBM may provide cost-effective substitutes for FBM without significant reductions in training outcomes in the long term. While FBM may still be the most effective approach for improving KNT in the short term, ABM and SBM have similar leverage for KFT in the short term and for KNT in the long term. Regardless of training environment, trainees reported the same satisfaction levels in the near and long term. These findings strongly indicate that cost can matter more than marginal differences in learning effectiveness. When deciding which BM approach to adopt in the long term, nonperformance issues (teacher and facility availability, trainees' preferences, location and convenience) should therefore be taken into account first.
Conclusion
The success of an online training strategy depends on its effectiveness in improving learning outcomes. This study, built on well-accepted frameworks for training research (Bostrom, Olfman & Sein, 1990; Simon & Werner, 1996), examines the relative effectiveness of the behavior modeling approach in online synchronous, online asynchronous and face-to-face environments. The results from this experiment provide an empirical basis for the development of an online behavior modeling strategy: (1) FBM is more effective than ABM and SBM for knowledge near transfer (KNT) in the short term, and (2) ABM and SBM are as effective as FBM for knowledge far transfer (KFT) and overall satisfaction in the long term.
What was learned from this study can be summarized as follows: when conducting software training, online training (synchronous or asynchronous) may be almost as effective in the long term as the more costly face-to-face training. In the short term, face-to-face training still seems to be the most effective approach to improving knowledge transfer.
A limitation of this experimental study is that it was conducted with a homogeneous group sharing Taiwanese cultural and educational backgrounds. The generalizability of its findings to different cultural contexts may therefore be constrained.
Hofstede (1997) stated that the domains of education, management and organization nurture value contexts that differ from one country to another. Cultural influences have been discerned in studies of Internet usage (Lederer, Maupin, Sena & Zhuang, 2000; Moon & Kim, 2001; Straub, 1997) and Web site design (Chu, 1999; Svastisinha, 1999). Users from different cultures have different perceptions of the usefulness and ease of use of different information systems (Straub, 1994). E-learning systems may need to be adapted to the cultural backgrounds of learners to improve their satisfaction levels and cognitive gains. Benefits of such congruence may include improvements in (1) global e-learning adoption rates and (2) learning outcomes (attitude and cognitive gains). From the perspective of research design (Kerlinger & Lee, 2000), a cross-cultural study replicating this experiment with American or European subjects could further validate and extend the generalizability of the findings.
The study has accomplished its major goal; it provides evidence as to the relative effectiveness of the behavior modeling approach in different learning environments for software training. This research somewhat improves the generalizability of theories on the behavior modeling approach in different learning environments.
References
Ahrens, J. D., & Sankar, C. S. (1993). Tailoring database training for end users. MIS Quarterly, 17(4), 419-439.
Aniebonam, M. C. (2000, October). Effective distance learning methods as a curriculum delivery tool in diverse university environments: The case of traditional vs. historically black colleges and universities. Communications of the Association for Information Systems, 4(8), 1-35.
Baldwin, T.T., & Ford, J.K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41, 63-105.
Bandura, A. (1977). Social learning theory. Morristown, NJ: General Learning Press.
Bayman, P., & Mayer, R. E. (1988). Using conceptual models to teach BASIC computer programming. Journal of Educational Psychology, 80(3), 291-298.
Bielefield, A., & Cheeseman, L. (1997). Technology and copyright law. New York: Neal-Schuman Publishers, Inc.