Sunday, October 16, 2011

How much Gestalt does a flower have?

While out with my camera, I came across this happy little grouping:

How many Gestalt principles can you see? I can see:
  • Similarity
  • Grouping
  • Figure-ground
  • Continuity

I'm sure there are more. What do you see? Post a comment to let me know...

Sunday, October 9, 2011

Gestalt Theory in Visual Screen Design

Principle 1:
Using Gestalt Theory in the visual design will help your interface users understand and explore your online learning resource.
After reading the literature on Gestalt Theory, the researchers distilled its various elements into eleven main principles. These Gestalt principles reflect how the mind organises images and finds patterns among or within them in order to derive meaning, understanding, or balance. Examples of these principles are 'Grouping', 'Similarity' and 'Flow'. When the researchers used all eleven identified principles in the re-design of an educational website, the feedback in terms of design satisfaction was positive. An interface that is pleasing to the mind's perception increases motivation to engage with the site. When combined with sound pedagogical principles, the 'Gestalt Approach' is likely to lead to better learning outcomes.

Principle 2:
Not all Gestalt principles are as beneficial as others.
While the researchers' results indicated that respondents found all eleven Gestalt principles to provide learning benefit, there were differences in the strength of that endorsement. In particular, the principle of 'figure-ground' was rated less effective than others, such as 'continuity' or 'symmetry'. The research does not make clear whether this reflects the effectiveness of the 'figure-ground' principle itself (in improving learning outcomes); it is possible that it could not be incorporated as strongly as other principles, and so its effect was weaker. Further research needs to be done on Gestalt principles in relation to the visual design of web-based learning systems, both with regard to the principles themselves and to how they can be applied to different fields and types of instruction.

Reference:
Chang, D., Dooley, L. & Tuovinen, J. E. (2002). Gestalt theory in visual screen design: a new look at an old subject. Paper presented at the 7th World Conference on Computers in Education.

The Effectiveness of Nonverbal Symbolic Signs and Metaphors in Advertisements

Principle 1:
Symbolic signs require motivation and a non-trivial amount of cognitive load to process.
Symbolic signs are signs that bear little or no resemblance to their physical meaning; they are metaphors, and their meaning is highly dependent on their context. An example would be an image of a few flowers: depending on the context, it could be interpreted as meaning 'spring', 'feminine', or 'a field'. Unlike iconic signs, which physically resemble their meaning (and which are recognised automatically), symbolic signs require an amount of cognitive processing to decipher. The amount of cognitive processing performed depends on the motivation to decipher the sign. Therefore, unlike iconic signs, symbolic signs are unlikely to be deciphered by all who encounter them.

Principle 2:
Only consumers with 'moderate' motivation to decipher a sign will be persuaded by it.
Advertisers use symbolic signs to attribute non-utilitarian meaning to a product in order to change the perception of that product in consumers' minds. Consumers with low motivation to interpret the sign will not engage in the 'process of abduction', whereby an inference is made as a result of seeing the sign. On the other hand, those with high motivation will also engage in a 'counterargument' after first deciphering the sign. An example of a counterargument is 'I know that the flower on the plane means that it will be sunny at my destination, but I also know it is a ruse to get me to use that airline.' Because of this tendency to counterargue, it is difficult to persuade someone with high motivation using a symbolic sign. As deciphering a symbolic sign takes less germane load than counterarguing, only those with a moderate level of interest are likely to be persuaded by a symbolic sign's inference, as they will not make the extra effort required to counterargue.

Reference:
DeRosia, E. D. (2008). The effectiveness of nonverbal symbolic signs and metaphors in advertisements: An experimental enquiry. Psychology & Marketing, 25(3), 298-316.

Nine Ways to Reduce Cognitive Load in Multimedia

Principle 1:
The 'Dual-Channel Assumption' can be successfully used to effectively manage cognitive load and improve learning outcomes in multimedia learning systems.
The 'dual-channel assumption' (Paivio, 1986; Baddeley, 1998) posits that there are two cognitive channels used when processing multimedia information: visual and verbal. Each channel extends from sensory memory through into working memory, and each has limited capacity (Chandler & Sweller, 1991; Sweller, 1999; Baddeley, 1998). Designing a multimedia system to share the load across these two channels (instead of passing all information through one channel) leads to an increase in germane cognitive load (Wittrock, 1989; Mayer, 1999, 2002) and a decrease in intrinsic and extraneous cognitive load, making better use of working memory capacity.

Principle 2:
Narration is superior to written text when language is presented as part of the animation.
All animation is necessarily received and processed in working memory via the visual channel. While text is also received via the visual channel, narration is received via the verbal channel. When an animation is accompanied by text as an adjunct to the instruction, the visual channel can easily be overloaded. By using narration instead of text, the verbal channel is used instead, leaving the visual channel devoted entirely to the animation and reducing the potential for intrinsic cognitive overload (Mayer & Moreno, 2003).

References:
Baddeley, A. (1998). Human memory. Boston: Allyn & Bacon.
Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8, 293-332.
Mayer, R. E. (1999). The promise of educational psychology: Vol. 1, Learning in the content areas. Upper Saddle River, NJ: Prentice Hall.
Mayer, R. E. (2002). The promise of educational psychology: Vol. 2, Teaching for meaningful learning. Upper Saddle River, NJ: Prentice Hall.
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52.
Paivio, A. (1986). Mental representations: A dual coding approach. Oxford, England: Oxford University Press.
Sweller, J. (1999). Instructional design in the technical areas. Camberwell, Australia: ACER Press.
Wittrock, M. C. (1989). Generative processes of comprehension. Educational Psychologist, 24, 345-376.

Learner Control, Cognitive Load & Instructional Animation

Principle 1:
Total cognitive load comprises three elements: germane cognitive load, intrinsic cognitive load, and extraneous cognitive load. The goal of design is to reduce intrinsic and extraneous cognitive load, thereby freeing capacity for germane cognitive load.
Germane cognitive load is the processing capacity working memory devotes to cognitive tasks such as sorting or reorganising information - the more capacity engaged in this type of load the better. Intrinsic cognitive load is the cognitive effort required to bring all parts of a problem together, and increases as new material becomes more complex or a person has to consider several parts of a problem at once. Extraneous cognitive load is the processing a person has to devote to elements outside of the problem itself, such as background music.

Principle 2:
When using animation as a teaching tool, learners should be given control over pacing to effect better learning outcomes.
Learner control of pacing can take the form of pause/play buttons, or of breaking the animation into discrete units. When using pause/play buttons, learners should be primed by directing them to pause the animation when they need to think about what has just been presented. Whether or not they actually pause seems to be irrelevant; this kind of priming makes the learner watch the material actively, as opposed to just 'watching a movie'. Similarly, breaking the animation into segments implies that each segment has a point that should be gleaned.

Reference:
Hasler, B. S., Kersten, B., & Sweller, J. (2007). Learner control, cognitive load and instructional animation. Applied Cognitive Psychology, 21, 713-729.

Sunday, September 18, 2011

Addendum to previous post

In the previous article, the researchers attempted to objectively measure the cognitive structuring of new knowledge using a questionnaire, despite having admitted that this is best done by getting students to actively perform a task that demonstrates their learning.

This was an example of using a tool in place of the real thing. At the time of writing, I knew of an example from the design of airline cockpits, but I couldn't find the reference, so I didn't include it - how annoying! Since submitting that assessment, I have found the reference.

In the paper 'Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research' (Hollan, Hutchins, & Kirsh, 2000), the authors describe a study performed by Hutchins and Palen (1997). This study looked at how the flight engineer interacts with the fuel gauge on his instrument panel. They write, "He interacts with the panel both as if it is the fuel system it depicts, and, at other times, as if it is just a representation of the fuel system..." In this case, the tool is designed well enough that it is as good as what it depicts: looking at the fuel gauge is as accurate as physically going into the fuel tanks and measuring the amount of fuel.

In the case of the researchers mentioned in my previous post, the tool they designed was likely not as good as what it depicted: the questionnaire they used to objectively measure structuring allowed for subjectivity by the respondents. Consequently, the results gained from this questionnaire were not as strong as they could have been.

References:
Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174-196.

Hutchins, E., & Palen, L. (1997). Constructing meaning from space, gesture and speech. In Tools and reasoning: Essays in situated cognition, L. B. Resnick, R. Saljo, C. Pontecorvo, & B. Burge, Eds. Springer-Verlag, Vienna, Austria.

Friday, September 16, 2011

Assessment #3


EDGE903 – Assessment 3
Andrew Kemp

Introduction
This essay is an analysis of a peer-reviewed article entitled “Evaluation of Learning Performance of E-Learning in China: A Methodology Based on Change of Internal Mental Model of Learners”, which appeared in The Turkish Online Journal of Educational Technology in 2010 (Zhang et al., 2010). The article primarily sought to demonstrate that two aspects of technology used in E-Learning platforms, namely the Human Computer Interface (HCI) and animations, served to improve learning outcomes compared to the case when these two elements were not used. This analysis will address (a) the methodologies used, (b) the main points raised, including the pedagogical basis of the research, (c) how well the research addressed the issue, and (d) implications for the design of interactive learning environments (ILEs).


Methodology and Construct of the Research
The basic setting of this research was a web design course for students who had no prior experience in web design. In seeking to demonstrate the benefits of HCI and animations on learning outcomes, the researchers conducted two parallel web design courses; one course was delivered using a printed handbook, while the other was delivered using an HCI called ‘Virtual Campus’.

The researchers used an advertisement placed on the internet to attract participants to the study. Sixty subjects were randomly selected to participate, and from this group two cohorts of 30 were randomly created. Each cohort was then assigned either the traditional course or the e-learning course. The course ran for 50 days, followed by an evaluation questionnaire to obtain the data used in the research.

The questionnaire used to provide the data for this research consisted of 17 questions. These questions were used to provide four groups of data that pertained to perception and structuring of knowledge. As will be described later in this essay, the researchers recognised that consideration of cognitive learning is paramount to designing an effective course delivered electronically, and that cognitive psychology is as important for the measurement of learning outcomes. Accordingly the questions were designed to measure attention, attitude, structuring (subjective measurement) and structuring (objective measurement).

The questions relating to perception and structuring (subjective measurement) were affirmative in nature and were rated using a Likert scale, where 1=strongly agree, 3=neutral and 5=strongly disagree.

The questions designed to objectively measure structuring allowed subjects to choose one or more answers to design problems that they thought were correct. These questions were designed to assess understanding of the process of web design.

Both the subjective and objective sets of questions were scored to measure changes to the mental model of the subjects, and these changes were compared across the two cohorts to see if the e-learning cohort demonstrated a statistically significant improvement in learning outcomes compared to the traditional cohort.
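As an illustration of the kind of between-cohort comparison described above, a difference in mean questionnaire scores can be tested with Welch's t-test. This is only a sketch: the scores below are invented (the paper does not publish its raw data), and the direction of the scale (higher = better) is an assumption made purely for the example.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    n_a, n_b = len(sample_a), len(sample_b)
    # Standard error of the difference between the two means
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Hypothetical questionnaire scores (NOT the study's data), higher = better
traditional = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]
e_learning = [3.8, 3.5, 4.0, 3.7, 3.9, 3.6]

t = welch_t(e_learning, traditional)  # a large positive t suggests a real difference
```

With real data, the t statistic would be compared against a t distribution (with Welch-corrected degrees of freedom) to obtain a p-value and so decide whether the e-learning cohort's improvement is statistically significant.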


Main Points
The researchers’ over-arching hypothesis was that “the HCI and animation features of E-Learning will positively influence the learning result”. In preparing their research, though, the researchers realised that measurement of changes in knowledge structuring was vital to demonstrate that HCI and animation features cause a change in mental model superior to that produced by traditional learning techniques. Accordingly, two further hypotheses were tested, namely that HCI and animation will have positive effects on the learner’s cognitive perception and structuring of knowledge. Specifically, there were three main points the researchers made: (a) E-Learning is only beneficial if sound instructional strategies are incorporated into its design, (b) Cognitive Learning Theory is important as a foundation of Instructional Design, and Cognitive Psychology is fundamental to the successful measurement of the effects of any ID, and (c) the inclusion of an HCI and animations improved the learning outcome of the web development course.

1. E-Learning is only beneficial if sound instructional strategies are incorporated into its design.
The researchers generalised that “the present construction of E-learning courses both in the education and business sectors put too much focus on the technological side of designing E-learning courses”, citing Attwell, Holmfield, Fabian, Karpati et al. (2003) as a supporting source. Ally makes the same point in the first chapter of the book ‘The Theory and Practice of Online Learning’ (Anderson, 2008), stating that “the reason for those benefits [of E-learning] is not the medium of instruction but the instructional strategies built into the learning materials”. Research by the United States Department of Education adds to this in its meta-analysis of the literature, reporting that learning outcomes were only moderately better for those receiving online versus face-to-face instruction, with an average effect size of +0.2 (Means et al., 2009); following Cohen (1988), the report characterised an effect size of this magnitude as “small”.
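The effect size quoted above is Cohen's d: the difference between two group means divided by their pooled standard deviation. A minimal sketch of the arithmetic, using invented scores (not data from any of the studies cited):

```python
import math
import statistics

def cohens_d(sample_a, sample_b):
    """Cohen's d: standardised difference between two group means."""
    n_a, n_b = len(sample_a), len(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / pooled_sd

# Invented outcome scores for an online and a face-to-face group
online = [72, 75, 70, 78, 74]
face_to_face = [73, 74, 69, 77, 72]

d = cohens_d(online, face_to_face)  # ~0.27 with these numbers
```

By Cohen's rough conventions, d ≈ 0.2 is 'small', 0.5 'medium' and 0.8 'large', which is why the +0.2 average reported by Means et al. counts as only a modest advantage for online instruction.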

The researchers go on to argue that designers of e-learning courses should not simply transfer content from traditional systems to online systems as this can be detrimental to the learning outcomes. This is supported by the work of Grubbs (2006), who suggested that any technological additions to an e-learning course be integrated into and supported by the instructional design, and not added randomly.

2. Cognitive Learning Theory is important as a foundation of Instructional Design and Cognitive Psychology is fundamental to the successful measure of the effects of any ID.
Leading on from the previous point, the researchers emphasised that Cognitive Learning Theory (CLT) should be used in the design of any E-learning course in order to improve its effectiveness. They based this on Gagne’s work on the cognitive basis of learning theory (Gagne, 1977; Gagne & Briggs, 1979).

Gagne connected learning type to learning outcome and classified five types (Verbal Information Learning, Intellectual Skills, Cognitive Strategies, Attitudes and Motor Skills). The researchers noted that their web design course exemplified ‘Intellectual Skills’: develop a basic concept -> develop rules -> apply rules. They also offered that Gagne’s work “is very useful when applying the new technology in the design of the E-learning course”, as his nine stages of cognitive learning are mapped to his nine stages of instructional design (Gagne & Briggs, 1979).

The basis for using cognitive considerations in both the design and measurement of an E-learning course lies in the nature of learning itself. Learning amounts to change in knowledge structures (Zhang et al., 2010), or cognitive ‘schemata’, in long-term memory. van Merrienboer and Sweller (2005) discuss this in the context of Cognitive Load Theory, where knowledge structures (‘cognitive schemata’) can store complex sets of knowledge in long-term memory that can be loaded as a set into working memory as required. Importantly, this means that while working memory is limited in the number of units it can manipulate, a cognitive schema is treated as a single unit, no matter how complex it is (ibid.). By designing an E-learning course using cognitive learning considerations, the researchers suggest that the course will be more effective in allowing the learner to construct such cognitive schemata and store them in long-term memory.

In addressing the measurement of change of mental model, the researchers recognised that measuring change in behaviour alone is insufficient to measure a cognitive change, because factors besides learning can be responsible for behaviour change; the researchers cite improvisation, flexibility and tactical astuteness, amongst others. Measurement of learning should therefore focus more on the cognitive aspects of learning, such as perception and structuring, to provide a more accurate assessment of change of mental model. As perception comprises attention and attitude in working memory, and structuring is the cognitive process of forming cognitive schemata in long-term memory, the researchers used cognitive psychology theory to measure both of these aspects of learning. In addition, both subjective and objective measurements of structuring are required (Zhang et al., 2010). Evaluation of the subjective aspect of structuring involves asking what the learner thinks; objective measurement takes the form of performance measurement (Zhang et al., 2010).


3. The inclusion of an HCI and animations improved the learning outcome of the web development course.
This was the main point of the research article. Notwithstanding the structural bias that could have been introduced into this research, as discussed below, the researchers did take time to test that there was no statistically significant relationship between the individual characteristics of the subjects and the learning result, despite others having found correlations between academic attainment, motivation and e-learning results (Hiltz, 1995). Similarly, while Webster (1997) found a correlation between comfort with onscreen images and e-learning, this study did not. To the researchers’ credit, effort was made to ensure that the two courses were as similar as possible so as to exclude extraneous effects from the results; for example, where animations were used in the E-learning course, the graphical slides used in the traditional paper-based course were also in colour.


Discussion and Implications
This research does indeed show a positive effect on change of mental model from using an HCI and animations compared to not using them. It is also noted that the researchers took considerable care in applying appropriate cognitive considerations to their research. There are, however, two main caveats that the researchers did not discuss or elaborate on, and so to this author these results, while positive, are tempered to a degree.

The first consideration has to do with structural bias. While the researchers took care to randomise the creation of the two cohorts, there remain three possibilities of structural bias in this study. The first is whether the hypotheses were divulged to the subjects of the two groups, either during the recruitment phase or prior to the courses beginning. If the groups were primed to think that the e-learning group was ‘intended’ to ‘out-perform’ the traditional group, this may have had implications for the motivation or performance of the subjects. The second is that while the two cohorts were created randomly, it appears that they were not randomly assigned to the course they would undertake. In not randomising this allocation, the researchers may have allowed subconscious expectations about the mix of subjects in each cohort to influence the allocation decision. The third is that the article does not mention whether the researchers were involved in any instruction of the course for either or both cohorts. If any instructors were involved, and they knew the objective of the research, this would reduce the independence of any results so produced.

The second consideration of the validity of these results centres on a centrepiece of the research design. The paper takes care to state that for a proper evaluation to take place, both a subjective and an objective measurement of structuring must be made. It defines an objective measurement as one that measures performance of an actual task. It then neglects to provide such a task when assessing structuring, instead resorting to a multiple-choice questionnaire. It is not clear why this was done, but using an approximation weakens the result of the objective measurement. Presumably it would have been simple for the students to perform a real design task, and indeed this would be expected in order to assess the level of attainment of the web-design course. In this study, the researchers instead designed and used a tool to act as an objective measure of a cognitive process, instead of measuring that process itself. The approximation is only as good as the tool itself – if the tool is poorly designed, it cannot act as a replacement for the real thing. As it is unknown how well the questionnaire performed in this regard, it is similarly unknown how reliable the results of the objective measurement are. If an actual performance task had been set to objectively measure structuring and improvement of mental model, there may have been even greater improvement shown by the e-learning cohort than the 0.7 point difference reported, though this is uncertain.

This research mainly addresses two areas: the fact of using technology-mediated learning (TML) itself, and its design and measurement of effect. The research does seem to add to the pool of evidence that E-learning has a positive effect on learning outcomes if the instructional design is based solidly on the principles of cognitive learning. In particular, close attention must be paid to how a learner uses working memory and the cognitive schemata of long-term memory to build and retain knowledge. One implication is that this research is a starting point: it does not explore deeply enough the relationship between the interactive elements of an HCI and the process of building cognitive schemata, so it should not be assumed that because these results show a positive outcome in this case, they will be replicable in every situation. This is especially so because of the limited setting of this research and the lack of information provided about the actual HCI elements used.

An interesting feature of this research was that it showed that the E-learning course had a greater positive effect on the perception aspect of learning than the structuring aspect. Whether or not this is due to the objective measurement of structuring being poorly designed is not known, but it does have implications for the use of TML in a curriculum: does TML augment perception more than structuring? If so, can anything about the design of E-learning elements in a curriculum be changed to augment the structuring process?

The strongest implication to come from this would be that if E-learning is going to be successful at positively affecting the mental model compared to traditional learning, a much closer link between the cognitive processes of structuring and the instructional design of the electronic course elements is required than was displayed in this research.


References

Anderson, T. (Ed) (2008). The Theory and Practice of Online Learning. AU Press, Athabasca University. Chapter 1: Foundations of Educational Theory for Online Learning.

Attwell, G., Holmfield, L. D., Fabian, P., Karpati, A., & Littig, P. (2003). E-learning in Europe – Result and Recommendations. Thematic Monitoring Under the LEONARDO DA VINCI Programme. ISSN 1618-9477.

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Gagne, R. M. (1977). The Conditions of Learning. New York: Holt, Rinehart and Winston.

Gagne, R. M., & Briggs, L. J. (1979). Principles of Instructional Design. New York: Holt, Rinehart and Winston.

Grubbs, J. (2006). Integrating Methods to Achieve an Effective Online Learning Environment. Illinois Online Network: Case Studies. 2(1). Retrieved 10 August 2011 from http://www.ion.uillinois.edu/resources/casestudies/vol2num1/grubbs/index.asp

Means, B. et al (2009). Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. Washington DC: United States Department of Education. Retrieved 8 August 2011 from http://www2.ed.gov/rchstat/eval/tech/evidence-based-practices/finalreport.pdf

van Merrienboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review, 17(2).

Zhang, L. et al. (2010). Evaluation of Learning Performance of E-Learning in China: A Methodology Based on Change of Internal Mental Model of Learners. TOJET: The Turkish Online Journal of Educational Technology. 9(1) January 2010.