In the previous article, the researchers attempted to measure the cognitive structuring of new knowledge objectively using a questionnaire, despite having previously acknowledged that this is best done by having students actively perform a task that demonstrates their learning.
This was an example of using a tool in place of the real thing. At the time of writing, I knew of an example from the design of airline cockpits, but I couldn't find the reference, so I didn't include it - how annoying! Since submitting that assessment, I have found the reference.
In the paper 'Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research' (Hollan, Hutchins, & Kirsh, 2000), the authors describe a study performed by Hutchins and Palen (1997). This study looked at how the flight engineer interacts with the fuel gauge on his instrument panel. They write, "He interacts with the panel both as if it is the fuel system it depicts, and, at other times, as if it is just a representation of the fuel system..." In this case, the tool is designed well enough that it is as good as the thing it depicts: looking at the fuel gauge is as accurate as physically going into the fuel tanks and measuring the amount of fuel.
In the case of the researchers mentioned in my previous post, the tool they designed was likely not as good as what it depicted: the questionnaire they used to objectively measure structuring allowed for subjectivity by the respondents. Consequently, the results gained from this questionnaire were not as strong as they could have been.
Refs:
Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research. ACM Transactions on Computer-Human Interaction, 7(2), 174-196.
Hutchins, E., & Palen, L. (1997). Constructing Meaning from Space, Gesture and Speech. In L. B. Resnick, R. Saljo, C. Pontecorvo, & B. Burge (Eds.), Tools and Reasoning: Essays in Situated Cognition. Vienna, Austria: Springer-Verlag.
Sunday, September 18, 2011
Friday, September 16, 2011
Assessment #3
EDGE903 – Assessment 3
Andrew Kemp
Introduction
This essay is an analysis of a peer-reviewed article entitled “Evaluation of Learning Performance of E-Learning in China: A Methodology Based on Change of Internal Mental Model of Learners”, which appeared in The Turkish Online Journal of Educational Technology in 2010 (Zhang et al., 2010). The article primarily sought to demonstrate that two aspects of the technology used in E-Learning platforms, namely the Human Computer Interface (HCI) and animations, improve learning outcomes compared with courses that lack them. This analysis will address (a) the methodologies used, (b) the main points raised, including the pedagogical basis of the research, (c) how well the research addressed the issue, and (d) the implications for the design of interactive learning environments (ILEs).
Methodology and Construct of the Research
The basic setting of this research was a web design course for students who had no prior experience in web design. In seeking to demonstrate the benefits of HCI and animations on learning outcomes, the researchers conducted two parallel web design courses; one course was delivered using a printed handbook, while the other was delivered using an HCI called ‘Virtual Campus’.
The researchers used an advertisement placed on the internet to attract participants to the study. Sixty subjects were randomly selected to participate, and from this group two cohorts of 30 were randomly created. Each cohort was then assigned either the traditional course or the e-learning course. The course ran for 50 days and was followed by an evaluation questionnaire, which provided the data used in the research.
The questionnaire used to provide the data for this research consisted of 17 questions, which yielded four groups of data pertaining to the perception and structuring of knowledge. As will be described later in this essay, the researchers recognised that consideration of cognitive learning is paramount to designing an effective electronically delivered course, and that cognitive psychology is equally important for measuring learning outcomes. Accordingly, the questions were designed to measure attention, attitude, structuring (subjective measurement) and structuring (objective measurement).
The questions relating to perception and structuring (subjective measurement) were affirmative statements rated on a Likert scale, where 1 = strongly agree, 3 = neutral and 5 = strongly disagree.
The questions designed to measure structuring objectively allowed subjects to choose one or more answers to design problems that they thought were correct. These questions were designed to probe the subjects' understanding of the web-design process.
Both the subjective and objective sets of questions were scored to measure changes to the mental model of the subjects, and these changes were compared across the two cohorts to see if the e-learning cohort demonstrated a statistically significant improvement in learning outcomes compared to the traditional cohort.
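The article reports only summary outcomes, but a comparison like this is typically an independent-samples t-test on the two cohorts' questionnaire scores. As a minimal sketch in Python with entirely hypothetical scores (only the cohort size of 30 is taken from the study), using Welch's t-test with a normal approximation for the p-value:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and an approximate two-sided p-value
    (normal approximation, reasonable for n >= 30 per group)."""
    va, vb = variance(a), variance(b)          # sample variances (n-1 denominator)
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the mean difference
    t = (mean(a) - mean(b)) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

# Hypothetical post-course scores for two cohorts of 30 (not the study's data)
elearning = [72, 75, 78, 70, 74, 77, 73, 76, 79, 71] * 3
traditional = [69, 71, 73, 68, 70, 72, 69, 71, 74, 70] * 3

t, p = welch_t(elearning, traditional)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A p-value below the conventional 0.05 threshold would support the claim that the difference between cohorts is statistically significant; note that the article does not state which test the researchers actually used.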
Main Points
The researchers’ over-arching hypothesis was that “the HCI and animation features of E-Learning will positively influence the learning result”. In preparing their research, though, the researchers realised that measuring changes in knowledge structuring was vital to demonstrating that HCI and animation features cause changes in mental model superior to those produced by traditional learning techniques. Accordingly, two further hypotheses were tested: that HCI and animation would have positive effects on the learner’s cognitive perception and structuring of knowledge. Specifically, the researchers made three main points: (a) E-Learning is only beneficial if sound instructional strategies are incorporated into its design; (b) Cognitive Learning Theory is important as a foundation of instructional design, and cognitive psychology is fundamental to successfully measuring the effects of any instructional design; and (c) the inclusion of an HCI and animations improved the learning outcome of the web development course.
1. E-Learning is only beneficial if sound instructional strategies are incorporated into its design.
The researchers generalised that “the present construction of E-learning courses both in the education and business sectors put too much focus on the technological side of designing E-learning courses”, citing Attwell, Holmfield, Fabian, Karpati, et al. (2003) as a supporting source. Ally makes the same point in the first chapter of the book ‘The Theory and Practice of Online Learning’ (Anderson, 2008), stating that “the reason for those benefits [of E-learning] is not the medium of instruction but the instructional strategies built into the learning materials”. The United States Department of Education’s meta-analysis of the literature adds to this, reporting that learning outcomes were only moderately better for those receiving online rather than face-to-face instruction, with an average effect size of +0.2 (Means et al., 2009); drawing on Cohen (1988), the report characterised an effect size of this magnitude as “small”.
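For context, the effect-size measure used in the Means et al. report is Cohen's d: the difference in group means divided by the pooled standard deviation. A sketch with hypothetical scores chosen to land near the reported +0.2 (none of this data comes from the actual meta-analysis):

```python
import math
from statistics import mean, variance

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the
    pooled standard deviation (Cohen, 1988)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled_var)

# Hypothetical scores: the online group scores slightly higher on average
online = [71, 74, 68, 77, 72, 70, 75, 69, 73, 76]
face_to_face = [71, 74, 67, 76, 72, 69, 75, 68, 72, 75]

d = cohens_d(online, face_to_face)
print(round(d, 2))  # a "small" effect by Cohen's benchmarks (0.2 small, 0.5 medium, 0.8 large)
```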
The researchers go on to argue that designers of e-learning courses should not simply transfer content from traditional systems to online systems as this can be detrimental to the learning outcomes. This is supported by the work of Grubbs (2006), who suggested that any technological additions to an e-learning course be integrated into and supported by the instructional design, and not added randomly.
2. Cognitive Learning Theory is important as a foundation of Instructional Design and Cognitive Psychology is fundamental to the successful measure of the effects of any ID.
Leading on from the previous point, the researchers insisted that Cognitive Learning Theory (CLT) be used in the design of any E-learning course in order to improve its effectiveness. They based their work on Gagne’s writings on the cognitive basis of learning theory (Gagne, 1977; Gagne & Briggs, 1979).
Gagne connected learning type to learning outcome and classified five types (Verbal Information Learning, Intellectual Skills, Cognitive Strategies, Attitudes and Motor Skills). The researchers noted that their web design course exemplified ‘Intellectual Skills’: develop a basic concept -> develop rules -> apply rules. They also noted that Gagne’s work “is very useful when applying the new technology in the design of the E-learning course”, as his nine stages of cognitive learning map to his nine stages of instructional design (Gagne & Briggs, 1979).
The basis for using cognitive considerations in the design and measurement of an E-learning course lies in the nature of learning itself. Learning amounts to change in knowledge structures (Zhang et al., 2010), or cognitive ‘schemata’, in long-term memory. van Merrienboer and Sweller (2005) discuss this in the context of Cognitive Load Theory, where knowledge structures (‘cognitive schemata’) can store complex sets of knowledge in long-term memory and be loaded as a set into working memory as required. Importantly, this means that while working memory is limited in the number of units it can manipulate, a cognitive schema is treated as one unit, no matter how complex it is (ibid.). By designing an E-learning course using cognitive learning considerations, the researchers suggest, the course will be more effective in allowing the learner to construct such cognitive schemata and store them in long-term memory.
In addressing the measurement of change of mental model, the researchers recognised that measuring change in behaviour alone is insufficient to measure a cognitive change, because factors besides learning can be responsible for behaviour change; the researchers cite factors such as improvisation, flexibility and tactical astuteness. Measurement of learning should therefore focus more on the cognitive aspects of learning, such as perception and structuring, to provide a more accurate assessment of change of mental model. As perception comprises attention and attitude in working memory, and structuring is the cognitive process of forming cognitive schemata in long-term memory, the researchers used cognitive psychology theory to measure both of these aspects of learning. In addition, both subjective and objective measurements of structuring are required (Zhang et al., 2010): evaluation of the subjective aspect of structuring involves asking what the learner thinks, while objective measurement takes the form of performance measurement (ibid.).
3. The inclusion of an HCI and animations improved the learning outcome of the web development course.
This was the main point of the research article. Notwithstanding the structural bias that could have been introduced into this research, as discussed below, the researchers did take time to test that there was no statistically significant relationship between the individual characteristics of the subjects and the learning result, despite others having found correlations between academic attainment, motivation and e-learning results (Hiltz, 1995). Similarly, while Webster (1997) found a correlation between comfort with onscreen images and e-learning, this study did not. To the researchers’ credit, effort was made to ensure that the two courses were as similar as possible, to exclude extraneous effects on the results; for example, where animations were used in the E-learning course, the graphical slides used in the traditional paper-based course were also in colour.
Discussion and Implications
This research does indeed show a positive effect on change of mental model from using an HCI and animations compared with not using them. It is also noted that the researchers took considerable care in applying appropriate cognitive considerations to their research. There are, however, two main caveats that the researchers did not discuss or elaborate on, and so, to this author, these results, while positive, are weakened to a degree.
The first consideration has to do with structural bias. While the researchers took care to randomise the creation of the two cohorts, there remain three possibilities of structural bias in this study. The first is whether the hypotheses were divulged to the subjects of the two groups, either during the recruitment phase or before the courses began. If the groups were primed to think that the e-learning group was ‘intended’ to ‘out-perform’ the traditional group, this may have affected the motivation or performance of the subjects. The second is that while the two cohorts were created randomly, it appears that they were not randomly assigned to the course they would undertake; in not randomising this allocation, the researchers may have allowed subconscious expectations about the mix of subjects in each cohort to influence the allocation decision. The third is that the article does not mention whether the researchers were involved in instructing either or both of the cohorts. If any instructors were involved, and they knew the objective of the research, this would reduce the independence of the results produced.
The second consideration of the validity of these results concerns a centrepiece of the research design. The paper takes care to state that, for a proper evaluation to take place, both a subjective and an objective measurement of structuring must be made, and it defines an objective measurement as one that measures performance of an actual task. It then neglects to provide such a task in assessing structuring, instead resorting to a multiple-choice questionnaire. It is not clear why this was done, but it weakens the objective measurement by substituting an approximation. Presumably it would have been simple for the students to perform a simple real-life design task, and indeed this would be expected in order to assess the level of attainment of the web-design course. Instead, the researchers designed and used a tool to act as an objective measure of a cognitive process, rather than measuring that process itself. The approximation is only as good as the tool: if the tool is poorly designed, it cannot act as a replacement for the real thing. As it is unknown how well the questionnaire performed in this regard, it is similarly unknown how reliable the results of the objective measurement are. Had an actual performance task been set to objectively measure structuring and improvement of mental model, the e-learning cohort may even have shown a greater improvement than the 0.7-point difference reported, though this is uncertain.
This research mainly addresses two areas: the use of technology-mediated learning (TML) itself, and its design and the measurement of its effect. The research does seem to add to the pool of evidence suggesting that E-learning has a positive effect on learning outcomes if the instructional design is based solidly on the principles of cognitive learning. In particular, close attention must be paid to how a learner uses working memory and the cognitive schemata of long-term memory to build and retain knowledge. One implication is that this research is a starting point: it does not explore deeply enough the relationship between the interactive elements of an HCI and the process of building cognitive schemata, so it should not be assumed that, because these results show a positive outcome in this case, they will be replicable in every situation. This is especially so given the limited setting in which this research was applied, and the lack of information provided about the actual HCI elements used.
An interesting feature of this research was that it showed that the E-learning course had a greater positive effect on the perception aspect of learning than the structuring aspect. Whether or not this is due to the objective measurement of structuring being poorly designed is not known, but it does have implications for the use of TML in a curriculum: does TML augment perception more than structuring? If so, can anything about the design of E-learning elements in a curriculum be changed to augment the structuring process?
The strongest implication to come from this is that if E-learning is to be more successful than traditional learning at positively affecting the mental model, a much closer link is required between the cognitive processes of structuring and the instructional design of the electronic course elements than was displayed in this research.
References
Anderson, T. (Ed) (2008). The Theory and Practice of Online Learning. AU Press, Athabasca University. Chapter 1: Foundations of Educational Theory for Online Learning.
Attwell, G., Holmfield, L. D., Fabian, P., Karpati, A., & Littig, P. (2003). E-learning in Europe – Result and Recommendations. Thematic Monitoring Under the LEONARDO DA VINCI Programme. ISSN 1618-9477.
Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Hillsdale, N.J.: Lawrence Erlbaum.
Gagne, R. M. (1977). The Conditions of Learning. New York: Holt, Rinehart and Winston.
Gagne, R. M., & Briggs, L. J. (1979). Principles of Instructional Design. New York: Holt, Rinehart and Winston.
Grubbs, J. (2006). Integrating Methods to Achieve an Effective Online Learning Environment. Illinois Online Network: Case Studies. 2(1). Retrieved 10 August 2011 from http://www.ion.uillinois.edu/resources/casestudies/vol2num1/grubbs/index.asp
Means, B., et al. (2009). Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. Washington DC: United States Department of Education. Retrieved 8 August 2011 from http://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf
van Merrienboer, J. J. G., & Sweller, J. (2005). Cognitive Load Theory and Complex Learning: Recent Developments and Future Directions. Educational Psychology Review, 17(2).
Zhang, L. et al. (2010). Evaluation of Learning Performance of E-Learning in China: A Methodology Based on Change of Internal Mental Model of Learners. TOJET: The Turkish Online Journal of Educational Technology. 9(1) January 2010.
Sunday, September 11, 2011
Deconstruction Activity - 'Bad' Interface Design
The most ubiquitous example of 'bad' interface design is a city's CBD. A CBD is so familiar to us, we don't realise how terrible it is!
Why is the experience of landing in a new city so exciting and disturbing at the same time? Because we don't know where anything is, what the social norms are, where the social enclaves are for each subculture: each of these has to be learned over time, through trial and error, and through investigation. This is true for a city within our own country as well as ones in other countries with completely different cultures.
Before technologies such as online maps and searches, we had maps of roads, some with a red + sign to indicate where the hospitals were. Sometimes the city installed blue street signs to point to places like a museum - but you found these by chance. And what about finding a toilet in a hurry when last night's seafood revisits? Even this is difficult in 2011.
A more friendly design interface for any CBD would be numerous information points that could pinpoint shops and services and provide advice, catering for the lowest common denominator. Not everyone has an iPhone, and my grandma wouldn't even know what one is.
Distributed Cognition - A load of academic boondoggle?
Sometimes academics can get so wound up in semantics that it is easy to lose sight of a broad concept. The research on 'distributed cognition' seems to be such an area - it is an attempt to make sense of a cognitive unit's environment and how that influences that unit's cognition. It is a noble endeavour, but one that is possibly clumsily named.
Hutchins's paper 'Distributed Cognition' (Hutchins, 2000) discusses the principle that not all cognitive events are "encompassed by the skin or skull of an individual". Broadly, it relates how external social and material factors take on some of the cognitive load of an individual. These external factors are called 'cognitive artefacts', which can take the form of a calculator, a nomogram, or even another person. It is careful, though, to delineate between "the cognitive properties required to manipulate the artefact from the computation that is achieved via the manipulation of the artefact" (ibid).
This is a crucial point. If the classical notion of cognition comprises sensing, memory, deduction and environment, then these 'cognitive artefacts' are simply part of the 'environment'. To say that these environmental factors comprise part of the cognition, or do the thinking, of an individual is misleading. While it is important to take these factors into account, they are simply tools. If a person gains new knowledge by interacting with another person, for example, then the second person has simply been sensed; any judgement or reordering of thoughts by the first person still takes place within that person. Cognition in this situation is not distributed, but remains singular.
The only way that 'distributed cognition' can make sense is if the cognitive unit is not one person but ALL people within a specified unit, such as a society. People learn from each other (but think for themselves), and the actions of the social group can often exhibit emergent properties that were not evident initially, or individually. These emergent properties can then be incorporated by an individual within that group. In this case, the individuals still think for themselves while interacting with others - the cognition has remained within 'individual skins'. In the same way, different parts of the brain can interact with each other to form an emergent behaviour or action of the whole brain (person), so the brain itself is a product of distributed cognition, as Hutchins discusses; the cognitive unit here is a localised set of neurons.
It is clear that cognition occurs by an individual. But an individual what? An individual set of neurons, a person, or a society? A casual reading of this subject might lead one to think that a person's thinking is done by another person or a machine, but that is not the case. Any cognitive unit's cognition is done entirely within itself, to the extent it is equipped to do so. To say that one unit thinks for, or in place of, another is boondoggle. At most, units can provide stimuli for each other to think for themselves.
Hutchins, E. (2000). Distributed cognition [2001]. Retrieved from http://files.meetup.com/410989/DistributedCognition.pdf
Sunday, September 4, 2011
Esoteric Word Activity
The response to this activity could be anything! The websites I have chosen have one thing in common, though: they helped me think about the ideas with a mix of images and text.
Abstract Thought
http://www.babelsdawn.com/babels_dawn/2009/01/abstract-thought-predates-homo-sapiens.html
This page highlights how abstract thought is a basis of our own nature, beginning way back.
Aspiration
If aspiration precedes inspiration and is characterised by a 'burning desire', then this page acts as a guide and explanation.
http://www.stevepavlina.com/articles/cultivating-burning-desire.htm
Aura
Colours and sounds are frequencies, and frequencies carry information and presence. The images on this page speak to our intuition, while the text speaks to our mind.
http://www.crystalinks.com/colors.html