Monday, December 25, 2017
eLearning (online course): Understanding Digital Learners (UDL) Part 1
This is a course that I signed up for months ago. I've 'kept' it till the year-end to complete, partly because it was hard to find one good stretch of time to sit through a few related modules at one go.
Some people might think that it's a 'waste' of time to sign up for and attempt an online course in an area that one is familiar with. However, I would think otherwise, because I believe that, no matter how familiar we are with an area, there are always some gems that we can draw out from it. (This was true when I attended some IT-skills-related workshops in the past.) I guess it's the mindset that we hold at the point of learning that matters more.
Well, doing the course at this juncture of the year (a few days before the new academic year starts) helps us to revisit some key ideas that we ought to keep in mind at all times in our professional practice.
I quite like this clip - it reminds us that teaching is a unique profession that could survive without being 'responsive' - it sounds more like we can live like a frog in a well (without technology). Is that really true? Well, if we refer to content knowledge only, probably still a yes for now and the near future; however, let's revisit the primary purpose of schooling. Is it solely about content knowledge, or about the soft skills that are relevant to the demands of the current and future workforce (i.e. for survival)? Which type of skill is more enduring? Is having a certificate that bears a stamp of "knowledge" enough? Hm... Food for thought, isn't it?
If we are clear about the primary purpose of schooling (or education), then let's face the reality, live in reality, and prepare our students for reality! That would require teachers to better understand what the reality is, in order to broaden and sharpen our repertoire of skills so that we are able to equip our learners with what is necessary.
Sunday, July 16, 2017
Parking Lot: Engagement 'equates' to Learning?
Source: KQED News - How to Determine if Student Engagement is Leading to Learning
The article deals with a question that I think most educators would be able to answer - engagement does not necessarily imply that learning takes place. It is a myth that was clarified long, long ago.
Hence, while technology may engage the learner, it does not necessarily mean that the learner has learnt effectively through the use of technology.
Of course, we need to first unpack what 'engagement' means (in the educational context). I think what's described in the 'hidden curriculum' clearly lists the elements that need to be present and interact "in" the learner. The outcome extends to the level of motivation they have to learn and progress in their education.
The article adds that engagement should also translate into deeper learning (initiated by the learner). To evaluate the degree of success, students must be able to demonstrate the understanding and skills they are expected to have acquired through the 'engaging activities'.
Saturday, June 03, 2017
Timed assessment - what does it try to find out?
This heading is eye-catching, especially to a Maths teacher!
It came from a post at Edutopia: Tips for Tackling Timed Tests and Math Anxiety
It starts by explaining the drawbacks of "timed assessment" and subsequently offers some suggestions.
Indeed, "timed assessment" is something we have been subjected to all this while - not just in Mathematics, but also in other subjects. It goes back to the rationale of such assessment - and of course, these days, more people discuss the purpose of assessment, which has evolved from Assessment of Learning to Assessment for Learning.
Would you associate "Timed Assessment" more with "AoL" or "AfL"?
Timed assessment is widely used for placement purposes, though some "major assessments" include other components like coursework or core assignments to be completed over a given period of time (e.g. those that involve some research or experiment to be carried out).
Instead of focusing on the concerns arising from timed assessment, perhaps we can look at its purpose from another perspective?
There is no doubt that timed assessment causes stress to candidates, as they are expected to demonstrate what they know within a limited time; psychologically, it creates tension in how the brain processes information. One's heart pumps faster as the end time approaches - haven't we experienced that before?
While I have not read up further on this area (in any academic/ research papers), based on experience and reflection over the years, I think timed assessment has its merits. One of its uses is to help us determine how well one has understood the concepts and is able to apply that understanding fluently, which is also a demonstration of the degree of mastery (or proficiency)! It comes with two core elements: Understanding/ Mastery + Effort
(with the assumption that one needs to put in effort for what's learnt to stay!)
In other words, students could have understood what was going on in class, and might even be able to articulate a deeper understanding. However, when one lacks practice or has not had enough rounds to rehearse its use, he/ she might still be able to retain and 'retrieve' the necessary knowledge and skills to process the task in the timed assessment, but would need more time to do so compared to a peer of the same capacity who has put in the effort to internalise those knowledge and skills.
Given that 'limiting' factor, which adds to the tension/ stress, this student would not be able to complete the task as quickly as a peer of the same ability who has put in the effort to practise, rehearse and internalise.
That, I think, based on experience, accounts for most instances where students claim they do not have enough time to complete the paper!
一分耕耘,一分收获 (you reap what you sow)
This is not about finding out who knows and who doesn't - any form of assessment would be able to do that.
I think...
It is about differentiating who learns better and shows the potential to extend their learning further.
Sunday, May 21, 2017
Feedback Strategy - Praise-Question-Polish (PQP)
One of the key takeaways from the recent webinar is the "PQP" strategy to provide feedback.
(once again, thanks to ETD colleagues who organised the virtual event, and thanks to Dr Tay for her sharing.)
Well, PQP does not only apply to students (i.e. to guide them to give constructive/ useful feedback); it is also useful for teachers in framing our feedback to our students. Personally, I will make a more conscious effort to adopt this strategy and role-model it.
Mindset change:
Oops! Having said that, I must clarify that this does not mean I do not provide feedback to my students. Of course, some of us would say: what kind of feedback do you expect Maths teachers to give? Isn't it a tick or a cross, or at most circling the step where the error occurred? Well, yes, that's one form of feedback commonly practised by many of us, especially in our beginning years as teachers. Isn't "wordy" feedback more relevant to other disciplines such as languages or the humanities?
Not true.
I came to learn that there's a place for this in the subject (as I have polished my craft over the years)!
事在人为! (It all comes down to human effort!)
There is a place for less conventional exercises - for example, alternative assessments or performance tasks that come with rubrics! While writing qualitative feedback is something I was not used to, I think this is one way students benefit more (rather than trying to 'interpret' the quality of their work from a tick or a mark).
Based on my humble experience, the easiest way to give feedback is to make reference to the rubrics - not by lifting the entire description, but by contextualising (or customising) it to the work students have submitted. Students will learn through the feedback and make the relevant changes/ improvements/ enhancements to better a piece of work of a similar nature. Personally, I observed this change in quality in the viva voce assignment (as compared to the practice task). Undeniably, it takes time to give feedback, and tasks of this scale are not assigned on a regular basis (like once a month). My point here is that, given a relevant amount of feedback, students will respond and make progress. It's worth the time and effort.
My Personal Reflection:
The next question is: is making reference to the rubrics enough?
You see, I'm trying to make a connection between what I have been doing all this while and the PQP strategy. Being very focused on identifying what has gone wrong or is incomplete, and targeting improvement, "Question" and "Polish" are two aspects that I'm familiar with. What's lacking in my current practice (though not totally missing), and the area I will need to work on, is the first "P" - Praise.
What we could do:
Generic praise (such as "excellent" or "well done") without elaboration is not useful, though it makes the day of the learner who receives it. Praise needs to be targeted so that it helps to affirm and (hopefully) spur the learner to do even better.
Similarly, when it comes to getting students to give peer feedback, as teachers we need to provide scaffolds so that learners are really giving useful feedback to their peers, directed at the context (i.e. related to the task - the disciplinary knowledge and skills involved).
"Good job" or "Awesome" without elaboration is not very helpful, though it motivates the learner.
In fact, before getting students to provide peer feedback, we should ask the following questions:
- Do students know the purpose/ intent of feedback?
- Do they know what areas they should specifically be looking out for when giving feedback (e.g. the strategy or the aesthetic aspects of the piece of work)?
- Do they know what makes feedback useful to the recipient?
In my opinion, to make peer feedback productive, especially for beginners at giving feedback, it is necessary to discuss the three questions above. With a clear rationale, the PQP strategy would make more sense to the students. In addition, stem questions would be necessary and very useful to help them kickstart the peer evaluation process. Though this seems very guided, it takes time for one to become familiar with the vocabulary before adopting it. Last but not least, the teacher needs to model its use so that it becomes something the students are familiar with before they take on a similar role (giving feedback).
Indeed, I think we should seek clarity on the following before embarking on it and getting others (i.e. students) involved:
- Why Praise?
- Why Question?
- Why Polish?
To "question" seems obvious, isn't it? It seems to be a forte of most teachers; but sometimes we might "mix it up" with "polish". To question is about asking "why"? To seek understanding. To probe for clarity. To ask for elaboration. It is quite natural that we could have already passed a judgement to the answer/ working (sub-consciously).
- Does this sound like the "fault finding" stage? To some extent, yes - with the intent to surface the error/ issue that one might have overlooked so that they can be addressed. So, we need to be mindful of the words and tone used in this stage when giving feedback. And it is important to keep in mind that the feedback is targeted at work and not the person.
- Indeed, it is important that the recipient of the feedback is clear of the rationale so that he/ she would receive the feedback with an open mind, and not attempt to defend for the sake of defending.
Point to Ponder:
Indeed, after crafting the previous section (of the post), a thought popped up - shouldn't we give the learner some wait time between the "Question" and "Polish" stages? With the question posed, shouldn't we give the learner time to think, process and respond? If we "polish" almost immediately after the "question", are we depriving them of learning by handing them our 'prescription'? I wonder.
Well, I guess it all depends on how rigorous the learning activity requires the peer feedback to be.
In fact, I think sometimes it would be more effective if we "re-organise and customise" the strategy to become:
- "Praise" followed by "Question"
- "Praise" followed by "Polish"
Scaffolds:
It is not difficult to find useful stem questions to aid the crafting of feedback for the three stages (PQP).
Here are some examples...
Praise
- I like the part where...
- I like the way you explained...
- I like the order you used in your writing because...
- I think the ending was...
Question
- I was confused with...
- I didn't understand...
- Why did you leave out...
- Why did you include....
Polish
- Don't forget to add...
- Are your paragraphs in the best order....?
- Could you add more to this part....?
- Have you tried...?
Sources:
- PQP: Praise-Question-Polish: A Group Revision Strategy that Works
- Teachers' Notebook: PRAISE! QUESTION! POLISH! Peer to Peer Editing!
- Peer Response: Teaching Specific Revision Suggestions
Technologies for PQP:
To facilitate documentation so that learners can refer back and learn (in the process of giving/ receiving feedback), we can consider the following tools (note: I've only selected those that I think are common and easy to use):
(i) Padlet(s)
As suggested in the example shared by Dr Tay, Padlet could be used to support the facilitation of the three stages of PQP. In the illustration, one padlet was divided into three sections.
One suggestion to enhance the technology:
If one is expecting several responses at each of the stages, it would be good to have three different padlets, each serving one stage, so that the feedback does not become messy.
I would possibly use this when first introducing the PQP strategy to students, to see how they provide feedback (for each of the stages) to a common problem. From the padlet posts, they would learn (from their peers) how to provide relevant feedback to the task presented.
Based on experience, the stickies may overlap each other, so it might be a bit challenging to manage. In addition, it would be much neater to set the posting layout to a grid or a linear stream.
One could also activate the "Comment" feature for the padlet to allow the recipient to respond to the feedback given.
An illustration of how it could look:
The assumption: Re-classification of stickies is not necessary.
(ii) Google Form/ Sheets
Google Form is one of the most convenient tools, and it can be 'recycled' to serve a few exercises if what's shown in the form is generic enough.
- A Google Form is easy to set up. Broadly, the structure would be to collect the responses using fields for the problem number and the three PQP stages.
What I like is that the feedback can be sorted by problem number (or even colour-coded).
In other words, we can use the same form in a classroom where students provide feedback to different sets of answers/ presentations at the same time.
The only drawback is that the learner cannot respond to the feedback in the spreadsheet unless he/ she has the right to edit it. Again, depending on the maturity of the class, the teacher may choose whether or not to grant that edit right.
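Since the responses land in a spreadsheet, they can also be processed outside Google Sheets. Below is a minimal sketch (my own illustration, not part of the original workflow) that assumes the responses are exported as a CSV with hypothetical column names "Problem", "Praise", "Question" and "Polish"; it groups the PQP feedback by problem number so that each set of answers gets its own bundle of comments.

```python
import pandas as pd

# Hypothetical column names for the exported PQP feedback form;
# adjust them to match the actual questions set up in the Google Form.
FEEDBACK_COLUMNS = ["Problem", "Praise", "Question", "Polish"]

def group_feedback_by_problem(csv_path: str) -> dict:
    """Group the PQP feedback (rows of the exported CSV) by problem number."""
    df = pd.read_csv(csv_path, usecols=FEEDBACK_COLUMNS)
    return {
        problem: rows[["Praise", "Question", "Polish"]].to_dict("records")
        for problem, rows in df.groupby("Problem")
    }

if __name__ == "__main__":
    feedback = group_feedback_by_problem("pqp_responses.csv")  # hypothetical file name
    for problem, entries in feedback.items():
        print(f"Problem {problem}: {len(entries)} feedback entries")
```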
(iii) Google Slides or Doc
If Google Slides are used to collate the work, for example, in a language class, students work in pairs to annotate the visual text in an assigned slide; at the end of the activity, the pairs can be assigned to provide feedback to one or two other groups.
We can simply insert the PQP 'structure' into the speaker notes of each slide so that students can enter their feedback for their peers.
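For what it's worth, here is a tiny sketch of what such a PQP 'structure' could look like as plain text to paste into the speaker notes - the template wording and group names are placeholders I made up, drawing on the stem questions above.

```python
# Hypothetical PQP template for a slide's speaker notes; the stems and group
# names are placeholders and can be replaced with the class's own wording.
PQP_TEMPLATE = """Feedback from: {reviewer}
Praise   - I like the part where ...
Question - I was confused with ...
Polish   - Could you add more to ...
"""

def pqp_notes(reviewers):
    """Combine one PQP block per reviewing group into a single notes text."""
    return "\n".join(PQP_TEMPLATE.format(reviewer=r) for r in reviewers)

print(pqp_notes(["Group A", "Group B"]))  # paste the output into the speaker notes
```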
Reflection: Learning the "HOW" and the "WHAT" of learners' thinking
Technology is a powerful tool that enables us to "see" and "document" what's going on in our minds, hence creating a much richer learning experience, for self and others!
Well, the above seems too broad and generic, especially the second half of the statement. Isn't it a given, some of us might think? Well, what assumptions do we make if we say it's a given? That the first half of the statement is TRUE for the learners!
Episodes of learning activities flashed across my mind while attending (and mentally preparing for) the recent webinar. Indeed, I valued this opportunity, as I hardly had a chance to sit down, think about T&L more thoroughly and make good connections across my practices during this assessment period.
Sometimes, we tend to claim (and thereby generate excuses) that, due to the nature of the subject discipline, there is little room for technology to come into the picture as often as it could. Well, it's often a "yes" if we look at what's expected of the students in the syllabus document - e.g. they are expected to carry out proofs in Mathematics, applying knowledge from various units and writing to present their train of thought, all on paper. On the other hand, how do we know the students are not just regurgitating what they had memorised?
- Drawing from my personal experience, sad to say, that's how I cleared my Physics at "O" Levels and went through the same when I did Calculus in my university days! I'm glad that I joined the teaching profession; over the years, I have gained better insight into how learners learn and what teachers could do differently to enable learning. I believe that if I had learnt, or been shown, how to learn abstract topics differently (or if my teachers had approached teaching differently), my learning experience and perspectives would have been rather different today. The teacher/ lecturer had taught; but I had not learnt well.
- From another perspective, this experience has helped me to better understand the struggle that learners in a similar situation have to go through. It helped me to recognise and appreciate how important it is for teachers to possess the necessary and relevant knowledge and skills to bring about positive experiences for the learner.
- To conclude this paragraph: it's about the type of assessment item (to draw out what we want to know about our learner) and the timeliness of our response to meet the need.
Let's refer to the Singapore Mathematics Framework (syllabus document, p14) - which aspect(s) do we try to develop/ inculcate in our students? We are good at some areas outlined in the 'pentagon' (in particular, the skills, concepts and processes). Metacognition is an area that we should work harder on; and some aspects of "processes" could be further strengthened by riding on the affordances of technologies!
Making thinking visible is one approach that enables us to "see" how students think and how they process data/ information, and it definitely enables us to diagnose any misconceptions. It does not stop there! Because we can "see" the misconceptions, we are able to surface the difficult points (Three-Point Framework, Yang & Ricks, 2012) which we, as educators, might not be aware of! There are many generic thinking routines that we can pick and choose from. One way of implementing them is to make the routine "really" visible - flag it each time it is used. Another way is to embed it more seamlessly in the way the problem/ question is put across - which I think is more 'natural'.
To probe how they think, we often get them to elaborate on their thinking process, which includes articulating:
- What are the key words they identify - which frame the direction or the way they will approach the question/ problem?
- What information/ data would they use or need to solve the question/ problem?
- What strategies would they consider? How do they decide which to use if there is more than one way? (What are the criteria?)
- How do they articulate the steps clearly - where the concepts are explained?
- How would they check that the answer is reasonably correct?
How do we do this in a less elaborate but deliberate manner during lessons, and at the same time reach out to as many learners in the class as possible?
I guess that's where technology makes its entrance and its presence felt in this learning ecosystem.
The key is accessibility. It's not about physical access (no doubt, students are well provided with technological devices in Singapore schools); rather, technology provides a channel for teachers to access learners' thinking far faster and more easily. With an appropriate choice of tools, used in a well-thought-through manner, we are able to "see" how students think!
One of my favourite examples is getting students to explain, via viva voce, how they manage a problem (notice that I use 'manage' instead of 'solve'?). They:
- are required to articulate the background (concepts)
- describe how they dissect the question (to understand what's given and what's required)
- list the strategies they can apply, and
- articulate the considerations they take into account when choosing a method;
- and eventually describe how they would check the reasonableness of their proposed solution or answer!
Students do not know and do not carry out the above steps naturally. It's not something that they are born with. Nevertheless, quite often, as teachers, we assume that all learners are quick to pick this up, forgetting that they need time to hone their skills and that feedback is necessary for them to make positive progress. Scaffolding is a must to bring students through all these stages, and the guidance given in their first attempt would be far more than in subsequent attempts. Rubrics help. But we must explain the rubrics and give illustrations so that students become aware of the expectations and polish their craft along the way.
Over the years, error analysis has been introduced into mathematics classrooms. It serves dual purposes. Indeed, it can be used as a strategy for differentiated learning, though I did not notice this when I first introduced it to my students.
The original intent, which remains one of the primary intents, is for students to identify and explain what's wrong with the working (which is usually the result of a misconception). Thinking more deeply, what is the underlying assumption behind this?
- In order to identify the error, the assumption is that students already know the concept well enough to see or identify the step that went wrong. There is another group who can identify the error but are unable to explain the misconception - simply because, through their own experiences, they might have made enough similar mistakes, or had similar mistakes pointed out to them before, which helps them to spot the mistake very quickly. It is possible that this latter group could make the same mistake themselves, because they have not yet understood the concept behind the operation. To differentiate these two groups of students (i.e. who really understands what's going on) would require them to explain the thinking behind the error. This is the first step in developing their metacognition. The ability to articulate it demonstrates how well they know the concepts (as concepts in isolation, as well as in their links to others).
- On the other hand, error-analysis-type exercises would quickly filter out those who are still grappling with the basics or still functioning at the operational level. The likelihood is that the given working reflects the very misconception these students already have, so they would not be able to surface any mistake in the working at all! Hence, their conclusion is "nothing is wrong" - but I guess that's what would worry them!
- We also have rote procedural learners who would probably re-attempt the entire question using the 'correct' method. To surface the error in the given working, they "spot" the difference and try to describe the mistake. They are unlikely to be able to explain the thinking behind the mistake (i.e. the misconception), but are able to tell operationally what has gone wrong.
How does technology come in to support the above?
It can be as simple as getting students to provide a brief explanation to support/ justify their answer. No sophisticated technology is required to facilitate the above processes - a blog (with comments) or a Google Form/ Spreadsheet (embedded in a blog) will do.
- E.g. 1: (S1) Algebra - Which is larger?
- E.g. 2: 6 AM Quiz - Which is larger? Question, Response
- E.g. 3: Substitution: Can you tell what is wrong?
- E.g. 4: Factorisation: Which method to use? Question, Response
- E.g. 5: What is the deciding Factor? HCF or LCM. Question, Response
- E.g. 6: Viva Voce (posted and organised on YouTube).
- In recent years, viva voce clips have been submitted via Google Classroom.
- When students posted their work on YouTube, we could provide feedback via the comments feature. Similarly, we can also post our feedback to students using the comments feature in Google Classroom.
Saturday, May 20, 2017
Webinar: Assessment for Learning (AfL) - what does it encompass?
Attended a webinar (organised by ETD) yesterday and glad to have one solid hour of learning and engagement :)
At the start of the webinar, a poll was conducted among the participants to indicate which of the following they thought would describe assessment for learning most appropriately:
- Thermometer
- Stethoscope
- Mirror
- GPS (Global Positioning System)
- Where the learners are?
- How best to get there?
- Where the learners need to be?
Based on the way the questions are phrased, there is no doubt that GPS seemed to be the most appropriate "answer".
Personally, I think all four items are relevant and contribute to the big picture, especially if we talk about "effective" assessment for learning.
It was obvious that GPS best describes the "moves" involved:
To begin with the end in mind (where the learners need to be eventually), we need to know our learners' strengths and gaps, what they know, what they do not know, their learning challenges/ difficulties and who they are (which would involve other areas like learning styles). Only then can we identify the most appropriate/ suitable approach to integrate into our lesson/ assessment design so that they benefit from the process.
How about the rest (thermometer, mirror and stethoscope)? Can we do without them?
- Thermometer - it's about measurement. Indeed, I would equate it with our sensing - sensing the readiness of students to move on, or sensing whether they are stuck at a difficult point. It also means being sensitive and responsive to the students' needs, which requires us to be flexible. For example, do we insist on completing what's planned for the 60-minute lesson ('die-die' must finish what's planned), or should we "detour" or change path because our sensing tells us that students need additional support or scaffolds that we might not have anticipated? Will we end up in a situation where "we have taught; but have the students learnt"? This is essential to alert us to where the learners are.
- Stethoscope - an item that I would associate with diagnosis, which I think is an important step. If the diagnosis is inaccurate, we would plan and implement something irrelevant or inappropriate, and the good intent would never lead to a fruitful or productive outcome. Diagnosis does not necessarily refer only to what's immediate; sometimes it's where we surface other underlying issues, which may or may not be something we can address immediately. We need to 对症下药 (prescribe the right remedy for the ailment), else it would lead to 前功尽弃 (all prior effort going to waste).
- Mirror - closely associated with reflection (on practices). I would regard this as a 'given' - a teacher would regularly draw on his/ her past experiences, and review and reflect on his/ her practices to hone his/ her skills. The reflection would also mean he/ she could best match the approaches/ strategies to what is needed (as surfaced through the diagnosis).
1 + 1 + 1 + 1 > 4 impact.
Kahoot! in Learning - a Reflection
Kahoot https://getkahoot.com/
I was introduced to "Kahoot" several years ago.
It became, and still is, very popular among colleagues as well as students. Teachers use this tool for short review quizzes (most of the time); students have also used it when designing activities to engage the student community, and even parents, and in outreach programmes!
Without carrying out any serious research - based on on-site observations in schools, feedback from those who have used it, and online articles/ postings - it is not difficult to understand why it is so popular. Here are some reasons:
- The tool is very easy to use! Those of us who 'hang around' and "play" with web applications would agree that navigation in this platform is intuitive. Kahoot even has a guide to help newcomers use it - click HERE for the Kahoot guide!
- It is hassle-free - only the creator needs to sign up for an account to create the quizzes. To administer a quiz, the creator simply clicks the start button to generate a code, which participants enter at a 'generic' website, "https://kahoot.it", together with a name. It's ready to go! The administration of the activity is simple! And you can involve many people at one go!
- One of the 'winning' elements of this tool is that all participants do the 'same' thing (i.e. look at a common screen) at the same time. In other words, the pace of attempting the quiz is "synchronised". All participants are directed to the same objective - wait for the question to appear and choose the correct answer! Everyone sees each question at the same time and is given the same amount of time to process it. The only 'differentiation' is that the system times the responses, hence the "higher-ability" participant stands to gain more points! (The assumption here is that a higher-ability participant takes a shorter time to process the question.)
- Of course, the unique music that comes with all Kahoots creates the excitement (or perhaps anxiety for some), and it is what stirs the emotions throughout the quiz! Imagine playing Kahoot without the music (which I have done before) - the level of engagement would drop by more than half! The music is the soul of the entire activity!
The engagement is high (again, it depends on how we define engagement!).
On another note, we need to bear in mind that engagement does not necessarily equate to learning.
- What is going on in the participants' minds when they go through the entire activity?
- Is the urge to win the game (which could be hyped up by the music) far greater than the willingness to take time to look at the questions more deeply and consider what's expected? Are the participants simply regurgitating facts and answering non-challenging questions "mindlessly"?
- Indeed, what does the designer of the quiz want to achieve? With this operating environment, to what extent will the objective be achieved?
- There is another assumption - that all participants work well in this 'operating environment', with the music hyping up their emotions while they try to process the answer with excitement (or anxiety!). It probably works for many, but not necessarily all!
This is how it is often used/ implemented in class:
- All the questions will be administered at one go - the pace and question projection are all automated.
- Participants usually stare at the question and (try to) process it in their heads before making a choice (since the questions are multiple-choice).
- Because of the pace, participants who answered a question incorrectly do not get the opportunity to figure out what went wrong, or the chance to clarify at all.
To transfer this notion to lower secondary maths learning, we can create quizzes to get students to identify terms in quadratic expressions (e.g. coefficient, constant, variable), identify the number systems that a given number belongs to, match linear equations to graphs (with observable characteristics like positive/ negative/ zero/ undefined gradients, positive/ negative intercepts), or read data from graphical representations, just to name a few. It is quite obvious from these examples that the questions in these quizzes cannot be too complex, and certainly there are some limitations.
Similarly, for Science, it could be used in areas like identifying the correct chemical structure of a compound, characteristics of the elements, tests for the presence of selected compounds, and so on (just to name a few).
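Coming back to the maths examples above, here is a minimal sketch (purely my own illustration, not tied to Kahoot itself) of how the answer key for a "match the equation to the graph" item could be worked out, assuming the line is given in the form ax + by = c.

```python
from fractions import Fraction

def describe_line(a: int, b: int, c: int) -> dict:
    """Describe the gradient and y-intercept of the line a*x + b*y = c."""
    if a == 0 and b == 0:
        raise ValueError("Not a line: a and b cannot both be zero.")
    if b == 0:
        # Vertical line x = c/a: undefined gradient, no y-intercept in general.
        return {"gradient": "undefined", "y_intercept": None}
    gradient = Fraction(-a, b)      # rewrite as y = (-a/b)x + (c/b)
    intercept = Fraction(c, b)
    sign = lambda q: "positive" if q > 0 else "negative" if q < 0 else "zero"
    return {"gradient": sign(gradient), "y_intercept": sign(intercept)}

# Example: 2x + y = -3  ->  y = -2x - 3  (negative gradient, negative y-intercept)
print(describe_line(2, 1, -3))  # {'gradient': 'negative', 'y_intercept': 'negative'}
```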
Well, I'm not suggesting giving Kahoot a total miss... but I think the point to emphasise here is that we need to be clear on the objective(s) we have in mind for the learning experience we intend to create, or what we hope to accomplish through the activity.
To stretch the tool a bit more, since we can't change the features in the tool, we can try using the tool differently. (That's where the user makes a difference!)
The pace at which the questions are released can be managed by the user! If we look at the platform more closely, we can set a different duration for each question within the same quiz!
Here are my takeaways from modifying the way the quiz was administered:
(1) Depending on the level of difficulty of each question, a different duration was set.
(2) Prior to the start of the quiz, students were pre-empted to have their writing materials, notebooks and calculators ready so that they could work out the answers.
I was glad that a handful of students actually tried working out the answer before making their choice. As anticipated, most of them did not pick up their pens to write or scribble anything even when the questions became more complex. It was quite obvious that many just randomly picked an answer (after eliminating what was clearly not possible).
(3) I paused after all students had submitted their responses. We took a bit of time to discuss how to 'dissect' the question and to think aloud about how to solve the problem. Indeed, I think that helped when similar questions popped up subsequently - more students picked up a pen to scribble a couple of lines before making a choice. The emphasis here is accuracy over speed. When given a reasonable amount of time, the participants became more willing to take the time to process the question carefully. This was actually what I hoped to see and achieve - for students to process the questions mindfully, so that learning is not compromised by the excitement and the urge to win!
(4) Learning does not end here! Anticipating that some students would like to revisit the questions (which they might not be able to access easily), I extracted the questions into a set of slides and made them available to the students. Suggested working is also included so that they can see how each question could be tackled (from the teacher's perspective).
Last but not least, borrowing a Chinese saying, 规矩是死的,人是活的 (rules are rigid, but people are flexible) - one has to exercise flexibility, assess the nature of the topic and be very clear about the objective before designing the quiz items and deciding how to administer them.
Saturday, April 29, 2017
The 21st Century Learner
Key points:
- 0:09-0:18 - "the most important thing for kids growing up today is the love of embracing change"
- This is not only applicable to "kids", but to everybody - since "the only thing that is constant is change" (Heraclitus).
- From a more confined/ narrow perspective, it's the means for survival in the (near) future. On another note, it's only when one embraces change that one continues to find excitement and new meaning relevant to the evolving world/ landscape. It is a necessity for making fast-enough progress.
- 1:15-1:20 - "Consuming media" vs "Producing media"
- 1:35-1:51 - "Learning the content" (20th century) vs "learning the tools and skills of remaking the content and becoming the creator/ producer"
- 1:52-2: - "...learning outside of school matters tremendously for the learning in school..." and "...kids informal learning with new media is part of an already existing set of understanding that the educators have of the importance of the home environment for the pure environment for the community for learning that happens in schools..." - "How can we be active about linking those two together"
- 2:43-2:58 - 21st century skills is not about "...skills for the workplace and skills involving technology..." but it's about "...skills for creativity specific engagement for social life the full range of experiences that young people will be involved in the future."
- 3:12 onwards:
- A baseline of standards, literacies and expectations about what kids need
- to participate in contemporary society
- to be reflective
- to create a shared space for kids and adults - coordinate formal & informal learning
Diana Laurillard extract iCTLT 2016
Ideas
- "1:00-1:15": To hone one's practices, exploration does not necessarily start with a clean slate. What's more important is about keeping our minds open and being observant. I think being open and able to seize what's probably relevant, and adapt to our context (i.e. our environment (in this case, the classroom)) are two important attributes educators should have. We can look out for others' existing practices and make connections to our existing practices, adapt ideas or practices at varying extent, try them out and further contextualise to make it relevant to our own classrooms. This, in fact, could be extended to linking two or more ideas that we draw from different places to complement each other. It sounds like 'mashing' but there's where we turn ideas into richer ideas! To some effect, it's a collaboration of ideas facilitated by us.
- "3:15-3:45": Our mental model of "Professional development" has evolved - as we think deeper on the application of learning (or how learning is applied, again a varying depths). It's no longer just about attending a workshop and apply what's learned to our practices (perhaps "mathematically", a translation?), but to understand and seek deeper meaning, and selectively apply this knowledge to our context.
- Role modeling - doing what we want our students to do, modeling & demonstrating - to influence
- Collaborative efficacy
- "5:15-5:27": Building Learning Technology Knowledge through "Online collaborative community sharing testable open co-designed peer-reviewed adaptable learning design"