> Also, you may be interested to know that ATARs are calculated using raw (HSC exam) marks, before they've been aligned.

Pretty sure you got that back to front. ATARs are calculated using marks well after they've been aligned.
> Also, you may be interested to know that ATARs are calculated using raw (HSC exam) marks, before they've been aligned.

Is this true?
> Pretty sure you got that back to front. ATARs are calculated using marks well after they've been aligned.
> Raw exam mark is aligned to give the Exam mark.
> Raw school mark is moderated, then aligned, to give the Assessment mark.
> These two (aligned) marks combine to give the HSC mark for a subject/course. This is what BOSTES shows on your HSC certificate; nothing to do with university admission yet.
> UAC then scales HSC marks to equalise the difficulty levels between courses, then aggregates them for the ATAR ranking.

UAC scales the raw HSC mark (raw exam mark + moderated assessment mark), not the aligned marks.
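The pipeline in the quote above can be sketched in code. This is a rough illustration only: the band cut-offs below are invented (the real raw-mark cut-offs are set by judges each year against the performance-band standards), and the piecewise-linear interpolation between cut-offs is an assumption. The one standard part is the last step: the HSC mark for a course is the average of the aligned exam mark and the aligned assessment mark.

```python
import bisect

# Hypothetical raw-mark cut-offs for the performance bands of one course.
# In the real process these are set by judges each year; the aligned-scale
# band boundaries (50, 60, 70, 80, 90) are fixed for a 2-unit course.
RAW_CUTOFFS    = [0, 35, 48, 60, 72, 85, 100]   # invented for illustration
ALIGNED_BOUNDS = [0, 50, 60, 70, 80, 90, 100]

def align(raw_mark):
    """Map a raw mark to an aligned mark, linearly within each band."""
    i = min(bisect.bisect_right(RAW_CUTOFFS, raw_mark) - 1, len(RAW_CUTOFFS) - 2)
    lo, hi = RAW_CUTOFFS[i], RAW_CUTOFFS[i + 1]
    alo, ahi = ALIGNED_BOUNDS[i], ALIGNED_BOUNDS[i + 1]
    return alo + (raw_mark - lo) / (hi - lo) * (ahi - alo)

def hsc_mark(raw_exam_mark, moderated_assessment_mark):
    """HSC mark = average of the aligned exam and assessment marks."""
    return round((align(raw_exam_mark) + align(moderated_assessment_mark)) / 2)
```

Note that UAC's scaling, the step under dispute in this thread, starts again from the raw marks rather than from the output of `align`.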
> Pretty sure you got that back to front. ATARs are calculated using marks well after they've been aligned. [...]

Not quite. Take a look at this extract from the 2015 ATAR scaling report, particularly the last paragraph.
> Is this true?

Well, I'm not a liar.
> UAC scales the raw HSC mark (raw exam mark + moderated assessment mark), not the aligned marks.
> Aligning is based on the standards set by BOSTES, but that doesn't compare courses to each other.

Yup.
> Not quite. Take a look at this extract from the 2015 ATAR scaling report, particularly the last paragraph.

Okay thanks, I stand corrected.
> Okay thanks, I stand corrected.
> So we can say the HSC mark is aligned and the UAC mark (aggregate) is unaligned but scaled.

Yup, sounds right.
> It would be unfair to use raw marks wouldn't it? Some exams are harder than others.

The marks are later changed ('scaled') to compensate for this. It's similar to the way the BOSTES aligns the marks, but for some reason the UAC feels that instead of using the BOSTES' aligned marks, they'd prefer to align/scale them themselves.
> The marks are later changed ('scaled') to compensate for this. [...]

Oh wow, I actually never knew this.
> The marks are later changed ('scaled') to compensate for this. [...]

I think I understand the different purposes now. BOSTES aligns the marks of each subject/course so they are comparable to previous years (i.e. after alignment, an 85 in a subject this year indicates about the same as an 85 last year). UAC, on the other hand, deals with ranking same-year students doing different subjects, so the marks are scaled to compensate for the different difficulty levels between the subjects.
> I think I understand the different purposes now. BOSTES aligns the marks of each subject/course so they are comparable to previous years [...]

No, UAC doesn't scale based on difficulty. They scale based on the strength of the cohort. To quote:

> The scaling algorithm starts from the premise that a student's position in a course depends on:
> - how good he/she is in that course, and
> - the strength of the competition.
> Scaling controls for the strength of competition.
> No, UAC doesn't scale based on difficulty. They scale based on the strength of the cohort.

That's saying it in a nice, diplomatic way to avoid offending the "lesser" subjects.
> That's saying it in a nice, diplomatic way to avoid offending the "lesser" subjects.
> Effectively it leads to the same outcome.

No, strength is different to difficulty. Strength is of the student; difficulty is of the paper.
> No, strength is different to difficulty. Strength is of the student; difficulty is of the paper.

It's not about the exam paper but the subject/course as a whole. We're talking about difficulty as in the capability to achieve the same mark in one subject relative to another, leading to the necessity to scale them differently.
> It's not about the exam paper but the subject/course as a whole. We're talking about difficulty as in the capability to achieve the same mark in one subject relative to another, leading to the necessity to scale them differently.
> If it's all about strength of the cohort, let's see this:
> http://www.uac.edu.au/documents/atar/2015-ScalingReport.pdf (page 36)
> Physics: HSC mean 36.5 / Software Dev: HSC mean 36.9 (rather similar)
> After scaling, Physics mean = 30.4 / Software Dev mean = 23.6
> I guess your explanation says the SDev cohort is worse than the Phys cohort in the common English courses. Another way of expressing it: it's less difficult to achieve a 36.9 mean in SDev than a 36.5 mean in Phys. Same effect overall.

They compare it to how those students did in comparison to English as a medium, and to students with the same subjects.
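The disputed point can be made concrete with a toy model (my own construction with invented marks, not UAC's actual algorithm): give two courses identical mark distributions but cohorts of different overall strength, measure each cohort's strength as its students' average mark across all their courses, and shift each course's marks so the course mean matches that estimate. Equal course means then scale to different values, with no notion of 'difficulty' anywhere in the calculation.

```python
# Toy data (invented): two courses with the SAME mean mark, whose cohorts
# differ in overall strength via a shared course (english).
students = {
    "a": {"physics": 80, "english": 85},
    "b": {"physics": 70, "english": 75},
    "c": {"sdd": 80, "english": 65},
    "d": {"sdd": 70, "english": 55},
}

def course_mean(course):
    marks = [m[course] for m in students.values() if course in m]
    return sum(marks) / len(marks)

def cohort_strength(course):
    """Average of each enrolled student's mean mark over ALL their courses."""
    overall = [sum(m.values()) / len(m) for m in students.values() if course in m]
    return sum(overall) / len(overall)

def scaled_marks(course):
    """Shift the course's marks so its mean equals the cohort's strength."""
    shift = cohort_strength(course) - course_mean(course)
    return {name: m[course] + shift for name, m in students.items() if course in m}

print(course_mean("physics"), course_mean("sdd"))          # 75.0 75.0 (equal)
print(cohort_strength("physics"), cohort_strength("sdd"))  # 77.5 67.5
print(scaled_marks("physics"))                             # {'a': 82.5, 'b': 72.5}
```

Both courses start with a mean of 75, but the physics students average 77.5 across all their courses while the SDD students average 67.5, so the same raw mean maps to different scaled means.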
> It's not about the exam paper but the subject/course as a whole. [...]
> If it's all about strength of the cohort, let's see this: [...]
> I guess your explanation says the SDev cohort is worse than the Phys cohort in the common English courses. Another way of expressing it: it's less difficult to achieve a 36.9 mean in SDev than a 36.5 mean in Phys. Same effect overall.

Well no, that's your own misinterpretation. Correlation is not causation: you have incorrectly concluded that because UAC reduced the mean for SDD much more than for Physics, it must be because SDD is more difficult (however you want to define that). This is simply not true, and it is not the same effect.
> The model underpinning the scaling algorithm specifies that the scaled mean in a course is equal to the average academic achievement of the course candidature where, for individual students, the measure of academic achievement is taken as the average scaled mark in all courses completed. The model specification leads to a set of simultaneous equations from which the scaled means of 2 unit courses are calculated.

So this implies both the cohort's academic achievement and each individual's academic achievement are modelled to derive the scaled mark for that subject. At no point is 'difficulty' assumed by UAC. They don't know whether the exam or the overall subject is difficult, even by your definition.
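The "set of simultaneous equations" in that extract can be sketched as a fixed-point iteration. This is my own sketch of the stated model, not UAC's implementation, and it simplifies by shifting each course's marks uniformly (the real algorithm also adjusts the spread of marks): each student's achievement is the average of their scaled marks, each course's scaled mean is reset to the average achievement of its candidature, and the two definitions are iterated until they agree.

```python
def _mean(xs):
    return sum(xs) / len(xs)

def solve_scaled_means(students, iterations=100):
    """students: {name: {course: raw_mark}} -> {course: scaled_mean}.

    At the fixed point, each course's scaled mean equals the average, over
    its candidature, of each student's mean scaled mark (as per the quoted
    model). Uniform per-course shifts are a simplifying assumption."""
    courses = {c for marks in students.values() for c in marks}
    raw_mean = {c: _mean([m[c] for m in students.values() if c in m])
                for c in courses}
    scaled_mean = dict(raw_mean)  # start from the raw means
    for _ in range(iterations):
        # A student's scaled mark in a course = raw mark + the course's
        # current mean adjustment.
        achievement = {
            name: _mean([mark + scaled_mean[c] - raw_mean[c]
                         for c, mark in marks.items()])
            for name, marks in students.items()
        }
        # Reset each course's scaled mean to its candidature's achievement.
        scaled_mean = {
            c: _mean([achievement[n] for n, marks in students.items()
                      if c in marks])
            for c in courses
        }
    return scaled_mean
```

On data where two courses have equal raw means but cohorts of different strength (linked through a shared course), this converges to different scaled means for the two courses, which is the same qualitative behaviour as the Physics/SDD figures above.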