A Level results: can we trust an algorithm?

Letting an algorithm decide something as important as our children's futures highlights the risks when AI decision-making is not transparent.

by Attila Tomaschek
August 18, 2020
in Insight

At a time when students’ lives in the UK have already been upended by a deadly pandemic that cut the school year short, an added layer of chaos and controversy has erupted after officials decided to entrust pupils’ A-Level grades to a computer algorithm.

In theory, the algorithm used to determine the grades would be the “fairest possible for students progressing on to further study or employment as planned”, according to exams regulator Ofqual.

In reality, it turned out to be the opposite, something education officials should have seen coming.

The algorithm was supposedly engineered to produce fair results across the board while controlling for potential grade inflation by teachers who want to see their pupils succeed. What ended up happening is that 39 percent of students saw their grades drop compared to their teachers’ recommendations. That percentage is eye-opening in itself and casts doubt on the algorithm’s capacity to generate accurate results. But when we consider how disproportionately pupils from disadvantaged backgrounds were affected, it becomes all the more apparent that something is not quite right.

Ultimately, the algorithm sought to establish how this year’s class of pupils would have performed on their exams based heavily upon how previous cohorts at each individual school had performed. As a direct result of how this was set up, high-achieving students from disadvantaged backgrounds were unfairly punished while underachieving students from affluent areas were undeservedly rewarded. Essentially, this means that students’ grades were largely predicated on their postcode and socio-economic status.
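To see why grading from historical school performance can punish individual outliers, consider a deliberately simplified sketch of the mechanism described above. This is not Ofqual’s actual model: the standardise function, the school’s grade history, and the pupil names below are all invented for illustration. It only shows how fitting this year’s pupils onto a previous cohort’s grade distribution caps even the strongest candidate at whatever grades that school achieved before.

```python
# Hypothetical illustration only -- not Ofqual's actual model.
# Grades are assigned by ranking this year's pupils within their school and
# fitting those ranks onto the school's historical grade distribution.

def standardise(teacher_ranking, historical_distribution):
    """Map pupils, ordered best-first by their teachers, onto the share of
    each grade a previous cohort at the same school achieved."""
    n = len(teacher_ranking)
    grades = {}
    position = 0
    for grade, share in historical_distribution:
        count = round(share * n)
        for pupil in teacher_ranking[position:position + count]:
            grades[pupil] = grade
        position += count
    # Rounding can leave pupils unassigned; give them the lowest historical grade.
    lowest_grade = historical_distribution[-1][0]
    for pupil in teacher_ranking[position:]:
        grades[pupil] = lowest_grade
    return grades

# An invented school whose previous cohort earned nothing above a B:
# even its strongest pupil this year cannot receive an A, whatever the
# teacher's assessment said.
school_history = [("B", 0.2), ("C", 0.4), ("D", 0.4)]
pupils = ["top pupil", "pupil 2", "pupil 3", "pupil 4", "pupil 5"]

print(standardise(pupils, school_history))
# {'top pupil': 'B', 'pupil 2': 'C', 'pupil 3': 'C', 'pupil 4': 'D', 'pupil 5': 'D'}
```

In a scheme like this, no amount of individual excellence can earn a grade the school has not produced before, which is exactly the postcode effect described above.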

Apart from being a completely unjust methodology for establishing students’ A-Level grades, it further highlights the broader issue of bias in artificial intelligence. Completely preventing bias from creeping into AI is not easy, since unconscious human biases often worm their way into algorithms without engineers realising where things went wrong until well after the fact. But in this case, the potential for bias in the algorithm was apparent well before the results were in.

Even Education Secretary Gavin Williamson acknowledged that high-achieving students from disadvantaged areas were at risk of being unfairly downgraded as a result of how the system was set up. That makes it all the more perplexing that the process was given the go-ahead at all.

Granted, these are extraordinary times, and extraordinary measures need to be taken in many aspects of our lives as a result of the current health crisis. Figuring out a way to fairly establish A-Level grades in the absence of formal assessments cannot have been an easy task. On that much we can probably agree, but relying on such a heavily biased solution to produce results with considerable and lasting implications for English pupils’ futures was massively off-base.

Because of all this, students had absolutely no control over the outcome. Instead, their futures were effectively placed in the hands of a flawed computer algorithm that largely based their results on how others before them had performed (and there is evidence it did not even do a good job of that).

The process sacrificed the individual for the majority and completely undermined the potential of the students who were unfairly downgraded. We can expect a flood of appeals, and the appeals process should be swift, robust, fair, and accommodating. If it does not do justice to the injustice that befell UK students this year, Downing Street should emulate how officials in Scotland handled the situation and execute a complete U-turn, and do it before the GCSE results go the same way.

Contributed by Attila Tomaschek, Digital Privacy Expert at ProPrivacy
