> Proponents of these assessment systems argue that they can create more efficient public services by doing more with less and, in the case of welfare systems specifically, reclaim money that is allegedly being lost from the public purse.
I think this, more than anything to do with AI, is the problem. The system was designed from the beginning to suck because it's meant to reject applications. Imagine the same system designed specifically to help people get their application approved. It checks for mistakes in real time and prompts users to correct them before the application is even submitted. Great, right? Way better than before: people who are new to navigating the system get help from a machine that's an expert at it.
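Just to make that thought experiment concrete, here's a toy sketch of what "help them get approved" could look like in practice. The field names and rules are made up; the point is only that the output is a list of fixes for the applicant, not a verdict against them.

```python
# Toy sketch of an assistive validator: instead of silently scoring an
# application for rejection, it tells the applicant what to fix before
# they ever hit submit. Fields and rules here are purely hypothetical.

REQUIRED_FIELDS = ["name", "address", "income", "household_size"]

def check_application(app: dict) -> list[str]:
    """Return human-readable fixes, not a pass/fail verdict."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not app.get(field):
            problems.append(f"'{field}' is missing - please fill it in.")
    if isinstance(app.get("income"), str):
        problems.append("Income looks like text - enter a number, e.g. 1200.")
    if app.get("household_size", 1) < 1:
        problems.append("Household size must be at least 1.")
    return problems

# Example: prompt the user to correct issues before submission.
application = {"name": "A. Person", "income": "twelve hundred"}
for fix in check_application(application):
    print(fix)
```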
But when these tools are sold as cost-cutting software, of course they're going to suck. You don't even need AI; the "algorithm" could just be "roll a die and deny 20% of applications right off the top" (see the little sketch at the end). It's sad how effective that would be. The article is a laundry list of autonomous systems designed with the same parameters and the same goal: just pay out less. I appreciate them for trying to do better this time, but in the end…
> While it had been designed to reduce the number of welfare applicants flagged for investigation, it was flagging more.
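And for anyone who thinks the die roll is an exaggeration, this is roughly the entire "model" it would take to hit a 20% cost-reduction target on paper. It's deliberately absurd and entirely hypothetical, which is the point:

```python
import random

def assess_application(application: dict) -> str:
    """A 'cost-cutting model' that denies ~20% of applications at random.
    No features, no fraud signals - it still 'saves money' on paper."""
    return "denied" if random.random() < 0.20 else "approved"

# Over many applications the payout drops by roughly 20%, which is
# exactly the kind of metric these systems get sold on.
decisions = [assess_application({}) for _ in range(10_000)]
print(decisions.count("denied") / len(decisions))  # ~0.20
```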