‘It almost doubled our workload’: AI is supposed to make jobs easier. These workers disagree
A new crop of artificial intelligence tools carries the promise of streamlining tasks, improving efficiency and boosting productivity in the workplace. But that hasn’t been Neil Clarke’s experience so far.
Well, would you look at that, it’s playing out exactly the same as every other technological advancement ever. Instead of using it to reduce employee workloads and maintain an equilibrium of output, businesses exploit workers by brute-forcing increased productivity with no change to compensation, and the capitalists hoard even more of the profits for themselves.
I mean, did you read the article?
The context of that quote was about people using AI to write shitty stories and then submit them for review by humans. They weren’t complaining about AI that was supposed to help them at work, being used to hurt them at work…
In fact, the entire rest of the article is just one long anecdotal story from a single union leader for a very specific (though broadly represented) trade group.
There’s almost nothing of substance here and I’m shocked your comment is so highly upvoted.
Exactly what I keep saying when people start blaming the tools being used for automation. Productivity is up and up and up, but none of that has been given back to the workers in the past fifty years. If I try to find dialogue on that issue, I run into a mountain of blatant propaganda defending the continued robbery of the middle and lower classes.
Temporarily embarrassed millionaires will lick the boots of capitalism in the naive hope of pulling themselves up by the straps
Also the amount of work it puts on IT: implementing new tech without providing or approving the training (which only goes so far anyway).
In medicine, when a big breakthrough happens, we hear that we could see practical applications of the technology in 5-10 years.
In computer technology, we reach the same level of proof of concept and ship it as a working product, and ignore the old adage “The first 90% of implementation takes 90% of the time, and the last 10% takes the other 90%”.
Which adds up to 180%. And that is all you need to know about deadlines.
Yup, a complete 180
Those are the same MBA chuds who think nine women can birth a child in a month.
Because medicine employs a little technique called “ethics,” and there’s a strong ethical argument for restricting AI to research purposes only and completely outlawing any practical deployments, at least until the implications are fully understood.
AI may very well be the nuclear WMDs of our time, and we’re letting everyone play with it like a high school chemistry set.
It’s almost like medicine that goes into your body is very different from apps on the App Store. But other than that, yes, very interesting observation Cerevant@lemmy.world!
Yeah, let’s talk about self driving cars…
Yes, let’s.
Do you want me to Google if they are statistically safer than human drivers per mile, or should you?
All of these tools need their hand held. So anything they generate or do still needs to be checked by humans who have their own separate workloads to worry about.
Just had a Marketing colleague try to make the case for a SaaS solution, telling us the justification was clearly articulated in the company-specific deck he created.
Reading the deck and then checking the revision history, it was clear that buddy had just pasted in the vendor’s brochureware.
Some parts of the business have no place in actually selecting tech
Agreed. Most of the devs I’ve seen at work that use it aren’t checking anything, and as a result their code is even more garbage than normal.
I use GitHub Copilot and will bounce code questions off of ChatGPT, but I never copy/paste. I’ll iterate through small bits and usually have an idea of approach before asking it anything.
Yeah, it only enhances productivity when you either have the time to check it or you have it generate something you can visually check as it is generated. I sometimes have it generate code for me when I’m working with known/studied libraries.
I really wish MBA programs and journalism schools would start teaching that technology doesn’t progress linearly (much less exponentially) forever. “Look what it can do today! Imagine in 5 years!” Is there still low-hanging fruit to pick? Because if not, it might be as good as it’s gonna be for a while.
There are obviously exceptions where things have gotten steadily better for a very long time. But often it’s a punctuated-equilibrium situation around major scientific advancements. And far more often, business realities pause advancement. (Maybe OpenAI’s next giant leap forward will have to wait on chip suppliers to expand capacity.)