My thoughts on AI, another person's opinion
I’ve had a pull-and-retract response to AI, or more specifically to the large language models behind generative AI. I’ve fallen for the “but this is the best it will get” argument as well as the “it will never get better” one. The first time I had OpenAI generate a response it felt like a parlor trick, but I thought it could be used as an ideation device: I would type in keywords for whatever I was feeling and be given a direction. I got burnt out wrestling with the machine and stepped away from it. Then suddenly we had ChatGPT, which could actually stick with a conversation. Before that it was clear the chatbot was constantly trying to end the conversation, probably to hide its inabilities. I later wrote stories with GPT-4, but they were all boring, and the truth is that no matter how much I put into it, the words were never mine. It didn’t matter how much effort I put into directing it with prompts; I never felt any ownership of the words being generated.
From then on I used ChatGPT solely as a wall to bounce questions off of, to narrow down what I’m trying to ask, or to give me a starting point for searching something up. More recently I’ve used the reasoning button. This has made it much more useful, since it reduces the number of prompts needed to point the model in the right direction. That being said, I don’t think we are at a point where these tools are truly useful. The many jobs people want to replace with these will create new jobs, but those jobs will be soulless. There will be no ownership of the labor, no useful skill developed to later sell as valuable. Resumes will just say “made sure the AI continued to do its job and fed it any data requested.”
Everyone will be in IT. Jobs will either be maintaining the physical computer (working? great! broken? replace it), or clicking buttons within a proprietary web page (the human is only there for liability reasons, so that role would be phased out eventually). Trade jobs, I guess, will still exist, but the worker will be heavily attached to an AI model owned by some bigger corporation. Sure, those people will have bosses: AI bosses that appear with an avatar of your boss. The AI processes key metrics, evaluates how to respond, drafts the email, then fires you via email.
Society will implement this before we ever prove the technology is good enough to act as our bosses, and people are going to be affected. What’s funny, but also not funny, is that running an AI model close enough to the real output of a human takes more energy than the human does, so the cost does not cancel out. Sure, this might be figured out in the future, but that may require breakthroughs that are not part of any trajectory that exists at the moment.
There’s no doubt about it: this tech will change things. The unfortunate part is that everyone is looking at previous technologies and wants to accelerate what came before. They want to be the creators that started it all, the next Charles Babbage, Dennis Ritchie, or Richard Stallman. The only problem is that most of the “accelerationists” do not give a shit about the craft.
This is just another get-in-on-the-ground-floor grift. These are not people who want to create technologies that carry us into the future. Instead it’s a bunch of narrow-minded, money-hungry, business-minded people hoping to scramble to the top of a sinking ship, and they will burn the Earth to get there. Anyone who loves technology and favors safe development is a futurist. Pushing AI the way we have been is not futurist, it’s nihilist. It’s the belief that the world will eventually burn, so we must do the burning.


🔥