I reckon GPT summed it up perfectly, as it should have. That's why I can't understand people who think we need to fear or worry about AI. AI can only do what it has been programmed to do, no matter how sophisticated the program is. It can only mirror the data. "GPT: The solution is to stay human." I love that.
How is this any different (besides speed of responses) for what we have created with our educational system?
We routinely hear people pontificate as knowledgeable experts on subjects of which they have at best a cursory understanding, and we applaud them. Look at Neil deGrasse Tyson, for example. While I don't question his knowledge in the very narrow field of his specialization, why do we put him in front of audiences to hold forth on topics in which he has no more background than the average person?
It is the human demonstration of the logical fallacy of argument from authority.
What it allows is for people to abdicate their responsibility and point somewhere else to shirk accountability for their decisions. Either that, or to shop around for an expert who confirms their own biases.
Artificial intelligence agents have a glaring fault: they are designed by people, with all their inherent flaws. They are designed to quickly come up with "an answer," based on a lookup of potentially relevant information, that will pass a cursory review. In some ways, that is how people perform the task as well. Look at Wikipedia, for example. There is a huge amount of factual, verifiable information there. There is also a huge amount of made-up crap. A lot of people will grab a quick answer and not spend any time at all discerning whether the information is of the true and accurate variety or the made-up variety.
This falls into the abyss of garbage in our news media as well. Look at my favorite whipping boy, the New York Times. The vast majority of their reporting is perfectly accurate: you can go see what the score of a game was, what the temperature was, what show is playing, who died and had their obituary written. When there is so much factual information, nobody questions the garbage they print that is made up from whole cloth and fits the author's agenda.
There is a lot of good that can be done with AI. It still needs a human to decide whether the "answer" it spits out is accurate or just crap.
More and more companies are turning to AI for customer support. I spent yesterday afternoon trying to get help from a credit card provider bot that continually said it didn’t understand. There was no option to escalate to a human. I use the Substack image generation AI to create copyright-free images for my posts. Asking for a 2025 calendar, I got this little beauty: https://jimthegeek.substack.com/p/entering-the-age-of-ai
That's crazy and oddly reassuring. I was amazed at the uncanny analysis ChatGPT was able to come up with in the article. The words rang true, like something I might have said myself.
But I've seen how biased this AI is when discussing controversial topics like vaccines. So caution and suspicion are necessary, even more so because, unlike us humans, these "tools," and even some users, don't understand the danger and power of letting this run amok. It's hard enough to discern truth even without this possibly weaponized tool.
I should send you the question-and-answer exchange I had with ChatGPT on guns and the term "assault rifle."
AI seems to be data collection and ideology management and, of course... job loss.
That's weirdly funny. Was that intentional, I wonder?
Maybe AI's can mimic humor too.
LOL - I see you have an appointment on the 35th of Moday.
Here's my misadventure with AI image generation: https://facet58consultants.com/ai-image-generation-fail/
Do those results imply that AI can indeed experience the human emotion of peeve?
Can’t wait for AI generated movies!
I suspect we've been watching them for years!