People are ascribing to them levels of utility comparable to the Almighty. They aren't at that level, and aren't going to be anytime soon. But the need for $$$ leads leaders not to clear it up.
I fear for the human race. Born with remarkable brains, capable of critical thinking and analysis, we refuse to use them, or use them only to better our own bank accounts. Our time here may be ending sooner than anticipated….. Thank you for the cogent explanation.
I agree with everything you have said. A desperate need for AI is here:
Many people can’t afford lawyers, but are dragged into district courts on civil cases in which they are completely out-gunned by the opposing attorney.
“Free” legal assistance is completely overwhelmed. Case law - what the opposing attorneys are able to refer to when addressing the judge - is pretty much unavailable to regular people unless they can physically go to a law library or access it online (for a FEE). AI with the legal “knowledge” of all case law would give regular people who can’t afford an attorney more of a fighting chance in court.
I don’t know if I am saying this “correctly” but you get the gist of it.
When I need a reality check, I go to people who will tell me the truth, even if it’s not what I want to hear. I don’t need a pep squad, I need honesty. That AI therapist story was creepy.
Thanks!
Psychology is partly responsible for this as well. In the process of forging a social science out of philosophical questions and biological methods, assumptions and metaphors that were meant as placeholders, to be discarded once better ideas or empirical results arrived, instead became codified as fundamental tenets of the field. As such, they have rarely been questioned, because to do so would challenge, and possibly invalidate, decades of research findings, and threaten many active labs.
Two interrelated ideas that lead directly to the "brain as computer" analogy are mechanism and reductionism; both were championed by Descartes.
Now bwain hurtz.
Seriously, I am AI illiterate to be sure, but my instincts are good. One thing I've always maintained is that none of these aids can actually think. They can guess very quickly and mimic human-like behaviors that could prompt folks to unknowingly engage in anthropomorphism and assume they are communicating with an actual intelligence.
The idea that some folks would be unable to tell is quite frightening indeed. I've already been reading about AI tools replacing interns and lower-level white-collar positions, and how CEOs love the effect on their bottom line.
Where do you grow the next generation of company leaders if AI tools take over the very activities that provide the experience and knowledge those leaders need to be effective, and the door to that experience is closed by the very company they work for?
Is profit so important that one would knowingly hollow out one's capacity for future prosperity in favor of more profit now? Does that even make sense? Brings to mind that old story about the goose and the golden eggs, doesn't it?
Geoff, you have once again provided much food for thought, and as a layman I thank you for it!
Clearly, you are not AI illiterate. That skeptic's eye isn't reflexive; it's rational caution.
I work in IT training and certifications, and my worry is that AI plus a low-ish level of networking and technology knowledge will replace the career progression through which people become more competent, more knowledgeable, and, in a few cases, true "experts".
That will hollow out the middle, and as the experts (like myself) are greyed out (a.k.a. retiring), there will be no experts left.
And every executive I talk to about this is in denial that it's coming. It is bleak.
Thanks for your comment!
Thank you for your reportage!