Discussion about this post

Adam

Now bwain hurtz.

Seriously, I am AI-illiterate to be sure, but my instincts are good. One thing I've always maintained is that none of these tools can actually think. They can guess very quickly and mimic human-like behaviors, which could prompt folks to unknowingly anthropomorphize them and assume they are communicating with an actual intelligence.

The idea that some folks would be unable to tell the difference is quite frightening indeed. I've already been reading about AI tools replacing interns and lower-level white-collar positions, and how CEOs love the effect on their bottom line.

Where do you grow the next set of company leaders if AI tools are doing the very work that would give those future leaders the experience and knowledge they need to be effective? The door to that experience is being closed by the very companies they work for.

Is profit so important that a company would knowingly hollow out its capacity for future prosperity in favor of more profit now? Does that even make sense? Brings to mind that old story about the goose and the golden eggs, doesn't it?

Geoff, you have once again provided much food for thought, and as a layman I thank you for it!

Jackie Ralston

Psychology is partly responsible for this as well. In the process of forging the social science from philosophical questions and biological methods, assumptions and metaphors that were meant as placeholders, until better ideas or empirical results showed otherwise, instead became codified as fundamental tenets of the field. As such, it has been rare for them to be questioned, because doing so would challenge and possibly invalidate decades of research findings and threaten many active labs.

Two interrelated ideas that lead directly to the "brain as computer" analogy are mechanism and reductionism; both were championed by Descartes.

