
The world is so fucking stupid right now that it’s kind of hard to believe that any one thing could stand out. Yet this happened on Monday:
If you have zero education, but learn how to ask AI models the right questions, in many jobs you will be able to outperform someone with an advanced degree, but who is unwilling to use Large Language Models. Just takes a smartphone, curiosity to experiment and a mindset to learn.
I have to say upfront: I detest AI. The images look fake and the writing is abysmal. Maybe AI can spell better than you do, or is better with subject/verb agreement than you are, but it’s not better than your writing (or your art). AI “writing” is rote and soulless, and that’s because it’s literally written without thought.
Machines can’t think. I don’t care what the people who create them say: it’s not thinking. It’s just a series of if/then combinations, and a lot of the time the output is simply made up, because machines don’t know what to do when information is missing. But people do: they either admit they don’t know the answer, or they go in search of the correct one.
Another reason to detest AI is that it is often used to remove accountability for unpleasant or cruel decisions, like denying insurance coverage for a medical condition. AI is not a doctor. It is not a human who has treated hundreds or thousands of real patients, and such a human knows more than even a machine that has ingested all of the pertinent clinical research. A doctor also keenly understands that a human life is worth saving.
Recently the Episcopal Church, my church, invited responses to a survey about the role of AI in pastoral care, parish administration, faith formation, theological discussion, data analysis, and a bunch of other areas. My responses followed the lines of thought I’ve presented above. But there was one other element I thought was crucial: the creators of these tools absolutely do not have the best interests of humanity at heart, and so to me there’s a moral reason not to add their tools to our church life.
Which brings us back around to Cuban’s post. There’s a crisis in this country that is several decades in the making. A hundred years ago, the idea of being a credentialed professional was gaining steam; over the last 40 years or so, a strain of thought in American culture has held that knowledge and training aren’t as important as grit and elbow grease. Coupled with the GOP’s effort to kill public education at every level, we are now dealing with a scarcity of people who can think critically.
The thing is that you can’t ask AI meaningful questions *unless you know what questions to ask*. And the only way to know what questions to ask is to study a subject and to be trained in its methodology. And relying on a tool that does your thinking for you only makes you less likely to work to educate yourself. Learning is hard work, and writing is sometimes even harder. So why not use a tool to make your life easier?
Here’s why not, from a paper to be given at the 2025 CHI Conference on Human Factors in Computing Systems:
We surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically. Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship.
And if you can’t think for yourself, you’re ripe for being told what to think.
Think about where we are now. And about who is in charge.
