Critical thinking and AI

Thanks to Hetan Shah and Margot Finn on Bluesky, I came across this article by Gillian Tett in the Financial Times. It’s a discussion of the rhetoric and reality of AI adoption in business with, as you’d expect, a focus on the financial sector and its regulators. The bit that caught my attention was the description of a New York financier evaluating, for the first time, summer interns who had grown up using AI. While they appeared impressive initially,

… when senior financiers later probed their ideas they found them alarmingly shallow.

Consequently, this person’s company made fewer return offers and is now focusing less on graduates in science, technology, engineering and mathematics, and more on humanities students instead.

Now, I am speaking as a physics professor and a physicist, of course. In my experience, there is a lot of critical thinking involved in science, engineering and maths, and I get a bit testy – grumpy even – if I think my humanities colleagues are trying to claim it as their thing.

However, there is also a prevalent “get stuff done” imperative. I learned my ways of working in a world where if you didn’t think critically at the start, you could waste a lot of time. I mean, years. A career even. I suppose the FT equivalent would be buying up sub-prime mortgages.

As I watch the students and postdocs around me exploit the productivity gains of rapidly improving AI tools to solve problems, write code, test hypotheses, develop and validate new techniques – “get stuff done” – I have two concerns.

One is simply that I can’t compete at certain tasks any more and probably shouldn’t try. This has always happened to professors ever since there have been students, but I enjoy coding so it makes me a bit selfishly sad.

The other concern is reflected in the FT article. I hope it is just a transitional concern, but it is real. It is this: The metrics we use to assess success need to change, and quickly.

It would now be possible to be so “productive” that, even if most of your output was garbage, you could initially out-perform a previous generation who didn’t have access to AI productivity tools. But you could still be “alarmingly shallow” in your understanding of the subject in which you were supposed to be becoming an independent researcher.

The problem is that if speed and efficiency at solving problems are your only USP, what will happen if and when you become responsible for deciding which problems are the most important ones to tackle? If you have never had to think this through carefully, with months or years of potentially wasted effort at stake, will you have the critical thinking and contextual knowledge required?

Maybe you will, maybe this is all fine. There’s certainly a possibility that my worries are on a par with complaining that life’s too easy now we have compilers. Or debuggers. I never even learned assembler, never mind used punch cards, and I’ve done ok.

Is AI really qualitatively different?

Well, lots of people seem to think it is, and the FT article crystallised something that has been bugging me a bit, so I thought it was worth adding to the welter of AI thinkpieces by posting here. Comments and counterpoints welcome, especially from “AI natives”.

About Jon Butterworth

UCL Physics prof, works on LHC, writes (books, Cosmic Shambles and elsewhere). Citizen of England, UK, Europe & Nowhere, apparently.