A wonderful essay on what’s “obvious” to humans, and how the fallacy that obviousness is driven by “human bias”, itself error-prone, can lead to ungrounded, optimistic euphoria, especially around AI.

Knowing what to observe, what might be relevant and what data to gather in the first place is not a computational task — it’s a human one. The present AI orthodoxy neglects the question- and theory-driven nature of observation and perception. The scientific method illustrates this well. And so does the history of science. After all, many of the most significant scientific discoveries resulted not from reams of data or large amounts of computational power, but from a question or theory. (…)

Computers can be programmed to recognise and attend to certain features of the world — which need to be clearly specified and programmed a priori. But they cannot be programmed to make new observations, to ask novel questions or to meaningfully adjust to changing circumstances. The human ability to ask new questions, to generate hypotheses, and to identify and find novelty is unique and not programmable. No statistical procedure allows one to somehow see a mundane, taken-for-granted observation in a radically different and new way. That’s where humans come in.

The essay is loaded with astute observations and arguments, and it made me think. A must-read.