This continues the discussion here, item 13.  The main reason is to make it easier to find and include material along with its sources.


The Existential Risk of AI: A Real Threat, Not Sensationalism

1

  1. A problem with commentaries like this is that the authors (apparently) have not been involved in developing any AI model, so their arguments are necessarily limited to interpreting public remarks by actual model developers, who are pretty secretive about their work these days. As a result, they don't have much foundation for their arguments.

  2. Furthermore, this particular article shows a strong tendency toward self-citation; that is, justifying statements in paper N+1 by referencing papers N, N-1, N-2, etc. For example, the first author (Subbarao Kambhampati) appears in 12 of his own references. In other words, this author is largely engaged in a protracted conversation with himself.

  3. This can produce weird effects. For example, the only support offered for the "just statistically generated" claim is a reference to another paper by the same author, ..., which (strangely) makes no use of the word "statistics" at all.

  4. This doesn't mean that the paper is necessarily wrong, but the arguments are fluffy compared to a typical paper from DeepSeek, OpenAI, DeepMind, etc., written by people who actually do this technical work hands-on instead of watching from afar while others do it.  source


2

  1. A traditional approach to making knowledge useful involves two steps. First, humans study data, discover patterns, and distill these observations into "rules of thumb", equations, laws of science, etc. Then engineers incorporate these principles into their machinery, computer programs, chemical processes, or whatever. We've done this for centuries, millennia, and perhaps longer.

  2. The larger significance of deep learning is that it short-circuits this process: it lets us go directly from raw data to useful mechanisms with no human understanding in the middle. We have now built the mathematics and computation to automate this end-to-end; given masses of data, we can extract the patterns and put them to work automatically. Even after this is done, humans may be unable to explain in detail how the result works. (A minimal sketch of this contrast appears after this section.)

  3. In a sense, this is familiar. I bet you can recognize a picture of your mother, but you cannot describe how the masses of neurons in your brain that perform this feat actually work. What's new is that deep learning lets us accomplish such feats (including this example!) in silicon rather than with biological neurons. And, similarly, we can't explain how the silicon version works, even though it is right there before our eyes.

  4. The history of AI research suggests (to me) that treating an understanding of human cognition as a precondition for creating AI was itself a fundamental barrier to progress. Human cognition is probably too complex for humans to understand in detail. Deep learning provides a way around this barrier: by taking a different approach to making knowledge useful, it opens a path to AI that evades the problematic requirement that we first understand how intelligence works.

  5. So, coming back to your comment above, my personal view is that an understanding of human cognition is not a requirement for constructing superintelligent AGI. In fact, if we treat it as a requirement (as traditional approaches to AI did, prior to deep learning), then we might well never get past that barrier.

  6. Only by shedding the requirement of "understandability" of intelligence do we open up the possibility of mimicry.

 source
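
To make the contrast in items 1 and 2 above concrete, here is a minimal sketch (my own illustration, not from the quoted source). It uses a deliberately tiny example: the temperature-conversion law is something humans discovered and engineers encode by hand, while gradient descent, the optimization workhorse behind deep learning, recovers the same mechanism directly from raw data without being told the formula. All names and numbers in the sketch are hypothetical.

```python
import numpy as np

# Traditional route: a human discovered the law; an engineer encodes it.
def celsius_to_fahrenheit(c):
    """Hand-coded principle: F = 1.8 * C + 32."""
    return 1.8 * c + 32

# Learned route: go straight from raw data to a working mechanism.
rng = np.random.default_rng(0)
c_data = rng.uniform(-40.0, 100.0, size=1000)                   # raw inputs
f_data = 1.8 * c_data + 32.0 + rng.normal(0.0, 0.5, size=1000)  # noisy readings

# Fit w and b by gradient descent on squared error. Nobody tells the
# model the formula; it is recovered from the data alone.
w, b = 0.0, 0.0
for _ in range(2000):
    err = (w * c_data + b) - f_data
    w -= 1e-4 * (err * c_data).mean()  # separate step sizes keep this
    b -= 1e-2 * err.mean()             # unnormalized problem stable

print(celsius_to_fahrenheit(25.0))  # 77.0, from the encoded rule
print(w * 25.0 + b)                 # ~77.0, recovered from data alone
```

The same end-to-end recipe, scaled from two parameters to billions, is what item 2 describes: the fitted mechanism works even though no human-readable principle ever appears in the middle.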