Richard Stallman's thoughts on ChatGPT, AI and their impact on humanity
This sounds just like something my brother-in-law said. I think they are both technically correct and both missing the point. Does a calculator truly understand math when it spits out a correct answer? Of course not. And it doesn't matter. I have been really impressed with ChatGPT, and when it comes to shiny new tech I am usually in the pooh-pooh camp. If tech does something useful then it is useful tech. The fact that it is not true intelligence doesn't matter at all. Besides, what's intelligence anyway? Aren't we still debating that ourselves?
I think Stallman is right. It's really the term 'intelligence' that's the issue here.
We should stop using that term. I personally just use 'machine learning' or '(statistical/mathematical) model'. But then there's marketing, I know.
No context was given, but here is his actual statement:
> I can't foretell the future, but it is important to realize that ChatGPT is not artificial intelligence. It has no intelligence; it doesn't know anything and doesn't understand anything. It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false. It can't avoid that because it doesn't know what the words _mean_.
Surprisingly not mentioned in the post: the whole Free Software movement aside, RMS actually spent a chunk of his academic career at the MIT AI Lab researching artificial intelligence, and co-authored some papers during his time there.
Granted, many of the lab's research topics at the time are now no longer considered AI topics, which is part of what ultimately led to the AI winter.
Particularly because of that connection, however, I think it would indeed be interesting to hear more of his perspective on developments in recent years.
I would argue that ChatGPT has reached a certain level of understanding of what it's saying. That's because you can ask it questions about what it says, and it can continue to reason along the lines of what it has previously said. It does sometimes make mistakes in this, but that is improving. It's just that people want understanding to look a certain way, one that seems more familiar to us.
Richard was super sensitive about the power companies hold over users through closed-source software, and he had a great impact on our culture (just look at the controversy over the name OpenAI). But it seems like he's deeply underestimating the much bigger power AI has (and soon will have) over us, even though there have been countless books and movies predicting it, and we can feel it coming.
In his recent talk for the FSF in Boston, Stallman suggested that published weights are open source, I guess because they're modifiable and auditable. It's an interesting argument. So far, I've managed to modify llama myself, so I guess so??
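"Modifiable" here can be taken quite literally: a published checkpoint is just a file of tensors you can load, inspect, and edit. A minimal sketch, assuming PyTorch and a hypothetical local copy of the llama-7B weights (the file names are assumptions, not something from the talk):
```
# Load a published checkpoint, inspect it, tweak a tensor, save it back.
# The path is a hypothetical local copy of llama-7B weights.
import torch

state = torch.load("llama-7b/consolidated.00.pth", map_location="cpu")

# "Auditable": every parameter tensor is right there to look at.
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))

# "Modifiable": e.g., zero out a small slice of one weight tensor.
first_key = next(iter(state))
state[first_key][:10] *= 0.0

torch.save(state, "consolidated.00.modified.pth")
```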
What I would just love to see is the outcome of the following (a toy sketch follows the list):
1. Train ChatGPT on human stuff.
2. Make ChatGPT spit out libraries of knowledge by random walk.
3. Train ChatGPT on its own stuff.
4. Do this a few times.
5. Ask it some questions.
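Nobody outside OpenAI can actually run this on ChatGPT, but the feedback loop itself is easy to mock up. Here's a toy sketch with a character-bigram model standing in for the LLM; `train` and `generate` are made up for the demo, not anything real:
```
# Toy stand-in for the experiment: a character-bigram "language model"
# retrained on its own random-walk output a few times. Not ChatGPT,
# just the smallest model that exhibits the proposed feedback loop.
import random
from collections import defaultdict

def train(corpus):
    """Count which character follows which."""
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def generate(model, length):
    """Random-walk through the learned bigram transitions."""
    ch = random.choice(list(model))
    out = [ch]
    for _ in range(length):
        ch = random.choice(model.get(ch) or list(model))
        out.append(ch)
    return "".join(out)

human_text = "the quick brown fox jumps over the lazy dog " * 50

model = train(human_text)             # 1. train on human stuff
for _ in range(4):                    # 4. do this a few times
    corpus = generate(model, 10_000)  # 2. spit out text by random walk
    model = train(corpus)             # 3. train on its own stuff
print(generate(model, 80))            # 5. "ask it some questions"
```
Each generation retrains on a finite sample of the previous one's output, so sampling noise compounds; seeing how that drift plays out at ChatGPT scale is presumably the interesting part.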
It’s shocking to see so many famous intellectuals like Stallman and Chomsky get this so wrong. It seems like they have preconceptions so strongly held that no amount of hard evidence could convince them otherwise.
GPT-4 deeply understands what it is talking about. I was able to pose it difficult classical physics problems that would be hard for a physics undergrad to solve, and it could give correct answers consistently. These are things I made up that are nothing like existing examples. When asked to explain its work, or modify it for unusual scenarios, it could do so. I could also get it to invent new terms and concepts to describe the work, and then use those correctly. It could even reason out appropriate simplifying approximations when demands exceeded its calculating abilities. If GPT-4 isn't really understanding what it is saying, or isn't intelligent, then neither is any human.
Mobile-friendly URL: https://i.reddit.com/r/linux/comments/122gmm9/richard_stallm... (sorry about that).
Link should be to old.reddit.com
Stallman's statement (since it's been deleted on Reddit):
>I can't foretell the future, but it is important to realize that ChatGPT is not artificial intelligence. It has no intelligence; it doesn't know anything and doesn't understand anything. It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false. It can't avoid that because it doesn't know what the words _mean_.
I asked ChatGPT Plus what it thought:
>do you understand what words mean?
and it replied:
>Yes, as an AI language model, I have been trained on vast amounts of data to understand the meanings of words and their usage in various contexts. I use this knowledge to generate responses to questions and statements that are as accurate and informative as possible. However, it is important to note that language is a complex and nuanced system, and there may be cases where my understanding of a word or its context may not be entirely accurate or complete.
Which I guess is not so different from humans, although we also have other inputs from the physical world.
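For anyone who wants to reproduce the exchange outside the web UI, the same question can be posed through the API. A minimal sketch, assuming the openai Python package (v1 client) and an OPENAI_API_KEY in the environment; the model name is an assumption:
```
# Pose the comment's question through the chat API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "do you understand what words mean?"}],
)
print(resp.choices[0].message.content)
```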
This is just a snippet of text, out of context, posted somewhere. It states something pretty uncontroversial, repeated several times across HN, and it does not contain an opinion about ChatGPT's utility or impact on humanity.
But we already know his opinion that opaque software is bad. And the widespread interest in llama and alpaca shows a lot of people agree: they want to run it locally, learn how it works, and reproduce it.
> It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false
Well, that could also be perfectly applied to any human. The difference is that humans may make false statements purposefully or not (i.e., by lying or through ignorance), whereas the AI will only make false statements through ignorance.
Also... "It can't avoid that because it doesn't know what the words _mean_." What it is for a word to "mean" something is very subjective; is that again about lies and ignorance? Do we need the AI to lie before we declare it intelligent?
Ironically, ChatGPT itself may have a modest answer to this claim when asked what it thinks about Richard Stallman's statement:
```
Richard Stallman raises some valid concerns about the limitations of AI language models like ChatGPT. It is true that these models, while highly advanced and capable of generating human-like text, still have limitations in terms of understanding and reasoning.
ChatGPT, being a product of GPT-4 architecture, learns from vast amounts of text data and is designed to predict and generate plausible-sounding responses based on the input it receives. However, it lacks the capability to truly comprehend meaning or context in the same way a human can.
It is important to recognize these limitations and not overestimate the abilities of AI language models. Users should be aware that the information provided by ChatGPT might not always be accurate or reliable, and it is essential to cross-verify any critical information obtained from such models with trusted sources.
In summary, while Richard Stallman's statement highlights important concerns, it is worth noting that AI language models like ChatGPT can still provide valuable insights and assistance when used responsibly and with an understanding of their limitations.
```
This seems to miss the forest for the trees. Whether LLMs will have an impact similar to the industrial revolution will not depend on whether they pass some arbitrary threshold where everyone is convinced they constitute AGI and understand what words mean. It will depend on the utility they provide, and the utility is there right now. GPT-4 is so immensely useful for so many things. At even a reasonable pace of improvement, it is hard to see why LLMs would not be able to do more and more things.
Moreover, and slightly off topic: most humans also don't care what words mean or what numbers mean in a philosophical sense. If you mention "the Axiom of Choice" in a big-company software meeting, people will ask if it is the new ice cream flavour in the cafeteria. That doesn't prevent people from getting value out of both words and numbers.
On one hand, it would be more interesting to hear RMS's views on the implications, if any, for software freedom (personally I think there are many angles here).
On the other hand, the comment attributed to him is correct, if simple and pretty obvious.
I actually think it shows incredible reasoning ability already. It can change its answers based on new content you provide. For example, you can show it a Java program and ask how it will behave, then show it the release notes of a new Java version it has never seen and ask how the functionality may change, and it will get it right. Most programmers won't, because our ability to attend to information is far inferior unless we really try hard. Focus is not something most humans excel at; our brains are more capable but less utilized most of the time.
> It doesn't need intelligence to nullify human's labour.
I propose a new term for this thing: AK - Artificial Knowledge.
The purpose of this tool is to compress, match, and combine text-based, _informal_ information.
> I can't foretell the future...
I like how he states the scope of his answer just like ChatGPT would.
I think people tend to underestimate the concept of understanding, and also the concept of conveying things by just saying them.
Most people don't get most things most of the time.
I wonder if Stallman has actually used GPT-4. His opinion seems like a conclusion that a person could arrive at by just reading the specifications.
> It has no intelligence; it doesn't know anything and doesn't understand anything.
I don't like the word "intelligence". It's too arbitrary and depends on the language. In my native language, there are two synonyms which imply different thresholds for something to be considered intelligent. I'm sure in other languages it's also pretty arbitrary.
Instead, let's compare those models with complex biological systems we typically consider to be at least somewhat intelligent.
- biological systems use spiking networks and are incredibly power efficient. This is more or less irrelevant for capabilities.
- biological systems have a lot of surrounding neural and biochemical hardware: hardwired motorics, sensory processing, internal regulators. Complex I/O is missing from these models, but it is being added as we speak. The big downside of current models is that they cannot understand what drives humans, because they have different hardware: they are trained on our output and have to "reverse engineer" the motivation. That might or might not be possible, but it makes them different.
- biological systems are autonomous agents in their world. They exist on an uninterrupted timeline, with input and output streams constantly present. Those models don't exist on a timeline, they are activated by the user each time.
- biological systems have some form of memory; they compress incoming data into higher-order concepts on the fly, and store them. This is a HUGE DEAL. The model has no equivalent of memory or neuroplasticity; it's a function that doesn't keep any state. LLMs have the context, which can be turned into a sliding window with an external feedback loop (chatbots do that; see the sketch after this list), but it's not equivalent to biological memory at all, as it just stores tokens verbatim instead of trying to compress the incoming data.
- biological systems exhibit highly complex emergent behavior. This also happens in LLMs and even simpler models.
- biological systems are social. Birds compose songs from tokens and spread them through the population. Dogs, monkeys, and humans teach their kids. The mental capacity of a single human isn't that great; every time you think you're smart, remember that you stand on the shoulders of giants. The model, by contrast, has much more raw capacity than a single human.
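To make the memory point concrete, here is a minimal sketch of the sliding-window feedback loop mentioned in the list: the model itself is stateless, and the only "memory" is the raw tail of the transcript re-sent on every turn. `query_llm` is a stub, not a real API:
```
# Stateless model + external loop: the chatbot's whole "memory" is the
# last N characters of transcript, stored verbatim and re-fed each turn.
MAX_CONTEXT_CHARS = 2000  # stand-in for a token limit

def query_llm(prompt):
    """Placeholder for a call to any stateless language model."""
    return "[model reply to %d chars of context]" % len(prompt)

transcript = ""
for user_msg in ["hello", "what did I just say?"]:
    transcript += "User: %s\n" % user_msg
    context = transcript[-MAX_CONTEXT_CHARS:]  # verbatim sliding window
    reply = query_llm(context)
    transcript += "Assistant: %s\n" % reply
    print(reply)
```
Nothing gets compressed or abstracted; once the window slides past a token, it is simply gone, which is exactly the contrast with biological memory drawn above.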
My own conclusion: sparks of "intelligence"? Undeniably; the emergent behavior alone is enough. They do understand things, in the conventional sense. However, they are also profoundly different from human intelligence, and still lack key elements like memory.
Stallman is always right
Great... and now prove that this does not hold for yourself as well.
Richard Stallman is neglecting the impact of ChatGPT. It doesn't matter whether ChatGPT is a magician or not; it absorbs our minds.
That's why open-source AI is trailing behind proprietary implementations. Sad!
I think his answer is driven by a preference for the status quo and a reluctance to face difficult changes.
ChatGPT, and especially GPT-4, seem to do much more than just play games with words. You can’t overlook the “emergent” phenomena that manifest themselves when using them.
Man, this is so wrong in so many ways...
> it is important to realize that ChatGPT is not artificial intelligence.
The first mistake is to assume that there is a technical, precise, objective, and clear definition of "Artificial Intelligence". There isn't. He should know that.
> it doesn't know anything and doesn't understand anything.
And what do "know" or "understand" mean in the context of a machine that doesn't even have self-consciousness?
Besides, are you implying that human beings know stuff? The overwhelming majority of people know very, very little. Most of the people I know are too lazy to think or to do the hard work of studying. I'd suggest Kahneman's "Thinking, Fast and Slow" before putting any faith in people's "knowledge".
> It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false.
I think we can apply the same judgment to Stallman's argument itself, since his concepts are so badly defined.
And thank you for an open, democratic society where every statement is liable to be false, regardless of whether it comes from a machine, Richard Stallman, or Putin and Xi Jinping.
I'll take Turing's test approach: if it looks intelligent to me then it is certainly more intelligent than me.
Also, I'll take Dijkstra's approach: "The question of whether computers can think is about as relevant as the question of whether submarines can swim."
Edit: to all Stallman's fanboys downvoting this: got any good argument?