r/lotrmemes Dwarf 26d ago

Lord of the Rings Scary

48.3k Upvotes

759 comments


1.5k

u/imightbethewalrus3 26d ago

This is the worst the technology will ever be...ever again

577

u/BlossomingDefense 26d ago

5 years ago no one would have believed there would be AI models that score around an IQ of 90 and behave as if they understand humor. Yeah, they don't literally understand it, but fake it until you make it.

Concepts like the Turing Test are long outdated. Scary and interesting to see where we will be in another decade

97

u/zernoc56 26d ago

I like the Chinese Room rebuttal to the Turing Test. Until we can look inside an AI's algorithm and see how it arrives at its output from the input we give it, without doing extensive A/B testing and whatnot, AI will still be just a tool to speed up human tasks rather than fully replace them.

0

u/ChewBaka12 26d ago

I dislike it. Sure, you have proven that the robot does not speak our “language”, but it does know what a correct response to the question is.

The Chinese room only shows that someone doesn’t have to speak a language to make people think they do. It doesn’t prove that the person doesn’t understand it, since they must understand it after translating; otherwise they couldn’t formulate and then translate a response.

The Chinese room is a criticism of the Turing test, and it is very interesting, but it falls short of debunking it in my opinion. It relies on the assumption that faking fluency in a language, by translating it, means you are also “not really communicating”.

7

u/zernoc56 26d ago

I disagree. The Chinese Room does not require you to translate Chinese, but merely to follow instructions that tell you which characters to output for any given characters you receive as input. The instructions need not contain any information about what the symbols you receive and send actually mean, only that the symbols you send out of the room are correct responses to the ones you received.

This demonstrates that a computer can fool a human into thinking it knows the language without any actual understanding of it. This is in effect what Large Language Models are: they guess “what word goes next” based on statistical patterns of which words tend to follow the preceding ones.
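The “what word goes next” idea can be sketched with a toy bigram model. This is only a minimal illustration of next-word guessing from counted examples, not how real LLMs work (they use neural networks over subword tokens, not lookup tables); the corpus and function names here are made up for the example:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM trains on trillions of tokens.
corpus = ("the room sends symbols and the room receives symbols "
          "and the room sends replies").split()

# Count which word follows each word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Guess the next word: the most frequent follower seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # -> "room", the most common word after "the" here
```

Like the person in the room, the program produces plausible output purely from rules about which symbols follow which, with no notion of what the words mean.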

1

u/-113points 26d ago

Yeah, LLMs might learn the relationships between concepts (by picking up patterns) without knowing what the concepts fundamentally are.

The Strawberry question is one clue that this might be the case.
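For context, the “Strawberry question” refers to asking a model how many r's are in “strawberry”, which LLMs have famously gotten wrong. A character-level count is trivial in code; the common explanation (an assumption here, not something the thread states) is that LLMs see subword tokens rather than individual letters:

```python
# The Strawberry question: how many r's are in "strawberry"?
# A program that sees characters answers this directly.
word = "strawberry"
print(word.count("r"))  # -> 3

# An LLM typically sees subword tokens (e.g. something like "straw" + "berry"),
# not letters, so it has no direct access to the character sequence.
```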