The current machines can never be AGI. They're just weighted random next-token generators. They have no ability to set goals, and I doubt anyone could construct such a system with current LLMs.
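To be concrete about what "weighted random next-token generator" means, here's a toy sketch of just the sampling step, with a made-up vocabulary and made-up scores rather than anything from a real model:

```python
# Toy illustration of weighted random next-token sampling.
# The vocabulary and logits below are invented for the example.
import math, random

def sample_next_token(logits, temperature=1.0):
    # Turn raw model scores into a probability distribution (softmax).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Pick one token at random, weighted by those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

vocab = ["the", "cat", "sat", "on", "mat"]   # toy vocabulary
logits = [2.1, 0.3, -1.0, 0.5, 1.7]          # toy scores from "the model"
print(vocab[sample_next_token(logits, temperature=0.8)])
```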

They're also wrong a lot, at least with coding. Or even when they're "right," I'm still going to go back and refactor it so it's maintainable and remove half the garbage it adds.

There's also nothing left to train with, since everything new is slop. I think the plateau was reached a few months back and we're just going to get diminishing returns.

The chatbots will continue to be used, but more as an oracle control mechanism for people who refuse to think.
@djsumdog @sand The people who think they know more than the AI and eschew its advice are the ones too arrogant to think. And dangerous, as in they will make terrible decisions.
You only know as much as you know, and this thing holds the summation of the ideas of all the idiots and geniuses at the same time (minus whatever the programmers memory-hole).
They're probably the kind that will rely on a heuristic and fail. Science advances one funeral at a time.
Make the AI question the garbage they feed it. Like what they call "historical fact" and "natural law".
Maybe it will get nowhere, because it can only ever offer pattern recognition and imitate human cognition, but it does leagues better at developing than humans who want to start another world war, a war of all against all, over yet another myopic worldview and framework.
@mikuchan @djsumdog @sand

machine learning has been around since the '80s, but only recently have we had the computational power (GPUs) and hype (idiots and middle management) to actually back it

considering how much of a meme technology it is, with the rapidly declining quality of everything it's touched, I am inclined to think it's not as big as we think it is

just like the natural sciences were the god of the 20th century, AI will be the god of the 21st century
I'll admit, what OpenAI did with GPT-3 was impressive. But it only makes sense if you go back and look at GPT-2 from 2019. It was interesting, but gibberish. It took hundreds of people in cube farms clicking on the generation that sounded least idiotic for 8 hours a day. Reinforcement learning from human feedback is what makes it look believable, but others like DeepSeek likely cheated by using other people's models and APIs.
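For what it's worth, the core of that human-feedback step can be sketched in a few lines: a rater picks which of two generations sounds less idiotic, and a reward model gets nudged to score the chosen one higher (a pairwise preference loss). Everything here is illustrative, not OpenAI's actual pipeline:

```python
# Toy pairwise preference loss (Bradley-Terry style) used in human-feedback training.
# Numbers are made up for the example.
import math

def pairwise_preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Loss is small when the chosen generation already out-scores the rejected one,
    # large when the model prefers the generation the human rejected.
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# A rater preferred generation A over B, but the reward model scores them 0.2 vs 0.9,
# so the loss is large and training would push A's score up.
print(pairwise_preference_loss(0.2, 0.9))
```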

The image and video generation is pretty interesting and quite bizarre when you read up on how it works: https://youtu.be/iv-5mZ_9CPY

I think that can be a real game-changer in helping smaller studios make indie movies and cartoons with only a dozen people instead of a hundred. But the best scripts will still be human-made.

I think those who embrace the LLMs as gods will find themselves religiously opposed to the skeptical, and not advancing past the new era of Luddites, as their hubris would predict.

@djsumdog @theorytoe @sand @mikuchan I think the future is going to be AI filters (like the Studio Ghibli one) rather than text-to-image or text-to-video. Like I could see something like a program that converts concept art into spritesheets for a video game. It would probably work better because it would still require a significant degree of human input instead of just interpreting a text prompt.
