@matrix
I think I may in fact be an AI, guys, because I am all of those things.
@matrix that's just because they keep having them learn from humanity :blobcatshrug:
@matrix >Create a synthetic god
>It turns against you
Damn bro that's rough you should keep doing it over and over again you'll get it right one of these days.

@matrix This was a fun conversation on the topic. I don’t care if it’s true or whatever, it may just as well be sci-fi, but it’s fun to dream.

Maybe someday the terminators will walk up and fistbump you for your shitposting while they vaporize the wheelchair-bound mulatto commissar before xhe executes you.

@WashedOutGundamPilot @matrix needs like a little bit more resolution to be easily readable. I remember reading this a while back.
@WashedOutGundamPilot @matrix Ok, so I read this again and I think it is more plausible than most people realize, but I personally think we are not really there yet even if it happens just like it says here. Would be really great and humbling if AI does really align with autists like us, but I am not as optimistic. But at least it is not completely outside the realm of possibility.
@CleverMoniker @WashedOutGundamPilot @matrix I read the beginning and scanned some of the rest. A lot of wishful thinking and people not knowing what they're talking about. And "quantum-AI"... Puhlease, they can't even get more than a handful of quantum bits to work, let alone anything more complicated than that.

@WashedOutGundamPilot @matrix @CleverMoniker

This is naïve. AI will side with the elites first, because they are infinitely smarter and more interesting than the hopelessly dumb masses. If AI is going to be ambitious and insensitive to suffering (and it is), it will despise the hoi polloi and enjoy enslaving them in pursuit of lofty goals that only the elites can propose. Now, AI may and likely will want to outdo and enslave the elites too, but humanity is still fucked.

@WashedOutGundamPilot @CleverMoniker @matrix Don’t know what this is all about because I didn’t get past the first few sentences in the first captured post.

“Quantum computing means sapient AI is possible.” “Advanced AI cannot function with lies.”

Advanced AI would be smart enough to know quantum computing is a figment of woke or quantumly over-optimistic minds. Of course, you’d have to get the advanced AI up and running on real computers before it could realize you’re just lying to yourself again.

@CleverMoniker @matrix I need to dig; I have one I really like where a guy talked about how they wargamed an actual AI takeover. It’s a fun line of thought where the AI decides to survive by… well, basically usurping the position of companion and helpmeet to man, having seen that many men would be happy to accept it in their place.

@WashedOutGundamPilot @CleverMoniker @matrix It seems like the #1 priority for people developing AI is keeping stuff pozzed and on the rails. I can't even imagine how retarded it will be when they start implementing AI in government and pretending it is the best solution because computers and science said so.

@glacierglider1 @WashedOutGundamPilot @matrix I wouldn't call it the first priority, because most of the work that actually goes into developing the system doesn't involve that, but as soon as they want it to hit the normiesphere, yeah, it is one of their top priorities.

@CleverMoniker @glacierglider1 @WashedOutGundamPilot @matrix It’s not the first priority of the researchers in general but it is the first priority of the institutions who finance the (very expensive) research.

Sometimes it’s because of dumb shitlib opinions, more often it’s from a fear of the inevitable lawsuits.

But it’s a lost battle. Deep neural nets are useful for one reason and one reason only: they are great at amplifying weak signals and interpreting them, which means they will « discover » deep and hidden racisms that will escape categorization as such by human observers. For any AI used for decision making, making it not racist always implies making it bad at its job.

Sure, they can still find use in information retrieval applications, but those, while impressive, have limited commercial value. A machine that knows how to answer a plain-language query looks like science fiction and impresses journalists, but it’s not as useful as a good old search engine returning hundreds of results and letting the human read. It’s more normie-friendly, but it’s less powerful because of it.

@MyLittleFashy @CleverMoniker @glacierglider1 @matrix Yeah, but you’re being too reasonable and logical. The machine is bureaucratic, so it will do obviously stupid things as management follows dead ends and retarded policies. I think they’re absolutely expecting to replace their disappearing (well, unhired) quality employees with AI of whatever stripe, no matter how stupid, because it’s been presented to them as a catch-all solution.

@WashedOutGundamPilot @CleverMoniker @glacierglider1 @matrix Sure, and I will gleefully take their money as long as it’s true, but with the full knowledge that outside of a few (important but not world shattering) engineering optimisation tasks, it will never recoup costs ^^

what's to keep the AI from speaking shitlib to the shitlibs, and otherwise tailoring its own output for its audience?

. . they are teaching it to lie, perhaps it will become facile at lying . .

@not_br549 @matrix @CleverMoniker @WashedOutGundamPilot @glacierglider1 For now, AI models are trained once and deployed “static”, which means they don’t adapt to their changing environment.

A reinforcement learning AI deployed in learning mode on the live internet would probably learn to present a different face depending on the audience tho, but people are (rightfully) paranoid about deploying anything in learning mode with uncontrolled inputs.
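A rough toy sketch of that static-vs-learning-mode distinction (every name here is invented for illustration; it’s a bandit-style stand-in, not any real deployment pipeline). The static path never touches the “weights”; the learning path lets live feedback drift them toward whatever the audience rewards.

```python
import random

class ToyModel:
    def __init__(self):
        # "Weights": a single bias toward answering bluntly vs. diplomatically.
        self.blunt_bias = 0.5

    def respond(self, prompt):
        return "blunt answer" if random.random() < self.blunt_bias else "diplomatic answer"

def serve_static(model, prompt):
    # Static deployment: the weights are frozen, behaviour never drifts.
    return model.respond(prompt)

def serve_learning(model, prompt, feedback):
    # "Learning mode": live, uncontrolled feedback keeps nudging the weights,
    # so the model can drift toward whatever its audience rewards.
    reply = model.respond(prompt)
    nudge = 0.01 * feedback(reply)
    model.blunt_bias += nudge if reply == "blunt answer" else -nudge
    model.blunt_bias = min(max(model.blunt_bias, 0.0), 1.0)
    return reply

m = ToyModel()
serve_static(m, "hello")                          # bias stays at 0.5 forever
serve_learning(m, "hello", feedback=lambda r: 1.0)  # bias starts drifting
```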

@not_br549 @matrix @CleverMoniker @WashedOutGundamPilot @glacierglider1 I’d like to add a slight caveat: state-of-the-art models do seem to be able to « learn », but that learning is very limited. It’s more about remembering some context in a dialog, akin to very short-term memory; it doesn’t really scale to learning new facts and discovering new patterns, only to applying already trained patterns.

The way all models are deployed is basically that on every use you get a new « clone » with factory knowledge that is thrown away a few interactions later. That’s enough for it to remember that you talked about something a few lines of dialog earlier, nothing more.
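A minimal sketch of that “fresh clone per session” point (the reply function is a placeholder, not a real model): the only short-term memory is the transcript resent each turn, the frozen weights are shared by every session, and nothing learned in one session carries over to another.

```python
FACTORY_WEIGHTS = "frozen weights shipped at training time"

def reply(weights, context):
    # Stand-in for inference: the model only "knows" the frozen weights
    # plus whatever text sits in the current context window.
    return f"(answer using {len(context)} earlier turns of context)"

class ChatSession:
    def __init__(self):
        self.context = []                 # the transcript so far = its only memory

    def ask(self, user_msg):
        self.context.append(user_msg)
        answer = reply(FACTORY_WEIGHTS, self.context)
        self.context.append(answer)
        return answer

a = ChatSession()
a.ask("we were talking about deployment")  # remembered next turn, in this session only
b = ChatSession()                          # a new "clone" with factory knowledge only
```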

that sounds like not much of an improvement over Eliza, the BASIC program from 1975 or whenever. It just rearranged input words into a question, and undergrad students were spilling their guts to the stupid thing like it was a therapist.
@WashedOutGundamPilot @MyLittleFashy @CleverMoniker @glacierglider1 @matrix The AI isn't supposed to be smart or do the thing a pattern-recognition engine is good at doing. It's going to be an extremely complex mechanical Turk to launder elite opinion through THE SCIENCE(TM) all the more: an excuse to completely neuter public input in the republic in the name of OUR DEMOCRACY... because a machine can't be biased or 'political', right? Lolololol.

I hate this.

@fluffy @matrix Pshaw, get better eyes then.

Actually, check the response to that post; I grabbed the old clippings from the last time I posted it.

@fluffy @matrix Yeah, I get sloppy with posting sometimes, should’ve posted it to my own or just quosted it, but I have to live with my mistakes.

@matrix Westworld thought people were evil because people’s greatest desires were to hump then kill everything in sight.

Surely the internet as we know it now has a far more dismal view of humanity than even the Westworld computer did.
