The danger of AI isn't Skynet. It's insurance companies using AI tools to raise your premiums in real time using CCTV and the sensors in your phone; it's police using AI algorithms to engage in "predictive policing" and sentiment analysis of political dissidents; it's governments using AI-generated articles to drown out Google searches for some scandal that just happened; it's some AI tool allowing corporations to make more complex inferences about you from the data they already have on you, and then using that to charge you for something.

It's exploitation with a thin veil of objectivity and "mathematical facts". That's the danger: an insurance provider you can't argue with, a cop you can't reason with or even see, a notification on your phone telling you that you owe an extra grand on your taxes or insurance premiums because you got in a car accident, or because your son just got diagnosed with leukemia and Google Assistant overheard the diagnosis on your smartphone and sent it to the relevant agencies.

youtu.be/-MUEXGaxFDA

@Shadowman311
Oh no... Anything but an insurance company lowering my price, when I am a responsible person.

@Shadowman311
AI is the same thing as literally any other predictive model. We can either act scared, or look at what these data can actually provide.

Imagine if an AI recommended you an advert for a pill for an illness you haven't been diagnosed with... yet. Or if it simply offered you the best cultural events in your area based on your actual taste. Is that exploitation? All I see is data being used to actually provide a perfect service.

The problem always comes down to the morality of negative predictions. But that isn't an AI problem. Ever heard of... racial profiling? That wasn't just racism; it was a statistically working model. The only reason we are supposed to hate it is because individualist morality isn't compatible with collective probability.

But there is one thing that does scare me about AI. The internet will die. All universal and connected networks will turn invite-only. And I mean all of them. No exceptions. Adverts are one problem, but no spam advert bot is made to earn your trust for two years before it snaps and starts begging. AI scammers will be better than we can now imagine. Therefore, all communications will need to be ultra-filtered and verified.

Also consider how spergs online are quietly and fastidiously working in the background to democratize these AIs so they can generate better racism/porn. Just go onto the local LLM generals on /g/. They’re tinkering away and they’re going to get there, just so Best Girl can say nigger in their ERP. And that’s barely an exaggeration.

Now just imagine for a minute when more serious anti-establishment actors get ahold of the work of these guys. Just consider how effectively this can be weaponized against The System.

@NEETzsche @Shadowman311 @LukeAlmighty We just wanted to play videogames, but you did this to us, Anita. You weaponised the autists.

How long until regular people can do things make consumer-grade drones fly around and, to get around how they can’t handle much recoil, shoot cyanide needles at niggers / jews instead of bullets. Autonomously based on facial recognition or whatever the fuck.

There’s so much potential for this to backfire on the powers that be if the “wrong” people tinker too much

@NEETzsche @Shadowman311 @LukeAlmighty You could probably even get establishment support to build a database of white faces, saying that's your target, and then it's as simple as inverting a single if ツ

You want to really destabilize things, you can also use drones to airdrop guns into prisons. Our enemies rely on stability and predictability to get anything done, so just throwing a bit of chaos into the mix usually works to our benefit

https://www.route-fifty.com/infrastructure/2023/05/prisons-under-attack-drones-delivering-contraband/386848/

It’s been going on for over a decade. So with or without AI, a disenfranchised people can create all kinds of problems and be a humongous thorn in the side of the establishment. That’s why Whitey is going to get more no-work WFH jobs… or else.

@NEETzsche @Shadowman311 @LukeAlmighty >no-work WFH jobs
These seem like the perfect breeding ground for dissent.

They are, it’s why they wanted to shut them down so hard. I should clarify that they’re not truly “no-work,” and in many cases they are highly productive, at least as far as white collar work is concerned, but they’re remote, so all of the bullshit parts of white collar work like getting your suit on and commuting for 90min each way, then spending another 3hrs in meetings and another 2hrs at the water cooler, is obviated. You end up doing that 1hr of real, actual work before going back to watching anime or plotting the demise of ZOG or whatever else it is you feel like doing.

No way of knowing, prisons don’t have radar on them and an average drone moves at the same speed as a bird and has the size of a large raptor. Remove all the lights from it and drop the shit from a decent altitude around 2-3am when no guard would be looking at the prison yard.

Humans do not naturally look upwards for threats; we've never had aerial predators, so the concept of shit coming in from above is alien to us. To my knowledge there has only been one serious attempt via drone on a political leader, the Venezuelan president. Drones will have to be a regulated item soon, because eventually someone is going to hurl an FPV drone into a gaggle of American politicians and it's going to cause a huge freakout.

@NEETzsche @LukeAlmighty @Shadowman311 Can confirm, I hang around those places a bit. Hard to tell if /aicg/ is on the coom or doom side of the spectrum atm. The fact that people are consistently posting characters means they're at least a bit okay.
/lmg/ is just a normal /g/ thread though; they've got good resources for setting up your own stuff. I'm personally screwed since my PC is on an AMD GPU, and last I checked AMD still doesn't play well with local gen.
@supersid333 @LukeAlmighty @NEETzsche @Shadowman311 It's an absolute shit show. I have gotten it to work with my Vega64, but it revolves around finding the right bitsandbytes build, the right PyTorch build, and figuring out the last version of ROCm supported on the card via scattered documents and downloads on their own damn website. Some models don't even run because some banal instruction isn't supported, and AMD has worse support for older cards in this regard than Nvidia does. After I got it running I refuse to touch or update my install of oobabooga, because I don't want to smash my CRT over my head for a week trying to get it functioning again.
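For anyone else fighting the same battle, here's a minimal sanity-check sketch, assuming a ROCm build of PyTorch is actually installed (the HSA_OVERRIDE_GFX_VERSION export mentioned in the comments is a commonly cited unofficial workaround for older cards, not something guaranteed to apply to yours), to confirm the install can see and use the GPU before you even touch bitsandbytes or oobabooga:

```python
# Minimal sketch: verify a ROCm build of PyTorch can see and use the GPU.
# Assumes a ROCm wheel of torch is installed; older AMD cards sometimes need
# HSA_OVERRIDE_GFX_VERSION exported before launch (unofficial workaround).
import torch

print("torch version:", torch.__version__)
print("HIP runtime:", torch.version.hip)            # None on CUDA-only builds
print("GPU visible:", torch.cuda.is_available())    # ROCm reuses the cuda namespace

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    # Run a small matmul on the device to confirm kernels actually launch,
    # not just that the card enumerates.
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul checksum:", (x @ x).sum().item())
else:
    print("No GPU found; check the ROCm install and kernel driver.")
```

If that matmul runs without an "unsupported instruction" style crash, the rest of the stack at least has a chance.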
A friend and I went halvsies on a used Dell server featuring 4 SXM2 slots for doing machine learning projects, and surprisingly, you can find Tesla P100s for $50 on eBay. Most are reaching their limits on memory failures, but for the price ¯\_(ツ)_/¯ (we've already had one die because of it). And it has some of the same problems I detailed above. But it is quite nice; I recommend it. The big problem with the specific server we got is that it's a 1U server. Nobody sells the damn heatsinks, so we had to machine our own. We also had to include piping slots on the sides of the heatsinks to run automotive brake lines carrying coolant to keep temps below 80°C, or else extreme throttling would occur.
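If anyone copies this setup, a rough temperature watcher along these lines helps you catch the cards before they start cooking. This is only a sketch, assuming the nvidia-ml-py/pynvml package and the NVIDIA driver are installed; the ~80°C figure is just the point where we saw heavy throttling, not an official spec:

```python
# Rough sketch: poll GPU temperatures via NVML and flag cards nearing the
# point where we saw heavy throttling (~80 C on our P100s; not an official spec).
# Assumes the nvidia-ml-py (pynvml) package and NVIDIA driver are installed.
import time
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(count)]

try:
    while True:
        for i, handle in enumerate(handles):
            temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
            warn = "  <-- approaching throttle territory" if temp >= 78 else ""
            print(f"GPU{i}: {temp} C{warn}")
        time.sleep(5)
finally:
    pynvml.nvmlShutdown()
```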

TL;DR: AMD fucking blows for ML, and it was all over when Blender dropped GPU rendering on AMD cards in Linux 5 years ago. I fucking hate it. Probably just me due to the age of my hardware. ROCm can suck my cock.

All of this is going to get corrected in time and no amount of retarded laws and policies and social shaming will do a damn thing. Efforts to stymie generative AI are going about as far as efforts to get rid of recreational drug use and Internet piracy: precisely nowhere.

A force of nature has been activated. The cat’s out of the bag, man. Pandora’s box has been opened. These people are trying to argue with fucking God on this one and it’s going to be humiliating for them.

@NEETzsche @LukeAlmighty @Shadowman311 Yeah, machine learning has been a veritable Pandora's box. At this point you can only adapt because nothing is going to stomp it out now that it's spread.

They’re going to try, though. That’s the thing. They’re going to try hard. And we get to sit back and laugh.

>racial profiling

You say that like that's what it'll be used for. We have access to a lot of amazing technologies right now, but very seldom are they used to those ends.

We could go on a massive debate about genetic modification, if you like.