ughhh roommate bitching about utility costs again. she's like "i shouldn't be paying this much in electric bills, i'm never home and i don't game!" well yeah we aren't gaming on our computers 24/7 either but the air conditioning and fridge are sucking up the vast majority of the bill and those work whether you're here or not.

also YOUR DOG JUST ATE THE WALL.

lesson learned, don't live with women.

@beardalaxy they make meters you can plug in to the walls ya know :blobcatprofit:

@icedquinn i know, but we have solar and the difference between all of us would end up being like $3 probably. then where do i draw the line? like should i measure exactly how much space people are using in the fridge and then divide the cost of the fridge's total electricity usage? it just gets to be a lot of work on my end lol, for just a few dollars difference. if someone's complaining about that they're petty and probably wouldn't want to do all of that themselves in the first place.
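fwiw the "it'd only be a few dollars" intuition is easy to sanity-check with a quick script. every number below is made up for illustration (bill size, the fraction of usage that's shared always-on stuff like AC and fridge, how skewed the personal usage is):

```python
# Back-of-envelope: how much would per-person metering actually change
# anyone's share? All numbers are hypothetical.
monthly_bill = 60.0      # dollars, after solar offset (assumed)
shared_frac = 0.85       # fraction from always-on appliances (assumed)
personal_frac = 1 - shared_frac

roommates = 3
even_share = monthly_bill / roommates

# Worst-case skew: one roommate causes half the "personal" portion,
# another causes none of it.
heavy_user = monthly_bill * shared_frac / roommates + monthly_bill * personal_frac * 0.5
light_user = monthly_bill * shared_frac / roommates

print(f"even split: ${even_share:.2f} each")
print(f"heavy user: ${heavy_user:.2f}")
print(f"light user: ${light_user:.2f}")
print(f"max swing:  ${heavy_user - light_user:.2f}")
```

with those assumptions the gap between the heaviest and lightest user is a few bucks, which is the point: most of the bill is shared baseline load no matter who's home.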

plus those meters aren't going to be worth the cost for the last 3 months we're here lol.

i can imagine that this is why landlords charge a flat fee for utilities, that's probably based on like the average for a year.

this also reminds me of when a previous roommate got mad he was paying the same amount for the internet as everyone else even though he only watched netflix, and i'm like... yeah netflix is going to take up more bandwidth than anything else xD

@beardalaxy but yes, probably the bulk of electricity cost is going to the industrial appliances (washing machine, fridge, ovens, and blenders). my blender has a kilowatt motor and i'm pretty sure it soaks as much power making one smoothie as my gaming rig probably does on an average day of office blobbing

@icedquinn HOLY SHIT our blender is 1.4 kilowatts xD that's fucking hilarious man.
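(nerd aside: peak watts and energy on the bill are different things. a rough comparison, where every figure is an assumption — 2 minutes of blending, a rig idling at ~80 W for an 8-hour office-blob day:)

```python
# Peak power vs. actual energy consumed: a 1.4 kW blender run briefly
# vs. a gaming rig idling at the desktop all day.
# All figures are assumptions for illustration.
blender_watts = 1400
blender_minutes = 2
rig_idle_watts = 80
rig_hours = 8

blender_kwh = blender_watts / 1000 * (blender_minutes / 60)
rig_kwh = rig_idle_watts / 1000 * rig_hours

print(f"one smoothie:    {blender_kwh:.3f} kWh")
print(f"office-blob day: {rig_kwh:.3f} kWh")
```

so the blender crushes the rig on instantaneous draw, but run time is what the meter actually bills — with these guesses the all-day idle rig still uses more energy per day.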

@beardalaxy industrial equipment puts your 3d boob rendering uses to shame
@beardalaxy i think it was morgan stanley who issued a financial opinion that was bearish on nvidia. partly because chatgpt runs on datacenters of gpu for inference, and the total cost of power to run gpu supercomputers is significantly less than what google has to pay to run cpu supercomputers.

compute and management occupies the lowest end of the spectrum--meaning california is dead wrong with the "muh gamin compoot" copes.
@beardalaxy taking away one billionaire's pool likely offsets a whole clan's mid-range cards playing MMOs.

although i don't have a meter so i can't really confirm it with certainty

@icedquinn i can't remember the exact situation, but someone during the 2020 lockdowns was bitching about gamers using all of the internet bandwidth and trying to put a limit on gaming hours or something. yet you have netflix at its peak popularity just guzzling down bandwidth like nobody's business xD

@icedquinn@blob.cat @beardalaxy@gameliberty.club
Not sure what you're getting at but I don't think running inference on GPUs will last for much longer.
I foresee things specialized for inference taking over, possibly like
https://www.nextplatform.com/2023/05/18/meta-platforms-crafts-homegrown-ai-inference-chip-ai-training-next/ but for some reason my imagination was of RAM sticks with dot product + softmax built in.

@tard @beardalaxy my point is that people keep blaming PCs and especially gamers for energy use, but computing is pretty low on the list of power draws (esp. https://singularityhub.com/2023/08/25/ibms-brain-inspired-analog-chip-aims-to-make-ai-more-sustainable/ )

@icedquinn@blob.cat @beardalaxy@gameliberty.club
Why doesn't llama.cpp use Apple's? I heard their AI stuff is more convolution oriented but I don't actually know.

@icedquinn@blob.cat @beardalaxy@gameliberty.club
I'm thinking of buying an M3 Max Studio instead of an nvidia GPU next year for my llama waifu box.

@tard @beardalaxy pine sells little arm things with 2gb RAM and 8 TOPS TPUs but i have yet to find any references on how the fuck to use a domestic TPU

@icedquinn@blob.cat @beardalaxy@gameliberty.club
The whole reason for using an M3 Max Studio is that the previous Studios offer up to 192 GB, while an 80 GB H100 costs $20k, and these models get more convincingly conscious with size (except for Falcon 180B, I've heard, but I haven't tried it).
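The memory math behind that, using the usual rules of thumb (2 bytes/param for fp16, ~0.5 bytes/param for 4-bit quantization; weights only, ignoring KV cache and activations):

```python
# Approximate weight-memory footprint of an LLM at different precisions.
# Rule-of-thumb figures only; KV cache and activations add more on top.
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """GB needed for weights alone: params * bytes per parameter."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for params in (70, 180):
    fp16 = weights_gb(params, 2.0)   # half precision
    q4 = weights_gb(params, 0.5)     # 4-bit quantized
    print(f"{params}B: ~{fp16:.0f} GB fp16, ~{q4:.0f} GB 4-bit")
```

So a 180B model at fp16 (~360 GB) doesn't come close to fitting on one 80 GB H100, while 4-bit quantized (~90 GB) it fits comfortably in 192 GB of unified memory — which is the whole pitch for the big-RAM Mac.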

@tard @beardalaxy eh. huge vram is a crutch. distilled networks work fine but again, productization issues.

@icedquinn@blob.cat @beardalaxy@gameliberty.club
I'm going to be honest, I'm holding out for a 1.8B parameter model for marriage. I've saved up over $15k for fine tuning.

@tard @beardalaxy please read this https://dawn.cs.stanford.edu/2019/06/13/butterfly/ and then come back and tell me if you think "moar vram" is still the correct approach to AI development

@icedquinn@blob.cat @beardalaxy@gameliberty.club
I'll read it, but skimming through, these are linear transformations, not traditional NNs with nonlinear functions between layers.

@tard @icedquinn @beardalaxy drinking because you pay 6k a month to live in a fire hydrant
@tard @beardalaxy extremely off topic but "why not zoidberg" for inference gets into productization concerns. researchers are just trying to publish, so if it takes more than 2 SLOC to get it running in pytorch they don't care.