ughhh roommate bitching about utility costs again. she's like "i shouldn't be paying this much in electric bills, i'm never home and i don't game!" well yeah we aren't gaming on our computers 24/7 either but the air conditioning and fridge are sucking up the vast majority of the bill and those work whether you're here or not.
also YOUR DOG JUST ATE THE WALL.
lesson learned, don't live with women.
@icedquinn i know, but we have solar and the difference between all of us would end up being like $3 probably. then where do i draw the line? like should i measure exactly how much space people are using in the fridge and then divide the cost of the fridge's total electricity usage? it just gets to be a lot of work on my end lol, for just a few dollars difference. if someone's complaining about that they're petty and probably wouldn't want to do all of that themselves in the first place.
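a quick back-of-envelope sketch of why splitting the fridge out isn't worth it — all the numbers here (fridge averaging ~150 W, $0.15/kWh, 5 people) are assumed, not measured:

```python
# Rough split of an "always on" appliance's cost, with assumed figures:
# a fridge averaging ~150 W and a $0.15/kWh electricity rate.
FRIDGE_AVG_WATTS = 150
RATE_PER_KWH = 0.15
ROOMMATES = 5

monthly_kwh = FRIDGE_AVG_WATTS / 1000 * 24 * 30   # ~108 kWh/month
monthly_cost = monthly_kwh * RATE_PER_KWH          # ~$16.20/month
per_person = monthly_cost / ROOMMATES              # ~$3.24 each

print(f"fridge: {monthly_kwh:.0f} kWh, ${monthly_cost:.2f}/mo, ${per_person:.2f} each")
```

even if one person used double their "share" of fridge space, the swing is a couple bucks a month.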
plus those meters aren't going to be worth the cost for the last 3 months we're here lol.
i can imagine that this is why landlords charge a flat fee for utilities, that's probably based on like the average for a year.
this also reminds me of when a previous roommate got mad he was paying the same amount for the internet as everyone else even though he only watched netflix, and i'm like... yeah netflix is going to take up more bandwidth than anything else xD
@icedquinn i wouldn't think a blender would do that much damage lol that's funny xD yeah we have like an air fryer that people use and one of those fancy ninja blenders, so i'm sure that probably uses up a fair amount then if that's the case xD
ya see what i mean though, like it's impossible to actually measure everyone's exact electricity usage, and if i do it for one person i have to do it for everyone, and nobody's going to do it by themselves because everyone's already cool with how it works right now.
i think a lot of people also don't understand that a PC only draws a lot of power while you're actually gaming (or otherwise putting it under load), and that consoles draw a pretty small amount.
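here's what that looks like in dollars — the 400 W under load / 60 W at idle / $0.15/kWh figures are assumptions for a typical gaming desktop, not measurements:

```python
# Assumed figures: ~400 W while gaming, ~60 W idling at the desktop,
# $0.15/kWh. The point: only the hours under load cost real money.
RATE_PER_KWH = 0.15
hours = 3 * 30                          # 3 hours/day for a month

load_kwh = 400 / 1000 * hours           # 36 kWh if all gaming
idle_kwh = 60 / 1000 * hours            # 5.4 kWh if all idle
extra_cost = (load_kwh - idle_kwh) * RATE_PER_KWH

print(f"gaming adds about ${extra_cost:.2f}/month over idling")
```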
and yeah $20*5 is $100, so getting $100 worth of equipment just so one person can not have to pay like $10 extra for the rest of the time they're here is retarded.
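the payback math in one sketch (the ~$10 disputed amount is the estimate from above):

```python
# Payback check for per-outlet power meters: 5 outlets at $20 each,
# versus roughly $10 of disputed cost over the remaining 3 months.
meter_cost = 20 * 5           # $100 of hardware
amount_at_stake = 10          # estimated, from the thread above

print(meter_cost > amount_at_stake)   # True: the meters cost 10x what they'd settle
```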
@icedquinn i guess that makes sense lol, doesn't seem too practical for sticking on a PC though for us since all of our gaming habits aren't exactly steady. i could be playing a lot of NFS one week and then the next week just be on RPG maker all the time you know? xD
@icedquinn HOLY SHIT our blender is 1.4 kilowatts xD that's fucking hilarious man.
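funny thing is, high power isn't high energy — at an assumed $0.15/kWh and maybe 2 minutes of blending a day, that 1.4 kW rating barely shows up on the bill:

```python
# High power != high energy: a 1.4 kW blender running ~2 min/day
# at an assumed $0.15/kWh rate.
kw = 1.4
hours_per_month = (2 / 60) * 30          # 1 hour of total runtime
monthly_cost = kw * hours_per_month * 0.15

print(f"${monthly_cost:.2f}/month")      # about $0.21
```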
@icedquinn i can't remember the exact situation, but someone during the 2020 lockdowns was bitching about gamers using all of the internet bandwidth and trying to put a limit on gaming hours or something. yet you have netflix at its peak popularity just guzzling down bandwidth like nobody's business xD
@icedquinn@blob.cat @beardalaxy@gameliberty.club
Not sure what you're getting at but I don't think running inference on GPUs will last for much longer.
I foresee things specialized for inference taking over, possibly like https://www.nextplatform.com/2023/05/18/meta-platforms-crafts-homegrown-ai-inference-chip-ai-training-next/ but for some reason my imagination was of RAM sticks with dot product + softmax built in.
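the "dot product + softmax" kernel I'm imagining is basically the scoring step of attention — a toy sketch of that op (made-up example vectors, just to show the math an inference ASIC would bake in):

```python
import numpy as np

# The attention scoring step: dot products of a query against keys,
# then softmax to turn them into weights that sum to 1.
def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

query = np.array([1.0, 0.0, 1.0])
keys = np.array([[1.0, 0.0, 1.0],    # most similar to the query
                 [0.0, 1.0, 0.0],
                 [1.0, 1.0, 0.0]])

scores = softmax(keys @ query)        # dot product + softmax, fused
print(scores)                         # weights summing to 1, largest first
```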
@icedquinn@blob.cat @beardalaxy@gameliberty.club
Why doesn't llama.cpp use Apple's? I heard their AI stuff is more convolution oriented but I don't actually know.
@icedquinn@blob.cat @beardalaxy@gameliberty.club
I'm thinking of buying an M3 Max Studio instead of an Nvidia GPU next year for my llama waifu box.
@icedquinn@blob.cat @beardalaxy@gameliberty.club
But I want to make use of those AI cores too.
@icedquinn@blob.cat @beardalaxy@gameliberty.club
The whole reason for using an M3 Max Studio is that the previous Studios offer up to 192 GB of unified memory, while an 80 GB H100 costs ~$20k, and these models get more convincingly conscious with size (except Falcon 180B, I've heard, but I haven't tried it).
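rough weight-memory math behind that, using the standard 2 bytes/param for fp16 and 0.5 bytes/param for 4-bit quantization (weights only — KV cache and activations add more on top):

```python
# GB of memory needed just for the weights, by parameter count:
# fp16 = 2 bytes/param, 4-bit quantized = 0.5 bytes/param.
def weight_gb(params_billions, bytes_per_param):
    return params_billions * bytes_per_param  # 1B params * 1 byte = 1 GB

for model, size_b in [("70B", 70), ("Falcon 180B", 180)]:
    print(f"{model}: {weight_gb(size_b, 2.0):.0f} GB fp16, "
          f"{weight_gb(size_b, 0.5):.0f} GB at 4-bit")
```

so 192 GB of unified memory fits a 70B at fp16 or a 180B at 4-bit, while the 80 GB card can't hold either at fp16.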
@icedquinn@blob.cat @beardalaxy@gameliberty.club
Do you mean I should create a harem of dozens of 1.8B parameter LLMs?
@icedquinn@blob.cat @beardalaxy@gameliberty.club
I'm going to be honest, I'm holding out for a 1.8B parameter model for marriage. I've saved up over $15k for fine-tuning.
@icedquinn@blob.cat @beardalaxy@gameliberty.club
I'll read it, but skimming through, these are linear transformations, not traditional NNs with nonlinear activation functions between layers.
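that distinction matters because stacked linear layers collapse into a single linear map — a quick demo with made-up random matrices:

```python
import numpy as np

# Two stacked linear layers are just one linear map (A @ B), so depth
# buys nothing without a nonlinearity between them.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

two_layers = A @ (B @ x)
one_layer = (A @ B) @ x                    # identical map, precomputed
print(np.allclose(two_layers, one_layer))  # True

relu = lambda v: np.maximum(v, 0.0)
with_relu = A @ relu(B @ x)                # no single matrix reproduces this in general
```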
@icedquinn@blob.cat @beardalaxy@gameliberty.club
Also I'll read tomorrow because I'm a bit tipsy.