These are public posts tagged with #ai.
FTR, after spending a day trying to decipher what the **** was going on with RTAX_CWND in the kernel, I tried asking Gemini, just to check whether I could have spared myself that day of research.
No, folks, the answer was entirely wrong on all the points that mattered in my case. It would have misled me into thinking that:
- the dst_entry metric is updated,
- it provides an initial hint for the congestion window.
Gemini also failed to mention ip-tcp_metrics(8), which is the only way to see the learned values.
Java is getting into full swing in the AI world. Alongside libraries like Spring AI or LangChain4j, we are also getting closer to the hardware, thanks to the Vector API, the Panama/FFM API, and gradually Project Babylon (for calling AI services directly from Java).
Could it be that there is almost no need to use Python anymore? :)
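To make the Vector API point concrete, here is a minimal sketch (my own, not from the post) of the kind of SIMD kernel you would write for embedding similarity: a dot product over float arrays. It assumes a recent JDK with the incubator module enabled.

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

// Minimal Vector API sketch: SIMD dot product of two float arrays.
// Assumes a.length == b.length.
public class VectorDot {
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static float dot(float[] a, float[] b) {
        float sum = 0f;
        int i = 0;
        int bound = SPECIES.loopBound(a.length);
        for (; i < bound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            sum += va.mul(vb).reduceLanes(VectorOperators.ADD);
        }
        for (; i < a.length; i++) {   // scalar tail
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f};
        float[] b = {4f, 3f, 2f, 1f};
        System.out.println(dot(a, b)); // 20.0
    }
}

Run it with java --add-modules jdk.incubator.vector VectorDot.java; the JIT maps the loop onto the CPU's SIMD instructions where available.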
I was browsing through this research paper by Toby Ord, which compares an LLM working on a lengthy task with a human working on the same lengthy task. The longer the task takes to complete, the greater the odds that the LLM fails; in fact, the odds of failure increase exponentially with time. Naturally, larger LLMs with more computing power can go for longer, but the exponential failure-over-time trend is the same no matter what.
So, for example, you might be able to get the LLM to work on a task for 1 hour with an 8-billion-parameter model and expect it to succeed at the task 50% of the time. Maybe you can get 2 hours out of a 16-billion-parameter model and still expect it to succeed 50% of the time (I am guessing these parameter sizes). But beyond that “half-life” the odds of the LLM succeeding taper off toward zero.
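If I am reading the half-life framing right, it is the same math as radioactive decay: a constant per-minute chance of derailing means the success rate on a task of length t is 0.5^(t / T50), where T50 is the task length the model completes 50% of the time. A quick sketch (the 60-minute half-life is my made-up number, not from the paper):

public class HalfLife {
    // Constant-hazard ("half-life") model: success probability on a task of
    // length t, given the task length T50 at which the model succeeds 50% of the time.
    static double successProbability(double taskMinutes, double halfLifeMinutes) {
        return Math.pow(0.5, taskMinutes / halfLifeMinutes);
    }

    public static void main(String[] args) {
        double t50 = 60.0; // assumed half-life: 50% success on 1-hour tasks
        for (double t : new double[] {30, 60, 120, 240}) {
            System.out.printf("%4.0f min task -> %5.1f%% success%n",
                    t, 100 * successProbability(t, t50));
        }
        // 30 -> 70.7%, 60 -> 50.0%, 120 -> 25.0%, 240 -> 6.3%
    }
}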
I haven’t read the finer details yet, like how you judge when the task is done (I presume when the LLM claims it has finished the job), or whether these tests allow multiple prompts (I presume it is just fed an input once and allowed to churn on that input until it believes it is finished). So that makes me wonder: could you solve this problem, and increase the AI’s rate of success, by combining it with classical computing methods? For example, perhaps you could ask the LLM to list the steps it would perform to complete a task, parse that list into a list of new prompts, then feed each of those prompts back to the LLM, each one producing another list of subtasks. Could you keep breaking down the tasks and feeding them back into the AI to increase its odds of success?
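Sketching what I mean (entirely hypothetical; callModel() stands in for whatever LLM API you have): recursively ask for a plan, split it into steps, and only hand the model pieces short enough to sit well under its half-life.

import java.util.ArrayList;
import java.util.List;

public class Decompose {
    // Hypothetical stand-in: send a prompt to an LLM and return its reply.
    static String callModel(String prompt) {
        return "";
    }

    // Ask the model for a plan and split the reply into one step per line.
    static List<String> planSteps(String task) {
        List<String> steps = new ArrayList<>();
        for (String line : callModel("List the steps needed to: " + task).split("\n")) {
            if (!line.isBlank()) steps.add(line.strip());
        }
        return steps;
    }

    // Recursively break tasks down, then execute the leaves as short prompts.
    static void solve(String task, int depth) {
        if (depth == 0) {
            System.out.println(callModel("Do this step and report the result: " + task));
            return;
        }
        for (String step : planSteps(task)) {
            solve(step, depth - 1);
        }
    }
}

Whether that actually beats one long run is exactly the open question: every decomposition call is itself another chance for the model to derail.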
It is an interesting research question. I am also interested to see how much energy and water this takes as compared to a human working on the same task, including the caloric intake of food, and perhaps the energy used to harvest, process, and deliver the food.
Midjourney Launches Its First AI Video Generation Model, V1 - Midjourney has launched its first AI video generation model, V1, which turns image... - https://slashdot.org/story/25/06/18/1935234/midjourney-launches-its-first-ai-video-generation-model-v1?utm_source=rss1.0mainlinkanon&utm_medium=feed #ai
This is interesting: A video analysis, frame-by-frame of the (supposed) #Hamas attacks of 7 Oct. 2023. Max Igan and Max Guertinn. You can see glitches in the footage, anomalies that are not optical or from compression. AI has trouble with certain features, like fences. - A lot of us know we've been lied to, just not the extent. #AI #fakevideo #manipulation
https://old.bitchute.com/video/1gHC5exNDpKA/
NSFW: Bitch, Lewd Ideas
#AI #AIGenerated #DeadOrAlive #Honoka #Bikini #Stripper #Bitch #Whore #Slut #Lewd
Now that’s one slutty bikini for Honoka. A high school girl wearing some skimpy panties and bra like a prostitute.
@juliewebgirl
Water War. Nestle and LLM Data Centers both want All The Waterz!!! ... so basically we're heading into the Tank Girl timeline.
Dear billion dollar IT corporations: If you think it’s a good idea to automate away #translation: NO.
Behind your back, people who speak languages other than English LAUGH at your terrible AI slop translations, which are full of errors.
Auto-translation is simply not there yet, and anyone who claims otherwise is either delusional or a liar.