I wonder how you would train an AI to recognize intelligent writing. I guess you can use any old GPT to generate arbitrary amounts of midwit slop.
@cjd I have to assume this is a hard problem or Google would be better at removing fake articles from search results.

@cjd @Moon
I thought we already had left/right-wing bias recognition in AI.

@cjd @Moon
Obvious jokes aside, the problem is that you cannot have humans create the dataset, since humans are incapable of making this distinction themselves.

The entire concept of schizophrenia and intelligence being two sides of the same coin applies here tenfold. Because brilliant people see patterns that you cannot visualize, you cannot know whether they are actually smart or just bullshitters.

This is why most attempts at doing this end up just recognizing how many niche words you use, since niche words are needed to write a scientific article. But then you immediately select for social-science loons, who cannot form a single sentence without going full systemic-prejudice-against-marginalized-metaphors-for-cheese.

So in my vision, this depends on the depth of the network. If you're doing simple word recognition then yes, you're going to end up with the most midwit of the midwit.

But, and this is a simple implementation: suppose you run a text classifier on the individual sub-phrases, and for each one you output a snapshot of a neural layer represented as an image. Then you take the images making up a sentence and feed them through a net that pattern-matches similar sentences, again outputting a neural snapshot as an image.

At each level of this, you can train using two similar phrases and one different one. The reward function is based on the neural images of the similar phrases being more similar (the XOR of the pixels is smaller) than that of the different one.

Feed those images back in, this time per paragraph, and you should have a form of paragraph-level classification. Then you feed that output into a network which classifies text into a score, and you train on things you find worthwhile.
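A minimal sketch of the triplet idea above, with a deterministic trigram hash standing in for the "neural snapshot" (all names and the hashing scheme are my own illustration, not from this thread):

```python
import zlib

def snapshot(text, size=64):
    """Toy stand-in for a 'neural layer snapshot': hash the phrase's
    character trigrams into a fixed-size bitmap (an int used as a tiny image)."""
    img = 0
    for i in range(max(len(text) - 2, 0)):
        img |= 1 << (zlib.crc32(text[i:i + 3].encode()) % size)
    return img

def xor_distance(a, b):
    """The similarity measure described above: XOR the two images and
    count the differing pixels (fewer = more similar)."""
    return bin(a ^ b).count("1")

def triplet_ok(anchor, positive, negative):
    """Training signal: the similar pair's XOR should be smaller than
    the dissimilar pair's."""
    return xor_distance(anchor, positive) < xor_distance(anchor, negative)
```

In a real trainer you'd backpropagate a margin loss instead of checking a boolean, but the reward structure is the same: push similar phrases' snapshots together and dissimilar ones apart.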

@cjd @Moon
I don't understand how that's supposed to measure intelligence.

Well, the point is you train it on what you consider intelligent writing vs. fluff and midwit slop, and teach it to distinguish the two.

BTW humans have a way of signaling and detecting intelligence - that is through humor. It's like the first man-made proof-of-work: It requires more brain cells to be funny (prove) than it does to laugh (validate).

@cjd @Moon
> Well the point is you train it on what you consider intelligent

Well, I guess I wrote the paragraphs of text in vain then.

You WILL get a Redditor AI. There is no way around that.

The origin of this thread was me saying "I wonder how...", which is about how to avoid that failure mode. You say it's impossible; I'm not convinced.

@cjd @Moon
I didn't say it was impossible; I said it was impossible for a human to create a dataset.

I also said that intelligence isn't based on word structure, but on how well the mentioned concepts align with the world, which the AI has no access to.

If we're talking about making an AI which generates text, then I agree. But I'm talking about an AI which classifies text (that I expect will be written by humans).

> isn't based around the word structure

Well, yes and no. REALLY stupid text has an identifiable structure. Midwit text looks smarter than it is. What I'm looking for is how to make the model deep enough to identify quality fedi banter.

Of course midwit diarrhea is a moving target w/ Goodhart's law, especially if people start training GPTs against my "quality posts" classifier...
My thinking here is to train it on things that *I* find interesting (e.g. I hit the like button). So that's going to contain some complex language, some simple language, some grammatical errors, etc. Not to give the AI an easy way out here...

@cjd @Moon
If you want to make an AI that learns what interests YOU, then even the dumbest "find words I like" system will do the job. As long as the text contains "linux, boot, freeware, software, hardware", it's great.
If it contains "republican, democrat, trump, fuck, Nigger", it's BAD.....

But you've changed the goal entirely now. Your original post was about finding intelligence.

Ok that's a fair point. I'm trying to find things that *I* will consider intelligent (or at least amusing), and I just wrote my original post poorly.

But the thing is, I can't think of any particular set of words that communicate what I would find interesting - it's like trying to word-filter for what you'll find funny. What do you do? Filter for "knock knock"? Maybe if you're 5.

Most political takes are boring and repetitive. Most science is horrifying midwittery. Most blockchain takes are spam and get-rich-quick schemes. Most conspiracy takes are aliens and flat-earth bullshit. BUT there's a 1% in each category which is a flash of brilliance (IMO), and I'd really like to try to filter for it...

@cjd @Moon
In that case, as long as you have a dataset, it should be unbelievably simple.

I wrote a word-frequency-based spam filter as a homework assignment at uni. I didn't even have to go into neural networks. If I find it, I might even send it to you. :D
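A word-frequency filter of the kind described is essentially naive Bayes. A minimal sketch (not the homework in question; all names are my own illustration):

```python
import math
from collections import Counter

def train(spam_texts, ham_texts):
    """Count word frequencies per class; smoothing is applied at score time."""
    spam_counts = Counter(w for t in spam_texts for w in t.lower().split())
    ham_counts = Counter(w for t in ham_texts for w in t.lower().split())
    return spam_counts, ham_counts

def spam_score(text, spam_counts, ham_counts):
    """Log-odds that `text` is spam under a naive Bayes word model,
    with add-one (Laplace) smoothing for unseen words."""
    spam_total = sum(spam_counts.values()) + len(spam_counts) + 1
    ham_total = sum(ham_counts.values()) + len(ham_counts) + 1
    score = 0.0
    for w in text.lower().split():
        p_spam = (spam_counts[w] + 1) / spam_total
        p_ham = (ham_counts[w] + 1) / ham_total
        score += math.log(p_spam / p_ham)
    return score  # > 0 leans spam, < 0 leans ham
```

With a few hundred labeled examples per class, this kind of counting already gets surprisingly far, which is the point being made: no neural network required.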

As I said earlier in this thread, I have not read the papers, but right now I strongly suspect that most models are incredibly stupid and people just throw $10mn of GPUs at training the crap out of them. So something like hashing every word, every 2 words, every 3 words, and so on, then accumulating the hashes into an array of 256 float32s, might give you a text-similarity filter that beats a lot of far more complex neural nets...
Also I make a lot of grammar errors, sorry about that.
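The hashing idea above is known as the feature-hashing trick; a minimal sketch of it, assuming cosine similarity as the comparison (the function names are my own):

```python
import math
import zlib

def ngram_hash_features(text, dims=256):
    """Hash every 1-, 2-, and 3-word n-gram into a fixed-size vector,
    as described above (the 'feature hashing' trick)."""
    words = text.lower().split()
    vec = [0.0] * dims
    for n in (1, 2, 3):
        for i in range(len(words) - n + 1):
            gram = " ".join(words[i:i + n])
            vec[zlib.crc32(gram.encode()) % dims] += 1.0
    return vec

def cosine_similarity(a, b):
    """Compare two hashed texts; 1.0 means identical n-gram profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```

The fixed 256-float array means memory never grows with vocabulary size; the price is that unrelated n-grams occasionally collide into the same bucket.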

@cjd @Moon
And... I found the filter :D
I will have to look through the files to check if there is anything sensitive, and I have to head to a pub now, but would you want a working spam filter from a first-year uni homework assignment?

I'd say don't break your back over it. I'm supposed to be doing a bunch of other stuff so it's highly doubtful I would give it more than a couple minutes of reading...

@cjd @Moon
Also, goodbye friend. If you turn that thing on, I will disappear from your universe :ayaya:

@cjd
Can this AI read books? If so, then have the AI read certain "men of letters" (blank slate AI).

Perhaps beforehand, have it learn to distinguish verbose writing (too many adjectives, too many adverbs, and buzzwords).

A bank of buzzwords can be maintained quite easily, and have any pop-culture article containing these words be flagged as stupid or low-brow. (This is giving the AI some agency.)

Anyway. My wheelhouse is mathematics and Tech. Comm. and not software engineering, so throw rocks if you like.
@LukeAlmighty @Moon
The problem is that what makes a classical book great are the same words and phrases that make for some horrific drivel if they're not used in exactly the right way.

Training an AI on quality writing is interesting, but tweets and fedi posts are perhaps better for training because they're short and so it doesn't take reading over pages and pages of long words to determine whether you're dealing with genius or reddit-tier poop.

@cjd @pepsi_man @Moon
I am sorry, but I thought you were very educated when it comes to IT.

So why would you believe that less information is good for learning?

Nope, got nothing but a highschool diploma. You have the wrong guy.
Game Liberty Mastodon
