I wonder how you would train an AI to recognize intelligent writing. I guess you can use any old GPT to generate arbitrary amounts of midwit slop.
@cjd I have to assume this is a hard problem or Google would be better at removing fake articles from search results.

@cjd @Moon
I thought we already had left/right-wing bias recognition in AI.

@cjd @Moon
Obvious jokes aside, the problem is that you cannot have humans create the dataset, since humans are incapable of making this distinction themselves.

The entire concept of schizophrenia and intelligence being two sides of the same coin applies here tenfold. Because brilliant people see patterns that you cannot visualize, you cannot know whether they are actually smart or whether they are bullshitters.

This is why most attempts at doing this end up just recognizing how many niche words you use, since niche words are needed to write a scientific article. But then you immediately end up with the social-science loons, who cannot form a single sentence without going full systemic prejudice against marginalized metaphors for cheese.

So in my vision, this depends on the depth of the network. If you're doing simple word recognition then yes, you're going to end up with the most midwit of the midwit.

But here is a simple implementation: suppose you run a text classifier on the individual sub-phrases, and for each of those you output a neural-layer snapshot represented as an image. Then you take the images making up a sentence and feed them through a net that pattern-matches similar sentences, again outputting a neural snapshot as an image.

At each level of this, you can train using two similar phrases and one different one. The reward function is based on the neural image of the similar phrases being more similar (the XOR of the pixels has fewer set bits) than that of the different one.
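A minimal sketch of that triplet check, assuming the snapshots have already been binarized into pixel grids (all names here are hypothetical, and the toy arrays stand in for real layer dumps):

```python
import numpy as np

def xor_distance(a, b):
    """XOR-style distance: count of pixels where the two snapshots differ."""
    return int(np.count_nonzero(a != b))

def triplet_ok(anchor, similar, different):
    """Reward condition: the similar pair's XOR distance is smaller."""
    return xor_distance(anchor, similar) < xor_distance(anchor, different)

# toy 4x4 binary snapshots (stand-ins for real neural-layer images)
rng = np.random.default_rng(0)
anchor = rng.integers(0, 2, (4, 4))
similar = anchor.copy()
similar[0, 0] ^= 1        # flip one pixel -> distance 1
different = 1 - anchor    # flip every pixel -> distance 16
print(triplet_ok(anchor, similar, different))  # True
```

A trainable version would turn this hard comparison into a margin loss, but the ranking condition is the same.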

Feed those images back in, this time per paragraph, and you should have a form of paragraph-level classification. Then you feed that output into a network which classifies text into a score, and you train it on things you find worthwhile.
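The stacking described above can be sketched as below. This is a structural sketch only: the "encoders" are deterministic stubs seeded on the text, not trained nets, and every function name is hypothetical.

```python
import numpy as np

SNAP = (8, 8)  # snapshot "image" size, arbitrary for this sketch

def snapshot(seed):
    """Stand-in for dumping a layer's activations as a binary image."""
    rng = np.random.default_rng(int(seed) % (2**32))
    return rng.integers(0, 2, SNAP)

def encode_phrase(phrase):
    # a real phrase classifier would run here; we just seed on the text
    return snapshot(sum(map(ord, phrase)))

def encode_sentence(sentence):
    phrase_imgs = [encode_phrase(p) for p in sentence.split(",")]
    # the sentence-level net would consume the phrase images
    return snapshot(sum(int(img.sum()) for img in phrase_imgs))

def encode_paragraph(sentences):
    sent_imgs = [encode_sentence(s) for s in sentences]
    # the paragraph-level net would consume the sentence images
    return snapshot(sum(int(img.sum()) for img in sent_imgs))

def score(paragraph_img):
    """Final classifier stub: fraction of set pixels stands in for a score."""
    return float(paragraph_img.mean())

para = ["first sentence, with two phrases", "second sentence"]
s = score(encode_paragraph(para))
print(0.0 <= s <= 1.0)  # True
```

The point of the shape is that each level only ever sees the snapshot images from the level below, never the raw text.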

@cjd @Moon
I don't understand how that's supposed to measure intelligence.

Well the point is you train it on what you consider intelligent writing vs. fluff and midwit slop, then you teach it to distinguish.

BTW humans have a way of signaling and detecting intelligence - that is through humor. It's like the first man-made proof-of-work: It requires more brain cells to be funny (prove) than it does to laugh (validate).

@cjd @Moon
> Well the point is you train it on what you consider intelligent

Well, I guess I wrote the paragraphs of text in vain then.

You WILL get a Redditor AI. There is no way around that.

The origin of this thread was me saying "I wonder how...." which is about how to avoid that failure mode. You say it's impossible, I'm not convinced.

@cjd @Moon
I didn't say it was impossible, I said it was impossible for a human to create a dataset.

I also said that intelligence isn't based on word structure, but on how well the mentioned concepts align with the world, which the AI has no access to.

If we're talking about making an AI which generates text, then I agree. But I'm talking about an AI which classifies text (that I expect will be written by humans).

> isn't based around the word structure

Well, yes and no. REALLY stupid text has an identifiable structure. Midwit text looks smarter than it is. What I'm looking for is how to make the model deep enough to identify quality fedi banter.

Of course midwit diarrhea is a moving target w/ Goodhart's law, especially if people start training GPTs against my "quality posts" classifier...
@LukeAlmighty @cjd I think this quote was part of Einstein's misguided opposition to quantum physics.

@Moon @cjd
It still is a simple quote that sends a smart message, though.
