
Help Me Understand This Comment

Butter!

I was surfing r/nosurf on Reddit, hoping to break the universe, when I happened upon a comment that I found supremely interesting. However, because I don't understand the technical subject matter very well at all, I can't tell if the person is crazy, how intrigued I should be, how to look up additional information, etc. I can't message the guy, because I don't actually have a Reddit account.

Can anyone here please help shed any light? I'm always on the lookout for new tech-related doom scenarios to fear.

Comment thread here - https://www.reddit.com/r/nosurf/comments/psk37b/i_miss_the_camaraderie_of_the_old_internet/

Comment (by u/west_pac):
Well right now the internet is largely built on the idea that it is at once a repository for human knowledge, a forum for discussion, and a platform for media. My assessment is that the first two are being eroded, and that this will continue to the point that they'll basically be made useless. Once that happens, the internet will be a platform for media and little else, so streaming services basically. Even then, how much of that will you trust? How will you know if it isn't all a deepfake? Because once we get there it really ceases to have any value beyond sheer entertainment factor.

We've already seen steps in this direction with the kind of websites that pop up when you search for things like "how do I change the oil in my car?". Instead of getting useful results, you get bot generated clickfarm sites that have essentially made google useless. Right now these scrape info from other websites, but there's a creeping phenomena in which the bots are scraping other bot-created pages because they can't tell the difference. This is what we're going to continue to see with AI, it won't answer the question, it will just either ramble incoherently or it will tell you that you shouldn't drive a car because it's learned that cars are bad. We aren't creating better search algorithms fast enough to deal with this and it's already driving disengagement from the web.

People underestimate, or are unaware of the coming crisis with AI. They think it will be this thing that we can use to enhance old WW2 footage or to drive their car for them, but what it will do instead is just make everything fuzzy and ****. AI will write and print books but they'll suck, it will deepfake news broadcasts to the point that there will be no good way to transmit information, it will create newspaper sites that don't actually exist but will have hundreds of AI journalists writing for it all reporting on things that never happened. AI is about to completely eradicate our ability to discern reality on the internet, and that will spill out into the real world.

I'm not sure what we're going to do about this, honestly. I don't think anyone is thinking about how it will affect public trust in information. I don't know of any solutions being developed. One way we could potentially fix it is by developing ways to verify that no AI was involved, but it's hard to envision how that would go without being extremely privacy-invasive or potentially exploited by AI.

I won't lie, this **** has the potential to push us back into the dark ages as a species. We could well have to go back to machine-printed books and newspapers, distributed by hand only. It may be necessary to physically dumb down our technology so that we can trust it again. Until AI is made illegal globally, there's a good chance that things will get very bad before they get better.

I wasn't really sure where to ask about this, but I figured this section was a good bet, since I'm most interested in the technical stuff the poster is mentioning.
 
Seems about right to me. A bit hyperbolic, and I'm sure people will find a way to get value out of it regardless.

The Internet is basically information infrastructure on a cyclopean scale. What that infrastructure gets used for is vast, diverse, and complicated.
 

I think "a bit" is underestimating it. Back to the stone ages?

Yes, verifying information is becoming harder because of AI, bots, etc., but it's definitely not going to be the end of the world. Google searches for changing oil are perfectly decipherable. Complete and total bull **** can be written on machine typewriters and distributed by hand, just as with any other medium, including the internet.

He's right that there's no immediate resolution to this other than people being diligent about the type of information they consume, but the National Enquirer, the Sun, etc. were around long before the internet. Still are.
 
I meant more specifically. What is he talking about with clickbait sites that make Google obsolete? What exactly is happening with algorithms at this level, and why can't people do anything about it?

Why would we end up with fake newspapers staffed by bots and reporting on non-events?

I genuinely need the Explain Like I'm 5 and Also Stupid version.
 
OK I will give it a go. West_pac is not a member here so no holds barred.

Comment (by u/west_pac):
Well right now the internet is largely built on the idea that it is at once a repository for human knowledge, a forum for discussion, and a platform for media.
All false. Not a good start.

My assessment is that the first two are being eroded, and that this will continue to the point that they'll basically be made useless.
Also false.

Once that happens, the internet will be a platform for media and little else, so streaming services basically.
That happened years ago. The majority of internet traffic is porn; this has been the case since the '90s.

Even then, how much of that will you trust?
Well, none of it. Never have, not starting now.
How will you know if it isn't all a deepfake?
Because we are all at liberty to get up off our collective butts and actually check for real. But no. Let's sit and wait for the cute nurse with the spoon. What a lazy, entitled moron the writer is.

Because once we get there it really ceases to have any value beyond sheer entertainment factor.
That already happened last century. Geez, back to the 1950s with you.

We've already seen steps in this direction with the kind of websites that pop up when you search for things like "how do I change the oil in my car?". Instead of getting useful results, you get bot generated clickfarm sites that have essentially made google useless.
Since that doesn't happen to me, I am wondering about this guy's search history. But, ever willing, I used that exact search term and found lots of general instructions and a lot more requests for make/model/year for clarity. If this loon is serious, he has a problem.

Right now these scrape info from other websites, but there's a creeping phenomena in which the bots are scraping other bot-created pages because they can't tell the difference.
That is not how it works. That is not how anything works.
This is what we're going to continue to see with AI,
Wheeled out the crystal ball, have we? Predicting the future now, is it?

it won't answer the question,
Correct. Because that is not what it is for, why would anyone imagine such nonsense?

it will just either ramble incoherently or it will tell you that you shouldn't drive a car because it's learned that cars are bad.
Not entirely sure what this idiot is claiming. He outsmarted Siri? That is a feat accomplished by Cletus the Slack-Jawed Yokel, of Simpsons fame. Who is telling me not to drive a car? Nobody. That's who.

We aren't creating better search algorithms fast enough to deal with this and it's already driving disengagement from the web.
Oh the algorithms are getting better in leaps and bounds over the years. Your problem really is that you are not getting the results you want. You could, of course, adjourn to the religious site of your preference. Lots of little altar boys to browse there. But no, instead you choose to vomit your ignorance on the rest of us for no reason other than google will not serve up your particular flavour of porn. Guess what. The internet is not your personal toy. Deal with it.

People underestimate, or are unaware of the coming crisis with AI.
Wrong.

They think it will be this thing that we can use to enhance old WW2 footage or to drive their car for them, but what it will do instead is just make everything fuzzy and ****.
Wrong, wrong.

AI will write and print books but they'll suck,
Wrong.

it will deepfake news broadcasts to the point that there will be no good way to transmit information,
Fantasy.
it will create newspaper sites that don't actually exist but will have hundreds of AI journalists writing for it all reporting on things that never happened.
I do not look at the existing ones, nor have I for decades. How can this affect me?
AI is about to completely eradicate our ability to discern reality on the internet, and that will spill out into the real world.
Hahaha. How cute. You think reality is on the internet. Too funny.

I'm not sure what we're going to do about this, honestly.
If you were honest, really honest, you would admit that you will do nothing. Again.

I don't think anyone is thinking about how it will affect public trust in information.
Blatant lie. There is a crapton of multidisciplinary boffins considering it.

I don't know of any solutions being developed.
Then either your imagination is lacking, you are unable to google, or you are religious. Pick one. Or all three, I guess.

One way we could potentially fix it is by developing ways to verify that no AI was involved, but it's hard to envision how that would go without being extremely privacy-invasive or potentially exploited by AI.
THE most ignorant paragraph yet.

I won't lie,
Too late.

this **** has the potential to push us back into the dark ages as a species.
That is what you desire. You really want a new dark age. Put those uppity wimmin back in th kitchen cookin bread and babies. Yeah, thats where they belong.

We could well have to go back to machine-printed books and newspapers, distributed by hand only. It may be necessary to physically dumb down our technology so that we can trust it again. Until AI is made illegal globally, there's a good chance that things will get very bad before they get better.
30+ years in the printing game here so I know this is a total load.
 
I was going to respond with a partial and much shorter version of one of Abaddon's points, which is that you can tell something if you rely on something other than the internet for some of your perception. Sure if I sit in a basement and tune in to the web all day I'll get lots of misinformation. But if I read the paper, and look around, and talk to other people, and think, then there's at least some hope that I'll be able to tell the difference. Of course we're all going to be fooled from time to time. I have a few times, and people generally correct the error. And I still like the internet, because it tells me things like how to bypass the automatic parking brake on a Honda so you can change the pads, and stuff like that.
 
I understand that he's apparently wrong, but I still don't understand what he is saying. I am trying to understand the actual mechanism he is describing. I certainly don't get Google results matching anything like the clickbait sites he is talking about, for example, so I was trying to understand what is even being referenced.

I also want to understand why any AI scenario would lead to them writing fake newspapers and things like that. The comment makes it sound like humans wouldn't have any control over it, and I don't understand the scenario (however fantastic) under which that would happen on the internet. Why is the poster acting like regular news sites would disappear? Is Associated Press being overtaken by AI in this setting, or what? And if so, how?

I'm not trying to peddle a conspiracy theory, I'm trying to understand the tech that underlies this particular dystopian vision because it sounds cool.
 
Huh. I disagree with abaddon on almost every point, if not in letter then in spirit.
Feel free to hold forth on your points of disagreement. I will not be holding my breath.

Oops, I cannot anyway, because it isn't my breath. Your god gave it to me, so it must be his, right? About the time he "ensouled" me at conception, right?

Don't like that? Fine. Participate in the thread or don't. Claiming to disagree with "everything" is NOT participating unless you can present something, anything substantial.
 
Considering the actual source of fake news these days, it seems odd indeed that AI could make it any worse. I suppose a right-wing nutcase could build an AI that wrote his material for him, but a moderate could as easily make one that checks facts.
 
I understand that he's apparently wrong, but I still don't understand what he is saying. I am trying to understand the actual mechanism he is describing. I certainly don't get Google results matching anything like the clickbait sites he is talking about, for example, so I was trying to understand what is even being referenced.
Of course. You or I or anyone could google any random term. Will our results be the same? Nope.

Next: clickbait. Not all clickbait is at the same level of malevolence. It varies from "please buy my stuff" to "some men are coming to your place". A spectrum, if you will.

In general, "clickbait" is extra links with attractive titles and pictures intended to lure you in. You might not actually pay anything, but the target website gets clicks and hence loot. An interesting example from a few years ago: the link read (approximately) "30 celebs you did not know were homeless, shock", accompanied by a photo of Matthew Perry (Chandler from Friends). There were not 30 celebs; there were 50. Matthew Perry wasn't among them anyway. And none of them were celebs. And none of them were homeless.

The next level up is the likes of "One simple trick that doctors/ophthalmologists/whoever don't want you to know". You pay 50 bucks and live forever, have perfect eyesight for life, have perfect hearing for life, etc. All nonsense, of course. Sure, it is obvious snake oil, yet people fall for it all the time.

The next level up is dark. These are people who want your bank accounts, your credit cards, even your identity. I mess with them sometimes for a lark. Submitting 100,000-150,000 apparently valid CC accounts that were all fake seemed to somehow annoy them.

I also want to understand why any AI scenario would lead to them writing fake newspapers and things like that.
You answered your own question. If the AI had control, why on earth would it bother with such nonsense?

The comment makes it sound like humans wouldn't have any control over it, and I don't understand the scenario (however fantastic) under which that would happen on the internet.
Yup. Sounds like your protagonist saw The Matrix and thought it was realistic. It isn't. Humans simply do not provide sufficient electricity to power such a matrix.

Why is the poster acting like regular news sites would disappear? Is Associated Press being overtaken by AI in this setting, or what? And if so, how?
Well that is the fantasy, no? The orgiastic end times that they want, no? The peddlers of such muck know that there are no "end times" on the way. They also know that by scaring the living crap out of the parishioners they get bums on seats and money in plates.

I'm not trying to peddle a conspiracy theory, I'm trying to understand the tech that underlies this particular dystopian vision because it sounds cool.
It is rather simple, in principle. You are on the webernets. Your computer/device is exchanging data with some server on the internet. Both of you can retain the record of that exchange or discard it. Perhaps you choose to discard it. Tough. The server down the other end can choose to keep it no matter what you do.
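To make that concrete, here is a minimal toy sketch (hypothetical server, Python standard library only) of the asymmetry: the server end can keep a record of every request no matter what the client deletes on its side.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG = []  # in a real deployment this would be a log file or database

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server records the request regardless of any client-side
        # choice (incognito mode, cleared history, etc.).
        LOG.append((self.client_address[0], self.path))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):  # silence the default stderr logging
        pass

server = HTTPServer(("127.0.0.1", 0), LoggingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The "user" fetches a page, then (let's say) wipes their local history...
urllib.request.urlopen(f"http://127.0.0.1:{port}/private-page").read()
server.shutdown()

# ...but the server's copy of the record is still there.
print(LOG)  # [('127.0.0.1', '/private-page')]
```

Nothing the client does after the fact reaches into that `LOG` list; only the server operator decides how long it lives.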
 


All right, there are two factors involved in the hypothesis. One factor is how the AIs are developed and optimized. The other is what the AIs in question are optimized for.

I normally don't suggest videos for background information, but the best concise explanation of how AI algorithms are developed I know of just happens to be a video:



The AIs are a bit like the "jerkass wishing genies" prevalent in fantasy fiction. They don't necessarily do what the developer has in mind for them to do. They do exactly what they're "told" to do instead, which in this case means whatever optimizes the performance measurement applied when making them. For the reasons described in the video, performance measures are things that are easy to measure and directly related to profits, such as how long a user stays on a news site, rather than things that are hard to measure and only indirectly related to profits if at all, such as the editorial fairness of the news presented.

Furthermore, the iterative process by which the AIs evolve includes people, and society, in the feedback loop. It doesn't matter which is evolving, or how, as long as the result is achieved. That is to say, if the best way to maximize the time users spend on a news site turns out not to be "tailor the news to the users' interests" as might have been the original idea, but more like "present whatever news influences the user to consume more news" (for instance, by presenting a compelling conspiracy theory that motivates them to read more related theories) then that will be what happens. News algorithms can thus (and many believe, already have) become optimized in ways that maximize public anxiety, conflict, and ill will.
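A toy simulation (made-up stories and numbers, not any real recommender) of how that plays out: an engagement-maximizing selector that knows nothing about content quality drifts toward whatever gets the most clicks.

```python
import random

random.seed(0)

# Hypothetical catalogue: each story has a "true" click rate.
# In this toy model, sensational items keep users clicking.
stories = {
    "local council budget report":  0.05,
    "celebrity diet rumour":        0.25,
    "compelling conspiracy theory": 0.60,
}

clicks = {s: 1 for s in stories}  # optimistic starting estimates
shows = {s: 2 for s in stories}

for _ in range(5000):
    # Epsilon-greedy: usually show the story with the best observed
    # click-through rate; occasionally explore something else.
    if random.random() < 0.1:
        pick = random.choice(list(stories))
    else:
        pick = max(stories, key=lambda s: clicks[s] / shows[s])
    shows[pick] += 1
    if random.random() < stories[pick]:
        clicks[pick] += 1

best = max(stories, key=lambda s: shows[s])
print(best)  # the optimizer settles on the conspiracy theory
```

The point is that nobody programmed "show conspiracy theories"; the objective was only "maximize clicks", and the conspiracy item simply happens to maximize it.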

That's not much different than what newspaper and TV news editors did to sensationalize the news, but the AIs can do it without anyone intending that or even being aware of the effects it's having.

As far as I can tell, that's the current basis for the redditor's projections. That post takes the idea farther, looking ahead to the possibility of AIs not just selecting stories but writing them, or modifying existing stories arbitrarily, so that for instance if there is an honest, trusted Walter Cronkite journalist out there, it won't help, because AIs will modify those stories to get more clicks, or just put his name and likeness on their own made-up stories to get more clicks. I'm not aware of anything like that actually happening yet, and it appears to me that there are software methods (related to encryption and authentication) that would prevent it.
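As a sketch of those authentication methods: a newsroom could sign each story so that any bot-modified copy fails verification. The example below uses an HMAC from the Python standard library purely as a stand-in; a real scheme would use public-key signatures (e.g. Ed25519) so readers verify without ever holding the secret key. The key and story are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical signing key held only by the newsroom.
NEWSROOM_KEY = b"hypothetical-newsroom-signing-key"

def sign_story(text: str) -> str:
    """Produce an authentication tag for a story."""
    return hmac.new(NEWSROOM_KEY, text.encode(), hashlib.sha256).hexdigest()

def verify_story(text: str, tag: str) -> bool:
    """Check that the story is exactly what the newsroom signed."""
    return hmac.compare_digest(sign_story(text), tag)

story = "Walter Cronkite reports: the town council met on Tuesday."
tag = sign_story(story)

print(verify_story(story, tag))                         # True
print(verify_story(story + " (edited by a bot)", tag))  # False
```

Any alteration, however small, changes the tag, so a tampered copy can't pass itself off as the original as long as readers check signatures.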
 

Thank you! This is exactly what I was looking for.

What the inventive poster was actually talking about is clearer to me now. It is unlikely to become clear enough for me to attempt writing speculative fiction about such a setting, though. And based on what you guys say, it's probably too far-fetched to make for a good setting anyway.
 
I also want to understand why any AI scenario would lead to them writing fake newspapers and things like that. The comment makes it sound like humans wouldn't have any control over it, and I don't understand the scenario (however fantastic) under which that would happen on the internet. Why is the poster acting like regular news sites would disappear? Is Associated Press being overtaken by AI in this setting, or what? And if so, how?
Regular news sites wouldn't disappear and AP is not going to be overtaken by AI. Probably.

But I can see how GPT-like text generators might be used by malicious actors to produce large quantities of semi-plausible fake stories on large numbers of real-looking fake news sites, with such stories going viral on social media and, if those fake news sites start referencing each other, ranking high in Google search results.
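The "fake sites referencing each other" part is classic link-farm manipulation of PageRank-style ranking. A toy computation (made-up site names, simplified PageRank) shows how a cluster of pages all pointing at one fake site can outrank an honestly linked site:

```python
# Tiny web graph: four "farm" pages exist only to link to fake-news,
# while honest-news has one genuine reciprocal link from a blog.
links = {
    "honest-news": ["other-blog"],
    "other-blog": ["honest-news"],
    "fake-news": ["farm1", "farm2", "farm3", "farm4"],
    "farm1": ["fake-news"],
    "farm2": ["fake-news"],
    "farm3": ["fake-news"],
    "farm4": ["fake-news"],
}

pages = list(links)
rank = {p: 1 / len(pages) for p in pages}
d = 0.85  # standard damping factor

# Power iteration: repeatedly redistribute rank along the links.
for _ in range(50):
    new = {}
    for p in pages:
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - d) / len(pages) + d * incoming
    rank = new

print(sorted(rank, key=rank.get, reverse=True)[0])  # fake-news wins
```

Real search engines apply many countermeasures against exactly this trick, so take it as an illustration of the incentive, not a working attack.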

If a country wants people in another country to be hopelessly misinformed it would no longer need large numbers of people writing fake articles.
 
Hmm, to make this fiction…

Perhaps a company develops a line of AI-based digital personal assistants. These become popular and do things like gather news stories of interest to their user, find ads for sales, schedule appointments, etc. Perhaps they share information to group like-minded individuals, something like Facebook suggesting friends or LinkedIn suggesting connections. The AI algorithms, set to please their users and to increase the amount of content those users consume, amplify the "news bubble" effect and end up entirely unintentionally radicalizing large numbers of people: creating subgroups that get the same errant news stories, suggesting links and scheduling meetings among conspiracy theorists, and linking to gun sales and similar guff for those with a penchant for violence.

Needs work, but you could make a sci-fi story on that basis, I think.


 
There is no "AI". There are programs running algorithms based on "big data". No intelligence involved. Because there is no consciousness involved. The problem with the internet (or better, the Web) is that by now every village idiot can access it. In the late 90s when I first accessed it, that wasn't the case. Only nice, smart people following netiquette were around. Back then you could have learned about my favorite books, music and so on. On my own website. These days you can learn nothing about me on the internet because I don't want morons to know about it.
 
There is no "AI". There are programs running algorithms based on "big data". No intelligence involved. Because there is no consciousness involved.
More like exhaustive pattern recognition than anything else.
The problem with the internet (or better, the Web) is that by now every village idiot can access it. In the late 90s when I first accessed it, that wasn't the case. Only nice, smart people following netiquette were around. Back then you could have learned about my favorite books, music and so on. On my own website. These days you can learn nothing about me on the internet because I don't want morons to know about it.

Yeah, I remember when we used to ask "Is it September already?" and then AOL started and had all their new newsgroup users directed to alt.best-of-internet, which got swamped by random guys wanting to talk about cars or pigs.
Cliff Stoll's book "The Cuckoo's Egg" describes the early situation well: people expected people to behave, and so on. I mean, here's a hippie Grateful Dead fan admining university computers at Berkeley. Also featuring Blue Bubble.
Wandering, sorry.
 
