The horizon is not so far as we can see, but as far as we can imagine

AI Will Degenerate In Much The Same Way Google Did

If you’re old enough to remember search before and after Google, you remember how good Google search was at the beginning.

Google used links to rank what to show to searchers. In the old web, before Google, every link was, in essence, an endorsement. We linked to what we thought was good, what we thought other people should read.

It was a pristine “state of nature” system.

But the minute Google became dominant in search, everyone started manipulating links and metadata and everything else to get Google to send them more traffic. Links were no longer organic, no longer endorsements, but attempts to manipulate the algo. The more that was true, the more it became necessary to engage in “search engine optimization”, and the more algorithmic search engines sucked. Of course, Google also self-sabotaged, by trying to optimize search results so that Google would make the most money possible.

I recently read a regular traveler saying he never reads travel blogs and magazines any more, because AI is so much better. I’m sure he’s right.

But AI is better because it’s reading all the travel blogs and magazines, sorting and summarizing. AI being better, readership is cratering, and so the blogs and magazines will slowly die off. Travel is one of those activities where you need relatively recent information: knowing what was great to stay at years ago isn’t very helpful. So, as the blogs and magazines die, the AI’s results will slowly get worse, until they’re crap scraped from the official websites of hotels, museums and other travel destinations, since that’s all that will remain.

AI, in other words, in this and many similar ways, will destroy the ecosystem it needs in order to be good, just as Google did.

This is “eating the seedcorn/destroying the soil’s fertility” type of stupidity. If you destroy an ecosystem you’re dependent on (and we’re all dependent on some ecosystems), then whatever you’re doing is only viable in the short term.

So enjoy AI as an alternative to search for now (but always check its sources, because it does hallucinate), but understand that this is a moment in time, a moment which is destroying what makes it possible.

This blog runs on donations and subscriptions from readers. It’s free, but not free to produce. If you value it, please give.


24 Comments

  1. Jefferson Hamilton

    “I recently read a regular traveler saying he never reads travel blogs and magazines any more, because AI is so much better. I’m sure he’s right.”

I’m not. You can’t trust what “AI” (it’s not really AI) tells you. Sure, most of the time it may be correct, but what about when it isn’t, which will happen eventually, and probably a lot more often than with a human author?

  2. “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” —Dune

In many ways AI resembles the common propaganda scheme where those with conflicts of interest co-opt a seemingly neutral third party to spread their propaganda.
Optimistically, society won’t fall for it regarding AI, because an AI third party elicits a different emotional response than humans do. Pessimistically, they will fall for it just like they have for the military-spy complex in the media, pharma funding the FDA, medical journals and doctors, and every other industry.

The other two primary uses of AI are to gather and analyze your data in order to convince you to buy shit you don’t need, and to increase sales of a product by attaching AI to it.

    History keeps rhyming with itself over and over.
    Facebook and company succeeded because they could cannibalize real friendships and social connection. They’ve degraded those to such a degree they now mostly rely on addiction and bandwagon effects.

  3. Soredemos

    It’s hard to degenerate when there was never much there to begin with.

At best, in practice, it’s a glorified autocorrect that is only useful about 50% of the time. Using it to write large blocks of prose reveals how incredibly limited and formulaic it is (maybe many people don’t notice this because of how increasingly post-literate society is becoming).

For any sort of complex technical job, I absolutely wouldn’t trust the end result, and in fact neither do many technical professionals; to get anything actually useful you need to carefully review and edit the work, at which point why didn’t you just have a professional write it in the first place?

    It’s horrible for art, anything it makes is instantly identifiable as fake.

    It can make cute short songs, because so much of pop music is lazily engineered hook choruses anyway.

    A rare area where it actually is useful is upscaling and frame generation for higher frame rates in video games, where it just has to convincingly copy previous frame data. The data set is handed to it on a platter (and at low enough resolutions or frame rates it has insufficient data and shits itself).

    Is ‘AI’ getting better? Well, it’s getting more refined to suck slightly less. I’m guessing there’s a very hard ceiling on what any of this tech can do.

    Also at the end of the day automated algorithms aren’t artificial intelligence, even in the loosest possible sense. To call it such is marketing.

  4. Ian Welsh

    I don’t use it for writing, but I find AI far better as a search engine than Google or any of the others I’ve tried. Of course one has to check the sources, but the sources are provided, and they are usually the right sources, where with search engines the sources I need are often not provided in the first few pages. Search is essentially useless.

I’ve been told that it’s pretty good for a lot of coding tasks, too, but that’s unrelated to this post. (I haven’t confirmed; I stopped programming in ’98 and am hopelessly out of date and out of practice.)

  5. adrena

My physician used AI to provide a summary of our meeting.

  6. Purple Library Guy

It occurs to me that both of these problems, the search problem and the AI problem, exist largely because the internet is being operated on a capitalist basis. So, for instance, if websites were not desperate for more clicks and views so as to make some money or at least defray web hosting costs, they wouldn’t need to game the search. And again, if travel writers did not have to worry about money from traffic, a reduction of traffic from people shifting to using AI (derived from what they wrote) would not stop them from writing, as long as there were enough readers left to feel like a community.

    I’ve long thought that one thing that would make the internet better would be governments offering free web hosting to any citizen who wanted it, with the proviso that they didn’t use ads. They could still sell merch, have a Patreon or whatever, but no advertising. It would cost a lot of money, but it would be worth it by reducing a host of problems and encouraging creativity.

  7. Soredemos

I wouldn’t trust any physician, engineer, or programmer who relied on algorithms instead of just doing the damn job they’re paid to do. If I were some sort of project lead, I would fire any employee I caught using it. Long-term reliance on it for even annoying, tedious stuff is going to atrophy actual skills.

  8. T

    “because AI is so much better. I’m sure he’s right.”

Really? Maybe I’m just not recognizing the so-called good AI. All I recognize as AI is so, so, so bad. It reads like a smart 4th-grade kid trying to fake his way through a book report on a book he didn’t read. It’s so repetitive, saying the same thing many different ways, always seemingly leading to a point but never delivering, wasting 20 paragraphs to say what could be said in a couple of sentences.

Often it gets the question wrong and gives an irrelevant answer. It seems like it is searching through a bunch of FAQ webpages (that you have probably already visited looking for an answer) and gives an extremely verbose non-answer.

    Color me unimpressed. Your mileage may vary.

  9. Feral Finster

    AI trained on AI is notoriously bad.

  10. I wouldn’t trust any physician, engineer, or programmer who relied on algorithms instead of just doing the damn job
    ——
    What is the definition being used for algorithm and doing the job?

    Patient X has cholesterol over Y, prescribe statin.
    Patient X reaches age Y, do colonoscopy.
A chunk of professional work consists of this thoughtless following of an algorithm. The problem isn’t necessarily the use of an algorithm. It’s that it results in people treating it as Gospel and conceding all aspects of critical thinking and questioning. This problem is even worse considering that the people who create these algorithms are rich, powerful people with psychopathic tendencies.

  11. Carborundum

    A couple of use cases that I have found useful are:

1) Feeding LLMs a set of literature and then querying them about it interactively. This is, in general, less susceptible to hallucination. It’s particularly interesting to get snippy with some of the models about the epistemology of their conclusions: some fold immediately, while others can get modestly insistent. One thing they don’t seem to be good at is identifying unspoken big issues of contention running around in the background of the literature.

    2) Getting LLMs to extract the main takeaways of one’s own writing for various audiences. Generally decent, though demoralizing – very lowest common denominator (high fidelity modelling there, given what I’ve seen audiences conclude / do over the years).

    I will be very interested to see how generalist managers react to this tech. In my experience, most are incurious and heavily reliant on face validity, which this tech seems to be pretty good at delivering.

  12. mago

All bow down to the techno gods.
    Artificial Intelligence and creators are your saviors and superiors.

Next up: transhumanism.
    Distortions upon distortions.

    The order of the universe is natural, cyclic and beyond mental fabrication.

    Mechanistic thinking leads to hubris, destruction of the natural order and the mess we’re in today.

    Gimme a cheeseburger.
    Living in the USA. . .
    hey hey hey

  13. KT Chong

    Yeah, maybe. For now, I’m enjoying wisdom from AI Elon Musk:

    https://www.youtube.com/watch?v=aXEvG9eKq7A

  14. KT Chong

Oh yeah, and dropping DeepSeek and one AI after another to crash the US stock market, just three days after Trump announced the Stargate Initiative: that was a flex and a power move.

    A couple Western analysts already got a sense of what’s coming…

    https://www.youtube.com/watch?v=wMxXqDNu9_w

    https://www.youtube.com/watch?v=SwHnJn0ubYs

  15. KT Chong

Yesterday I asked Perplexity AI to predict major breakthroughs and innovations from China over the next 18 months, one per month, based on current trends and available information (I specifically asked Perplexity to also reference Chinese articles from inside China, if available, for anything that has gone unreported or under-reported in the West), and to arrange the predictions in the order they might occur:

    1. Deployment of over 4.5 million 5G base stations across China (Q2 2025)

    2. Introduction of advanced humanoid robots for industrial and service applications (Q3 2025)

    3. Launch of a large-scale AI model surpassing GPT-4 in capabilities (Q3 2025)

    4. First flight mission of the Qingzhou cargo spacecraft (September 2025)

    5. Unveiling of next-generation EV batteries with significantly improved energy density and charging speeds (Q4 2025)

    6. Introduction of domestically produced 5nm chips by Huawei and SMIC (Q4 2025)

    7. Demonstration of a quantum computer outperforming classical supercomputers in more complex tasks (Q1 2026)

    8. Completion of the Jilin-1 constellation of 130 AI-enabled satellites (Q1 2026)

    9. Breakthrough in high-efficiency solar cell technology, pushing conversion rates to new records (Q2 2026)

    10. Announcement of novel gene-editing techniques for disease treatment (Q2 2026)

    11. Deployment of AI-enabled satellites for advanced space debris avoidance (Q3 2026)

    12. Significant advancements in AI-powered healthcare diagnostics and personalized medicine (Q3 2026)

    13. Development of new biomanufacturing processes for sustainable materials production (Q4 2026)

    14. Unveiling of early 6G technology prototypes (Q4 2026)

    15. Launch of a medium-high orbit satellite for quantum communication network expansion (Q4 2026)

    16. Breakthrough in high-temperature superconducting materials (Q1 2027)

    17. Significant progress in advanced packaging technologies for heterogeneous integration (Q1 2027)

    18. Development of novel nanomaterials for energy storage and environmental applications (Q2 2027)

    The rate of technological innovations and breakthroughs in China is ACCELERATING.

Later, I asked DeepSeek to evaluate Perplexity’s predictions. DeepSeek went through each one; it agreed with most of them but adjusted the timelines for a few, though all of them still fall within 2027.

I think that a lot of Americans are in denial right now, but by the end of 2027 many Americans will come to the realization and acceptance that China has overtaken America to become number one in science and technology, not just in one or two or a few areas, but in almost EVERYTHING.

  16. KT Chong

P.S. China will have more than one breakthrough every month, but I specifically asked Perplexity to limit it to one per month, and to give me the one most likely to be reported in the West as “groundbreaking”.

  17. Soredemos

    @KT Chong

The future is doubtless Chinese, but I’m not sure why you think asking what is by definition a bullshit generator to predict the future demonstrates anything. Especially after asking a second bullshit generator to assess the first one’s made-up results.

  18. different clue

    ” AI trained on AI is notoriously bad. ”

    And AI trained on AI-trained AI will be notoriously worse.

  19. different clue

” I think that a lot of Americans are in denial right now, but by the end of 2027 many Americans will come to the realization and acceptance that China has overtaken America to become number one in science and technology, not just in one or two or a few areas, but in almost EVERYTHING. ”

    If so, how will Americans handle it?

    Now would be a good time for an American Okayness Ordinaryism movement to get itself started and organized and ready to at least try to fill all the psychomental political vacuums which start opening up. A few tens of millions of American Okayness Ordinarians could then try shifting ” America” to think of National Survival instead of National Greatness, because right now American Survival seems unlikely over the long term. MAOKA. Make America OK Again. White letters on a little red hat.

    If the concept could be anchored into the minds and outlooks of several tens of millions of Americans for starters, then such a mass of American Okayness Ordinarians could begin launching different political parties and concepts to see which ones pass various Darwin tests.

    New Deal America Party ( NDAP)? Make America New Deal Again? ( MANDA)?

    National Green Survival Party? Survival Greenism In One Country?
    Make America Green Again? ( MAGreenA)?

    etc.?

    And as to . . . ” 1. Deployment of over 4.5 million 5G base stations across China (Q2 2025) ” . . . I remember suggesting in various places letting China do that. Let the ChinaGov marinate the Chinese population in 5G radiation over the next 50 years and see what the population-wide health effects are. Meanwhile, ban 5G of any kind in America so America can be the “control population” in this grand radiological marination experiment. But nobody listens to me.

  20. KT Chong

I’ve already been using AI to check my work and math (and it’s very specialized math) and to make recommendations (e.g., “if I like this and that, can you recommend something that you think I’ll also like?”, “this script has stopped working, can you check for me what’s wrong and fix it?”, etc.), and so far the AI has worked beautifully every time. So I don’t think it’s a bullshit generator.

At the very least, it’s a really, really good personal assistant.

  21. Ian Welsh

    AI for search is “check the sources”, but it’s a better search engine than anything else right now. It does other things fairly well, but I’d be careful of using it if I didn’t know how to do the work myself and thus couldn’t run a quick or detailed reality check on it.

Someone linked its estimate of GDP losses for Canada with 25% tariffs, and it was good work, but I know enough about elasticity to check its work. I also know that it didn’t look into second-order effects, though it noted that and could have been asked to do so.

  22. KT Chong

I’m kind of a perfectionist at my work, so I had always painstakingly double- and triple-checked my work for math, grammar mistakes, sentence flow, etc. Seriously, AI has saved me so much time because now I can just let AI proofread my work and reports. That’s it.

  23. Ian Welsh

    Huh. That’s a good idea actually. I can’t proofread my own work unless some time has gone by and that doesn’t work for blogging. I’ll give AI a try for proofing.

  24. Carborundum

    Near as I can see it, the bold type summary is essentially that LLMs are good at reading the trade press and stack overflow and summarizing it literately, which makes it seem smart to the normies. (This is, admittedly, more than most of them can do themselves. Fair play, I guess.)

    Where things currently fall down is going from stuff that relatively large numbers of people have talked about a lot to things like generalizing from first principles or spotting significant situational differences (e.g., capable of producing a generalized model of trade elasticity, but not adequately dealing with the implications of a high fraction of that trade being crude – notoriously inelastic, high fraction of US refining capacity not being able to substitute, etc.).
