Podcast: The AI Bubble Is Bursting

New podcast I stumbled into called “Better Offline”. Just finished the episode called “The AI Bubble Is Bursting”, which is pretty good. Ironically enough, I’d bet dollars to donuts AI did the transcription, because it messed up more than a few words. Quotes are from the second in the series. I just realized I listened out of order… whoops. They are in proper order below.

Some bits I found interesting, like the fact that no one can say if it is actually profitable. And side note, this is from the transcription, which is wonky.

In October twenty twenty three, Richard Windsor, the research director at large of Counterpoint Research, which is one of the more reliable analyst houses, hypothesized that OpenAI’s monthly cash burn was in the region of one point one billion dollars a month, based on them having to raise thirteen billion dollars from Microsoft, most of it, as I noted, in credits for its Azure cloud computing service to run their models.

It could be more, it could be less. As a private company, only investors and other insiders can possibly know what's going on in OpenAI. However, four months later, Reuters would report that OpenAI made about two billion dollars in revenue in twenty twenty three, a remarkable sum that, much like every other story about OpenAI, never mentions profit. In fact, I can't find a single reporter who appears to have asked Sam Altman about how much profit OpenAI makes, only breathless hype with no consideration of its sustainability.

Even if OpenAI burns a tenth of Windsor's estimate, about one hundred million dollars a month, that's still far more money than they're making.

“Salesforce chief financial officer Amy Weaver said in their most recent earnings call that Salesforce was not factoring in material contribution from Salesforce's numerous AI products in its financial year twenty twenty five.

Graphics software company Adobe's shares slid after its last earnings, as the company failed to generate meaningful revenue from its masses of AI products, with analysts now worried about its ability to actually monetize any of these generative products. ServiceNow claimed in its earnings that generative AI meaningfully contributed to its bottom line.”

And I do love me a good ending rant, lol!

“And the AI revolution, despite its specious hype, is not really for us. It's not for you and me. It's for people like Satya Nadella of Microsoft to claim that they've increased growth by twenty percent. It's for people like Sam Altman to buy another fucking Porsche. It's so that these people can feel important and be rich, rather than improving society at all. Maybe I'm wrong, maybe all of this is the future, maybe everything will be automated, but I don't see the signs. This doesn't feel much different to the metaverse. There's a product, but in the end, what's it really do? Just like the metaverse, I don't think many people are really using it. All signs point to this being an empty bubble. And I'm sure you're sick of this too. I'm sure that you're sick of the tech industry telling you the future's here when it's the present and it fucking sucks.”

AI in production

From Wil Wheaton’s tumblr I found this interesting story on using AI in production, which I find pretty accurate. It is really difficult to art direct, period. We actually made the call to walk away from some projects that used AI because we were worried it would go off the rails and we wouldn’t be able to hit deadline. I wish I could find the original text, but DuckDuckGo and Google both just throw up a lot of unrelated AI stuff.

"the future of the internet: a garbage dump"

This week in AI, brought to you by “Is it too early to have a drink?” Great article by Erik Hoel called “Here lies the internet, murdered by generative AI”. Read the whole piece; it’s a real good account of what is happening to the internet right now in real time. I was looking for info on a new printer and the amount of AI trash is insane. Here are way too many pull quotes.

“The amount of AI-generated content is beginning to overwhelm the internet. Or maybe a better term is pollute. Pollute its searches, its pages, its feeds, everywhere you look. I’ve been predicting that generative AI would have pernicious effects on our culture since 2019, but now everyone can feel it.

...

What, exactly, are these “workbooks” for my book? AI pollution. Synthetic trash heaps floating in the online ocean. The authors aren’t real people, some asshole just fed the manuscript into an AI and didn’t check when it spit out nonsensical summaries. But it doesn’t matter, does it? A poor sod will click on the $9.99 purchase one day, and that’s all that’s needed for this scam to be profitable since the process is now entirely automatable and costs only a few cents.

...

Now that generative AI has dropped the cost of producing bullshit to near zero, we see clearly the future of the internet: a garbage dump.

...

This isn’t what everyone feared, which is AI replacing humans by being better—it’s replacing them because AI is so much cheaper. Sports Illustrated was not producing human-quality level content with these methods, but it was still profitable.

...

All around the nation there are toddlers plunked down in front of iPads being subjected to synthetic runoff, deprived of human contact even in the media they consume. There’s no other word but dystopian. Might not actual human-generated cultural content normally contain cognitive micro-nutrients (like cohesive plots and sentences, detailed complexity, reasons for transitions, an overall gestalt, etc) that the human mind actually needs? We’re conducting this experiment live. For the first time in history developing brains are being fed choppy low-grade and cheaply-produced synthetic data created en masse by generative AI, instead of being fed with real human culture. No one knows the effects, and no one appears to care.”

This week in AI.

All AI news is bad news. That pretty much sums it up. I won’t even get into the video aspect yet; that will need its own post.

Instacart is using AI art. It's incredibly unappetizing.

“The text for the ingredients and instructions for the above recipes, meanwhile, is also generated by AI, as disclosed by Instacart itself: "This recipe is powered by the magic of AI, so that means it may not be perfect. Check temperatures, taste, and season as you go. Or totally switch things up — you're the head chef now. Consult product packaging to confirm any dietary or nutritional information which is provided here for convenience only. Make sure to follow recommended food safety guidelines."”


'Rat Dck' Among Gibberish AI Images Published in Science Journal

“The open-access paper explores the relationship between stem cells in mammalian testes and a signaling pathway responsible for mediating inflammation and cancer in cells. The paper’s written content does not appear to be bogus, but its most eye-popping aspects are not in the research itself. Rather, they are the inaccurate and grotesque depictions of rat testes, signaling pathways, and stem cells.

The AI-generated rat diagram depicts a rat (helpfully and correctly labeled) whose upper body is labeled as “senctolic stem cells.” What appears to be a very large rat penis is labeled “Dissilced,” with insets at right to highlight the “iollotte sserotgomar cell,” “dck,” and “Retat.” Hmm.”


Microsoft and OpenAI warn state-backed threat actors are using generative AI en masse to wage cyber attacks

Russian, North Korean, Iranian, and Chinese-backed threat actors are attempting to use generative AI to inform, enhance, and refine their attacks, according to a new threat report from Microsoft and OpenAI.

The group’s use of LLMs reflects the broader behaviors being used by cyber criminals according to analysts at Microsoft, and overlaps with threat actors tracked in other research such as Tortoiseshell, Imperial Kitten, and Yellow Liderc.

As well as using LLMs to enhance their phishing emails and scripting techniques, Crimson Sandstorm was observed using LLMs to assist in producing code to disable antivirus systems and delete files in a directory after exiting an application, all with the aim of evading anomaly detection.

Good lord, AI is now in Squarespace.

“Write a paragraph about digital versus film photography using dark humor.” The web is gonna be nothing but this shit in under 2 months.

“Film versus digital photography - a battle as old as time itself, or at least as old as film and digital cameras. It's like that eternal struggle between black and white movies and Technicolor marvels. Film photography, with its pretentiousness and hipster vibes, making every snapshot look like a vintage masterpiece. You know you're in the presence of a true artist when they whip out their clunky film camera and start mumbling about the joy of waiting weeks to get their photos developed. On the other hand, digital photography is like a shameless selfie addict on Instagram, snapping away without a care in the world. Instant gratification, baby! Who needs soul and depth when you can slap on a filter, call it art, and gather a hundred likes before lunch? So, take your pick - the slow, "real" art of film or the instantaneous, self-gratifying world of digital. Both will capture your memories, but only one will make you look superior while doing it.”

The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content

Annnnd it begins. AI trained on AI spins out and becomes totally worthless. That, plus the internet being flooded with AI-generated content, will be interesting.

“Specifically looking at probability distributions for text-to-text and image-to-image AI generative models, the researchers concluded that “learning from data produced by other models causes model collapse — a degenerative process whereby, over time, models forget the true underlying data distribution … this process is inevitable, even for cases with almost ideal conditions for long-term learning.”

“Over time, mistakes in generated data compound and ultimately force models that learn from generated data to misperceive reality even further,” wrote one of the paper’s leading authors, Ilia Shumailov, in an email to VentureBeat. “We were surprised to observe how quickly model collapse happens: Models can rapidly forget most of the original data from which they initially learned.”

In other words: as an AI training model is exposed to more AI-generated data, it performs worse over time, producing more errors in the responses and content it generates, and producing far less non-erroneous variety in its responses.
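
For the curious, here is a toy sketch of that feedback loop (my own illustration, not the paper's actual experiment). The "model" is just a Gaussian fit to the data, each generation trains only on samples drawn from the previous generation's fit, and the clipping step is a stand-in for a generative model under-sampling rare, tail-end data.

```python
import numpy as np

# Toy illustration of "model collapse": each generation trains only on
# data generated by the previous generation's model. Here the "model"
# is just a Gaussian (mean, std) fit to the data, and the clipping step
# mimics a generative model under-representing rare, tail-end samples.

rng = np.random.default_rng(42)

# Generation 0: "real" data drawn from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=5_000)

for generation in range(15):
    # "Train": estimate the model's parameters from the current data.
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

    # "Generate": the next generation's training set comes entirely
    # from the fitted model, with the tails under-sampled.
    synthetic = rng.normal(loc=mu, scale=sigma, size=5_000)
    data = synthetic[np.abs(synthetic - mu) < 2.5 * sigma]
```

Run it and the fitted std shrinks every generation, from about 1.0 down to roughly half by the end; the model progressively "forgets" the spread of the original data, which is the tail-loss behavior the paper describes, just in miniature.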

Photoshop Beta

So we have been testing the crap out of the new AI stuff in the Photoshop Beta, and there is some pretty interesting stuff going on for sure. The “Remove” tool is the biggest improvement since the Healing Brush was introduced, what, almost 20 years ago now?

This is just the first swipe. While not perfect, it’s getting us like 80% of the way there.

The new AI fill was just added this AM, and I just went to extend the BG of an AI image I generated for a recording of one of my bands. While it’s mostly screaming gibberish, it’s a start and could give us something to poke and prod around with. Need more testing, and I need more coffee.

Between this and getting AI running in Houdini, there are some very interesting times ahead for sure. Getting that all set up is the next “Tinker Time” for me.

Today’s AI Prompt

“professional drumset in a misty forest at dawn photorealistic”

I kinda knew it would do terribly with a drumset, though. I figured with the Chad Smith post this would be fun.

Today’s AI Prompt

“coffee and cigarettes in a diner early morning light photo”

Today’s AI Prompt

“oregon coast photo sunset with cliffs”