This week in AI.

All AI news is bad news. That pretty much sums it up. I won’t even get into the video aspect yet; that will need its own post.

Instacart is using AI art. It's incredibly unappetizing.

“The text for the ingredients and instructions for the above recipes, meanwhile, is also generated by AI, as disclosed by Instacart itself: ‘This recipe is powered by the magic of AI, so that means it may not be perfect. Check temperatures, taste, and season as you go. Or totally switch things up — you're the head chef now. Consult product packaging to confirm any dietary or nutritional information which is provided here for convenience only. Make sure to follow recommended food safety guidelines.’”


'Rat Dck' Among Gibberish AI Images Published in Science Journal

“The open-access paper explores the relationship between stem cells in mammalian testes and a signaling pathway responsible for mediating inflammation and cancer in cells. The paper’s written content does not appear to be bogus, but its most eye-popping aspects are not in the research itself. Rather, they are the inaccurate and grotesque depictions of rat testes, signaling pathways, and stem cells.

The AI-generated rat diagram depicts a rat (helpfully and correctly labeled) whose upper body is labeled as “senctolic stem cells.” What appears to be a very large rat penis is labeled “Dissilced,” with insets at right to highlight the “iollotte sserotgomar cell,” “dck,” and “Retat.” Hmm.”


Microsoft and OpenAI warn state-backed threat actors are using generative AI en masse to wage cyber attacks

Russian, North Korean, Iranian, and Chinese state-backed threat actors are attempting to use generative AI to inform, enhance, and refine their attacks, according to a new threat report from Microsoft and OpenAI.

These groups’ use of LLMs reflects broader behaviors seen among cyber criminals, according to analysts at Microsoft, and overlaps with threat actors tracked in other research, such as Tortoiseshell, Imperial Kitten, and Yellow Liderc.

As well as using LLMs to enhance its phishing emails and scripting techniques, Crimson Sandstorm was observed using LLMs to help produce code that disables antivirus systems and deletes files in a directory after an application exits, all with the aim of evading anomaly detection.