FREE WHITEWATER

Daily Bread for 10.16.25: Large Language Models and Whitewater (Or You Are What You Put In)

Good morning.

Thursday in Whitewater will be mostly sunny with a high of 68. Sunrise is 7:09 and sunset is 6:10 for 11 hours 1 minute of daytime. The moon is a waning crescent with 22.1 percent of its visible disk illuminated.

Whitewater’s Community Development Association meets at 5:30 PM.

On this day in 1780, the Great Hurricane of 1780 ends after six days, having killed between 20,000 and 24,000 residents of the Lesser Antilles:

The hurricane struck Barbados likely as a Category 5 hurricane, with one estimate of wind gusts as high as 200 mph (320 km/h), before moving past Martinique, Saint Lucia, and Sint Eustatius, and causing thousands of deaths on those islands. Coming in the midst of the American Revolution, the storm caused heavy losses to the British fleet contesting for control of the area, significantly weakening British control over the Atlantic. The hurricane later passed near Puerto Rico and over the eastern portion of Hispaniola, causing heavy damage near the coastlines. It ultimately turned to the northeast and was last observed on October 20 southeast of Atlantic Canada. [Citations omitted]


Matt Levine writes a newsletter, Money Stuff, on American finance for Bloomberg, and one doesn’t have to be a financier to appreciate the depth of his insight. Yesterday’s edition discussed the limits of OpenAI’s ChatGPT and other generative AI enterprises.

Here’s Levine on the contrast between ambitions for ChatGPT in 2019 and the reality in 2025:

There’s a famous Sam Altman interview from 2019 in which he explained OpenAI’s revenue model:

The honest answer is we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue. We have made a soft promise to investors that once we’ve built this sort of generally intelligent system, basically, we will ask it to figure out a way to generate an investment return for you. [audience laughter] It sounds like an episode of Silicon Valley, it really does, I get it. You can laugh, it’s all right. But it is what I actually believe is going to happen.

Levine continues:

It really is the greatest business plan in the history of capitalism: “We will create God and then ask it for money.” Perfect in its simplicity. 

I began this section with a jokey maximalist vision of AI, “create God,” “an omniscient superintelligence,” that sort of thing. The jokey minimalist vision of AI is probably “ChatGPT is a blurry JPEG of the web”: Modern AI systems are approximately a synthesis of all human knowledge and communication, but given the way computers work, that means especially a synthesis of the internet, which is where you get the bulk of machine-readable human knowledge and communication. Ryan Broderick writes: “Think of ChatGPT as a big shuffle button of almost everything we’ve ever put online.” I once wrote about asking ChatGPT to pick stocks:

If you ask a modern publicly available large language model which stocks to buy, it will in some sense draw on all of human knowledge and its own powerful reasoning capacity to tell you which stocks to buy. But, among all of human knowledge, it might give extra weight to the knowledge on Reddit. And the knowledge on Reddit about what stocks to buy is “meme stocks.”

You can apply similar reasoning here. In a science fiction story, if you invented a superintelligent robot and asked it how to make money, it might come up with cool never-before-seen ideas, or at least massive fun market manipulation. But in real life, if you train a large language model on the internet and ask it how to make money, it will say “advertising, affiliate shopping links and porn.” That’s the lesson the internet teaches!

See Matt Levine, Revenue Model, Bloomberg’s Money Stuff (October 15, 2025).

Levine highlights the problem for OpenAI (and others): the ambition is a human version of divine omniscience — knowledge of all possible events and facts — but the method is to rely on what humans have written, mostly on the web. The limitation is obvious. There’s much that generative AI can do, but large language models are constrained by, sadly, all-too-human writings.

And so, and so, Levine’s observations about large language models apply to approaching problems everywhere, including in Whitewater, Wisconsin: you’re relying on what you’ve read of what others have written. If you’ve read well, all these years, then at least you’ve a model from which productive conclusions may be drawn.

But if not, then someone who has read poorly (or scarcely at all) will begin to look inadequate compared with those who are truly more knowledgeable. That’s one effect of bringing in experienced development professionals to speak to the Whitewater Common Council these last ten months. One sees plainly that an overly entitled man, by contrast, will produce argumentation that seems to rely, metaphorically, on little better than advertising and affiliate links. (It also means that others who allow themselves to be tied to, and identified with, someone like that will begin from an impaired position…)
Moment when security guard saves woman from oncoming tram in Turkey:

A security guard saved a woman from an oncoming tram in the city of Kayseri, Turkey. The city’s transit operator, Kayseri Transport, posted dramatic footage of the rescue on social media.
