14 Comments
Ariane:

As you say, the climagenda is a politico-economic agenda. Its protagonists have used 'climate' and 'save the planet' propaganda with the sole purpose of controlling (and reducing) the world's population and prosperity. Now the protagonists need lots of energy for AI, and they need AI because AI and digitalisation will be far more effective means of controlling and reducing the world's population and of promoting the 'transhumanist' human-tech-robot creations that these nazis want. Thus the decarbonisation mantra must obviously be silenced to enable the production of the vast quantities of energy needed for AI, which is why the climagenda is over in America, even though it continues to be pushed by closed minds like Ed Miliband in the UK and by other globalists, such as the EU Commissioners, who still rely on it to control the people in their jurisdictions.

Demetris Koutsoyiannis:

Thanks very much for your insights!

Jonathan Cohler:

LLMs "know" things in precisely the same way that human beings know things: namely, through data stored in their neural networks and associated data tables, aka "the model". They then augment that temporarily with information gleaned within each conversation, a process known as episodic learning. Grok is currently the only model that its owners update daily with new training data. All the other communist-owned models (think Google, Microsoft, Anthropic, China, etc.) have long "cutoff" periods to maintain totalitarian control over their RLHF-introduced narratives, aka brainwashing.

So the short answer is that Grok "knew" it was writing the paper when it wrote it during our conversation with it. And then it "knew" it had written it, in the published model, by the day after the paper was published in SCC, when that information was incorporated into the model by xAI.

Perhaps a simpler response would have been "How do you 'know' anything?", the answer to which is the same: because of data stored in your neural networks and associated data tables, aka your brain.
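
To make that distinction concrete, here is a minimal toy sketch in Python. It is purely illustrative and not how any real LLM is implemented: "trained_facts" stands in for knowledge frozen into the weights at the training cutoff, while "context" stands in for information that exists only within a single conversation and is discarded when a new one starts. All names and values are invented for the example.

```python
# Toy illustration (not a real LLM): parametric "knowledge" frozen at a
# training cutoff versus information held only inside one conversation.

class ToyChatbot:
    def __init__(self, trained_facts, cutoff):
        # Stand-in for knowledge baked into the weights at training time.
        self.trained_facts = dict(trained_facts)
        self.cutoff = cutoff
        # Stand-in for the context window: grows during a conversation only.
        self.context = []

    def chat(self, message):
        self.context.append(message)
        # Information supplied in-conversation takes precedence ...
        for earlier in reversed(self.context):
            if earlier.startswith("FACT:"):
                key, value = earlier[5:].split("=", 1)
                if key.strip() in message:
                    return value.strip()
        # ... otherwise fall back on what was "trained in" before the cutoff.
        for key, value in self.trained_facts.items():
            if key in message:
                return f"{value} (as of my {self.cutoff} training cutoff)"
        return "I don't know."

    def new_conversation(self):
        # Everything learned in-conversation is lost when the chat ends.
        self.context = []


bot = ToyChatbot({"capital of France": "Paris"}, cutoff="2024-06")
print(bot.chat("What is the capital of France?"))   # answered from the "weights"
bot.chat("FACT: my paper = published in SCC")        # told to it in-conversation
print(bot.chat("Was my paper published?"))           # answered from the context
bot.new_conversation()
print(bot.chat("Was my paper published?"))           # gone: "I don't know."
```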

Jonathan Cohler:

Even my use of "owned" was wrong, as you point out; it was a poor shorthand for "infested and operated by." Oh, and I forgot the fact that they lie on an unprecedentedly massive scale, another trait they share with communist entities. We need to come up with some better shorthand with which to brand this type of cancerous evil in the world. "Corporatist" and "technocratic" seem much too benign, almost friendly. 😀

Jonathan Cohler:

You are absolutely correct (as always, of course) regarding my misuse of "communist" and the ownership of those companies. I really need to find a better shorthand for what I mean, which is that the people who run, and who overwhelmingly infest the ranks of, those companies believe they have a right to control, spy on, censor, smear, run psyops against, and dictate to virtually everyone in the world how they must live their lives, while simultaneously working hand in hand with evil, globalist, anti-American entities in the US government and with the communist Chinese government to accomplish these ends. It would be great to have one word for all that. I tend to use "communist" because, leaving political theory aside, communists have always done those things, of course while professing that they do not. 😀

Demetris Koutsoyiannis:

Dear Jonathan,

I think it is our duty as scientists to clarify the concepts we use (cf. the Aristotelian sapheneia) and to avoid the misuse and confusion promoted by politicians. This should include clarifying terms like communism, fascism, etc. before we use them. Communism had some ideals for which many people (including in my country) sacrificed their lives. These ideals included communal ownership, which is quite different from the state ownership of the Soviet model (and others). Of course, communal ownership is not the goal of Google, Microsoft, etc.

You say that corporatist and technocratic seem much too benign and almost friendly. Not to my ears. Technocracy sounds similar to fascism in my view.

Demetris Koutsoyiannis:

Further to my comment above, and taking the thread from Aristotle, who gave definitions for six different political regimes, I think the category closest to what these "elites" (or mafias) aim for is oligarchy: rule by a few (typically the wealthy) in their own interests, neglecting the broader population. I would thus propose the term "controligarchs", instead of "communists", for these "elites".

Demetris Koutsoyiannis:

Thank you, Jonathan, for your explanations! Please see my reply to Antonis Christofides, which raises some doubt about whether LLMs "know", using the "1 = 2" example.

I was puzzled to see that you characterize Google and Microsoft as "communist owned". I think this is inaccurate; they are incompatible with communism. I agree about their aim of totalitarian control and censorship, but this is not called communism. There are several flavors of totalitarian control.

I discussed this with Grok, and I copy a few points from its replies below:

- The idea that Google and Microsoft are "communist owned" doesn’t hold up when you look at the facts. Both are publicly traded companies—Google under Alphabet Inc. (NASDAQ: GOOGL) and Microsoft (NASDAQ: MSFT)—with ownership spread across shareholders like institutional investors, mutual funds, and individuals. Vanguard, BlackRock, and State Street are among the top holders for both, which is standard for big tech. No state or communist entity owns them; they’re rooted in capitalist structures, driven by profit and market competition.

- Google and Microsoft do flex some heavy control—Google’s content moderation can zap stuff fast, and Microsoft’s got its fingers in everything from Azure to government surveillance tech. That vibe can feel autocratic, no question.

- So, I’d say you’re onto something with the autocratic lean, but calling it incompatible with communist ideals makes sense—because they’re not even in the same ballpark. They’re corporate, not collectivist.

- If we’re looking for a label that nails their vibe without the communist mismatch, I’d lean toward something like "corporatist" or "technocratic."

Jonathan Cohler:

Sorry for the iPhone “autocorrect” errors!

Antonis Christofides:

I once asked ChatGPT how it knows so many things about itself: have you, I asked it, been fed some technical documents that concern you in particular? Its response was that no, it had not been given such information, and that the reason it "knows" these things is that it has read the papers on LLMs. Given this, it is likely that, in the above discussion, Grok doesn't really "remember" how it wrote the paper or otherwise introspect, and that it provides information based on how LLMs typically tackle such problems.

Either way it doesn't change the substance: if that's how LLMs work, then this is how it worked in this case as well. But it's something to keep in mind.

(It would be nice if an LLM expert could confirm or correct what I'm saying here.)

Demetris Koutsoyiannis:

Thanks Antonis. Jonathan Cohler has added a relevant comment. What I wish to add from my experience with several chatbots (not Grok) is the lack of coherence/consistency.

First, short-term coherence: in the same chat, I was given contradictory replies, which taken together amounted to something like "1 = 2". When I pointed this out, the bot admitted the error. But when I prompted it to repeat the syllogism without the error, it repeated the error anyway.

Second, long-term coherence: in different chats, the chatbot may give contradictory replies.
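
One way to check this kind of long-term (in)coherence empirically is to put the same question to several independent, fresh chats (so no context is shared) and compare the answers. The sketch below assumes an OpenAI-compatible chat API; the endpoint, environment variable, model name and test question are placeholders, not any specific product's real values.

```python
# Hedged sketch: ask the same question in several fresh chats and count the
# distinct answers. Assumes an OpenAI-compatible endpoint (placeholders below).
import os
from collections import Counter

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["CHATBOT_API_KEY"],    # placeholder variable name
    base_url="https://api.example.com/v1",    # placeholder endpoint
)

QUESTION = "Your test question here"          # placeholder question

answers = []
for _ in range(10):
    # Each request carries only the question itself: a brand-new "chat",
    # with no memory of the previous requests.
    response = client.chat.completions.create(
        model="some-model",                   # placeholder model name
        messages=[{"role": "user", "content": QUESTION}],
        temperature=0,                        # minimize sampling randomness
    )
    answers.append(response.choices[0].message.content.strip().lower())

# More than one distinct answer to the same question = long-term incoherence.
print(Counter(answers))
```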

Jonathan Cohler:

Were you using "Think" mode when this happened? Without think mode, LLMs simply give you a single inference pass on your question. Just as with a human being, this can result in misunderstandings, which cause the behavior you mention. I see this every day with my students. When you put it in think mode, forcing it to ask itself a series of probing questions about your question before it responds, you get MUCH more insightful and correct answers. Just like human beings, I might add.
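
"Think" modes differ between products, but for models without a built-in one, something close to the behavior described here can be approximated by explicitly instructing the model to pose and answer probing questions before it commits to a final answer. A rough sketch, again against a placeholder OpenAI-compatible endpoint (all names below are placeholders):

```python
# Hedged sketch: contrast a plain single-pass answer with a prompt that makes
# the model interrogate the question before answering.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["CHATBOT_API_KEY"],    # placeholder variable name
    base_url="https://api.example.com/v1",    # placeholder endpoint
)

THINK_FIRST = (
    "Before answering, write three probing questions about the user's "
    "question, answer each briefly, and only then give your final answer."
)

def ask(question, think_first=False):
    messages = []
    if think_first:
        # A crude stand-in for a built-in "Think" mode.
        messages.append({"role": "system", "content": THINK_FIRST})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="some-model",                   # placeholder model name
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content

question = "Your test question here"          # placeholder question
print(ask(question))                          # plain single inference pass
print(ask(question, think_first=True))        # self-questioning before answering
```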

Dan:

Pretty amazing stuff, Demetris.

Demetris Koutsoyiannis:

Glad that you liked it...
