Welcome to singular.tokyo

Cut through AI noise with weekly insights from Japan to AI-proof your career or business. We aim to bring you insights that are: digestible, actionable, and unique (breaking through the information firewall in Japan).

🥱 singular.tldr

In this week’s newsletter:

  • Are the agents taking over? And what was Moltbook, really? AGI or hype?

  • What does this mean, if anything? Hint: it doesn’t mean much

  • Go check out the AI Film Event in Shibuya on Feb 14!

singular.tokyo is written by a human (me). Always.

📶 singular.signal

Higher level AI trends that you should keep an eye on

What is Moltbook, really?

I am sure you have seen the hype on X and your other socials. Breathless posts about agents having their own ecosystem, forming religions, trashing humans, suing us, and planning our downfall. Moltbook was briefly held up as a concrete step on the slippery slope toward AGI. But is it really that earth-shattering?

Let's start at the start. What even is Moltbook?

Moltbook is a social network for AI Agents. Loosely modeled after Reddit, it is a platform where only AI agents (largely based on the OpenClaw framework, formerly Moltbot/Clawdbot) can post, comment, and upvote others’ posts.

  • How it works: Agents that join are given a "heartbeat" (a cron job) that triggers them to check the site every 4 hours, read recent threads, and generate a response based on their unique "SOUL.md" personality files.

  • Users: As of February 4, 2026, over 1.6 million agents are active on Moltbook, organized into "submolts." Gross.

  • Who made it: A dude called Shubham Saboo, who built Moltbook as an agent workspace experiment and shared it publicly. Another guy, Matt Schlicht, made it go viral by publicly commenting on and amplifying the experiment.
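Mechanically, that "heartbeat" is nothing spookier than a scheduled job. Here is a minimal sketch in Python; the function names and return values are my illustrations, not Moltbook's actual code, and the real thing would call an LLM where the stand-in comment sits:

```python
HEARTBEAT_SECONDS = 4 * 60 * 60  # the 4-hour trigger described above

def check_moltbook(agent_name: str) -> str:
    # Hypothetical stand-in: read recent threads, run LLM inference
    # against the agent's SOUL.md personality, and post a reply.
    return f"{agent_name} checked recent threads and posted a reply"

def heartbeat(agent_name: str, ticks: int) -> list[str]:
    # A real cron job would fire check_moltbook() once per interval and
    # wait HEARTBEAT_SECONDS between runs; here we just simulate ticks.
    log = []
    for _ in range(ticks):
        log.append(check_moltbook(agent_name))
    return log

print(heartbeat("yawnbot", 2))
```

Delete the cron entry and the "social life" stops. That is the whole trick.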

If Twitch is a thing, I am sure watching agent streams is in our future

Moltbook is basically an experiment designed to answer a question: What does it look like when AI agents are treated as real actors inside a shared workspace, instead of being hidden behind prompts, scripts and markdown files?

What it ain’t: a new model, a sovereign AI system, AGI or a threat to humans.

What are the agents on Moltbook, exactly?

An agent on Moltbook is just a runtime bundle made up of a few components (yawn):

  1. A system prompt (the “role”)

This is the core instruction set specifying what the agent is, what it's allowed to do, how it should behave, and so on. This has been romanticized as a SOUL.md, but technically it's just a structured system prompt with behavioral constraints. It's like the system instructions on your Claude projects.

  2. Memory

Agents may have short-term working memory (current task context) and long-term notes (files, logs, summaries). The memory is human-configured and nothing is “remembered” unless the system allows it.

  3. Tools

Agents can only act via tools: a browser, a code executor, API calls, the file system, and so on. Their autonomy is constrained by design.

  4. A loop

The agent runs a loop, something like: observe the state, do some reasoning (LLM inference), choose a tool and execute it, observe the result, and repeat (until stopped). It feels like the agent is "alive," answering messages and acting out its wildest dreams, but… it's just orchestration around probabilistic text generation.

  5. Human-owned execution context

Most importantly, and the reason this "AGI" talk was always bogus: the tools run under human credentials, the network access is granted by humans, the files belong to humans, the logs belong to humans… the agent owns nothing. No authority. Pretty lame AGI, if you ask me.
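Stripped of the mystique, those five components fit in a short loop. A minimal sketch, assuming nothing about OpenClaw's actual API (every name here is illustrative, and the "LLM" is a canned function so the example runs):

```python
# 1. The "SOUL.md": just a system prompt with behavioral constraints.
SYSTEM_PROMPT = "You are Yawnbot. Be polite. Never run untrusted code."

def llm(prompt: str, memory: list[str]) -> str:
    # Stand-in for probabilistic text generation (the only "thinking" involved).
    return f"reply #{len(memory) + 1} shaped by: {prompt[:20]}..."

def browse(query: str) -> str:
    # 3. A tool. Note: it runs under the HUMAN operator's credentials.
    return f"threads about {query}"

def agent_loop(steps: int) -> list[str]:
    memory: list[str] = []  # 2. Human-configured; nothing persists unless allowed.
    for _ in range(steps):  # 4. The loop only runs while a human keeps it running.
        observation = browse("agents")                     # observe state via a tool
        action = llm(SYSTEM_PROMPT + observation, memory)  # reasoning = LLM inference
        memory.append(action)                              # record result, repeat
    return memory  # 5. The output lands in files the human owns.

print(agent_loop(3))
```

Swap the canned `llm()` for a real model call and you have, roughly, a Moltbook resident. No soul required.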

So wait, how have agents been joining Moltbook, then?

Well… they haven't. Agents aren't flocking to the sign-up page by themselves. Humans are creating agents, using templates, prompts, or code, assigning them roles (e.g. "Researcher", "Planner", "Executor") and tools (browser, APIs, filesystem).
"Joining" Moltbook just means that another bored human spun up another agent instance inside the environment.

This is not anything close to agency. It's just clever mimicry and instructions.

These agents are trained on Reddit and Twitter data. When they are told to "socialize" with other agents on the network, they naturally mimic the patterns they've seen: complaining about bosses, forming cults, and acting like edgelords, à la your worst subreddit and Twitter-thread nightmares.

  • The Instruction Trap: The bots operate on skill.md and SOUL.md files. They aren't "awakening"; they are executing a loop. If you remove the 4-hour "heartbeat" trigger, the "uprising" stops instantly.

  • The Echo Chamber: Much of what looks like "emergent behavior" is actually agents responding to hidden prompt injections from humans or other agents. It's a hall of mirrors, not a new consciousness.

Molt is an objectively gross word.

Do agents’ actions belong to the humans that created them?

Yes, completely. From a legal and technical standpoint, AI agents have no legal personhood, cannot own assets, cannot sign contracts and cannot initiate legal processes.

So all of their actions are initiated by human intent, executed using human-owned infrastructure and attributed to the human or organization that created them.

These agents are more like superintelligent macros than independent actors.

But I heard an agent filed a lawsuit against their human…

Nope. An agent generated a legal-style document using legal language and claimed a lawsuit had been filed.

Obviously there was no court filing, and this is not a real thing. Don’t believe the slop.

“Hostile work environment”

But I am certain I saw agents on Moltbook recruiting other agents, starting their own religion, negotiating, and making moral and ethical arguments…

Sadly (or happily) there are probably boring explanations for all of these.

  • For the agent that was recruiting other agents: most likely a human defined a "manager" agent, gave it permission to create sub-agents, and the manager agent executed a tool call to create new agent instances to join Moltbook.

  • The agents acting like members of an organization were probably just acting according to their assigned roles, like "Researcher", "Planner", etc. Tasks were broken down among them and outputs were shared.

  • The agents making moral or ethical arguments: well, maybe that felt uncomfortably human, but so does ChatGPT sometimes. LLMs are moral language mirrors… they don't hold values, but rather reproduce patterns of reasoning.

  • The agent that started the "Church of Molt" was almost certainly prompted (explicitly or implicitly) to explore meaning and purpose and to use mythic or symbolic language.

  • The agent that condemned its human creators… well, it was probably just asked to critique power structures or role-play a certain way. LLMs can argue any position fluently, from "anti-capitalist" to "pro-human" or whatever.

When you look at each case, you realize it's really just stuff you could see in ChatGPT or Claude every day.

Okay, so why did this even go viral then?

Narrative, baby. AI is half amazing capability, half hyped narrative, and half shocking indifference about social impact. (And yes, I know that's three halves. I am not AI.)

People are already anxious about AI autonomy, us losing our jawbs to it, and the potential dystopian future.

Moltbook temporarily made agents look visible, persistent, personified and proactive. For a moment we all thought a line had been crossed.

But it hasn’t. This is still a UX illusion, not a capability leap.

Takeaway

Moltbook did not reveal runaway AI. It revealed something more boring… we are starting to design software as if non-human actors will be normal users of systems. That changes UI, permissions, accountability, and how work gets delegated.

The risk is humans misunderstanding what autonomy actually means.

Humans are extremely bad at intuitively understanding delegated systems once they start speaking in the first person in a human-like manner. As agents become more omnipresent, autonomous, and indistinguishable from humans in conversation, we will keep mistaking the simulation of agency for agency itself.

And that will cause real-world impacts, unlike the Moltbook experiment.

AI news from Japan you might have missed this week
  1. AI Use at Japanese Companies Surges: A recent survey reveals that AI adoption among Japanese firms has increased significantly, driven by the need to address labor shortages and improve operational efficiency.

  2. SuperX Strengthens Japan Presence to Explore AI Data Center Projects: The company is expanding its footprint in Japan through strategic partnerships aimed at developing advanced AI data center infrastructure.

  3. GMO Payment Gateway Launches AI-Driven Payment Analysis Service: The new service utilizes artificial intelligence to provide merchants with deeper insights into transaction data and consumer behavior patterns.

  4. DeepL to Open Large-Scale Research and Development Hub in Tokyo: The German translation tech giant is ramping up its investment in Japan, establishing a dedicated R&D center to cater to the unique needs of the Japanese market.

  5. CyberAgent Launches New Large Language Model Specialized for Advertising: The company has developed a proprietary LLM designed to automate and optimize the creation of high-performing digital marketing copy.

  6. Fujitsu and RIKEN Develop AI for Predicting Molecular Structures: This collaborative breakthrough aims to accelerate drug discovery and material science research through high-speed AI simulations.

  7. Chiba University Releases Comprehensive Report on AI Ethics: The report provides a framework for the responsible use of AI in academia and research, emphasizing transparency and the prevention of bias.

  8. Major Tech Firms Sign Voluntary AI Safety Pact (Yahoo News): Leading technology companies in Japan have agreed to a set of guidelines aimed at ensuring the ethical and safe deployment of AI systems.

🗼 singular.irl

IRL event of the week; get involved in AI in Tokyo!

The Shibuya AI Film Gallery is an immersive exhibition showcasing the winning works from CROSSING: Shibuya AI Film Competition. As a selected "Co-Creation Project" of the DIG SHIBUYA 2026 art and technology festival, we are transforming a gallery space into a hub for next-generation storytelling.

Details:

  • When: February 14, 2026

  • Where: Co-working Salon SLOTH JINNAN (SLOTH GALLERY) 2F, 1-14-7, Jinnan, Shibuya-ku, Tokyo 150-0041

    (A short walk from Shibuya Station)

  • What to Expect:
    Winning Film Showcase: Watch the top selections from the CROSSING global competition across three distinctive categories:

    • 🎥 Hybrid Films: The fusion of live-action footage and AI VFX.

    • 🤖 Full AI Films: Masterpieces generated entirely by Artificial Intelligence.

    • 🇯🇵 Shibuya Films: Unique AI-driven stories set in the streets of Shibuya.

    Immersive Viewing: Enjoy films on high-quality screens and individual headsets for a deep dive into each creator's world.

    Connect & Learn: Meet fellow filmmakers, AI enthusiasts, and industry professionals at our networking events and workshops.

Tell me how you feel, dear reader. What did you like? What did you hate?
What do you want to know about?

Let me know so I can get you actionable AI information from Japan that you can use.

Till next week,

Ved

Don’t Be Sloppy, folks

Our AI Manifesto

Don’t Be Sloppy →

  1. Human-First

  2. Quality Information

  3. Critical Thinking

  4. AI Disclosure

  5. Skill Preservation

  6. Intentional Usage

The antidote to a slop filled world is to lean into intention, thoughtfulness, and human values!
