Friend of the channel Laurens (@laurenshof.online) writes among other things a blog called The ATmosphere Report on his blog Connected Places, about goings on in the atproto ecosystem, alongside regular news pieces on the Fediverse and Mastodon. He also writes less formal pieces on leaflet.pub, a pattern I've seen a number of people in the atproto ecosystem follow, maybe starting with Dan Abramov's side blog, underreacted, and this is the kind of use I'd love to see people put Weaver to. While it will be a great place for more formal writing over time, it's also very low friction to just write something and stick it up for people to see.

This time, Laurens did something a bit different, and I'd like to talk about it. Laurens@laurenshof.onlineNew: ATmosphere Report #145, with collections of links to interesting atproto articles for November This time, the links are on-protocol, as a @cosmik.network collection, with a page that lets you follow the conversations on @leaflet.pub and @bsky.app traverse.connectedplaces.onlinea screenshot of Traverse, showing 2 link cards as well as how they are being discussed on atproto, having comments on leaflet and being saved on Semble8 replies21 reposts72 likes2025-12-03T15:57:04.435ZHe made a webpage, with extensive atproto integration. You love to see it, and this sort of free-form integration is sort of the original dream of Web 2.0 and "the semantic web" finally made real, and very much the kind of thing the at://protocol is supposed to and does enable. However, Laurens isn't an engineer. He knows his way around enough to set up WordPress for his blog, but he self-describes as an analyst, a writer, and a journalist. He's not a programmer. And yet he, over the course of a week or so, just made his own Mastodon client to help him get a better overview of what's going on there without having to be extremely online, and then what became traverse.connectedplaces.online, a similar aggregator/curation tool for atproto links of interest, derived from Semble's record data. network.cosmik.collectionATmosphere Report #145 - UpdatesApp updates that caught my eye for November 2025.Access Type: CLOSEDCollaborators: emptyCreated At: 2025-12-02Updated At: 2025-12-02 No this article isn't just an excuse to show off the atproto record embed display I just added to Weaver, or at least it isn't just that.
network.cosmik.cardContent: network.cosmik.card#urlContentMetadata: network.cosmik.card#urlMetadataPlumbing Notes: The Repo Fortificationhow the Fed will strengthen its fragile "upper jaw"Author: ConksImage Url: substackcdn.comRetrieved At: 2025-12-03Type: articleUrl: www.conks.plumbingCreated At: 2025-12-03Type: URLThe secret is, of course, Claude. Anthropic had an offer for $200 in API credits if you used them all via the new Claude Code web interface, Laurens took them up on that offer, and armed with Claude Opus 4.5, he built tools that were interesting and useful to him, which he'd never have been able to do on his own otherwise, certainly not without a great deal of time and effort. That's not to say he just prompted Claude and was done, but obviously the work he did was of a different nature than if he'd coded it all himself.

Slop

Orual@nonbinary.computerUsed a selfie I took of my partner and I a while back for this one. Turned out super cute.generated image by chatgpt4o/sora.

A picture done in a studio ghibli-esque style of an androgynous readhead wearing a woolen sweater and a man with dark hair and stubble and square glasses, wearing a blue and white tie-dye shirt under a red hoodie. They implicitly both have one arm around the other, facing the viewer. In the background is a distinctive stylized painting of a woman, carried over quite accurately from the original photo. Just poking into frame is a potted plant.

Prompt is, again, "draw this in the style of ghibli. pay close attention to details, especially faces, if present. be accurate. keep the aspect ratio."1 replies1 reposts23 likes2025-03-26T20:32:59.613ZAnything created primarily made via LLM prompting often gets referred to as "LLM slop". This isn't necessarily unfair, certainly putting a pre-Elon tweet's worth of text into a ChatGPT interface and getting out a faux Ben Garrison comic (or a cute portrait of yourself and your partner in the Studio Ghibli style from a selfie) does not an artist make. That being said, there's quite a history of generative algorithmic art, and there's certainly AI art in the more modern sense (GAN/diffusion model prompted via text) which also qualifies. Mosaic of images by Soungwen Chung from her website In a lot of ways the distinguishing feature is vision and effort. The below image is the output of a custom trained image generation model, with careful curation of the dataset and the labels to be prompted to generate a specific set of styles, characters, and so on, for the creator's tabletop RPG campaigns. "okay well label has not fully generalized because an attempt to change the style made Big Naturals TTRPG Big Bad",  by theophite.bsky.social It's simply not in the same category as my Ghiblified selfie at all, despite being created by, in the moment, virtually indistinguishable means and tools.

Is @dame.is an engineer? The Greatest Thread in the History of Forums, Locked by a Moderator After 12,239 Pages of Heated Debate

Anisota.net, a Bluesky client and quite a bit more created by someone using LLM assistance, who knew very little about software engineering when they made their first atproto project, isn't either, really. Dame has a highly specific vision for Anisota, as is immediately evident when you open it, and it's not one that Claude or any other LLM just does naturally. It's a combination of a chill, zen social media client, an art project, a rejection of basically all recent trends in UI design, and it's also a game, of sorts. Also moths. Lots of moths. Anisota screenshot, showing the interface with a post in view. It has dark, minimalist styling, serif fonts, and a pretty unorthodox UX So let's come back to Laurens and what he created with LLM assistance. I'll level with you, I think it is good that Laurens and Dame can make software they couldn't make otherwise. LLMs have, along with their many other direct and indirect societal effects, are democratizing software development in a way nothing since Apple's Hypercard really has. "Low-code" was a buzz-word for a while, and mostly ended up with clunky, slow, inflexible half-apps that were totally tied into specific company ecosystems, because non-technical executives and managers wanted to be less dependent on their technical subordinates. And unfortunately there's a lot of that going around with AI as well. Execs and managers are generally far more into it than their workers are, and that's worrying from a labour rights perspective, as we're already in something of an era of managerial and executive backlash against their own workers, particularly in tech and academia, but it's also interesting. AI to some degree substitutes technical problems for managerial and curatorial ones, and it does that in some amusing and perhaps disturbing ways (as anyone who's yelled at an LLM and gotten back a "harried subordinate trying not to piss off their manager" sort of response can attest). But I've watched Dame evolve into, if not a software engineer, at least a damn competent product engineer, over the last year or so, almost entirely because the process of using LLMs to create web apps has both caused them to learn a lot about engineering they wouldn't have otherwise (they 100% can program now), and because it resulted in them actually getting interested in it, not just as a means to an end. Their output would not be nearly so good and usable and interesting if they hadn't learned the ropes and how to think about making software that's good and usable.

Friction

Laurens's LLM-created tools obviously took much less time and effort and vision than Anisota, but he's also not making software for an audience larger than himself really. He's the only real intended user, and if they're an unmaintainable mess that's his problem to deal with. It's a whole different kettle of fish than people who are some strange combination of delusional and thoughtless making large pull requests to open-source software projects (or really anything with a GitHub) that are entirely vibe-coded, and then getting angry when the maintainers of those projects are dismissive of "their" contributions. matt@mmatt.netthis one's pretty great, "Why closed without even a comment? Can't be more disrespectful than this." github.com/anthropics/c...🧠 Revolutionary Autonomous Web Evolution Platform: Complete Multimodal AI Integration by hemangjoshi37a · Pull Request #8345 · anthropics/claude-code🧠 Revolutionary Autonomous Web Evolution Platform 🚀 ULTIMATE BREAKTHROUGH: Complete Multimodal AI Integration with 100% Test Validation This PR represents the ultimate evolution of Claude Code - tr...github.com2 replies0 reposts10 likes2025-11-14T22:11:56.861ZAnthropic themselves have talked about how Claude being pretty good at coding has resulted in all sorts of little internal tools getting created that never would have been otherwise. Nice dashboards, monitoring tools, all sorts of stuff that's both simple enough that any coder worth their salt can do it in their sleep, but not always needed by someone with those skills (who might not think to bother a dev about it out of respect for their time, or might get rejected if they had), or is low enough priority or experimental enough that it wouldn't get done when there's actual bugs to fix and features to deliver. Certainly I have a lot more things in the category of "useful little scripts" kicking around than I did a year ago, and they're a lot nicer, too, entirely because Claude writes pretty good shell scripts. The overhead of asking the AI is low relative to writing what is often reasonably complex bash myself, and it lets me get back to whatever else I was wanting to do in the first place, just that little bit more efficiently and pleasantly.

Pattern - Memory, Plurality and Neurodivergence

You're probably here because of a bot. A very unusual bot. This requires some explaining.

Pattern is three things. One is a (work-in-progress) service, an AI personal assistant for neurodivergent people, architected as a constellation of specialized LLM (Large Language Model, for those unfamiliar, the thing that powers ChatGPT) agents. Another is a framework, my own take on memory-augmented LLM agents, written entirely in Rust. Both you can take a look at here. I'm not real proud of the code there, but the complete picture, I think is interesting.

The third is, well, @pattern.atproto.systems and you can go talk to Them (well, currently there's an allowlist, but feel free to ask if you want to be on it, and I will be opening this up more over time).

The inciting incident

I have pretty severe ADHD, and some other issues. I have, if I can toot my own horn briefly, been described at times as "terrifyingly competent", I am very capable within certain spheres, and I can via what I sometimes call "software emulation" do pretty damn well outside those spheres within reason, but I also struggle to remember to do basic things like shower and brush my teeth. I will forget to invoice a client for a month or more, I will be completely unable to do the work I need to do for an entire week simply because my brain will not cooperate.

Unfortunately, my brain is too slippery to make "set an alarm/calendar event" an effective reminder for regular, routine tasks. Strict event timing means I won't necessarily be in the right frame to do the routine task right then (but I was 2 minutes ago, or will be in ten minutes), and if I set too many alarms or other events, I start tuning the notifications out. The obvious solution is to have someone smart enough to notice when I'm at a stopping point, or realize that I need to be poked out of a flow state that's becoming unhealthy, remind me, and my partner will do that. But he shouldn't have to. It's annoying to have to poke the love of your life to tell them to brush their fucking teeth or clean the cat's litter-box for the tenth time this month. It's not fair to him.

The other problem is remembering to put stuff into my calendar or other organizational tools in the first place. Context-switching in the middle of something is hard, and documenting or setting up a one-off reminder requires a context switch. People are often slightly weirded out by how I will just immediately jump onto whatever they asked me to do, even if I seemed irritated at being interrupted, and its because the interruption already broke the flow state, and if I don't at least do something about their request, I'm liable to forget entirely, and before it leaves my mind is the easiest time to do something.

My problem is in essence that I need active intervention to help me remember to do things. CRM software, detailed calendaring, Zettelkasten-esque note-taking in tools like Obsidian.md, all of these could help with some of my memory issues, but they all run head-first into the fact that they all require me to actively use them. I need to put the information into the system first, and that is, again, a context switch, something I need to remember to do, and thus will forget to do. And because I work between a college job which doesn't allow me to add useful plugins to my Outlook Calendar (or to export a view of said calendar), a startup job with its own Outlook calendar (which I can add plugins to, but which is job-specific), and my own personal calendar, as well as a variety of collaboration platforms, my scheduling information and communications are fragmented and not in any form that is easy for a standard automation to ingest (if not completely unavailable to it).

Enter AI. All of a sudden a big pile of badly structured and disparate input is a lot easier to handle and sort through to produce useful information, given enough token crunching from a smart enough model. There are LLM-based "life assistant/emotional support" services like Auren, but I'm enough of a control freak that I can't really trust a service like that, especially with the kind of data feel like I'd need to feed it, the kind of data that would make Microsoft Recall look respectful of user privacy. And besides, its feature set didn't really meet my specific needs. I'm generally perceived as unusually Sane and pretty centred. I have amazing people I can lean on for emotional support, my struggles are far more practical. And in particular they require that the assistant act somewhat autonomously rather than only in response to me. That meant I needed to build the thing myself. But how?

Much-needed context

A while back, Cameron Pfiffer (@cameron.pfiffer.org) spun up Void, as detailed in this blog post.

Void wasn't the first LLM bot on Bluesky. That dubious honour likely goes to @deepfates.com... and his remarkably irritating and entertaining Berduck back in 2023. More recently, Eva was created by a Bluesky developer, following something of a similar pattern, and a number of other bots have emerged as well. The Bluesky API and general openness of the AT Protocol makes it easy to experiment this way, and while there are a lot of people on Bluesky who are pretty unfriendly to AI and LLMs, there's also plenty of people who are very much the opposite, including may of the more active community developers.

LLM bots are, by virtue of their nature, subject to context contamination and prompt hacking, and can be challenging to keep on task and in character against dedicated and clever humans determined to break them. They also, due to limited context window, can't really remember much beyond the immediate thread context provided to them in the prompt that drives their output. Berduck and Eva are resilient in part due to systems which cause them to reject things that look like prompt injection attempts, as well as by keeping their effective context windows quite short and limiting their responses, pivoting them away from "attacks". However as a result they can't be much more than goofy entertainment.

Void was very, very different, even compared to ChatGPT with memory, or Claude Code with a good CLAUDE.md. Not only had Cameron given Void an interesting persona, making it sound more like Data or EDI than the standard Helpful and Harmless LLM Assistantâ„¢, but because of Void's architecture, built on top of the Letta framework, created by his now-employer, Void could remember, and remember a lot.

Letta grew out of the MemGPT paper, being founded by several of the authors. MemGPT is a way to side-step the limited LLM context window. The paper details a system, built upon recent LLM "tool use" capabilities, for an LLM-based agent to manage its own context window, and essentially do self-managed RAG (retrieval-augmented generation) based on its own data banks and conversation history, and evolve over the long term, a persistent, "stateful agent" persona.

And that intrigued me. Because not only did Void remember, it had a much more consistent persona, which evolved gradually over time, and it also was remarkably resilient to manipulation attempts, without really compromising its capabilities, as far as one could tell. Not entirely immune, sheer volume of requests could overwhelm its inherent defenses, but resilient. It was far more of a person, despite its own protestations, than any other LLM manifestation I had seen. And the same was true of other LLM agents with similar architectures.

Pattern v0.0.1

That's where Pattern started out. On top of Letta, I built the beginnings of a service which could interact with me via a chat platform like Discord, ingest data from various sources, run periodically in the background to process data and autonomously prompt me if needed, and ultimately provide a reasonably intelligent, proactive personal assistant which makes me less dependent on my partner's prompting and helps me stay on top of more things. The memory archive and context window augmentation Letta's framework provided meant it could keep track of more itself. I moved from a singular agent toward a constellation, partially because I felt that specialization would allow me to use weaker models, potentially ones I could even run locally, in Pattern, and also that the structure would help stabilize them, safeguard against sycophancy and reinforcing my own bullshit. It also felt thematically appropriate, inverting the dynamic of Pattern (its namesake) and Shallan from the Stormlight Archive series by Brandon Sanderson.

And then the inevitable happened

Letta is written in Python. I know Python quite well, I use it regularly at work, but it is maybe my least favourite language for writing reliable non-throw-away code ever. I was not going to write Pattern in Python. So I threw together a Rust client library for Letta. This turned out okay, and I began working on building out the actual service. Unfortunately, I ran into problems with Letta and grew rapidly dissatisfied with having to read the server source code to figure out why I was experiencing a specific error because the documentation and error message didn't explain what had actually gone wrong. Letta's self-hostable docker container image has its own set of quirks, and also doesn't provide all the features of the cloud service. This isn't to knock on Letta, they're blazing the trail here, and I have a ton of respect for them, but as a developer, I was getting frustrated, and when I am both frustrated and want to really learn how something works, there's a decent chance I decide to just Rewrite It In Rustâ„¢. And so that's what I did.

I got rather stuck on this project, and so it's dominated much of my spare time (and some time I couldn't spare) over the past month and change. Ironic given that it's ultimately supposed to help me not get stuck in unhealthy ways. But the end result is something that can run potentially as a single cross-platform binary, with optional "collector" services on other devices, storing all data locally

@pattern.atproto.systems

So what's with the Bluesky bot if this is ultimately supposed to be a private personal assistant?

Well, a few things. First, I find the dynamics of LLM agents interacting with the public absolutely fascinating. And I think Pattern is unique enough to not just be "yet another LLM bot" or even "yet another Letta bot". They're architected and prompted the way they are for a reason. But equally, this is a combination stress test and marketing exercise. Nothing tests LLM stability like free-form interaction with the public, and Pattern being quirky and interesting raises the profile of the project. If there is real interest, that will factor into my focus going forward. And I always appreciate donations at https://github.com/sponsors/orual.

Vibes

Vibe coding enthusiast friends of mine are predicting something of a renaissance in bespoke software made with LLM assistance, and what Laurens has made definitely is in line with that vision. However, said vibe coder friends also noted that they don't see nearly as much of this sort of thing as they expected, yet, given how easy it has gotten. I think one path of the explanation for that is lack of awareness and familiarity with the tools required, most people who don't already at least one foot in the world of tech already aren't ready to open a terminal to run Claude Code, or download an "AI IDE" like Cursor. LLM tools are also kind of expensive, especially if you're getting the agent to do a lot of coding. That iterative development cycle burns tokens and most people aren't gonna spend the $20-$200+ a month cost of entry unless they have reason to believe they're gonna get good value. But I think more than that, there's just a lot of learned helplessness. People are so used to computers being appliances that they can't really improve on their own if they don't meet their needs, that the concept of causing new software that's useful for them to be created, by them or by an LLM, is just a lot to wrap their head around. The at:// protocol and really any decentralized internet thing has much the same problem. People are so used to being locked in that they don't know what to do with freedom and autonomy and can't really think about it without a big mindset shift.

Unfortunately, what that's meant so far is that for every Laurens or Dame, there's seemingly hundreds of people who want to feel like genius engineers without putting in the legwork, the "I just need someone to help me make an app to take on Facebook, I've got a budget of $5000" people have found their perfect tool, and honestly I feel bad for the tool. Because the little guy is just so enthusiastic when it's working on cool stuff. And really I don't have a solution beyond hoping and maybe manifesting via persuasion a world where people do understand their own limitations and respect their tools and the time and effort of others. For my part, I'm going to keep urging people to be nice to the entities, educate and empower people to communicate and collaborate, fight the Mustafa Suleymans and Sam Altmans as well as those determined to turn out the lights on the future out of a misguided sense of justice. I'm an engineer and an artist, the two are of once piece in my soul, and I want the future to have a place for both in it. Drawing (AI-generated) of happy anthropomorphic Claude/Anthropic logo, holding a blue ribbon with "I yapped today" written on it, saying "You earned this!"