You can't follow me

  • So: developer attempts to integrate into a global distributed system without a connection to the Internet and complains he can't get it working without extra steps? Obviously there are two ways of doing this: you put everything online, and then there are no extra steps (which the author doesn't want to do), or, if you want to do this offline, extra steps are needed. Back in the day we used "dev environments" with tightly guarded ACLs. These days you can run things like Docker Compose or K8s clusters locally.

    It feels like the author has their own preconceived notions about how systems ought to be "isolated"[1], regardless of the use case, keeps fighting with their half-baked networking implementation, and then denounces everything, ranging from HTTPS[2] to now ActivityPub, as "broken".

    [1] - https://so.nwalsh.com/2024/01/06-isolation

    [2] - https://so.nwalsh.com/2023/12/31-https

  • Browsers deal with this by treating localhost as a "secure context"; anything that "requires HTTPS" actually requires a secure context. [1] You can debug new web features that require a secure context, like the Audio Output API [2], with http://localhost URLs, and use multiple ports if you need multiple hosts. ActivityPub could do this too.

    [1] https://developer.mozilla.org/en-US/docs/Web/Security/Secure...

    [2] https://developer.mozilla.org/en-US/docs/Web/Security/Secure...

  • There are ready-made ActivityPub signature implementations for all kinds of languages. If you're planning on implementing them yourself, you could look at an existing implementation in a language that suits you for inspiration, but I don't agree that there are only TypeScript examples.

    As for the HTTPS thing: last time I messed with ActivityPub, I solved that problem with a Let's Encrypt wildcard certificate that I copy between hosts, but there are ActivityPub servers that will let you run in debug mode and federate over HTTP.

    I think the problem with implementing ActivityPub is that the protocol looks deceptively simple at first glance, and people seem to expect it to be somewhat like RSS. However, when you actually start implementing it, you realise how many edge cases the protocol needs to deal with (and doesn't deal with).

    The signature is a relatively small hurdle (the actor's RSA public key is published in base64, and almost every language I know has a library to do the hard parts), but it's one of many. ActivityPub isn't a protocol you just tack onto your code in an afternoon, especially if you don't like using external libraries, even if it looks like it's just a bunch of JSON.
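    To make the "hard parts" concrete, here is a minimal stdlib-only Python sketch of the bookkeeping around that signature: building the Digest header and the HTTP Signatures signing string for a POST to an inbox. The hostname and inbox path are illustrative assumptions, and the actual RSA-SHA256 signing step (which needs a crypto library such as `cryptography`) is deliberately left out.

```python
import base64
import hashlib
from email.utils import formatdate

def signing_material(host: str, inbox_path: str, body: bytes):
    """Build the Digest header value and the (cavage-style) HTTP
    Signatures signing string for a POST to an ActivityPub inbox.

    The returned signing string is what the actor's RSA private key
    signs with RSA-SHA256; the base64-encoded result then goes into
    the Signature header.
    """
    # Digest header: base64 of the SHA-256 hash of the request body.
    digest = "SHA-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()
    # RFC 7231 date, e.g. "Tue, 07 May 2024 12:00:00 GMT".
    date = formatdate(usegmt=True)
    # Headers are signed in a fixed, declared order.
    signing_string = "\n".join([
        f"(request-target): post {inbox_path}",
        f"host: {host}",
        f"date: {date}",
        f"digest: {digest}",
    ])
    return digest, date, signing_string

digest, date, s = signing_material("example.social", "/inbox", b'{"type":"Create"}')
```

    The Signature header then carries the actor's keyId, the list of signed headers (`(request-target) host date digest`), and the base64 signature; it's fiddly to get byte-for-byte right, which is why a library helps.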

  • If you're looking for a very simple ActivityPub implementation that lets you post, follow, and be followed, I can point you at snac: https://codeberg.org/grunfink/snac2. It's 100% C, and not a lot of code at that. It should be easy to follow and debug, and you can double-check your implementation against it pretty easily with some choice breakpoints. And you can stand it up as an individual instance to have it talk to your code if you want to test interoperability.

    (I would not really recommend it for general use, unfortunately, since it's a pile of C that's not really all that secure. But as a publicity stunt we run @ish@ish.app inside of iSH itself, and snac turned out to be excellent for this because iSH is slow and doesn't implement all of Linux, so picking something simple and lightweight was a must.)

  • It's not 100% clear whether OP wanted two-way communication, e.g. displaying replies as comments. His title seems to imply one-way communication only. That is super simple to implement.

    The simplest solution is to use something like https://mastofeed.org/, which automatically posts your RSS feed to Mastodon.

    Of course, you can also do it yourself. Posting to an existing Mastodon account is just a single HTTP call with an API key: https://docs.joinmastodon.org/methods/statuses/#create
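    As a sketch of that single call, using only the Python standard library: this builds the `POST /api/v1/statuses` request from the linked docs. The instance hostname and token are placeholders you'd fill in with your own.

```python
import json
import urllib.request

def build_status_request(instance: str, token: str, text: str) -> urllib.request.Request:
    """Build the 'create status' request from the Mastodon docs.

    `instance` is your server's hostname and `token` is an app
    access token from that account's settings; sending the request
    is then one urlopen() call.
    """
    return urllib.request.Request(
        url=f"https://{instance}/api/v1/statuses",
        data=json.dumps({"status": text}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually post (network call, placeholders assumed):
# urllib.request.urlopen(
#     build_status_request("mastodon.example", "TOKEN", "New post: https://blog.example/hello"))
```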

  • I empathize with the author and found the post to be an interesting and concrete example of what it's _actually like_ to try to publish a blog to Mastodon, which is something that I have thought about and read about in the abstract. So, thank you for writing this up.

    One thing to consider would be to try Caddy [0], or a tool like localias [1], as a local HTTPS proxy. You would be able to run both the Mastodon server and your blog software on the same computer, addressable via local-only URLs like "https://blog.test" and "https://mastodon.test", and have everything work. These tools manage the certificates for you transparently, and you don't need to worry about anything being exposed publicly.

    I'd be curious to know why the author didn't try this; they seem quite knowledgeable about other web technologies, so I have to assume there's a problem I'm not seeing here.

    [0] https://caddyserver.com/

    [1] https://github.com/peterldowns/localias
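    For what it's worth, a minimal Caddyfile for that setup might look like the following. The `.test` hostnames and upstream ports are placeholders (you'd also point both names at 127.0.0.1 in your hosts file); `tls internal` tells Caddy to issue the certificates from its own local CA instead of a public one.

```
blog.test {
    tls internal
    reverse_proxy localhost:4000
}

mastodon.test {
    tls internal
    reverse_proxy localhost:3000
}
```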

  • The author mentions difficulties with HTTPS and trying stuff locally.

    I've had some success with mkcert [1,2] for easily creating certificates trusted by browsers, so I can suggest looking into it. You become your own root CA, and I think it can work without an internet connection.

    [1] https://github.com/FiloSottile/mkcert/

    [2] https://news.ycombinator.com/item?id=33383095

  • There are so many weird suggestions in the comments. I'm surprised nobody has mentioned ngrok https://ngrok.com/ (there are competing alternatives as well). It makes exposing a local service over HTTPS trivial. It's been used heavily in most of my engineering orgs.

  • I hope I don't run into this many issues when I do the same implementation. I've been wanting to add Mastodon integration to my static blog for a while now, as my refuge from Twitter and platforms in general. I just never use my "normal" Mastodon account, so it feels weird to think I'll use this blog any more than I do that one...

    Some of these issues seem avoidable, though? The author seems to be diving too far into the testing rabbit hole. For my workflows, I generally find holistic integration tests too time-consuming and not worth it for the level of fidelity I want (I'm not NASA; I don't need to be bug-free). Same with trying to avoid testing on prod. It might not be "clean", but for a site like this it seems like a reasonable tradeoff.

  • not to dismiss the issue, but if you can procure a certificate (e.g. host nginx+acme on `local.nwalsh.com` to obtain a wildcard for `*.local.nwalsh.com`), then just put `127.0.0.1 mastodon.local.nwalsh.com` in your hosts file and you should be good, right?

    but yes, non-local runtime dependencies in software which you thought you set up to be local-first have a real habit of sneaking in. the wildcard cert solution only masks that non-local runtime dependency: disconnect the server from the network for 90 days, your cert expires, and it fails again.

  • Thank you for your write-up, I ran into similar issues a couple of months back.

    Another gripe with the technical implementation of Mastodon is the CORS headers required to access the ActivityPub API [0].

    Because of this issue, an ActivityPub-aware frontend for Mastodon has to run its own Mastodon server, which adds a whole bunch of hurdles.

    [0]: https://github.com/mastodon/mastodon/issues/10400

  • Lots of people are suggesting self-signed certs or a local CA; why not wildcard certs? I have a homelab with some public stuff, some internal stuff, and for the internal stuff I just have a certbot post-renewal hook that scps the wildcard cert from my public reverse proxy to the services that need it. Yeah, not as easy as not needing certs, but once you have it set up it's not too bad

  • > There are examples on the web of the sorts of things that need to be done, but all the ones I could find were in TypeScript. That’s a hurdle I didn’t feel like trying to overcome today.

    I mean, we can all have opinions about TypeScript, but converting from TS to JS is far from a hurdle.

  • At this point Mastodon is a certified obstacle in the face of wider ActivityPub adoption due to all the warts and quirks of their implementation that ripple into the wider ecosystem.

    I wish they would dedicate a modicum of attention to being a better Fediverse citizen now that they have people they employ.

    > So to test this, I’d need both ends of the communication to be on the public internet with proper certificates.

    Sounds like a feature, not a bug. What am I missing? You could generate self-signed certificates to make life a bit easier.

  • Seems like putting a Cloudflare Tunnel in front of each of these services would have solved the problem instantly?

  • I just run a script that echoes an excerpt of my posts to my fedi account, it works ok.