Spent the morning, after a bit of reading, hashing out the new idea I had for Webmention storage. In an attempt to stick to using the filesystem as the source of truth, I'll be writing out to an index.txt that'll hold the URL and relationship type of each Webmention as it stands. That way, if I ever need to, I can use that list to refresh all of the relationships from a backup if the database goes missing. I'm thinking about adding another field to represent the relative path on disk to a serialized representation of the Webmention, but for now it'll (hopefully) be expanded when it's added to the MF2 property.
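A minimal sketch of what that index.txt could look like: one Webmention per line, tab-separated URL and relationship type. The struct and function names here are assumptions for illustration, not what Koype actually uses.

```rust
use std::fmt::Write as _;

#[derive(Debug, PartialEq)]
struct IndexEntry {
    url: String,
    rel: String, // e.g. "in-reply-to", "like-of"
}

/// Serialize entries into the index.txt body: one "URL<TAB>rel" per line.
fn write_index(entries: &[IndexEntry]) -> String {
    let mut out = String::new();
    for e in entries {
        let _ = writeln!(out, "{}\t{}", e.url, e.rel);
    }
    out
}

/// Parse an index.txt body back into entries, skipping malformed lines,
/// which is what a backup-driven refresh would iterate over.
fn parse_index(text: &str) -> Vec<IndexEntry> {
    text.lines()
        .filter_map(|line| {
            let (url, rel) = line.split_once('\t')?;
            Some(IndexEntry {
                url: url.to_string(),
                rel: rel.to_string(),
            })
        })
        .collect()
}
```

The round trip (write, then parse) is what makes this usable as a recovery path: the file alone is enough to rebuild the relationships.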

I'm excited for all of these changes to work across the temporary destination storage approach—people would be able to get the full Micropub experience with little commitment, or sync it to Dropbox or the like to have file backups and edit things in real time. I think allowing for two-way sync (editing a file triggers an update in the database) would be a nice thing to have in the future and would allow for scripting (generating a bunch of files for entries backfed from an archive that can then be imported into the database). To be honest, if I continue to treat the database as mainly a caching and indexing layer over the filesystem, then it doesn't seem odd to have a "duplicate" write of everything (to SQLite and the filesystem). I just don't want it to result in duplication of storage.
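The "duplicate write" idea can be sketched with a small store abstraction: the filesystem is written first (it's the source of truth), then the database is mirrored. Everything here—the `Store` trait, the stand-in types, `dual_write`—is a hypothetical shape under that assumption, not Koype's real API.

```rust
use std::collections::HashMap;

trait Store {
    fn put(&mut self, key: &str, value: &str);
    fn get(&self, key: &str) -> Option<String>;
}

/// Stand-in for the filesystem layer (the source of truth).
#[derive(Default)]
struct FileStore(HashMap<String, String>);

/// Stand-in for the SQLite caching/indexing layer.
#[derive(Default)]
struct DbStore(HashMap<String, String>);

impl Store for FileStore {
    fn put(&mut self, key: &str, value: &str) {
        self.0.insert(key.to_string(), value.to_string());
    }
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
}

impl Store for DbStore {
    fn put(&mut self, key: &str, value: &str) {
        self.0.insert(key.to_string(), value.to_string());
    }
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
}

/// The "duplicate" write: persist to the filesystem first, then mirror
/// into the database so indexed queries stay fast without the database
/// ever being authoritative.
fn dual_write(fs: &mut impl Store, db: &mut impl Store, key: &str, value: &str) {
    fs.put(key, value);
    db.put(key, value);
}
```

Two-way sync would then just be the reverse arrow: a file watcher calling `db.put` when a file changes.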

Heh, I need to add proper pagination support back into this site. Or at least remove the limit per page. Working on paging logic is (not) going to be fun.
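The paging math itself is the easy part; here's a tiny sketch of it, assuming 1-indexed pages and an offset/limit query underneath (both assumptions on my part).

```rust
/// Translate a 1-indexed page number into (offset, limit) for a query.
/// Page 0 is clamped to page 1 via saturating subtraction.
fn page_bounds(page: usize, per_page: usize) -> (usize, usize) {
    let offset = page.saturating_sub(1) * per_page;
    (offset, per_page)
}

/// Total number of pages for a collection, rounding up so a partial
/// final page still counts.
fn page_count(total: usize, per_page: usize) -> usize {
    if per_page == 0 {
        return 0;
    }
    (total + per_page - 1) / per_page
}
```

The fun (not) part is everything around this: link generation, edge pages, and keeping the count query in sync with the listing query.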

Finally getting to adding a pipe-parsing system of sorts to my site on the Micropub side of things. There isn't any normalization on the output of these services, so I'm going to have them try looking first for an MF2 item, then an MF2 document, then a JF2 document, and fail after that. It's not like we inject an @context into these documents; it'd be amazing to have schema checking. I think I might do that myself.
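One way to sketch that fallback cascade: classify the incoming document by its shape and fail if nothing matches. A real implementation would parse the JSON properly; the string checks below are a deliberately naive stand-in, and the enum names are my own invention.

```rust
#[derive(Debug, PartialEq)]
enum PipedDoc {
    Mf2Item,
    Mf2Document,
    Jf2Document,
}

/// Naive shape detection for piped-in documents. Checked in an order
/// that works for string matching: MF2 documents have a top-level
/// "items" array (which also contains items, so it must be tested
/// first), MF2 items pair "type" with a "properties" object, and JF2
/// flattens properties to the top level alongside "type".
fn classify(json: &str) -> Result<PipedDoc, &'static str> {
    if json.contains("\"items\"") {
        Ok(PipedDoc::Mf2Document)
    } else if json.contains("\"type\"") && json.contains("\"properties\"") {
        Ok(PipedDoc::Mf2Item)
    } else if json.contains("\"type\"") {
        Ok(PipedDoc::Jf2Document)
    } else {
        Err("unrecognized document shape")
    }
}
```

The `Err` arm is the "failing after that" step; schema checking would replace these heuristics with actual validation.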


Made excellent progress on this. I'm becoming more aware of the need for JF2 support to handle the other places that things can be piped in from. It'll also become more necessary as I begin working on the Microsub facets of Lighthouse, though I've managed not to focus on just that. I can also see it simplifying the logic for my site's templates if I use it, but I'll have to see.


Or rather, it's not able to show my main feed because of some SQL issue. "dbtax", anyone?

Yeah, so it looks like I ran into rusqlite/rusqlite#433 on GitHub. Very annoying, and I need to find a way to either add a test for this or add an update trigger to fix up these values. I understand the former more, so I'll do that.

Made lots of changes to my site's parts this evening. Most of it is to lean more on protocols to get me what I need. Eventually, this site would essentially become a feed renderer of the content I store in my Micropub server (and other places). Very exciting.


Definitely need to figure out this bug that's happening when I try to hit some parts of the p3k suite of tools. That's no good. But also subtle incentive for me to work on my own implementations!

It's probably going to take me a long time, but I do want to import the stuff from my older site's implementation into this current one. I still have it, I just haven't made it compatible with this one. It's mainly JSON, though.

Working with Micropublish is making me realize that I need to do a bit of HTML sanitization when I get HTML content. It wrapped the article's HTML in a div tag and that broke my layout a bit.
Also, having the generator and syndication information populate at a later time is nice. I think it's the same for Webmentions when they're sent. I might just need to make a log view of actions/jobs for Koype so I can see what happens when.
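The div-wrapping case above can be handled with a tiny unwrapping pass before storing the content. This is not full sanitization—a real fix would run the HTML through a proper sanitizer—and the function name and bare-`<div>` assumption are mine; a `<div>` carrying attributes would need real parsing.

```rust
/// Strip a single bare top-level <div> wrapper from incoming HTML,
/// returning the content unchanged if no such wrapper exists. Only
/// handles the exact `<div>…</div>` case; `<div class="…">` would
/// require an actual HTML parser.
fn unwrap_top_level_div(html: &str) -> &str {
    let trimmed = html.trim();
    if let Some(inner) = trimmed.strip_prefix("<div>") {
        if let Some(body) = inner.strip_suffix("</div>") {
            return body.trim();
        }
    }
    trimmed
}
```

Running this at ingestion time keeps the layout assumptions (no stray wrapper elements) intact regardless of which client posted.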

I updated both Shock and Koype, so now categories can be paginated, which makes them into feeds! Now to actually generate feeds (RSS and the like) myself instead of using Granary.
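Rendering a category feed as RSS 2.0 by hand is mostly string assembly; here's a hedged sketch of that, with the item fields and function shape being assumptions. Note it skips XML escaping, which real code would need for titles containing `&` or `<`.

```rust
struct FeedItem {
    title: String,
    link: String,
}

/// Build a minimal RSS 2.0 document for a paginated category feed.
/// No XML escaping is done here; production code must escape text nodes.
fn render_rss(channel_title: &str, channel_link: &str, items: &[FeedItem]) -> String {
    let mut xml = String::new();
    xml.push_str("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
    xml.push_str("<rss version=\"2.0\"><channel>\n");
    xml.push_str(&format!("<title>{}</title>\n", channel_title));
    xml.push_str(&format!("<link>{}</link>\n", channel_link));
    for item in items {
        xml.push_str(&format!(
            "<item><title>{}</title><link>{}</link></item>\n",
            item.title, item.link
        ));
    }
    xml.push_str("</channel></rss>\n");
    xml
}
```

Since categories are now just paginated feeds, the same renderer could serve any of them given a title and an item list.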