I'm curious about syncing my reading progress to places like Open Library. Like most people, I still send it to Goodreads, especially because of the strong social aspect; I've gotten some good recommendations from peers on there. But I'd like to also send it to Open Library (and eventually Bookwyrm, though I'm inclined to hold off on that until I begin making my site more directly compatible with ActivityPub, versus the syndication approach I'm taking now).
Been spending more and more time codifying some standards into a library. This came from wanting to refactor parts of Sele and seeing a chance to upstream some of those changes. The long-term goal is to have a set of minimal implementations of clients and servers for Webmention, WebSub, Microsub, and Micropub. From there, it's a playground.
I did a `tree -a | grep ".pem.pub" | wc -l` in my directory of SSH keys and got back 30 different keys. I'm definitely the type to make and toss them. High time I looked into using a YubiKey (or something similar but internal) to manage this, lol.
Bad idea: making a service that mounts remote file systems from places like Vultr, Bunny, and DigitalOcean onto the Steam Deck to increase the amount of local storage it has.
- Computing how much time I've spent on a call in a month and showing it in my about page
- Recording the number of messages I've sent and received over the same period.
- Showing if I am actively in a call or was just in one within a five-minute window.
It's meant to be mainly something for me to tinker with and learn more about the XMPP APIs (as well as finding a way to bridge them to Micropub), so it should be fun!
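None of this exists yet; as a sketch of the kind of aggregation I have in mind (the call-log shape here is hypothetical, just start/end pairs, not any real XMPP API):

```python
from datetime import datetime, timedelta

# Hypothetical call log: (start, end) pairs. A real version would come
# from whatever history the XMPP client exposes.
CALLS = [
    (datetime(2022, 6, 1, 9, 0), datetime(2022, 6, 1, 9, 45)),
    (datetime(2022, 6, 3, 14, 0), datetime(2022, 6, 3, 14, 30)),
]

def minutes_on_calls(calls, year, month):
    """Total minutes spent on calls that started in the given month."""
    return sum(
        (end - start).total_seconds() / 60
        for start, end in calls
        if start.year == year and start.month == month
    )

def recently_in_call(calls, now, window=timedelta(minutes=5)):
    """True if a call is ongoing or ended within the last five minutes."""
    return any(end >= now - window for _, end in calls)
```

The message counts would be the same kind of fold over a log of sent/received stanzas.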
Will be handy when I decide to build calendar views for my site.
I think what I'd like to do in the far future: when I link out to specific platforms that require some sort of munging, my site would link visibly to a page representing how my site shows that content before sending visitors all the way to the original. It'd have proper attribution and the like, as well as a way to ask for removal from my site (in which case I'd probably reach for a publicly archived version). I might only enable this, of course, for sites with no hint of MF2 support, with a CTA asking them to mark up such pages. Gotta keep plugging!
Going to start reading CLI Apps in Rust to begin working on a CLI tool for working with IndieWeb algorithms! First thing I'd probably want to implement is a way to grab a token using IndieAuth and then immediately revoke it.
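For the revoke half, the IndieAuth spec describes a form-encoded POST back to the token endpoint with an `action=revoke` parameter; a minimal sketch of building just that request body (no HTTP client, and the endpoint URL would come from discovery):

```python
from urllib.parse import urlencode

def revocation_request(token: str) -> tuple[str, str]:
    """Build the body and content type for revoking an IndieAuth
    access token. Per the spec, this gets POSTed to the token
    endpoint as application/x-www-form-urlencoded."""
    body = urlencode({"action": "revoke", "token": token})
    return body, "application/x-www-form-urlencoded"
```

Grabbing the token first is the usual authorization-code dance; the CLI would fire this off immediately afterward.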
Gonna work on implementing a Micropub media endpoint for my site. I've been leaning a lot on remote URLs. Interestingly enough, the framework I use does not support multipart uploads out of the box (it's very bare-bones), so there's a wee bit of coding I'll have to do. Another chance to give back, though!
Once I can get a sense of “authentication” on my site (being able to query for someone's identity), I think I'm going to start locking posts. I might partly automate it with some constraint, but I do think there's a slight benefit in not allowing read access to everything I've posted.
I've been spending a lot of time working on the Rust libraries for IndieWeb things (namely the main collection of logic and the Microformats2 library). Now I want to start working on the desktop application that'll begin to consume a lot of these libraries.
Wrote something about how I'm using SQLite virtual tables in my site. https://jacky.wtf/2022/6/fDu9
Implementing Property Searching for Micropub
One of the proposed extensions to Micropub that I found fascinating is an extension to querying for a post list. It'd allow one to find a list of posts in their Micropub installation with any sort of querying. Koype currently supports looking up the MF2 of channels, categories, and entries. However, what I really wanted was the fields mentioned by Grant: the ability to filter over the properties of entries. That would let me check whether I've already interacted with something in my proposed social reader. Implementing this was not easy, though. I had to make a virtual table in SQLite, scan the source MF2 of all the entries on disk that are queried for (which was all of them, most of the time, in local tests!), and add two custom functions to SQLite to properly look for the wanted values in the right places.
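My implementation is Rust against a virtual table, but the custom-function half of the idea translates to any SQLite binding. A stand-in sketch using Python's `sqlite3.create_function` and an ordinary table (the names mirror this post; the actual schema differs, and here I'm storing just the properties map as JSON):

```python
import json
import sqlite3

def mf2_json_has_value_in_property(properties_json, prop, needle):
    """Custom SQL function: does `prop` hold a value equal to, or
    prefixed by, `needle`? Mirrors the behavior described below."""
    values = json.loads(properties_json).get(prop, [])
    return any(
        isinstance(v, str) and (v == needle or v.startswith(needle))
        for v in values
    )

db = sqlite3.connect(":memory:")
db.create_function("mf2_json_has_value_in_property", 3,
                   mf2_json_has_value_in_property)

# An ordinary table standing in for the virtual one.
db.execute("CREATE TABLE epv (uid TEXT, properties TEXT)")
db.execute("INSERT INTO epv VALUES (?, ?)",
           ("post-1", json.dumps({"like-of": ["https://lobste.rs/s/abc"]})))

rows = db.execute(
    "SELECT uid FROM epv "
    "WHERE mf2_json_has_value_in_property(properties, 'like-of', "
    "'https://lobste.rs')"
).fetchall()
```

The virtual table's job in the real thing is to make `epv` rows appear from the MF2 files on disk instead of an actual table.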
Implementing the Virtual Table for Querying
I've read that SQLite's documentation is good. I did not find that to be the case here, mainly because this begins to get into the plumbing of SQLite, and I'm still learning my way around it. I'd also usually peek into the source code, but reading C is not something I've done in a long time; I most definitely need a refresher. Instead, I leaned on the example code provided by the Rust library I use to interact with SQLite. Reading that, together with the documentation for virtual tables, got me to a working table implementation. The goal was to provide a mapping between the properties stored on disk for an entry and its corresponding ID. This enabled queries like the following:
SELECT entries.*
FROM entries
JOIN epv ON epv.uid = entries.uid
WHERE mf2_json_has_value_in_property(epv.properties, 'like-of', 'https://lobste.rs') IS TRUE
ORDER BY entries.published_at DESC
This lets me look up any post that has a link to Lobsters as a like. What's not described by the name is that this function checks if the value of the property is either equal to or begins with the provided string. This kind of query could be translated to something like the following:
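Roughly, over an in-memory list of MF2 entries, that query comes out to something like this (a sketch; `published_at` and the entry shape here are assumptions):

```python
def liked_lobsters_posts(entries):
    """Newest-first entries whose 'like-of' property has a value that
    equals, or starts with, the Lobsters URL."""
    matches = [
        e for e in entries
        if any(
            isinstance(v, str)
            and (v == "https://lobste.rs" or v.startswith("https://lobste.rs"))
            for v in e.get("properties", {}).get("like-of", [])
        )
    ]
    return sorted(matches, key=lambda e: e["published_at"], reverse=True)
```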
This is the kind of query I'd have to be careful with. If I'm looking up a URL, I'd probably want to match against the authority component, but that'd require parsing every string to see if it's a valid URL first and then doing that match. However, I now have a way to look up every link from Twitter that I've liked, replied to, or engaged with. I have a ticket open to see if I can add some sort of hint for full-text searching of properties; that would let me search the contents of things I've posted so I can check for deep links.
The concept of “virtual tables” isn't unique to SQLite (see Postgres's wiki or MariaDB's knowledge base for more info). However, what I'm doing should definitely make your nose turn up a bit: I'm reading a pile of JSON files from disk (about 2,100 for my site at the time of writing, with about 5 – 15 new posts added each day) every time a request is made to my homepage, making it a DoS waiting to happen if you just hit Refresh in your browser a few times! I've added logic to short-circuit the request on the site and applied an eager connection timeout in SQLite (I had it set to five minutes before, don't ask), so it'll just return empty lists (at best) if things don't resolve in time. I'd love ideas on caching or optimizing the lookup, as well as on how I store the entry properties. A bit of the conventional knowledge I have around these approaches is a bit moot since
See it live!
My website uses Koype as its CMS, and this feature has been available since v0.1.4. It powers the feed of items at the bottom of my homepage, which shows things I've been doing on GitHub as well as things I've interacted with on Twitter, all from my site. I'm hoping to tinker with this more over time to see what other kinds of queries I can build. I'm also hoping more people implement such a query, because it provides a cheap affordance (confirmation of pre-existing data) that can be helpful in social readers.
The astute computer scientist in you probably noticed that this kind of solution doesn't run in constant time. A poor implementation could actually lock up my site (databases tend to do that)! I do have logic for timing out database calls, though. I'm also curious about adding some benchmarks to see whether my naive approach is faster than something like JMESPath. Frankly, I'd love to see something like JMESPath ship as an extension to SQLite's existing JSON functions.
Some other changes: the indieweb Rust library now supports capturing these properties (as well as the `exists` and `not-exists` fields), so any client or server using that library can pluck them out!
I need to look into making an integration to update my remote h-cards on other platforms. Changing my nickname would be too noisy (as that'd change my nickname on Twitter, GitHub or what have you). Tricky and finicky. The easiest to change would be my photo, for sure.
Going to spend tonight working on making a native IndieWeb app for GNOME in Rust. Doesn't have to do anything fancy; the baseline would be signing in to a site as an IndieAuth client, confirming which scopes were approved, and showing the returned profile and the endpoints the site exposes.
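The endpoints part is mostly rel discovery; a rough sketch of plucking the older-style `<link rel>` endpoints out of a profile page's HTML (stdlib parser only, ignoring HTTP `Link` headers and multi-valued `rel` attributes, which a real client has to handle):

```python
from html.parser import HTMLParser

# The rel values an IndieAuth/Micropub client typically looks for.
INDIEAUTH_RELS = {"authorization_endpoint", "token_endpoint", "micropub"}

class RelFinder(HTMLParser):
    """Collect href values for the rel links IndieAuth clients care about."""
    def __init__(self):
        super().__init__()
        self.endpoints = {}

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel, href = attrs.get("rel"), attrs.get("href")
        if rel in INDIEAUTH_RELS and href:
            self.endpoints[rel] = href

def discover(html: str) -> dict:
    finder = RelFinder()
    finder.feed(html)
    return finder.endpoints
```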
Is there a tool that can record one's battery life on a periodic or per-level basis? Preferably for UNIX-y systems. I want to keep a small report on the health of my devices. I think I can probably parse the information from `upower`, if it's installed.
Running `upower -i /org/freedesktop/UPower/devices/battery_BAT1` on my laptop gives me the following:
❯ upower -i /org/freedesktop/UPower/devices/battery_BAT1
  native-path:          BAT1
  vendor:               NVT
  model:                Framewo
  serial:               0064
  power supply:         yes
  updated:              Mon 30 May 2022 04:07:08 PM EDT (0 seconds ago)
  has history:          yes
  has statistics:       yes
  battery
    present:             yes
    rechargeable:        yes
    state:               charging
    warning-level:       none
    energy:              45.3684 Wh
    energy-empty:        0 Wh
    energy-full:         50.2964 Wh
    energy-full-design:  55.0088 Wh
    energy-rate:         15.015 W
    voltage:             17.377 V
    time to full:        19.7 minutes
    percentage:          90%
    capacity:            89.8096%
    technology:          lithium-ion
    icon-name:          'battery-full-charging-symbolic'
  History (charge):
    1653941204	90.000	charging
  History (rate):
    1653941226	15.015	unknown
    1653941222	15.030	charging
    1653941216	15.184	charging
    1653941214	15.169	charging
    1653941212	15.138	charging
    1653941161	14.235	charging
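Those `key: value` lines split cleanly on the first colon, so a logger could get away with something like this (a sketch; the field names are just how `upower` prints them on my machine):

```python
def parse_upower(output: str) -> dict:
    """Pull 'key: value' pairs out of `upower -i` output.
    Lines without a colon (the history rows) are skipped."""
    info = {}
    for line in output.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    return info

def percentage(info: dict) -> float:
    """Charge percentage as a number, e.g. '90%' -> 90.0."""
    return float(info["percentage"].rstrip("%"))
```

From there it's a cron job appending `(timestamp, percentage, state)` rows into SQLite.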
I need to work on integrating Webmentions with my website more. Mainly to see who's replying to me (I have to check manually, which is okay because it prevents the “notification jitters”), but also to help with contextualizing content.
This kind of makes me want to keep track of this for no reason other than curiosity. I wonder if this is something I should put in my editor as a plugin (to collect these numbers and then throw them into SQLite for number crunching later).
Need to use this for my site's color palette: https://www.youtube.com/watch?v=UWwNIMHFdW4. Might write up a recap for research and reference.
Manton has a book that's available online and in print at https://book.micro.blog/. I've been reading https://book.micro.blog/interview-tantek-aaron/ and noticing a lot of things that are cyclical in the IndieWeb space.
Namely, getting from “I need a website to be on the IndieWeb” to “I use this as my presence online”. Hosting is a solved problem, but it requires quite a bit of up-front investment and the funds to keep things running for people. I still think we need some sort of “linting” tool for sites, to make sure they at least allow for some baseline discovery and interoperability. Angelo's working on something like this, though it doesn't cover everything. I might end up building it to help people onboard when they use Lwa, but it'd also help explain what a site needs before an app can work with it, or at least before it can interoperate with other sites and apps. It's a bit of the XMPP problem, but at least XMPP has a registry and a way to check your implementation for support via https://compliance.conversations.im/.