logfile.ch now with Mastodon-based comments
All new posts on logfile.ch will now include a link to a Mastodon toot referencing the post from this blog’s new Mastodon account at @logfile@mstdn.io. Replies to this toot will be displayed in the Comments section at the bottom of each blog post. The replies are retrieved directly using JavaScript in your browser from mstdn.io, which means that I can keep my blog as a static web site and do not need to introduce additional technical complexity.
How does it work? The whole setup is surprisingly simple: all my blog posts get a new mastodon parameter in their YAML front matter for Jekyll, which expects the ID of the toot referencing the post from the blog’s mstdn.io account. In my blog post template, I check for this parameter and, if it is set, display the Comments section with a little JavaScript that retrieves any replies to this toot:
<script src="/assets/js/purify.min.js"></script>
<script type="text/javascript">
  fetch('https://mstdn.io/api/v1/statuses/101065476986200703/context')
    .then(function(response) {
      return response.json();
    })
    .then(function(data) {
      if (data['descendants'] &&
          Array.isArray(data['descendants']) &&
          data['descendants'].length > 0) {
        document.getElementById('mastodon-comments-list').innerHTML =
          data['descendants'].reduce(function(prev, reply) {
            var mastodonComment = `<div class="mastodon-comment">
                <div class="mastodon-comment-content">${reply.content}</div>
                <div class="mastodon-comment-footer">[
                  <a href="${reply.account.url}" rel="nofollow">
                    ${reply.account.acct}
                  </a> |
                  <a href="${reply.uri}" rel="nofollow">
                    ${reply.created_at.substr(0, 10)}
                  </a>
                ]</div>
              </div>`;
            return prev + DOMPurify.sanitize(mastodonComment);
          }, '');
      }
    });
</script>
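For reference, here is roughly what the front matter and the template check might look like. This is a sketch: the mastodon parameter name matches the description above, but the exact markup depends on your Jekyll layout, and the element ID must match the one used in the script.

```
---
layout: post
title: "Example post"
mastodon: 101065476986200703   # ID of the toot referencing this post
---
```

```
{% if page.mastodon %}
  <h2>Comments</h2>
  <div id="mastodon-comments-list"></div>
  <!-- the script from above goes here, with the toot ID templated in
       as https://mstdn.io/api/v1/statuses/{{ page.mastodon }}/context -->
{% endif %}
```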
There is only one downside to this approach: I can only create the toot (in fact, I automate the creation of the toot using the great feed2toot tool and my blog’s RSS feed) after the blog post has been published, but I need its ID as part of the blog post. I have not yet found a good, easy solution for this chicken-and-egg problem, so I go through an additional hoop: I publish the blog post first, wait for the toot to appear, and then add the toot ID to the post in a second step.
lib.reviews - Open source, open data reviews on anything
lib.reviews is an open platform to post reviews and a one- to five-star rating on basically anything. Anything with a URL, that is, and sensibly you are not allowed to review individual persons unless they act as a business. The system is actually pretty clever: every review is attached to an object, which is identified by one or more URLs. And if the URL points to a supported source – at the moment primarily Wikidata – the system will automatically pull in metadata such as a description from the data source.
I also like that the site is actually quite easy to use and uncluttered, and built with full internationalisation in mind: the interface is available in many languages and supports non-English reviews as well. The only UI issue I have is that the language your review gets assigned is based on the interface language you have chosen. This is not very intuitive and leads to quite a few mislabelled reviews: the reviewer might be using lib.reviews in e.g. Portuguese but write an English review, not noticing that the review will be labelled as being in Portuguese. (It also makes writing reviews in multiple languages a hassle, as you always have to switch the interface language.)
I believe the future for Internet reviews should lie in decentralised networks, either federated (e.g. using ActivityPub) or fully peer-to-peer (e.g. built on top of Secure Scuttlebutt). But lib.reviews, by being open source and open data and being available in the here and now, can be an important first step towards that goal: by being open source, the platform itself can evolve towards enabling decentralisation. And even if it doesn’t, by being open data the reviews posted on lib.reviews can form initial content for any future platform.
I have set up an account on lib.reviews which I will use to cross-post reviews I write on this blog. To avoid spam, lib.reviews is invite-only at the moment. If you want to try it out and would like an invite, feel free to get in touch.
Run multiple MPlus models on Linux
Are you using MPlus, the statistical package, on Linux and have become annoyed with the limitations of its command line client? In particular, have you ever wondered how to run multiple models with one command? If that is the case, you might want to consider putting the following into a shell script called, for example, mplus_run.sh:
#!/bin/bash
for file in "$@"
do
  /opt/mplus/8/mplus "$file"
  kate "${file%.inp}.out" &
done
This script will run MPlus for each input file you specify on the command line, and you can use shell expansions like *.inp to run it on many files without having to specify them separately. By default, the script will open each output file generated by MPlus in a text editor in the background. I am using kate, KDE’s text editor, to view the generated output files, but you can use any other graphical text editor: just replace kate with your editor command in the above script. Note that the editor needs to be able to run in the background, so you can’t directly use a non-GUI editor such as vim, nano or emacs (you could, however, open a new console window in your graphical environment running the editor). If you do not want to open the MPlus output files automatically, you can also just remove the kate line.
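The output filename in the script is derived with standard shell parameter expansion: ${file%.inp} removes the .inp suffix, to which .out is then appended. A quick illustration:

```shell
file="model1.inp"
# "%.inp" strips the shortest matching ".inp" suffix from the end
echo "${file%.inp}.out"   # prints: model1.out
```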
You need to run chmod 755 mplus_run.sh to make the script executable. You can either put it into your PATH and run it as mplus_run.sh, or you can put it anywhere and create an alias so that you can run it simply as mplus. Edit your ~/.bashrc or ~/.alias file (depending on your Linux distribution) and add:
alias mplus='/path/to/where/you/put/mplus_run.sh'
That’s it. Now you can navigate to a folder full of MPlus models in various .inp input files and simply call:
mplus_run.sh *.inp
The shell script will run MPlus for each model and open the generated output file in the text editor you specified.
Disable USB autosuspend on Linux
Are you running Linux and your mobile phone won’t charge when plugged into your laptop? Or your mouse suddenly stops moving until you press a mouse button? Chances are it is USB autosuspend that is to blame. USB autosuspend is a power-saving feature in recent Linux kernels that powers off USB devices if the kernel thinks that those devices aren’t needed right now. Unfortunately, if you are trying to charge your iPhone, you are usually not using it with your computer, so chances are that your kernel will switch off power - making charging rather difficult. It can also cause problems with some USB mice which will stop responding after a short while until you press a mouse button to tell them to power on again.
The following instructions have been tested on OpenSuSE, but should not be too distro-specific. You find all your plugged-in USB devices in the virtual filesystem /sys/bus/usb/devices. In this folder, you will find one folder per device, and in each per-device folder you will find the file power/control. If you want to review the current setting, you can output it like this:
cat /sys/bus/usb/devices/<your device>/power/control
Valid values are auto for automatic autosuspend and on for disabling autosuspend (keeping your device on all the time). If you have problems, you want to set this value to on like this:
echo on > /sys/bus/usb/devices/<your device>/power/control
If you do not want to go through the hassle of finding the right device folder (which you can find using lsusb, by the way; see this useful post) and you do not need the power saving (e.g. because you are connected to a power supply), you can also disable autosuspend for all your USB devices with the following shell script:
#!/bin/bash
# Needs to be run as root to write to sysfs.
for dev in /sys/bus/usb/devices/*/power/control; do
  echo "$dev"
  echo on > "$dev"
done
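If you prefer to target a single device instead, the following sketch lists each sysfs device folder together with its vendor:product ID, which you can match against the output of lsusb (the folder names and IDs shown will of course vary per machine):

```shell
#!/bin/bash
# List every USB device folder with its vendor:product ID as shown by lsusb.
for dev in /sys/bus/usb/devices/*; do
  if [ -f "$dev/idVendor" ] && [ -f "$dev/idProduct" ]; then
    echo "$dev: $(cat "$dev/idVendor"):$(cat "$dev/idProduct")"
  fi
done
```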
Note that this script needs to be run again every time you plug in a device (and after every reboot). For a permanent solution, you need to use specific USB management tools or the solution given here.
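One common route to a permanent solution is a udev rule that sets power/control whenever the device appears. This is only a sketch: the filename and the vendor/product IDs below are hypothetical, and rule handling may differ between distributions.

```
# /etc/udev/rules.d/50-usb-autosuspend.rules (hypothetical filename)
# Keep a specific device powered on, matched by its vendor and product ID
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="05ac", ATTR{idProduct}=="12a8", ATTR{power/control}="on"
```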
Mastodon - Which instances to choose?
Due to the demand in the last few days, many of the earlier Mastodon instances — including the “flagship” instance at mastodon.social — have been closed temporarily to avoid getting overloaded. Thanks to federation, this is not a big deal: there are literally hundreds of alternatives out there. However, this brings a new challenge: which instance to choose?
This is not a trivial matter, as has also been pointed out elsewhere: you trust your instance operator not to read your private toots and details, and not to go away suddenly, as that would mean you lose your account and need to start from scratch.
There is tooter.today, which recommends an instance to choose, but frankly, in my opinion their choice of prioritising small instances (of about 60 active users) is not the best approach to address the concerns above. So, which instances do I recommend for new users?
Currently, I would choose one of the following:
- social.weho.st - They are a great team from the Netherlands working on an ambitious project to provide Matrix, Nextcloud, VPN etc. for their members and have now also set up a public Mastodon instance.
- mastodon.zaclys.com - This is the Mastodon instance from Mère Zaclys, a French non-profit providing various open web services (Nextcloud, photo galleries etc.) to their members that has been around for a long time already.
- social.tchncs.de - This one technically does not belong on this list, as it is also currently closed for registrations. As its admin has hinted at reopening registrations soon, however, I still wanted to recommend it. The whole tchncs.de universe has been around for a while now as well, and seems like a good choice to me.
This list is not based on any hard stats or a fancy ranking formula, but on my personal “gut feeling”. I would thus recommend taking it with a grain of salt, but if one Mastodon instance seems just like any other to you, maybe this recommendation can help.
Mastodon, the new cool kid in town
In case you have missed it: there is quite a hype going on right now about Mastodon, which apparently — according to Motherboard — “is like Twitter without Nazis”.
In fact, Mastodon is a new (open-source) implementation of OStatus server software. This means it can federate with the existing GNUsocial universe out there (which never took off to the same degree) but is a completely new implementation with its own client API and a web interface that looks similar to the original TweetDeck, the Twitter client. Gargron (Eugen Rochko), its lead developer, focusses on making Mastodon easy to install (it provides Docker packages) and its interface easy to use. This has led to a proliferation of public instances. And these are badly needed: the sudden mainstream interest has led many instance operators (including Gargron himself on the mastodon.social flagship instance) to temporarily close their doors to new user registrations.
As Mastodon has its own API, GNUsocial mobile apps won’t work on Mastodon instances. However, with Tusky for Android and Amaroq for iOS, there are already two very capable mobile apps. This Mastodon 101 has some more information to get you started.
Let’s see how this sudden interest will continue in the coming weeks. PCMag believes Mastodon will go the route of all the other forgotten social networks out there. Maybe. On the other hand, even Jack Dorsey has now (implicitly) tweeted about the new kid on the block:
Thank you. ❤️ you too https://t.co/iZNruLx3Ml
— jack (@jack) April 6, 2017
I will take this as a good sign: Maybe this time really will be different and an open, decentralised social network will finally push into the mainstream. If you want to try it out, you can follow me on GNUsocial as @arx@gnusocial.de from your favourite Mastodon instance.
"The Deck, Adieu"
Yesterday, news made the rounds that The Deck, the ad network for indie bloggers that took pride in being unobtrusive and not spying on its users, announced it would shut down. Today, Daring Fireball published its obituary:
In early 2006, Jim Coudal started The Deck, with Jeffrey Zeldman and 37signals (now Basecamp). I joined in early February, making Daring Fireball the fourth site in the network. Andy Baio, Jason Kottke, and The Morning News joined soon thereafter. In March, we had a group dinner in Austin during SXSW. I remember a palpable sense of accomplishment. I remember thinking, This is going to work.
How can indie blogging be sustained in the future? No one seems to have a definitive answer yet. Will it be micropayments through something like Brave or the upcoming Flattr Plus? Or will blogging for money become a thing of the past?
Jules Verne on IPFS
To test my IPFS node setup a bit further, I decided to pin some larger files in addition to this blog (which is rather lightweight). So here you go: if you are interested, you can now retrieve some Jules Verne audiobooks (in the original French), courtesy of the awesome LibriVox project, via IPFS:
- Vingt mille lieues sous les mers at /ipfs/QmZ2KcDWhmN6kBEQRmLc2fTUV9SHghynFNtY1KKv7yrcaq
- Le tour du monde en quatre-vingts jours at /ipfs/QmSMA8BEQMaM4YQyc1G6H1ZYHxQBFR3fDBhK4iMMqYfgi3
- L’île mystérieuse at /ipfs/QmXG4sdeuMXqmz9bwpPUCr6HmiQdWZGEsTpQVFKtZTG3iR
- Voyage au centre de la terre at /ipfs/QmNWr3HYnBCWdb8u36yxKBrXPpmww6D6sMTHsiMZFgokUf
So far, my tiny little VPS is holding up really well.
Distill - Academic machine learning journal for the 21st century
Distill is a new, web-based, peer-reviewed journal for academic research on machine learning. As YC Research put it in their announcement:
The web has been around for almost 30 years. But you wouldn’t know it if you looked at most academic journals. They’re stuck in the early 1900s. PDFs are not an exciting form.
Distill is taking the web seriously. A Distill article (at least in its ideal, aspirational form) isn’t just a paper. It’s an interactive medium that lets users – “readers” is no longer sufficient – work directly with machine learning models.
Ideally, such articles will integrate explanation, code, data, and interactive visualizations into a single environment. In such an environment, users can explore in ways impossible with traditional static media. They can change models, try out different hypotheses, and immediately see what happens. That will let them rapidly build their understanding in ways impossible in traditional static media.
I have to say I’m pretty excited about this and really hope it catches on in other areas as well. They provide many tools and workflows for researchers to publish their work in a web format. The peer review uses GitHub for an open process: you create a new repository, develop your paper in it and submit it for review to Distill.
This is also my only gripe about Distill: you can keep your repository private while it’s still under review (for publishing it needs to be public and CC-BY-licensed), but due to their reliance on GitHub, this of course requires a paid account there. While $7 per month won’t be too much for most (though maybe not all) researchers, it’s still a bit disappointing for an otherwise very open project.
logfile.ch is now available on IPFS
You can now read this blog on IPFS, the new decentralised data store, using its IPNS hash:
/ipns/QmVvWyHBz86fr7oHXrcbYCnLgwd2SGDQad5WvZ3e176Ex9
The link points to the public IPFS gateway, but if you run your own node you should also be able to retrieve logfile.ch using the above hash (or you can use tools such as ipfs-firefox-addon or ipfs-chrome-extension to have your browser automatically rewrite public-gateway links so that they are served by your local node instead).