Geocaching was admittedly not very high on my priority list in recent years, and so I hunted the odd cache only with the help of my mobile phone and the c:geo app. As I did not log into the web interface of Geocaching.com, it escaped my attention that things had changed somewhere along the way.
So yesterday I realized that free accounts can now also download GPX files, which my eTrex 30 device processes without any further intervention. Just copy the files to the GPX folder on the SD card and you are good to go. In contrast to the previously available LOC files, the (XML) GPX files contain Groundspeak extensions (use "View Source" in a browser, as the schema displays only as an empty page) that the eTrex understands, making for a very nice geocaching experience:
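Getting the files onto the device is then literally just a copy. A minimal sketch, assuming the eTrex shows up under /media/$USER/GARMIN and uses the usual Garmin/GPX folder (both names depend on your desktop environment and device setup):

```shell
$ cp ~/Downloads/GC*.gpx /media/$USER/GARMIN/Garmin/GPX/
$ sync && umount /media/$USER/GARMIN
```

On the next power-up the eTrex scans the GPX folder and lists the caches in its Geocaching application.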
This makes the special handling described in my own (old) post eTrex 30, QLandkarteGT and Geocaching on GNU/Linux superfluous. That is just as well, since those instructions would have needed updating anyhow: the QLandkarteGT project is now dormant and has been replaced by the QMapShack project. Switching to Debian Buster forces me to relearn my workflows in this new tool, and I am not yet as fluent with it as I was with the old one. Somehow the top-level approach of working with tracks feels counter-intuitive for my tasks, but maybe I just need more practice to get over this.
But all in all this new(?) development makes the eTrex much more attractive for hunting caches. The one thing that has not changed is the positional accuracy of the eTrex compared to my mobile phone: the dedicated device is still clearly superior in this respect.
So Machine Learning has become one of the hottest topics of our time. Programs in the domain of "Convolutional Neural Networks" (CNNs), capable of recognizing cats in images, are described in an awe-struck tone as a form of "Artificial Intelligence", even though they are essentially simple operations applied at massive scale, yielding candidates sorted by probability. It is of course difficult not to be captured by the hype, but I still think it is worthwhile remembering some of the problems that are already known at this time.
With a few things coming together, I have finally decided to look into yet another programming language. Learning a new language is usually a good way to broaden one's fundamental understanding of programming, but this time I also have a specific goal in mind that I want to reach.
Only recently I have been looking at various 32-bit Cortex-M ARM micro-controllers with their vendor-provided C programming environments. Knowing modern programming languages like Haskell, OCaml or Go got me so frustrated with these environments that I decided to find a modern language that is also fit for the embedded space. And no, I don't think Python is the answer to every problem, just as Java wasn't. Ultimately I would like to see a functional programming language in this space, but I am open to other approaches that fit the embedded domain.
Being an FSF Emacs user for decades, I am also averse to these proprietary programming environments that may have some nice features but ultimately only try to lock the developer into a vendor-controlled ecosystem. Usually they are all Eclipse-based anyway, yet incompatible with one another. It gives me the shivers to install the n-th copy of Eclipse for yet another Cortex-M based micro-controller, and I really wonder how we ended up in this very frustrating assembly of walled-in environments.
So this is the start of a small series of posts that evaluate how easy it will be to set up a programming environment for Rust on GNU/Linux based on GNU Emacs. Ultimately I want to target micro-controllers that do not run a full GNU/Linux system, so the aim is also to get a cross-compilation setup working.
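As a first sketch of where this is headed, the cross-compilation step itself is pleasantly simple with rustup; the thumbv7em-none-eabihf target shown here matches Cortex-M4F/M7F parts with a hardware FPU, so pick the target that matches your controller:

```shell
$ rustup target add thumbv7em-none-eabihf
$ cargo build --target thumbv7em-none-eabihf
```

Of course a real bare-metal binary additionally needs #![no_std] code, a panic handler and a linker script describing the flash and RAM layout of the specific chip, which is where the real evaluation work starts.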
One of my laptops still features a 1366x768 display, which is very small for the GUIs encountered today. For example, the Xilinx Vivado GUI features a file menu so tall that the "Exit" entry is not even visible on this laptop. In some other dialogues, the buttons needed to complete an interaction are simply off-screen.
One option to cope with this is certainly to reduce the font size until things fit again, but a more elegant solution is to configure the Xorg server with a large "virtual desktop". In such a setup the logical desktop is larger than the physical display, and the LCD acts as a view port onto that logical desktop, following the mouse. On a modern Xorg server, such a setup can be enabled dynamically with the XRandR extension.
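A sketch of such a dynamic setup follows; the output name LVDS-1 and the 1920x1080 logical size are assumptions, and a plain xrandr call lists the real output names and modes:

```shell
$ xrandr                                 # list available outputs and modes
$ xrandr --output LVDS-1 --mode 1366x768 --panning 1920x1080
$ xrandr --output LVDS-1 --panning 0x0   # switch the panning off again
```

With panning active, tall dialogues like the Vivado menus become reachable by simply pushing the mouse against the screen edge.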
Late last year the ageing effects of my old GNU/Linux desktop system became so severe that ignoring them would soon no longer be an option. One of the hard disks developed problems a while ago, and although I was able to repair things to the point that the extended SMART self-test finally passed again without errors, the disk continued to report "unreadable (pending) sectors". The on-board USB controller has been complaining about one of the internal USB ports for a long time, and one DDR3 RAM module had to be replaced, as diagnosed by the wonderful Memtest86.
On top of all this, the Club 3D Radeon X1300PRO dual-DVI graphics card started to occasionally hang the whole system a few seconds after waking up from suspend. Or at least that is what I suspect, as the system never recorded error messages in the log files. The display output did visibly degrade, however, and I think I saw some drm error message flash by at some point.
Be that as it may, I was glad that I got the chance to replace the system in time and to gradually move my data off a functioning machine instead of attaching disassembled hard disks to a new one. The AMD Ryzen 5 2400G system from ARLT Computer, available without Windows, looked like very good value for money. Together with an HDMI-to-DVI adapter it should also easily power my two-DVI-monitor setup, and so it did not take long until one of these machines stood beside my desk for installation. As the Ryzen CPU was introduced early in 2018 and the Linux 4.9 kernel at the core of Debian Stretch was released at the end of 2016, it was clear that I needed to go for the yet unreleased Debian Buster, based on Linux 4.19.
All in all things went smoothly until I turned to the Xilinx tool chain, which had already given me the minor problems described in my previous post Fixing DocNav 2017.04 on Debian Stretch.
Updating one of my laptops from Ubuntu 16.04.5 LTS (Xenial Xerus) to Ubuntu 18.04.1 LTS (Bionic Beaver) was a completely straightforward process. Ubuntu certainly succeeded in creating an end user compatible GNU/Linux distribution.
The only thing that did not work for me immediately was hibernating the machine. This, however, is an important part of the workflow for this machine. As I carry it around a lot, I certainly don't want to shut down the system every time I move it. My usual work space, consisting of a terminal with four tabs, GNU Emacs and Firefox, is not very difficult to launch, but doing so multiple times a day is still a nuisance. Suspending the system to RAM is certainly fast and solves the startup problem nicely, but sometimes (e.g. over the weekend) it happens that the system loses power, and potentially some recent work with it.
Hibernation is the best compromise: the system dumps all active memory to swap and then shuts down completely, which solves the power problem. On the next boot, the system notices the presence of a hibernation image and re-initializes itself from it. Although a bit slower than suspend, this is still a lot faster than shutting down and rebuilding the whole session every time.
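On a systemd-based distribution the relevant pieces can be inspected and triggered from a shell; the swap partition /dev/sda2 below is just a placeholder for the actual resume device on your system:

```shell
$ cat /sys/power/state                 # must contain "disk" for hibernation
$ swapon --show                        # swap must be large enough to hold RAM
$ grep GRUB_CMDLINE /etc/default/grub  # should pass resume=/dev/sda2 (placeholder)
$ sudo systemctl hibernate             # trigger hibernation manually
```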
A few years ago I found out that my local waste disposal company offered an Android app, mymüll.de, capable of reminding me the day before each collection. Back then it was easy to install, and it saved me a few times from forgetting to put out the garbage cans, so I did not think any further of it. Only when a later update of the app started to request privileges that were surely not needed for a simple reminder service did I revisit the topic.
In the meantime I have an established calendaring solution in the form of my own NextCloud server, and I wondered why it should not be possible to import the collection dates into a shared calendar. Most apps on Android nowadays seem to exist for their data collection features rather than for their actual functionality, so getting rid of yet another one is a worthwhile aim. Although most people do not seem to care, I for my part have decided not to install apps that require privileges unrelated to their intended task.
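For a single reminder entry, the upload to a NextCloud calendar works over plain CalDAV with curl; the host, user name and calendar name below are of course made up, and since CalDAV stores one event per resource, a multi-event export is easiest imported once through the web interface:

```shell
$ curl -u myuser -X PUT \
    -H 'Content-Type: text/calendar; charset=utf-8' \
    --data-binary @reminder.ics \
    'https://cloud.example.org/remote.php/dav/calendars/myuser/waste/reminder.ics'
```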
It is not a coincidence that the first post in this still young year is about security. Since I realized last year how far the current internet with its "data capitalism" has strayed from its beginnings, I have done a lot of reading to understand the situation in more detail. Data and Goliath by the renowned security expert Bruce Schneier was a depressing eye-opener, and I am currently still reading his new book Click Here to Kill Everybody, which reiterates the problems in light of more recent events. If you are looking for more in-depth information from somebody with a long track record, I can strongly recommend both books as a starting point.
One of the lessons I recently learned is that real security is extremely hard to achieve, even for the best in the field. It is also pretty much impossible for a non-specialist to evaluate the security of any given solution without much more transparency into the security design process (threat models) and the implementation methods used to counter the identified threats (protocols, etc.).
The Linux kernel and the GNU tool chains, on the other hand, offer a variety of hardening features to protect a GNU/Linux system from certain classes of vulnerabilities. A tool to quickly evaluate which of those mitigations are in effect on a given system would therefore be a welcome addition to the toolbox, especially when custom build systems are involved rather than the well-known distributions.
The application security expert Tobias Klein provides a nice shell script that does exactly that. As checksec.sh only requires the Bourne-Again Shell (bash), it is immediately usable on pretty much every GNU/Linux system out there.
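Basic usage is a sketch along these lines; the options are taken from the script's help output, and the exact set may differ between versions:

```shell
$ ./checksec.sh --file /bin/bash   # RELRO, stack canary, NX, PIE, RPATH/RUNPATH
$ ./checksec.sh --proc-all         # the same checks for all running processes
$ ./checksec.sh --kernel           # kernel hardening configuration
```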
While at university in the previous millennium, I was very much in love with the Smalltalk language and especially with the Smalltalk system. Built on top of the core language, the system offers the possibility to study and modify every aspect of itself through the class browser and image persistence. Such a setup allowed a freedom of creativity that I have missed in programming ever since. The mythical Lisp machine must have been of a similar pedigree, but having never used such a machine, I can only guess from what I have read about it.
This nice blog post by J.V. Toups reminded me of those days, but also looks at the subject from a very different, yet interesting, angle: just how much of the potential of our digital computers do we lose every second by having divided the world into a tiny class of "producers" and a gigantic class of "users" holding all the rest of the world? Patronized in this way, the members of the latter class unfortunately have no idea how empowering computers really can be. If a system is designed to include the user, it takes only a small amount of time to learn the basics of how to "program" it.
Working in a Unix console enables the use of sophisticated pipelines in day-to-day administrative work. Sometimes such complex pipelines can only be assembled in an interactive, iterative series of prototypes. In such a situation up - the Ultimate Plumber - can speed up work significantly by shortening the iteration cycles.
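The workflow, sketched with an assumed up binary on the $PATH: pipe some sample input into up, edit the pipeline at its prompt while watching a live preview of the output, and exit to keep the result. A pipeline prototyped that way ends up as completely ordinary shell, for example:

```shell
# Interactive step (not run here): feed sample input to up and iterate,
# e.g.:  dmesg | up
# The finished pipeline is plain shell; this one sums a per-user column:
printf 'alice 3\nbob 1\nalice 2\n' \
  | awk '{sum[$1] += $2} END {for (u in sum) print u, sum[u]}' \
  | sort
```

Running the finished pipeline prints each user with the summed second column: alice 5 and bob 1.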