But since I switched one of my workstations to Ubuntu 22.04 I was unable to log in using this RSA key. Running ssh with debugging enabled showed the likely culprit:
debug1: Offering public key: /home/user/.ssh/id_rsa
debug1: send_pubkey_test: no mutual signature algorithm
The message set me on the right track: Ubuntu 22.04 ships an OpenSSH that disables the legacy ssh-rsa (SHA-1) signature algorithm by default, which breaks authentication with old RSA keys. I'm not arguing with that. I don't really like using RSA since better alternatives are around, so I don't want to change the default, but I would still like to be able to reboot my Debian 10 servers. One command-line option later, I was able to use RSA keys only when I want them:
Though absolutely nobody knows about or uses it, I maintain an ansible role that can set up a Debian or Ubuntu machine with full disk encryption on Hetzner Robot (bare-metal dedicated machines).
But wait, you shout, Hetzner usually runs consumer-grade stuff without KVMs, so how do you enter your password at boot time? Easy: the role sets up a minimal boot environment with a dropbear SSH server where you can log in and run cryptroot-unlock.
While developing the role I realised that it was impossible to unlock a Debian 10 machine. Even though I was 100% sure ansible was adding the proper key, logging in to the boot environment was impossible; I kept getting
Permission denied (publickey).
I lost some good hours troubleshooting, convinced ansible was somehow not adding the proper key, until I searched the web and realised that the version of dropbear shipped with Debian 10 does not support the Ed25519 keys I so cheerfully use for their added security and elegant shortness.
So the fix, for Debian 10 machines, was to maintain an RSA key to use when logging in to the boot environment.
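A sketch of what that looks like; the key path and hostname are my own placeholder conventions, not something the role dictates:

```shell
# Generate a dedicated RSA key for the Debian 10 boot environments,
# since their dropbear build cannot handle Ed25519 keys.
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa_boot -C 'boot unlock key'

# At boot time, log in to the dropbear environment with that key
# and unlock the encrypted root (hostname is a placeholder):
ssh -i ~/.ssh/id_rsa_boot root@debian10.example.com cryptroot-unlock
```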
For years I used to run both a tech blog and a blog in my native language. At some point I realized I hadn't posted in years, so I backed everything up and shut it all down.
Since then a few things have happened: I got into 3D printing, I have a sometimes-interesting $dayjob, and I gave up on Facebook (I was never very active on Twitter). All of this adds up to me wanting to share stuff again (I'm sure all 3 of you reading this will be happy) and having no place to do it. So, without further ado…
P.S. If you wonder what's with all the 2009-2016 posts already here: those are posts imported from my old blog. When I have time I recover a few of them, especially if I find them still relevant. For kicks and giggles I might even recover some Symbian stuff I posted around 2009. Depending on my mood, and on whether I remember to push the relevant buttons, old articles might keep their original dates or appear as posted recently.
I'm going head first into the Oracle DB world. I was trying to create an spfile from the pfile and of course it didn't work:
SQL> create spfile from pfile="/oracle/app/product/12.1.0/dbhome_1/dbs/initORCL.ora";
create spfile from pfile="/oracle/app/product/12.1.0/dbhome_1/dbs/initORCL.ora"
ERROR at line 1:
ORA-00972: identifier is too long
The reason is simple enough: you have to use single quotes instead of double quotes. But it took me a while to figure this out, so here it is for all the other beginners.
SQL> create spfile from pfile='/oracle/app/product/12.1.0/dbhome_1/dbs/initORCL.ora';
I got my nagios server banned by fail2ban because of errors in the postfix mail.log. I know I could simply whitelist the nagios server, but I prefer things working perfectly.
Checking the logs I could see this error repeating itself on each check:
Mar 25 13:01:13 xxx-123 postfix/smtpd: connect from nagios.example.com[220.127.116.11]
Mar 25 13:01:13 xxx-123 postfix/smtpd: improper command pipelining after QUIT from nagios.example.com[18.104.22.168]:
Mar 25 13:01:13 xxx-123 postfix/smtpd: disconnect from nagios.example.com[22.214.171.124]
Apparently postfix is picky about having extra input after a QUIT or DATA command, see details here.
It turns out I hadn't updated the nagios plugins in a while. Even though I kept nagios itself up to date, the plugins were at 2.0.3. Updating to 2.1.1 fixed the issue, and now I simply see a connect/disconnect in the postfix logs when nagios performs a check.
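For the curious, the warning is easy to reproduce by hand: anything sent in the same write after QUIT counts as pipelining. A rough sketch, with a placeholder hostname (don't point this at a server you don't own):

```shell
# Build an SMTP exchange with stray bytes after QUIT -- the pattern
# the old plugin produced and Postfix logs as improper pipelining.
payload="$(printf 'EHLO monitor.example.com\r\nQUIT\r\nstray-bytes')"
printf '%s\n' "$payload"

# To actually send it to a test Postfix instance:
# printf '%s' "$payload" | nc -w 5 mail.example.com 25
```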
The no-check-certificate option is required because at this point wget has no way of checking the certificate either. If you want to ensure the validity of the file, download it on a working system and scp it to the problem server.
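For example; the URL and hostname here are placeholders for illustration, not the actual file from this post:

```shell
# On the broken machine itself, where certificate checks cannot succeed yet
# (URL is a placeholder):
wget --no-check-certificate https://example.com/ca-certificates.crt

# Safer alternative: fetch and verify on a healthy machine, then copy it over.
# wget https://example.com/ca-certificates.crt
# scp ca-certificates.crt root@broken-host:/tmp/
```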
Whether because of the BIOS update to a beta or because of my drives, my RAID10 keeps failing. I documented before how to repair such a broken array, but I didn't want to go through that too many times, as data corruption is only one step away. Knowing that at least one of the disks has some minor issues (mdadm kicked it out a while ago when the disks were running under Linux), I decided to check the SMART details and keep only two of the disks in RAID1. I was curious whether one can read SMART details while the disks are still members of the Intel RST array. Since I had all the data off the disks, it was safe to test.
I found out that the Intel SSD Toolbox shows SMART data for all disks in a system, not only SSDs and not only Intel ones. Look under Other Drives and scroll to the right, as under Intel Solid-State Drives it shows the RAID volumes.
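An alternative I believe works, though I only used the Toolbox myself: smartmontools on Windows can address disks behind Intel RST through the CSMI interface, so something like this should show SMART data per port (the port numbers vary per system):

```shell
# smartctl (smartmontools) on Windows: query RAID member disks via CSMI.
# /dev/csmi0,N addresses controller 0, port N -- adjust for your system.
smartctl -a /dev/csmi0,0
smartctl -a /dev/csmi0,1
```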
So, having nothing better to do and for no good reason, I decided to update my workstation's BIOS to the latest version released by Gigabyte, because ignoring the "if it works, don't fix it" mantra is always a good idea. Beautiful: after the update, two of the disks in my four-disk RAID10 array were showing as Non-RAID Disk. I had backups, but shuffling 2TB+ of data is never fun.
Initial reports were all grim: the Intel RST BIOS does not allow repairing. Thankfully a good soul had already found the answer, in the source thread here. Thank you, adamsap.
Usual disclaimer: this worked for me, I have no guarantee it will work for you, and the method is not advertised as working and/or supported by Intel.
Reset the volume (all disks) as non-member from the Intel BIOS. Ignore the warning that all data will be lost. The utility only touches the metadata related to RAID membership.
Create a new array with all the same disks, and be sure to use the same settings for strip size, RAID type, etc. I was in luck: the old array settings were still visible, since some of the disks were still attached.
Download TestDisk from http://www.cgsecurity.org. I used the Windows version since my Windows install was on a different disk. I had never heard of this utility, but it seems to be really, really useful for data recovery.
Run TestDisk after reading the steps on their site. Be sure to read the documentation there so you know what you are doing. In brief (so I'm sure you read the original docs): search for your partition(s) on the RAID volume – if everything was recreated with the same settings they should be found within a few seconds – and save the partition table.
After the partition table is saved reboot.
The array should be back with all the data.
I compared checksums for some of the data against backups and it turns out everything is back.
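The spot check itself is nothing fancy; a self-contained sketch of the idea, with throwaway paths standing in for the real backup and array mount points:

```shell
# Checksum files on the backup copy, then verify the same sums against
# the recovered data -- any corruption shows up as a FAILED line.
mkdir -p /tmp/backup /tmp/recovered
printf 'important data\n' > /tmp/backup/file.bin
printf 'important data\n' > /tmp/recovered/file.bin   # recovered copy

(cd /tmp/backup && sha256sum file.bin) > /tmp/sums.txt
(cd /tmp/recovered && sha256sum -c /tmp/sums.txt)     # prints "file.bin: OK"
```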
My old home router, based on a Sandy Bridge dual-core Celeron and a Gigabyte motherboard, got "stolen" by my wife to use as a desktop, as her old laptop was getting pretty slow.
For a while I ran Tomato on a Cisco E3200 router, but it wasn't able to keep up with my home connection (300 down / 100 up). Even though the router has gigabit ports, it could only NAT at ~100-150 Mbit, and OpenVPN was limited to around 10 Mbit.
The decision came down (due to what is available in my part of the world) to the Fitlet X or the PC Engines APU1D4.
The Fitlet has 4 Intel LAN ports and a quad-core 1 GHz AMD CPU (two generations newer than the APU's). This was really appealing compared to the APU's dual-core Bobcat and 3 Realtek-based NICs.
Eventually I settled on the APU due to its two internal mini-PCIe slots and it being only half the cost of the Fitlet. (Consider that you have to buy RAM for the Fitlet-X and that you don't have any internal mini-PCIe left; the only one is used by the FACET card with the 3 LANs.)
I won't go into detail about the build or do a full review, as that has already been done elsewhere. I will only go through the bits of information I had trouble finding before and after buying it.
Throughput: without heavy use (squid, snort, etc.) you should see 400-500 Mbit WAN->LAN (limited by the Realtek NICs). I know Mbit is not a great measure of router/firewall performance, but this is what matters to me at home. I saw mentions of 600 Mbit. I was eager to deploy it, so I didn't do any testing of my own; all I can say is that 300 Mbit works fine without any strain.
OpenVPN: it does around 50 Mbit for me using AES-128-CBC. This was a tough one, as I didn't find any useful numbers before buying and it was important to me. It's a bit disappointing but very usable. The Bobcat-based T40E doesn't support AES-NI, but as far as I found out from others AES-NI doesn't really help OpenVPN much either. There is slim hope that newer versions of OpenVPN will perform better. The Fitlet-X CPU should be about 15% faster due to IPC gains in its newer core, so you should see a bit more.
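For reference, the cipher selection is just a config line; a hypothetical server.conf excerpt matching the setup I benchmarked, with everything else omitted:

```
# OpenVPN server.conf fragment (hypothetical excerpt).
# AES-128-CBC is the cipher behind the ~50 Mbit figure above.
cipher AES-128-CBC
```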
Temperature: ~60 deg. C at idle appears to be normal. Coming from Intel CPUs this worried me at first, but it seems to be expected for this CPU.
Wireless: if you go the pfSense route as I did, get the Compex WLE200NX usually sold together with the APU. It's Atheros-based (best for pfSense) and it's what most pfSense developers using the APU have.
SSD: don't buy the crappy 16GB SSD offered together with this board. Get a cheap ADATA or similar instead; it's probably going to be 32GB and at least twice as fast.
Case: important, due to the cooling solution PC Engines chose. Note that the case has no space for a 2.5" SSD/HDD (even though one SATA port is onboard), for additional USB ports (even though headers are present), or for a second set of antennas (even though two mini-PCIe devices can be installed).