How to fix Windows Hello facial recognition locking your screen while the PC is in use

If this is happening to you, you may have Personify ChromaCam installed. Stop the matching service in Services.msc, then uninstall all Personify apps (as well as Logitech Capture).

Fixed the problem for me with the Logitech BRIO 4K webcam on Windows 10.

from jdrch

How to resolve the Resilio Sync service “Error 1069: The service did not start due to a logon failure.” on Windows

So you restarted your PC and Resilio Sync isn’t running. When you try to manually start the service, you get the “Could not start the Resilio Sync Service service on Local Computer. Error 1069: The service did not start due to a logon failure.” message.

Here’s how to fix it:

  1. In Services.msc (which I assume you’re already in to have seen the error message), right click Resilio Sync Service -> Properties -> Log On
  2. (You may be able to skip Steps 2) to 6); I’m just regurgitating what worked for me.) Enable the Local System account radio button
  3. Click Apply
  4. Click OK
  5. Click Start the service
  6. After the service has started, click Stop the service
  7. Repeat Step 1)
  8. Click This account:
  9. Click Browse...
  10. In the Select User window, click Object Types...
  11. Uncheck Built-in security principals
  12. Click OK
  13. Enter your username in the Enter the object name to select (examples) field
  14. Click Check Names
  15. Select your username
  16. Click OK
  17. Enter your user password in the Password: and Confirm password: fields
  18. Click Apply
  19. Click OK
  20. Repeat Step 5)

Resilio Sync should now start normally.

from jdrch

How to reset a default app preference in Samsung One UI 3.1 (Android 11)

Couldn’t find any writeup for this specifically elsewhere, so here’s how:

  1. Go to Settings -> Apps -> Choose default apps -> Opening links
  2. In the Installed apps list, tap the app you no longer want to be the default for something
  3. In the ensuing dialog, tap Clear defaults

The above worked on a Samsung Galaxy Tab S7 Wi-Fi.

from jdrch

How to resolve the “Could not create MokListXRT: Out of Resources” Debian boot error on Dell computers

Uh oh, you just rebooted your Debian Dell machine to effect a system update, only to get the following error message:

Debian error message reading "Could not create MokListXRT: Out of Resources"

Could not create MokListXRT: Out of Resources
Something has gone seriously wrong: import_mok_state() failed: Out of Resources

You can resolve the above by doing this.

from jdrch

How to fix the nfsfind “find: cannot open /path/to/directory: No such file or directory” error on OpenIndiana/Illumos

My OpenIndiana machine recently emailed me the following error message, with the subject line Cron <root@DellOptiPlex390MT> [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind:

find: cannot open /znapzend/DellOptiPlex390MT/ROOT/openindiana: No such file or directory
find: cannot open /znapzend/DellOptiPlex390MT/export/home/judah: No such file or directory
find: cannot open /znapzend/DellOptiPlex390MT/export: No such file or directory
find: cannot open /znapzend/DellOptiPlex390MT/export/home: No such file or directory
find: cannot open /znapzend/DellOptiPlex390MT/ROOT: No such file or directory

Here’s how I fixed it.

First, let’s interpret the error message. Each line says that the find command cannot open a certain path because there is no file or directory there. But those paths appear to exist, so how can find not locate them? More on that later.

The Cron <root@DellOptiPlex390MT> email subject line tells us that this error message is coming from the root crontab (or, more accurately, the cron daemon running as root), and that it’s occurring at the [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind line.

Let’s look at the root crontab to see if we can find a matching line. Sure enough, there it is:

15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind

The above line means “at 0315 every Sunday, run [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind.” So what is nfsfind? You can find the full description in the Solaris docs. nfsfind cleans stale temporary files out of your NFS shares once a week, presumably to prevent the shared filesystems from running out of space.
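The [ -x file ] && file idiom in that crontab line means “run the script only if it exists and is executable.” A quick sketch of the idiom, using a stand-in script at /tmp/demo-nfsfind.sh rather than the real nfsfind:

```shell
# The crontab runs: [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
# i.e. "if the script exists and is executable, run it". Demonstrated here
# with a stand-in script instead of the real nfsfind:
printf '#!/bin/sh\necho ran\n' > /tmp/demo-nfsfind.sh
chmod +x /tmp/demo-nfsfind.sh
[ -x /tmp/demo-nfsfind.sh ] && /tmp/demo-nfsfind.sh   # prints "ran"
```

If the file were missing or not executable, the right-hand side would simply never run, with no error output.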

Now that we know what nfsfind does, let’s take a look at it using our editor of choice. I prefer nano, invoked here under my own user account as I do not want to accidentally edit a system script:

$ nano /usr/lib/fs/nfs/nfsfind
if [ ! -s /etc/dfs/sharetab ]; then exit ; fi

# Get all NFS filesystems exported with read-write permission.

DIRS=`/usr/bin/nawk '($3 != "nfs") { next }
        ($4 ~ /^rw$|^rw,|^rw=|,rw,|,rw=|,rw$/) { print $1; next }
        ($4 !~ /^ro$|^ro,|^ro=|,ro,|,ro=|,ro$/) { print $1 }' /etc/dfs/sharetab`

for dir in $DIRS
do
        find $dir -type f -name .nfs\* -mtime +7 -mount -exec rm -f {} \;
done

The find command near the end of the nfsfind script is the script’s only find invocation, so by process of elimination that must be where the error message is coming from. It’s safe to assume find itself isn’t malfunctioning and its options are syntactically correct, so the error is probably appearing because find is being fed the wrong input ($dir).

find $dir tells us the find command is operating on a variable $dir, which from the for dir in $DIRS line is each successive value in $DIRS. From the DIRS= line we see that DIRS comes from whatever is found in /etc/dfs/sharetab.
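To see what that nawk filter actually selects, here’s a sketch run against a made-up sharetab, using awk in place of Solaris nawk (the paths and share options below are invented for illustration):

```shell
# Reproduce the nfsfind filter on a sample sharetab. Only NFS shares that
# aren't read-only should be selected.
cat > /tmp/sharetab.sample <<'EOF'
/export/rw1 - nfs sec=sys,rw=@,root=@
/export/ro1 - nfs ro
/export/smb1 - smb rw
EOF
DIRS=$(awk '($3 != "nfs") { next }
        ($4 ~ /^rw$|^rw,|^rw=|,rw,|,rw=|,rw$/) { print $1; next }
        ($4 !~ /^ro$|^ro,|^ro=|,ro,|,ro=|,ro$/) { print $1 }' /tmp/sharetab.sample)
echo "$DIRS"   # prints "/export/rw1"
```

The read-only share and the non-NFS share are filtered out, and everything else gets handed to find.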

Let’s look at /etc/dfs/sharetab, invoking nano with the same privileges as before:

$ nano /etc/dfs/sharetab
/znapzend/DellOptiPlex390MT/ROOT/openindiana    -       nfs     sec=sys,rw=@,root=@
/znapzend/DellOptiPlex390MT/export/home/judah   -       nfs     sec=sys,rw=@,root=@
/znapzend/DellOptiPlex390MT     -       nfs     sec=sys,rw=@,root=@
/rpool1 -       nfs     sec=sys,rw=@,root=@
/znapzend       -       nfs     sec=sys,rw=@,root=@
/znapzend/DellOptiPlex390MT/export      -       nfs     sec=sys,rw=@,root=@
/znapzend/DellOptiPlex390MT/export/home -       nfs     sec=sys,rw=@,root=@
/znapzend/DellOptiPlex390MT/ROOT        -       nfs     sec=sys,rw=@,root=@

Now, some background:

  • /znapzend is the mountpoint for rpool1/znapzend/, a ZFS filesystem I created as a destination for znapzend
  • rpool1 itself is mounted at /rpool1. I shared it via NFS using # zfs set sharenfs=on rpool1 long before I created rpool1/znapzend

Clearly, all of rpool1’s child datasets inherited its sharenfs=on property upon their creation. Moreover, the child datasets are also mounted:

$ mount | grep znapzend
/znapzend on rpool1/znapzend read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4c10008 on Fri Apr 30 21:59:04 2021
/znapzend/DellOptiPlex390MT on rpool1/znapzend/DellOptiPlex390MT read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4c10034 on Sat May  1 10:00:03 2021
/znapzend/DellOptiPlex390MT/ROOT on rpool1/znapzend/DellOptiPlex390MT/ROOT read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4c1003a on Sat May  1 10:00:04 2021
/znapzend/DellOptiPlex390MT/ROOT/openindiana on rpool1/znapzend/DellOptiPlex390MT/ROOT/openindiana read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4c1003c on Sat May  1 10:03:00 2021
/znapzend/DellOptiPlex390MT/export on rpool1/znapzend/DellOptiPlex390MT/export read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4c1003d on Sat May  1 10:03:22 2021
/znapzend/DellOptiPlex390MT/export/home on rpool1/znapzend/DellOptiPlex390MT/export/home read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4c10040 on Sat May  1 10:03:33 2021
/znapzend/DellOptiPlex390MT/export/home/judah on rpool1/znapzend/DellOptiPlex390MT/export/home/judah read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4c10042 on Sat May  1 10:03:48 2021

It seems I made 2 mistakes here:

  1. I forgot that # zfs set sharenfs=on on rpool1 would be inherited by its child datasets
  2. I probably mounted the child datasets unnecessarily. As you can see from my znapzend tutorial link, znapzend uses ZFS zpool/dataset paths, not filesystem paths (created by mount operations)

(As a corollary, this is probably why ZFS uses the term dataset and not filesystem. All ZFS filesystems are datasets, but not all ZFS datasets are filesystems. A dataset becomes a filesystem only when it is mounted.)

But that still doesn’t explain why find chokes on those paths. Let’s try to navigate to them ourselves using cd:

# cd /znapzend/DellOptiPlex390MT/ROOT/openindiana
-bash: cd: /znapzend/DellOptiPlex390MT/ROOT/openindiana: No such file or directory

Wait, what? How can there be no file or directory at that path? The answer lies in the sequence of events that led to that location being considered a filesystem (note the emphasis) path to begin with. First, rpool1 was created with sharenfs=on. Much later, rpool1/znapzend and rpool1/znapzend/DellOptiPlex390MT were created. Both those datasets inherited the sharenfs=on setting.

rpool1/znapzend was then mounted at /znapzend, which also mounted all of its current and future datasets. All of the above became filesystems by virtue of being mounted and NFS shares by virtue of inheriting their parent dataset(s)’ sharenfs=on setting.

The future datasets came into being when znapzend created them recursively as zfs receive destinations. However, because each dataset is actually a family of snapshots, each has no actual corresponding filesystem, despite the apparent path! This is why # cd fails to find anything.

We can fix this problem by first unsharing the “problematic” dataset (this alone is actually sufficient to solve the problem):

# zfs set sharenfs=off rpool1/znapzend

and then also unmounting it (good practice, since the location isn’t intended to be generally accessible to non-ZFS operations anyway):

# zfs unmount rpool1/znapzend

For those who may be confused about the continued accessibility of the destination dataset to znapzend after unmount, remember that all datasets on a zpool are accessible via ZFS (note the emphasis) operations as long as that zpool has not been exported.

Checking the contents of /etc/dfs/sharetab again:

$ nano /etc/dfs/sharetab


/rpool1 -       nfs     sec=sys,rw=@,root=@

Both find and cd work on /rpool1, so we can be sure there will be no further errors of the kind detailed at the outset.

from jdrch

How to setup znapzend with local backup on OpenIndiana

This guide will show you how to set up znapzend on OpenIndiana to back up a ZFS filesystem to a different ZFS filesystem.


This guide assumes that:

  1. The backup destination ZFS zpool, e.g. destinationzpool, has already been created
  2. The backup destination ZFS dataset, e.g. destinationzpool/destinationzfsdataset, has already been created
  3. Both of the above are on the same machine as the source dataset

See the Oracle Solaris docs for instructions on items 1 to 3 above. At that link, pick the latest Solaris release and search the ensuing page for “ZFS.”

Step 1: Set up the pkgsrc repo

As of this writing, znapzend does not exist in the OpenIndiana repos, so you’ll have to set up the 3rd party pkgsrc repo that has it.

Step 2: Install znapzend

# pkgin install znapzend

Step 3: Configure znapzend

The official documentation is overly complicated for this purpose. A simple example config is:

znapzendzetup create --recursive SRC '1h=>15min,1d=>1h,1w=>1d,1m=>1w,1y=>1m' sourcezpool DST '1h=>15min,1d=>1h,1w=>1d,1m=>1w,1y=>1m' destinationzpool/destinationzfsdataset

Explaining the above command:

  • znapzendzetup: Set up znapzend
  • create: Generate a new backup config
  • --recursive: Back up all child datasets of sourcezpool
  • SRC: Source zpool settings begin here
  • '1h=>15min,1d=>1h,1w=>1d,1m=>1w,1y=>1m': For each comma-separated pair, take a snapshot at the interval to the right of the arrow and destroy it once it is older than the duration to the left, e.g. 1h=>15min means take a snapshot every 15 minutes and destroy each one after it has existed for an hour. See the official documentation for more options
  • sourcezpool: The zpool you want to back up
  • DST: Destination zpool & ZFS dataset settings begin here
  • '1h=>15min,1d=>1h,1w=>1d,1m=>1w,1y=>1m': Same as before
  • destinationzpool/destinationzfsdataset: The destination zpool and ZFS dataset
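As a rough sanity check on a retention plan, dividing each retention window by its snapshot interval gives the approximate maximum number of snapshots kept per tier (a sketch only; znapzend’s actual pruning logic is more nuanced):

```shell
# retention window / snapshot interval = approx. max snapshots kept per tier
echo $(( 60 / 15 ))   # 1h=>15min: up to 4 snapshots under an hour old
echo $(( 24 / 1 ))    # 1d=>1h:    up to 24 hourly snapshots
echo $((  7 / 1 ))    # 1w=>1d:    up to 7 daily snapshots
```

This is handy for estimating how many snapshots your plan will accumulate before committing to it.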

Step 4: Check your znapzend config

# znapzend --noaction --debug

The output of the above command may appear to freeze without returning you to a prompt. If that happens, just hit CTRL + C until the prompt reappears. As long as the output shows no errors, your config should be good.

Step 5: Start znapzend in the background

# znapzend --daemonize

Step 6: Wait for the smallest interval set in Step 3

Step 7: Check that znapzend is creating snapshots as configured

# zfs list -t snapshot

You should see snapshots with %Y-%m-%d-%H%M%S (znapzend‘s default) timestamps in their names.
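If you’d rather check programmatically, you can filter snapshot names against znapzend’s default %Y-%m-%d-%H%M%S stamp. The names below are made up for illustration; in practice you’d pipe the output of zfs list -t snapshot -o name into the grep instead:

```shell
# Keep only names ending in a YYYY-MM-DD-HHMMSS timestamp (znapzend default)
printf '%s\n' 'rpool/ROOT@2021-05-01-100000' 'rpool/ROOT@manual-snap' |
  grep -E '@[0-9]{4}-[0-9]{2}-[0-9]{2}-[0-9]{6}$'   # prints only the first name
```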

Step 8: Kill znapzend

There are many ways to do this, but I prefer using htop (install it from the OI repos if you haven’t already):

  • # htop
  • Press F4 to filter
  • Type znapzend
  • Use the arrow keys to highlight any matching entries
  • Press F9 while each matching entry is highlighted to kill it
  • Press F10 to exit
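If you’d rather skip htop, pkill can do the same job by matching on the command line. The sketch below kills a stand-in sleep process; for the real thing you’d substitute znapzend as the pattern:

```shell
# Kill by command-line match. 'sleep 300' is a stand-in target;
# the real invocation would be: pkill -f znapzend
sleep 300 &
pkill -f 'sleep 300'
```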

Step 9: (Optional) Disable other snapshot utilities

CoW filesystems like ZFS sometimes have difficulty with a large number of snapshots combined with low space. Also, destroying snapshots is computationally expensive, and taking them too frequently can slow down the machine.

Therefore, it’s a good idea to disable other snapshot utilities if they are likely to cause such issues. For example, if you use Time Slider:

  • Open Time Slider
  • Uncheck Enable Time Slider
  • Click Delete Snapshots
  • In the window that follows, click Select All
  • Click Delete

The deletion process may take a very long time (I suggest running it overnight.) If any Time Slider snapshots remain after the bulk deletion, just run it again and that should take care of the rest.

Step 10: Generate a znapzend service manifest

Wouldn’t it be nice if you could start znapzend at boot? Unlike Linux and FreeBSD, OI’s crontab syntax lacks an @reboot option, so anything that starts with the OS has to be a service. While @reboot keeps things simple, creating a service has some advantages, such as alerting the user when the service isn’t running as expected.

sudo svccfg export znapzend

Copy the output of that command to a text editor. You could also combine this step and the following manifest file creation steps using a pipe or redirection.

BONUS: You can use svccfg export combined with the following steps to turn just about any background executable into a service!

Step 11: Create the znapzend service manifest XML file

Create the file /var/svc/manifest/site/znapzend.xml using your preferred method, paste in the output of Step 10, and save the file.

I use:

  • # nano /var/svc/manifest/site/znapzend.xml
  • Paste the output from Step 10 into nano
  • Press CTRL + O to save the file
  • Press CTRL + X to exit

Step 12: Validate the manifest file

# svccfg validate /var/svc/manifest/site/znapzend.xml

Step 13: Import the manifest file

# svccfg import /var/svc/manifest/site/znapzend.xml

Step 14: Start the znapzend service

# svcadm enable znapzend

And that’s it. Now you have automatic, incremental, rotating, easily restored backups of your OS filesystem.

from jdrch

Which FreeBSD directories to backup for bare metal recovery

Like OpenIndiana, FreeBSD uses a ZFS root filesystem by default. Ergo, the backup method is approximately the same: back up everything that’s a ZFS filesystem and exclude the rest. Additional exclusions:


from jdrch

Which Linux directories to backup for bare metal recovery

This post pertains mostly to Debian (10 and distributions based on it) and Raspberry Pi OS, but can most likely be extended to other distributions.

For Debian, backup everything except the following:


For Raspberry Pi OS, backup everything except the following:


from jdrch

How to set up email notifications on OpenIndiana

Do you want your OpenIndiana instance to let you know when something has gone wrong or if a cron job has failed? Here’s how you do it.

First, tell the operating system to email you if anything goes into the maintenance, offline, or degraded states:

# svccfg setnotify -g to-maintenance,to-offline,to-degraded mailto:YourEmailAddress

For failed cron jobs and the like, add the following lines to /etc/mail/aliases:

root:           YourEmailAddress
YourUsername:          YourEmailAddress

Ensure there are no other conflicting root & YourUsername definitions (read: lines beginning with either of those). If there are, either comment them out or resolve the conflicts using the commented instructions in the file.
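A quick way to spot conflicting definitions is to grep for lines starting with either name. The sketch below uses a sample file, with judah and admin@example.com standing in for YourUsername and YourEmailAddress:

```shell
# Count alias definitions for root and the user; more than one line per
# name would indicate a conflict. Sample file with stand-in values.
cat > /tmp/aliases.sample <<'EOF'
# Person who should get root's mail
root: admin@example.com
judah: admin@example.com
EOF
grep -cE '^(root|judah):' /tmp/aliases.sample   # prints 2 (one line each)
```

Run the same grep against the real /etc/mail/aliases, substituting your own username in the pattern.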

Save /etc/mail/aliases when you’re done editing, then run the following in the terminal:

# newaliases

Now test your config by sending an email directly to YourEmailAddress¹, checking its inbox each time:

echo "This is the body of the email" | mail -s "This is the subject line" YourEmailAddress

Then try sending an email to root:

echo "This is the body of the email" | mail -s "This is the subject line" root

Finally, try sending an email to YourUsername:

echo "This is the body of the email" | mail -s "This is the subject line" YourUsername

Those should all work.

¹ The emails I sent using this method did not have a subject line, so the -s might not work on OpenIndiana. To be honest, I stole this line from a Debian tutorial, so 🤷‍♂️. In any case, automated emails will have their own programmatically generated subject lines and bodies.

from jdrch

How to fix Postfix emails sent to Gmail addresses not being received

So you’ve followed the instructions to install mailutils and postfix, but the test echo "This is the body of the email" | mail -s "This is the subject line" your_Gmail_address command isn’t resulting in any emails showing up at your_Gmail_address.

What’s going on? Here’s how I solved the problem (on Debian 10).

As difficult as email delivery is to set up and troubleshoot, the good news is that emails that fail to be delivered generally get bounced, and the bounced email contains useful error information.

Let’s open our inbox and see what’s there. In the terminal, run $ mail. You should see something like this (if you don’t, check your system mail logs. Doing so is outside the scope of this post):

$ mail
"/var/mail/jdrch": 1 message 1 new
>N   1 Mail Delivery Syst Sun Apr 25 14:22  70/2479  Undelivered Mail Returned to Sender

Hit Enter to read the email. Look for a line beginning with Diagnostic-Code:. In my case, the line is:

Diagnostic-Code: X-Postfix; mail for loops back to myself

loops back here is informative. In computing terminology, looping back means that somewhere along the way a destination location resolved to the source location. In a networking context such as email, those locations are typically URLs or IP addresses.
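If you’ve saved the bounce message to a file, you can pull the Diagnostic-Code line out with grep. The bounce text below is a made-up sample mirroring the real one:

```shell
# Extract the Diagnostic-Code line from a saved bounce message.
# The file contents here are a fabricated sample for illustration.
cat > /tmp/bounce.sample <<'EOF'
Final-Recipient: rfc822; someone@example.com
Action: failed
Diagnostic-Code: X-Postfix; mail for example.com loops back to myself
EOF
grep '^Diagnostic-Code:' /tmp/bounce.sample
```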

Let’s figure out what Gmail’s SMTP server URL resolves to using dig, which returns the IP address a hostname resolves to via the machine’s DNS settings¹:

$ dig

; <<>> DiG 9.11.5-P4-5.1+deb10u3-Debian <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1243
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 4096
;    IN      A

;; ANSWER SECTION: 2   IN      A

;; Query time: 0 msec
;; WHEN: Sun Apr 25 14:29:16 CDT 2021
;; MSG SIZE  rcvd: 71

In ;; ANSWER SECTION: we see that the Gmail SMTP server URL is resolving to the loopback address, which always means the machine itself. So emails sent to Gmail addresses are coming right back to the source machine.

Now we know our problem is on the DNS side. It could be a DNS server, DNS filtering service, firewall, or something similar. In my case, it was 2 Pi-hole blocklists, and whitelisting the server URL fixed the issue:

$ dig

; <<>> DiG 9.11.5-P4-5.1+deb10u3-Debian <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19729
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 1232
;    IN      A

;; ANSWER SECTION: 599 IN      A

;; Query time: 25 msec
;; WHEN: Sun Apr 25 14:33:09 CDT 2021
;; MSG SIZE  rcvd: 71

Aha, now ;; ANSWER SECTION: contains an IP address that at least looks more correct than the loopback address. (TBH I have no idea what Google’s SMTP server IPs are.)

The echo test command at the outset now works.

from jdrch