Minimizing IOPS on Synology
Feb 25, 2024
I’ve been using a Synology NAS for a few years now. It’s running a few self-hosted services via Docker Compose, but overall I use it mostly as backup storage. I’m the only person on my gitea instance, and nobody’s using it at night. I also don’t mind miniflux being unavailable, and Postgres backups can be scheduled around the same times as other cron jobs.
I finally had some time to minimize IOPS so that my 4 HDDs could quietly hibernate for the majority of the time. They’re loud when working, and SSDs are a tiny bit too expensive for the storage size I use - 16TB in RAID 10.
Must-reads found on the internet
Of course, my problem is not new, so there are two excellent guides I found on the internet that try to address it:
- Official Synology guide https://kb.synology.com/en-us/DSM/tutorial/What_stops_my_Synology_NAS_from_entering_System_Hibernation - general steps and actions without really digging deep into the system
- Guide on Reddit for DSM7 - following this will heavily modify Synology’s DSM behavior, including updates and health checks
Reddit guide copy in case it disappears
A lot of people (including me) do not use their NASes every day. In my case, I don’t use the NAS during work days at all. During the weekend, however, the NAS is used like crazy - backup scripts transfer huge amounts of data, a TV-connected media PC streams video from the NAS, large files are downloaded/moved to the NAS, etc. Turning the NAS off/on manually is simply inconvenient, plus it takes a somewhat long time to boot up. Hibernation is a perfect fit for such scenarios - no need to touch the NAS at all, it needs only ~10 seconds to wake up once you access it via the network, and it goes to sleep automatically when it’s no longer used. Perfect. Except for one thing. It is currently broken on DSM7.
The first time I enabled hibernation for my NAS, I quickly discovered that it wakes up 6-10 times per day. All kinds of activities were chaotically waking up the NAS at different times, some having a pattern (like specific hours) and others being sort of random.
Luckily, this can be fixed with the proper NAS setup, though it requires tweaking multiple configuration files.
Preparations
Before changing config files, you need to manually review your NAS Settings and disable anything you don’t need, for example Apple-specific services (Bonjour), IPv6 support or NTP time sync. Another required step is turning off the package auto-update check. It is possible to run a manual update check periodically, or to write your own script that triggers the update check on specific conditions, like when the disks are awake. This guide from Synology has a lot of useful information about what can be turned off: https://kb.synology.com/en-us/DSM/tutorial/What_stops_my_Synology_NAS_from_entering_System_Hibernation
It’s no big issue if you miss something in Settings at this point - DSM has a facility that helps you understand what wakes up the NAS (Support Center -> Support Services -> Enable system hibernation debugging mode -> Wake up frequently). This can be used later for fine-tuning and eliminating all remaining sources of wake-ups.
There are 3 main sources of wake up events for DSM: synocrond, synoscheduler and, last but not least, relatime mounts.
synocrond tasks
The majority of disk wakeups comes from synocrond activity - both from actually executing scheduled tasks and from wakeups caused by deferred access time updates for assorted files touched by the tasks during execution (relatime mode).
synocrond is a cron-like system for DSM. The idea is to have multiple .conf files describing periodic tasks, like an update check or getting SMART status for disks.
These assorted .conf files are used to create the `/usr/syno/etc/synocrond.config` file, which is basically an amalgamation of all synocrond .conf files in one JSON file. Note that the .conf files have priority over `synocrond.config`. In fact, it is safe to delete `synocrond.config` at any time - it will be re-created from the .conf files again.
Locations for synocrond .conf-files:
/usr/syno/share/synocron.d/
/usr/syno/etc/synocron.d/
/usr/local/etc/synocron.d/
I put descriptions of the synocrond tasks in a separate post: https://www.reddit.com/r/synology/comments/10iokvu/description_of_synocrond_tasks/
Actual execution of scheduled tasks is done by the synocrond process, which logs execution of the tasks in `/var/log/synocrond-execute.log` (very helpful for getting statistics on which tasks are being run over time). In fact, checking `/var/log/synocrond-execute.log` should be your starting point to understand how many synocrond tasks you have and how often they’re triggered. There are multiple “daily” synocrond tasks, but usually they are executed in one batch.
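For a quick overview, a small pipeline can rank the tasks by how often they ran. This is a sketch against a made-up sample log (the exact format of synocrond-execute.log may differ slightly; on the NAS you would feed it the real file):

```shell
# Hypothetical sample of /var/log/synocrond-execute.log entries.
# Each line is assumed to end with the task name after a timestamp.
cat > /tmp/synocrond-sample.log <<'EOF'
2024-02-20 05:10:01 builtin-libhwcontrol-disk_daily_routine
2024-02-21 05:10:01 builtin-libhwcontrol-disk_daily_routine
2024-02-21 05:10:02 builtin-synobtrfssnap-synobtrfssnap
2024-02-22 05:10:01 builtin-libhwcontrol-disk_daily_routine
EOF

# Rank tasks by execution count, most frequent first.
awk '{print $3}' /tmp/synocrond-sample.log | sort | uniq -c | sort -rn
```

The top entries are the first candidates for a longer triggering interval.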
There are many synocrond tasks, and depending on your NAS usage scenario, you might want to leave some of them enabled.
The general strategy here: if you don’t understand what a given synocrond task does, the best approach is to leave the task enabled but reduce its triggering interval - e.g. set it to “weekly” instead of “daily”.
For example, having periodic SMART checks is generally a good idea. However, if you know that your NAS will be sleeping most of the week, there is no point in waking up the disks every day just to get their SMART status (in fact, doing this for years contributes to the chance of something bad appearing in SMART).
If you are sure you don’t need some synocrond task at all, it’s OK to delete its .conf file completely. For example, there are multiple tasks related to BTRFS - if you don’t use BTRFS or BTRFS snapshots, these can be removed.
Tweaking synocrond tasks
In my case I removed some useless tasks, and for others (like the SMART-related ones) I set the interval to “monthly”. A good observation is that these changes seem to survive DSM updates, judging by `synocrond.config` and the NAS logs.
Here are the steps I took to eliminate all unwanted wake-ups from synocrond tasks:
Normal synocrond tasks
- builtin-synolegalnotifier-synolegalnotifier
  - `sudo rm /usr/syno/share/synocron.d/synolegalnotifier.conf`
- builtin-synosharesnaptree_reconstruct-default
  - inside `/usr/syno/share/synocron.d/synosharesnaptree_reconstruct.conf` replaced `daily` with `monthly`
- builtin-synocrond_btrfs_free_space_analyze-default
  - inside `/usr/syno/share/synocron.d/synocrond_btrfs_free_space_analyze.conf` replaced `daily` with `monthly`. BTRFS-specific, could have removed it
- builtin-synobtrfssnap-synobtrfssnap and builtin-synobtrfssnap-synostgreclaim
  - inside `/usr/syno/share/synocron.d/synobtrfssnap.conf` replaced `daily`/`weekly` with `monthly`. BTRFS-specific, could have removed it
- builtin-libhwcontrol-disk_daily_routine, builtin-libhwcontrol-disk_weekly_routine and syno_disk_health_record
  - inside `/usr/syno/share/synocron.d/libhwcontrol.conf` replaced `weekly` with `monthly`
  - replaced `"period": "crontab",` with `"period": "monthly",`
  - removed lines having `"crontab":`
- syno_btrfs_metadata_check
  - inside `/usr/syno/share/synocron.d/libsynostorage.conf` replaced `daily` with `monthly`. BTRFS-specific, could have removed it
- builtin-synorenewdefaultcert-renew_default_certificate
  - inside `/usr/syno/share/synocron.d/synorenewdefaultcert.conf` replaced `weekly` with `monthly`
- check_ntp_status (seems to have been added recently)
  - inside `/usr/syno/share/synocron.d/syno_ntp_status_check.conf` replaced `weekly` with `monthly`
- extended_warranty_check
  - `sudo rm /usr/syno/share/synocron.d/syno_ew_weekly_check.conf`
- builtin-synodatacollect-udc-disk and builtin-synodatacollect-udc
  - inside `/usr/syno/share/synocron.d/synodatacollect.conf` replaced `"period": "crontab",` with `"period": "monthly",` (2 places)
  - removed lines having `"crontab":`
- builtin-synosharing-default
  - inside `/usr/syno/share/synocron.d/synosharing.conf` replaced `weekly` with `monthly`
- synodbud (DSM 7.0 only, see below for DSM 7.1+ instructions)
  - `sudo rm /usr/syno/etc/synocron.d/synodbud.conf`
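The repeated “replaced daily with monthly” edits boil down to a one-line substitution per file. Here is a sketch on a stand-in copy (the real .conf files are JSON-like; back them up first, and note that the exact key layout may differ per file):

```shell
# Stand-in for a synocron.d task definition (structure assumed).
cat > /tmp/example-task.conf <<'EOF'
{
  "builtin-example-task": {
    "period": "daily"
  }
}
EOF

# On the NAS the target would be the real file, e.g.
# /usr/syno/share/synocron.d/synosharesnaptree_reconstruct.conf
sed -i 's/"period": "daily"/"period": "monthly"/' /tmp/example-task.conf

grep '"period"' /tmp/example-task.conf
```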
synodbud
Since some recent DSM update (maybe 7.1), synodbud has become a dynamic task (meaning it is recreated by code). In this case, the creation of its synocrond task is done in the synodbud binary itself, whenever it’s invoked (except with the `-p` option).
Running `synodbud -p` removes the corresponding synocrond task, but one needs to disable the execution of `/usr/syno/sbin/synodbud` in the first place.
`synodbud` is started by systemd as a one-shot action during boot:
```
[Unit]
Description=Synology Database AutoUpdate
DefaultDependencies=no
IgnoreOnIsolate=yes
Requisite=network-online.target syno-volume.target syno-bootup-done.target
After=network-online.target syno-volume.target syno-bootup-done.target synocrond.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/syno/sbin/synodbud
TimeoutStartSec=0
```
So in order to prevent task creation for synodbud, one needs to disable this systemd unit (all commands run as root):
```
systemctl mask synodbud_autoupdate.service
systemctl stop synodbud_autoupdate.service
```
and then properly disable its synocrond task:
```
synodbud -p
rm /usr/syno/etc/synocron.d/synodbud.conf
rm /usr/syno/etc/synocrond.config
```
- reboot
- check with `cat /usr/syno/etc/synocrond.config | grep synodbud` that it’s gone
If you later want to launch the DB update manually, do not run the `/usr/syno/sbin/synodbud` executable - use `/usr/syno/sbin/synodbudupdate --all` instead.
autopkgupgrade task (builtin-dyn-autopkgupgrade-default)
This one is tricky: in DSM code (namely, in `libsynopkg.so.1`) it can be recreated automatically depending on configuration parameters. So:
- inside `/etc/synoinfo.conf` set `pkg_autoupdate_important` to no
- make sure `enable_pkg_autoupdate_all` is no inside `/etc/synoinfo.conf`
- inside `/etc/synoinfo.conf` set `upgrade_pkg_dsm_notification` to no
- `sudo rm /usr/syno/etc/synocron.d/autopkgupgrade.conf`
- remove `/usr/syno/etc/synocrond.config`, `sync && reboot`, and validate that `/usr/syno/etc/synocrond.config` doesn’t have the `autopkgupgrade` entry
FYI, this is how they check it in code:
if ( enable_pkg_autoupdate_all == 1 || selected_upgrade_pkg_dsm_notification == 1 ) goto to_ENABLE_autopkgupgrade;
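The three synoinfo.conf edits above can be scripted with sed. A sketch demonstrated on a stand-in file rather than the live `/etc/synoinfo.conf` (the `key="value"` quoting is an assumption; check your file and keep a backup before applying this for real):

```shell
# Stand-in for /etc/synoinfo.conf with just the relevant keys.
cat > /tmp/synoinfo.conf <<'EOF'
pkg_autoupdate_important="all"
enable_pkg_autoupdate_all="yes"
upgrade_pkg_dsm_notification="yes"
EOF

# Flip all three flags to "no" in one pass.
sed -i \
  -e 's/^pkg_autoupdate_important=.*/pkg_autoupdate_important="no"/' \
  -e 's/^enable_pkg_autoupdate_all=.*/enable_pkg_autoupdate_all="no"/' \
  -e 's/^upgrade_pkg_dsm_notification=.*/upgrade_pkg_dsm_notification="no"/' \
  /tmp/synoinfo.conf

cat /tmp/synoinfo.conf
```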
pkg-ReplicationService-synobtrfsreplicacore-clean
Another tricky one, this time because it originates from a package. For some reason I no longer have Replication Service in DSM 7.1 update 3 - maybe Synology removed it from the list of preinstalled packages. The steps below were done on DSM 7.0.
- inside `/var/packages/ReplicationService/conf/resource` replace `"synocrond":{"conf":"conf/synobtrfsreplica-clean_bkp_snap.conf"}` with `"synocrond":{}`
- `sudo rm /usr/local/etc/synocron.d/ReplicationService.conf`
Committing changes for synocrond
After applying all changes, remove `/usr/syno/etc/synocrond.config` and reboot your NAS. Run `cat /usr/syno/etc/synocrond.config | grep period` afterwards to confirm that the newly generated `synocrond.config` has everything in order.
Note: you might need to repeat (only once) removing `/usr/syno/etc/synocrond.config` and rebooting the NAS, as it looks like rebooting via the UI can cause synocrond to write its current (old) runtime config back to `synocrond.config`, ignoring all new changes to the .conf files. So if you have edited any synocrond .conf file, always check after reboot that your changes were propagated, via `cat /usr/syno/etc/synocrond.config | grep period`.
Make sure to check synocrond task activity in the `/var/log/synocrond-execute.log` file after a few days/weeks. Failing to properly disable `builtin-dyn-autopkgupgrade-default` and `pkg-ReplicationService-synobtrfsreplicacore-clean` will cause them to respawn - `synocrond-execute.log` will show it.
synoscheduler tasks
This one has the same idea as synocrond, but it uses different config files (`*.task` ones) and its tasks are scheduled for execution via the standard cron utility (using `/etc/crontab` for configuration).
Let’s look at `/etc/crontab` from DSM:
```
minute  hour    mday    month   wday    who     command
10      5       *       *       6       root    /usr/syno/bin/synoschedtask --run id=1
0       0       5       *       *       root    /usr/syno/bin/synoschedtask --run id=3
```
One can decode cron expressions like `10 5 * * 6` into a more readable form using sites like crontab.guru.
The command part runs the corresponding synoscheduler task - with IDs 1 and 3 in my case. But what does it actually do? This can be determined using `synoschedtask` itself:
```
root@NAS:/var/log# synoschedtask --get id=1
User: [root]
ID: [1]
Name: [DSM Auto Update]
State: [enabled]
Owner: [root]
Type: [weekly]
Start date: [0/0/0]
Days of week: [Sat]
Run time: [5]:[10]
Command: [/usr/syno/sbin/synoupgrade --autoupdate]
Status: [Not Available]
```
So for the task with id 1 it tells us:
- it is named DSM Auto Update
- it’s a weekly task, executed every Saturday at 5:10
- it runs `/usr/syno/sbin/synoupgrade --autoupdate`
Similarly, `synoschedtask --get id=3` returns:
```
User: [root]
ID: [3]
Name: [Auto S.M.A.R.T. Test]
State: [enabled]
Owner: [root]
Type: [monthly]
Start date: [2021/9/5]
Run time: [0]:[0]
Command: [/usr/syno/bin/syno_disk_schedule_test --smart=quick --smart_range=all ;]
Status: [Not Available]
```
Or, one can just query all enabled tasks using `synoschedtask --get state=enabled`.
The latter task runs (yet another) SMART check; it can be left enabled, as it executes once per month.
In order to modify a synoscheduler task, you need to edit the corresponding .task file. Also note that setting `can edit from ui=1` in the .task file allows the task to be shown in the DSM Task Scheduler and edited from the UI (this is the case for Auto S.M.A.R.T. Test).
synoscheduler’s .task files are located in `/usr/syno/etc/synoschedule.d`. You can either change a task’s triggering pattern to something else or disable the task completely. To disable a task, set `state=disabled` inside its .task file.
For example, `/usr/syno/etc/synoschedule.d/root/1.task` can look like this:
```
id=1
last work hour=5
can edit owner=0
can delete from ui=1
edit dialog=SYNO.SDS.TaskScheduler.EditDialog
type=weekly
action=#schedule:dsm_autoupdate_hotfix#
systemd slice=
can edit from ui=1
week=0000001
app name=#schedule:dsm_autoupdate_appname#
name=DSM Auto Update
can run app same time=0
owner=0
repeat min store config=
repeat hour store config=
simple edit form=0
repeat hour=0
listable=0
app args=
state=disabled
can run task same time=0
start day=0
cmd=L3Vzci9zeW5vL3NiaW4vc3lub3VwZ3JhZGUgLS1hdXRvdXBkYXRl
run hour=5
edit form=
app=SYNO.SDS.TaskScheduler.DSMAutoUpdate
run min=10
start month=0
can edit name=0
start year=0
can run from ui=0
repeat min=0
```
FYI: the cryptic `cmd=` line is simply base64-encoded. It can be decoded like this: `cat /usr/syno/etc/synoschedule.d/root/1.task | grep "cmd=" | cut -c5- | base64 -d && echo` (or simply look it up in the `synoschedtask --get id=1` output).
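As a worked example, decoding the `cmd=` value from the 1.task listing above:

```shell
# The base64 payload from the example .task file decodes to the scheduled command.
echo 'L3Vzci9zeW5vL3NiaW4vc3lub3VwZ3JhZGUgLS1hdXRvdXBkYXRl' | base64 -d && echo
# prints: /usr/syno/sbin/synoupgrade --autoupdate
```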
When you are done editing .task files, execute `synoschedtask --sync` - it properly propagates your changes to `/etc/crontab`.
Disabling writing file last accessed times to disks
Basically, you need to disable delayed file-last-access-time updates for all volumes. One setting is in the UI (volume Settings); another has to be done manually.
First, go to Storage Manager. For every volume you have, open its “…” menu and select Settings. Inside:
- set Record File Access Time to Never
- if there is a Usage details section, untick “Enable usage detail analysis” (note: this step might not actually be necessary; it needs some testing)
Secondly, there is an additional critical step. I spent a lot of time figuring it out, as `syno_hibernation_debug` was totally useless for this particular source of wakeups.
You need to remove the relatime mount option for rootfs - basically the same thing as Record File Access Time = Never, but for the DSM system partition itself.
This can be done by setting `noatime` for rootfs. Execute (as root): `mount -o noatime,remount /`
This does the trick, but only until the NAS is rebooted. To make it persistent, the simplest way is to create a boot-up task in Task Scheduler that performs the remount on every NAS boot.
Go to Control Panel -> Task Scheduler. Click Create -> Triggered Task -> User-defined script. Set Event to Boot-up. Set User to root. Then, in the Run command section, paste `mount -o noatime,remount /`. Reboot the NAS to confirm it works.
After applying all changes, you can execute `mount` to check whether all your partitions and rootfs (the `/dev/md0 on /` line) show `noatime`:
```
root@NAS:/# mount | grep -vE "sysfs|cgroup|devpts|proc|configfs|securityfs|debugfs" | grep atime
/dev/md0 on / type ext4 (rw,noatime,data=ordered)   <-- SHOULD HAVE noatime HERE
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,nosuid,nodev,noexec,relatime)   <-- this one is harmless
/dev/mapper/cachedev_3 on /volume3 type ext4 (rw,nodev,noatime,synoacl,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
/dev/mapper/cachedev_4 on /volume1 type btrfs (rw,nodev,noatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno)
/dev/mapper/cachedev_2 on /volume5 type btrfs (rw,nodev,noatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno)
/dev/mapper/cachedev_1 on /volume4 type btrfs (rw,nodev,noatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno)
/dev/mapper/cachedev_0 on /volume2 type btrfs (rw,nodev,noatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno)
...
```
Another possible place to check is `/usr/syno/etc/volume.conf` - all volumes should have `atime_opt=noatime` there. This is what DSM should write for “Never” in the UI Settings for a volume.
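A small filter can flag any block-device mount that still lacks `noatime`. Sketched here against a saved sample; on the NAS you would pipe the real `mount` output in instead:

```shell
# Sample `mount` output: one offender (relatime) and one already-fixed volume.
cat > /tmp/mount-sample.txt <<'EOF'
/dev/md0 on / type ext4 (rw,relatime,data=ordered)
/dev/mapper/cachedev_0 on /volume2 type btrfs (rw,nodev,noatime,ssd)
EOF

# Anything printed here still needs the noatime treatment.
grep '^/dev/' /tmp/mount-sample.txt | grep -v noatime
```

On the NAS the live check is `mount | grep '^/dev/' | grep -v noatime` - empty output means every block device is mounted with `noatime`.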
Finding out who wakes up the NAS
Suppose you have done all the tweaks: no unexpected entries appear in `synocrond-execute.log`, you have full control over synoscheduler/crontab, and executing `sudo mount` shows no lines with `relatime` for your disks and `/`.
But the NAS still wakes up occasionally. This is where the Enable system hibernation debugging mode checkbox comes in handy.
You can enable it via Support Center -> Support Services -> Enable system hibernation debugging mode -> Wake up frequently.
Before enabling it, make sure you clean up all related logs (e.g. from a previous run of this tool). After enabling it, leave the NAS idle for a few days to collect some stats. Then stop the tool and download the logs archive (using the same dialog in the DSM UI) to analyze it. The `debug.dat` file is just a .zip file with logs and configs inside.
Internally this facility is implemented as a shell script, `/usr/syno/sbin/syno_hibernation_debug`, which turns on kernel-based logging of FS accesses and monitors in a loop whether the `/sys/block/$Disk/device/syno_idle_time` value was reset (meaning someone woke up the disk). In that case it simply prints the last few hundred lines of the kernel log (`dmesg`) with the FS activity log.
`syno_hibernation_debug` writes its output into two files in `/var/log`: `hibernation.log` and `hibernationFull.log`. In the downloaded `debug.dat` file they are located in `dsm/var/log/`.
You can search the `hibernation.log`/`hibernationFull.log` file for lines containing `wake up from deepsleep` to quickly jump to all places where the disks were woken up. By analyzing the lines preceding each wake up, you can understand which process accessed the disks.
The file `dsm/var/log/synolog/synosys.log` also has all disk wake up times logged.
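A grep with leading context gets you straight to the culprits. A sketch on a two-line sample; on the NAS the input would be `/var/log/hibernationFull.log`:

```shell
# Minimal sample: one FS access followed by a wake-up event.
cat > /tmp/hibernation-sample.log <<'EOF'
[Sun Oct 10 10:46:49 2021] ppid:7005(synologrotated), pid:1816(logrotate), READ block 77520 on md0 (8 sectors)
[Sun Oct 10 10:46:52 2021] ata2 (slot 2): wake up from deepsleep, reset link now
EOF

# Show each wake-up plus the 5 lines that preceded it (the likely cause).
grep -B 5 'wake up from deepsleep' /tmp/hibernation-sample.log
```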
Tweaking syno_hibernation_debug
I found a few inconveniences with `syno_hibernation_debug`. First, I adjusted the `dmesg` output a bit to make it more readable:
- `sudo vim /usr/syno/sbin/syno_hibernation_debug`
- replaced `dmesg | tail -300` with `dmesg -T | tail -200`
- replaced `dmesg | tail -500` with `dmesg -T | tail -250` (twice)
Second, the default logrotate settings for `syno_hibernation_debug` rotate `hibernationFull.log` too often, causing disk wake-ups during debugging that are produced by `syno_hibernation_debug` itself. For example:
```
[Sun Oct 10 10:46:49 2021] ppid:7005(synologrotated), pid:1816(logrotate), READ block 77520 on md0 (8 sectors)
[Sun Oct 10 10:46:49 2021] ppid:7005(synologrotated), pid:1816(logrotate), READ block 77528 on md0 (8 sectors)
[Sun Oct 10 10:46:49 2021] ppid:7005(synologrotated), pid:1816(logrotate), dirtied inode 28146 (ScsiTarget) on md0
[Sun Oct 10 10:46:49 2021] ppid:7005(synologrotated), pid:1816(logrotate), dirtied inode 23233 (SynoFinder) on md0
[Sun Oct 10 10:46:49 2021] ppid:7005(synologrotated), pid:1816(logrotate), READ block 2735752 on md0 (24 sectors)
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(sh), READ block 617656 on md0 (32 sectors)
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), READ block 617824 on md0 (200 sectors)
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), READ block 617688 on md0 (136 sectors)
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), dirtied inode 42673 (log) on md0
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), READ block 120800 on md0 (8 sectors)
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), READ block 120808 on md0 (8 sectors)
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), READ block 113888 on md0 (8 sectors)
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), dirtied inode 50569 (pstore) on md0
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), dirtied inode 42679 (disk-latency) on md0
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), READ block 120864 on md0 (8 sectors)
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), READ block 89200 on md0 (8 sectors)
[Sun Oct 10 10:46:49 2021] ppid:1816(logrotate), pid:1830(du), dirtied inode 41259 (libvirt) on md0
[Sun Oct 10 10:46:49 2021] ppid:7005(synologrotated), pid:1816(logrotate), dirtied inode 29622 (logrotate.status.tmp) on md0
[Sun Oct 10 10:46:49 2021] ppid:7005(synologrotated), pid:1816(logrotate), WRITE block 2798320 on md0 (24 sectors)
[Sun Oct 10 10:46:52 2021] ata2 (slot 2): wake up from deepsleep, reset link now
```
So you can adjust the logrotate settings to prevent wakeups caused by `hibernationFull.log` growing too large:
- inside `/etc/logrotate.d/hibernation`, after the lines having `rotate`, add the line `size 10M` (in 2 places)
- do the same for `/etc.defaults/logrotate.d/hibernation` (not strictly necessary, but just in case)
- reboot to apply the new config
This is how `/etc/logrotate.d/hibernation` can look:
```
/var/log/hibernation.log {
    rotate 25
    size 10M
    missingok
    postrotate
        /usr/syno/bin/synosystemctl reload syslog-ng || true
    endscript
}

/var/log/hibernationFull.log {
    rotate 25
    size 10M
    missingok
    postrotate
        /usr/syno/bin/synosystemctl reload syslog-ng || true
    endscript
}
```
This reduces the rate at which logrotate archives `hibernationFull.log`.
(optional) Adjusting vmtouch setup
If you really need some specific service to run periodically, you can try to leave it enabled, but make sure its binaries (both executables and shared libraries) are permanently cached in RAM.
Synology uses `vmtouch -l` to do exactly this trick for a few of its own files related to synoscheduler - likely an attempt to prevent synoscheduler from waking up the disks whenever it is invoked.
This is done using `synoscheduled-vmtouch.service`:
```
root@NAS:/# systemctl cat synoscheduled-vmtouch.service
# /usr/lib/systemd/system/synoscheduled-vmtouch.service
[Unit]
Description=Synology Task Scheduler Vmtouch
IgnoreOnIsolate=yes
DefaultDependencies=no

[Service]
Environment=SCHEDTASK_BIN=/usr/syno/bin/synoschedtask
Environment=SCHEDTOOL_BIN=/usr/syno/bin/synoschedtool
Environment=SCHEDMULTI_BIN=/usr/syno/bin/synoschedmultirun
Environment=BASH_BIN=/bin/bash
Environment=SCHED_BUILTIN_CONF=/usr/syno/etc/synoschedule.d/*/*.task
Environment=SCHED_PKG_CONF=/usr/local/etc/synoschedule.d/*/*.task
Environment=SCHEDMULTI_CONF=/etc/cron.d/synosched...task
ExecStart=/bin/sh -c '/bin/vmtouch -l "${SCHEDTASK_BIN}" "${SCHEDTOOL_BIN}" "${SCHEDMULTI_BIN}" "${BASH_BIN}" ${SCHED_BUILTIN_CONF} ${SCHED_PKG_CONF} ${SCHEDMULTI_CONF}'

[X-Synology]
```
A quick and dirty way to add more cache-pinned binaries is to put them here in `synoscheduled-vmtouch.service`, using `systemctl edit synoscheduled-vmtouch.service`. Or, if you’re familiar enough with systemd, you can create your own unit using `synoscheduled-vmtouch.service` as a reference.
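For illustration, a drop-in created by `systemctl edit synoscheduled-vmtouch.service` could look roughly like this - the added `/usr/bin/jq` path is purely a placeholder for whatever binary you want pinned, and the empty `ExecStart=` line is required to clear the original definition before redefining it:

```
# /etc/systemd/system/synoscheduled-vmtouch.service.d/override.conf (hypothetical)
[Service]
ExecStart=
ExecStart=/bin/sh -c '/bin/vmtouch -l "${SCHEDTASK_BIN}" "${SCHEDTOOL_BIN}" "${SCHEDMULTI_BIN}" "${BASH_BIN}" ${SCHED_BUILTIN_CONF} ${SCHED_PKG_CONF} ${SCHEDMULTI_CONF} /usr/bin/jq'
```

Run `systemctl daemon-reload` and restart the unit afterwards.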
Docker
Using Docker on an HDD partition might prevent the disks from hibernating. Both dockerd and the containers themselves can produce a lot of I/O to the Docker storage directory.
While it is technically possible to eliminate all dockerd logging, launch containers with ramdisk mounts, minimize parasitic I/O inside containers, etc., in general the simplest strategy might be to relocate Docker storage off the HDD partition - either to an NVMe drive or to a dedicated ramdisk, if you have enough RAM installed.
Personal setup
Now, on to things that are specific to my particular setup.
Shutdown services
Each night I stop the Synology services I don’t need, via Task Scheduler -> Create -> Scheduled Task -> Service. I disable the following services:
- Log Center
- Cloud Sync
- Syncthing
- Hyper Backup

Then another scheduled task starts them again in the morning.
Shutdown docker containers
This was trickier than I expected. I was going to write a scheduled script that just runs `docker compose down`, but this is flawed on Synology: it triggers alerts about an unexpected shutdown of containers - and sends email alerts, in my case. Instead, containers must be stopped via a Synology-specific API. So I created a scheduled bash script, running as root, with the following content:
bash -c "/var/services/homes/arathunku/nas/bin/nice-stop.sh"
A simple one-liner that executes `nice-stop.sh`, where most of the logic lives.
`nice-stop.sh` uses `synowebapi` to stop containers and doesn’t generate any alerts if everything goes well. I’ve also added an easy way to list names of containers that should be skipped.
```
#!/usr/bin/env zsh
# synology doesn't have newest bash!
# requires community pkgs with zsh with modules installed
# script must run as root or it will ask for password
# SKIP_CONTAINERS=("atuin" "postgres16")
SKIP_CONTAINERS=()
if [[ "${1}" == "all" ]]; then
  SKIP_CONTAINERS=""
fi
cd "$(dirname "$0")/.." || exit 1
docker-compose ps | grep " Up " | awk '{ print $1 }' | while read container ; do
  echo "== container: '${container:?}'"
  if [[ " ${SKIP_CONTAINERS[*]} " =~ " ${container:?} " ]]; then
    echo -e "\t...SKIP"
  else
    echo -e "\t...STOP"
    sudo synowebapi \
      --exec api=SYNO.Docker.Container method="stop" \
      version=1 name="${container}" > /dev/null
  fi
done
```
`nice-stop.sh` is written in zsh because Synology ships bash v4.4, and I had neither the time nor the patience to debug scripts in an old version of bash.
Cronjobs
I’ve moved all cronjobs to a similar hour - for me it’s 9-10am, when I’m not home but out on a dog walk. Previously I spread them throughout the day for no good reason. This must be done in two places: Task Scheduler, and any cronjobs inside running Docker services.
Logging
Disabling services works well for achieving overnight IOPS silence, but what about during the day? Even then, I’m not a heavy user of any services, and I’d like to see the disks hibernate most of the time. Some services I run are verbose even at just the `INFO/WARN/ERROR` logging levels, and they have been stable for years, so I’ve decided to drop logging altogether.
I’ve modified the `docker compose` config and switched the logging driver to none.
```
x-app: &x-app
  logging:
    # driver: local
    driver: none
  restart: unless-stopped

services:
  atuin:
    <<: *x-app
    restart: unless-stopped
    image: ghcr.io/atuinsh/atuin:732f882
    command: server start
    ports:
      - "2700:8080"
    depends_on:
      - postgres16
    # rest of config ...
```
A few services I’m running use a self-hosted Postgres, and there are also some tweaks that can be made to Postgres’s config, mainly:
```
log_statement = none
log_min_messages = error
log_min_duration_statement = -1
client_min_messages = error
```
You can read more about these settings in the official docs.
Ramdisk
Before any Docker containers start, I create a tmpfs ramdisk location via a scheduled task.
```
# running as root
mkdir -p /tmp/ramdisk
mount -t tmpfs -o size=1024m ramdisk /tmp/ramdisk
```
This is a ready-to-go location I use for volumes in Docker containers - for anything that doesn’t need to be stored safely on the HDDs: cache data, or log files when logging cannot be easily disabled.
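A hypothetical compose service using that location - the service name and paths here are examples, not from my actual setup:

```
services:
  some-service:
    image: example/image:latest
    restart: unless-stopped
    volumes:
      # lives in RAM, wiped on reboot - fine for disposable cache data
      - /tmp/ramdisk/some-service-cache:/var/cache/app
```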
Validate
To ensure my changes had a positive impact on IOPS, I first tried Synology’s Resource Monitor app, but found it unreliable - Resource Monitor itself (or just using the web UI) generated additional IOPS. Instead, I switched to `iotop` in the terminal. It’s not available by default; it comes with `synogear` (`sudo synogear install`).
Running `iotop -a` over an extended period of time showed that once everything has settled and cron jobs are no longer running, the system has minimal to no IOPS. The `-a` flag displays accumulated information over time, so you can leave it open for a bit and find the worst offenders.
When adding a new service to Docker, I review and follow the steps above, keeping idle IOPS as close to zero as possible. At some point I plan to just throw money at the problem - buy 4-8TB SSDs and stop worrying about it - but that will have to wait for better SSD prices.