
Is there a command or script that can tell me how long the server has been up in total since installation?
(either by checking the system installation date and subtracting the time it was shut down, or by checking the main disk's power-on time)

Something similar to what CrystalDiskInfo can do on Windows.

Thanks

gekigek99

2 Answers


I do not believe we have something like that.

This will show when a system was installed:

$ sudo tune2fs -l /dev/sda1 | grep 'Filesystem created:'
Filesystem created:       Sat Jun 14 18:29:43 2014

Replace /dev/sda1 with the device name you need (the first column of df / will show you the device of the root filesystem). But I do not believe I have ever seen a command that shows how long a system has been turned on or off in total.
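For example, here is a minimal sketch that determines the root device automatically (assuming the root filesystem is ext2/3/4, which tune2fs requires, and GNU df for the --output option):

# device backing the root filesystem, e.g. /dev/sda1
ROOTDEV=$(df --output=source / | tail -1)
sudo tune2fs -l "$ROOTDEV" | grep 'Filesystem created:'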

uptime

will only show how long the system has been up since the last boot.

This will list the last 1000 reboots/shutdowns:

last -1000 reboot shutdown

But that lacks the moment the system was turned on after a shutdown.

edit:

How about an approach like this (it shows the number of hours the disk has been powered on):

sudo smartctl --attributes /dev/sda | grep Power_On_Hours

Replace /dev/sda with your device name, and install smartctl with sudo apt install smartmontools if it is not available yet. Not totally perfect, as it assumes you started using the disk when the system was installed.
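As a rough sketch, you could combine the two numbers: compare the disk's power-on hours with the calendar time elapsed since the filesystem was created (assuming /dev/sda is your disk, /dev/sda1 your root partition, and that the raw value sits in the tenth column of the smartctl output, which can differ between drives):

# power-on hours reported by the disk (raw value; column layout may vary)
HOURS=$(sudo smartctl --attributes /dev/sda | awk '/Power_On_Hours/ {print $10}')
# filesystem creation date, used as an approximation of the installation date
CREATED=$(sudo tune2fs -l /dev/sda1 | sed -n 's/^Filesystem created: *//p')
# calendar hours elapsed since then
TOTAL=$(( ($(date +%s) - $(date -d "$CREATED" +%s)) / 3600 ))
echo "Powered on for $HOURS of $TOTAL hours since installation"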

Rinzwind

If you had already installed the downtimed daemon (sudo apt install downtimed) on your system during installation, you could have obtained that information as shown in the following example:

$ downtimes
down  2020-07-12 16:25:15 -> up 2020-07-12 16:25:54 =    00:00:39 (39 s)
down  2020-07-19 22:23:17 -> up 2020-07-19 22:23:57 =    00:00:40 (40 s)
down  2020-07-22 21:38:07 -> up 2020-07-22 21:38:47 =    00:00:40 (40 s)
down  2020-07-29 19:35:47 -> up 2020-07-29 19:36:28 =    00:00:41 (41 s)
down  2020-09-01 12:11:55 -> up 2020-09-01 12:12:36 =    00:00:41 (41 s)
down  2020-09-03 10:08:59 -> up 2020-09-03 10:09:40 =    00:00:41 (41 s)
down  2020-09-03 10:13:16 -> up 2020-09-03 10:15:25 =    00:02:09 (129 s)
down  2020-09-08 18:24:28 -> up 2020-09-08 18:25:07 =    00:00:39 (39 s)
down  2020-09-22 18:06:52 -> up 2020-09-22 18:07:31 =    00:00:39 (39 s)

But the current version of the downtimes command does not display the cumulative time; you have to calculate it yourself.
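If you need the total, a small awk sketch like this (assuming the output format shown above) sums up the parenthesized seconds:

downtimes | awk '{ gsub(/[()]/, ""); sum += $(NF-1) } END { print sum " seconds of downtime in total" }'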


Another option is the last command (as long as the /var/log/wtmp file has not been "rotated"). See the following example:

$ last reboot
reboot   system boot  5.4.0-1025-aws   Tue Sep 22 18:07   still running
reboot   system boot  5.4.0-1024-aws   Tue Sep  8 18:25 - 18:06 (13+23:41)
reboot   system boot  5.4.0-1022-aws   Thu Sep  3 10:15 - 18:24 (5+08:09)
reboot   system boot  5.4.0-1022-aws   Thu Sep  3 10:09 - 10:13  (00:03)
reboot   system boot  5.4.0-1022-aws   Tue Sep  1 12:12 - 10:09 (1+21:56)
reboot   system boot  5.4.0-1021-aws   Wed Jul 29 19:36 - 12:11 (33+16:35)
reboot   system boot  5.4.0-1020-aws   Wed Jul 22 21:38 - 19:35 (6+21:57)
reboot   system boot  5.4.0-1018-aws   Sun Jul 19 22:23 - 21:38 (2+23:14)
reboot   system boot  5.4.0-1018-aws   Sun Jul 12 16:25 - 22:23 (7+05:57)
reboot   system boot  5.4.0-1018-aws   Sun Jul 12 15:22 - 16:25  (01:02)
reboot   system boot  5.4.0-1015-aws   Sun Jul 12 14:01 - 15:21  (01:20)

This shows the system's "uptimes". But, again, you have to sum them up yourself.
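A small awk sketch like the following (assuming the days+HH:MM duration format shown above, and skipping the "still running" entry) can do the summing for you:

last reboot | awk '
    /^reboot/ && $NF ~ /^\(.*\)$/ {
        d = $NF; gsub(/[()]/, "", d)              # strip the parentheses
        days = 0
        if (d ~ /\+/) { split(d, a, "+"); days = a[1]; d = a[2] }
        split(d, t, ":")
        total += days * 1440 + t[1] * 60 + t[2]   # running total in minutes
    }
    END { printf "%d day(s) %02d:%02d of total uptime\n", int(total / 1440), int((total % 1440) / 60), total % 60 }'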

FedKad