Fresh install and cluster build of 6.4.0
Downloaded 6.4.1, ran `rpm -Uvh`, and got a not-so-comforting response:
```
[root@splunkindex14 ~]# rpm -Uvh splunk-6.4.1-debde650d26e-linux-2.6-x86_64.rpm
warning: splunk-6.4.1-debde650d26e-linux-2.6-x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 653fb112: NOKEY
Preparing...                ########################################### [100%]
This looks like an upgrade of an existing Splunk Server. Attempting to stop the installed Splunk Server...
Stopping splunkd...
Shutting down. Please wait, as this may take a few minutes.
.......................................................................................................................  [ OK ]
Stopping splunk helpers...
                                                                         [ OK ]
Done.
   1:splunk                 ########################################### [100%]
complete
Remove *.pyc *.pyo in /opt/splunk
rmdir: failed to remove `/opt/splunk': Directory not empty
[root@splunkindex14 ~]# service splunk restart
Restarting Splunk...
splunkd is not running.                                    [FAILED]

SOFTWARE LICENSE AGREEMENT
THIS SOFTWARE LICENSE AGREEMENT ("AGREEMENT") GOVERNS THE LICENSING,
INSTALLATION AND USE OF SPLUNK SOFTWARE. BY DOWNLOADING AND/OR INSTALLING SPLUNK
```
Questions:
1) What are the `*.pyc` and `*.pyo` files in /opt/splunk, and why is the upgrade trying to delete them?
2) Why is the upgrade trying to remove `/opt/splunk` itself ("failed to remove `/opt/splunk': Directory not empty")? I kind of like my data :)
3) For a scripted upgrade of a cluster, answering the EULA again on every node is redundant at best and disruptive at worst. Is there a recommended RPM upgrade switch (or post-install step) to bypass the service restart and the EULA prompt?
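For context, the kind of scripted rolling upgrade I have in mind looks roughly like this (hostnames and the package path are placeholders for my environment, and `DRY_RUN=1` just prints the per-node commands instead of running them over ssh):

```shell
#!/bin/sh
# Sketch of the scripted cluster upgrade; not my actual tooling.
PKG=splunk-6.4.1-debde650d26e-linux-2.6-x86_64.rpm

upgrade_all() {
    for node in splunkindex13 splunkindex14 splunkindex15; do
        # rpm -Uvh stops splunkd itself; the restart afterwards is
        # where each node stops and waits on the license prompt.
        cmd="rpm -Uvh /tmp/$PKG && service splunk restart"
        if [ "${DRY_RUN:-1}" = 1 ]; then
            echo "ssh root@$node \"$cmd\""
        else
            ssh "root@$node" "$cmd"
        fi
    done
}

upgrade_all
```

With dozens of nodes, having each one block on an interactive EULA prompt breaks the whole loop, which is why I'm hoping for a non-interactive switch.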
Thanks,