I have a kitchen instance using docker + habitat that paused/waited forever at this point during `kitchen converge`:
[... snip ...]
75.23 KB / 75.23 KB - [==============================] 100.00 % 42.00 MB/s
☛ Verifying core/zlib/1.2.11/20180608050617
✓ Installed core/cacerts/2018.03.07/20180608102212
✓ Installed core/gcc-libs/7.3.0/20180608091701
✓ Installed core/glibc/2.27/20180608041157
✓ Installed core/haproxy/1.6.11/20180609190214
✓ Installed core/linux-headers/4.15.9/20180608041107
✓ Installed core/openssl/1.0.2n/20180608102213
✓ Installed core/pcre/8.41/20180608092740
✓ Installed core/zlib/1.2.11/20180608050617
✓ Installed kmott/haproxy/1.8.13/20180807232806
★ Install of kmott/haproxy/1.8.13/20180807232806 complete with 9 new packages installed.
D Attempting to execute command - try 1 of 1.
D [SSH] kitchen@localhost<{:user_known_hosts_file=>"/dev/null", :port=>32791, :compression=>false, :compression_level=>0, :keepalive=>true, :keepalive_interval=>60, :timeout=>15, :keys_only=>true, :keys=>["/Users/kmott/IdeaProjects/ford/ford-plans/haproxy/.kitchen/docker_id_rsa"], :auth_methods=>["publickey"], :verify_host_key=>false, :logger=>#<Logger:0x00007f9b68b88188 @level=4, @progname=nil, @default_formatter=#<Logger::Formatter:0x00007f9b68b88110 @datetime_format=nil>, @formatter=nil, @logdev=#<Logger::LogDevice:0x00007f9b68b880c0 @shift_period_suffix=nil, @shift_size=nil, @shift_age=nil, @filename=nil, @dev=#<IO:<STDERR>>, @mon_owner=nil, @mon_count=0, @mon_mutex=#<Thread::Mutex:0x00007f9b68b88048>>>, :password_prompt=>#<Net::SSH::Prompt:0x00007f9b68b88020>, :user=>"kitchen"}> (sh -c '
TEST_KITCHEN="1"; export TEST_KITCHEN
[ -f ./run.pid ] && echo "Removing previous supervisor and unloading package. "
[ -f ./run.pid ] && sudo -E hab svc unload kmott/haproxy/1.8.13/20180807232806
[ -f ./run.pid ] && sleep 5
[ -f ./run.pid ] && sudo -E kill $(cat run.pid)
[ -f ./run.pid ] && sleep 5
echo "Running kmott/haproxy/1.8.13/20180807232806."
[ -f ./run.pid ] && rm -f run.pid
[ -f ./nohup.out ] && rm -f nohup.out
nohup sudo -E hab sup run --peer jenkins-master --topology standalone --strategy rolling --channel stable & echo $! > run.pid
until sudo -E hab svc status
do
sleep 1
done
sudo -E hab svc load kmott/haproxy/1.8.13/20180807232806 --topology standalone --strategy rolling --channel stable
until sudo -E hab svc status | grep kmott/haproxy/1.8.13/20180807232806
do
sleep 1
done
[ -f ./nohup.out ] && cat nohup.out || (echo "Failed to start the supervisor." && exit 1)
')
Running kmott/haproxy/1.8.13/20180807232806.
nohup: appending output to ‘nohup.out’
✗✗✗
✗✗✗ Connection refused (os error 111)
✗✗✗
✗✗✗
✗✗✗ Connection refused (os error 111)
✗✗✗
✗✗✗
✗✗✗ Connection refused (os error 111)
✗✗✗
✗✗✗
✗✗✗ Connection refused (os error 111)
✗✗✗
No services loaded.
The kmott/haproxy/1.8.13/20180807232806 service was successfully loaded
The haproxy service I was trying to start has a `pkg_binds` requirement that I forgot to supply. For some reason, converge just sat at that last "successfully loaded" line above and never returned control to the caller (and never errored out saying the `pkg_binds` requirement could not be met).
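For context, a required bind is declared in the package's plan. The sketch below is illustrative only — the bind name `backend` and exported key `port` are assumptions, not the actual haproxy plan:

```shell
# plan.sh (sketch) — a service declaring a required bind.
# "backend"/"port" are hypothetical names for illustration.
pkg_origin=kmott
pkg_name=haproxy
pkg_version=1.8.13
pkg_binds=(
  [backend]="port"   # required bind: the service cannot fully start until
                     # a producer is supplied via `hab svc load --bind ...`
)
```

With a plan like this, `hab svc load` needs a matching `--bind backend:<service-group>` (which is what the provisioner's `hab_sup_bind` setting would supply), otherwise the service never comes up and converge hangs.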
It should be pretty easy to reproduce by adding a kitchen suite for any package that has a `pkg_binds` requirement, but leaving `hab_sup_bind` out of the provisioner section.
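Roughly, the repro suite would look like this sketch — the exact kitchen-habitat option names (`package_origin`, `package_name`) are assumptions here, and any package with a `pkg_binds` requirement would do:

```yaml
# .kitchen.yml (sketch) — reproduces the hang; option names are illustrative.
provisioner:
  name: habitat
  package_origin: kmott
  package_name: haproxy
  # hab_sup_bind deliberately omitted even though the package declares
  # pkg_binds — converge then hangs after "successfully loaded"

suites:
  - name: default
```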