OpenShift 4 images SSH issue


TL;DR

We aim to move to OpenShift 4. It brings many new features compared to the 3.x generation. In our company, we have many security rules that keep the system secure. One of them is making sure that our images are PCI-DSS compliant.

This week, to test the PCI-DSS requirements, I wrote an Ansible role that checks images; even though the role does not change anything in the system, it was still necessary to create my own stack to run it against.
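To give an idea of what such a read-only check looks like, here is a hypothetical task in that spirit; the file path, expected mode, and messages are illustrative assumptions, not the actual PCI-DSS rule set:

```yaml
# tasks/main.yml -- read-only compliance checks (illustrative, not the real rule set)
- name: Gather facts about the SSH daemon config
  ansible.builtin.stat:
    path: /etc/ssh/sshd_config
  register: sshd_config

- name: Fail if sshd_config has loose permissions
  ansible.builtin.assert:
    that:
      - sshd_config.stat.exists
      - sshd_config.stat.mode == "0600"
    fail_msg: "sshd_config permissions are not PCI-DSS compliant"
```

Because the role only uses stat and assert, it inspects the image without mutating it.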

In that case, creating a full stack (with a gateway, OpenShift master nodes, and workers) is not a good option, since I have to check the base image without any changes. It is also a total waste of resources, because the installation takes a long time.

I decided to create just one instance from the AWS Console, expecting to connect with an SSH client and test the necessary scenarios.

Nope. It’s not possible.

I didn’t know what kind of mechanism RedHat used, but my public key was not in authorized_keys, so I couldn’t connect.

At first, I expected that our company’s great firewall was not letting me connect. However, the same scenario failed from my own AWS account too, which ruled the firewall out. After this, I was sure RedHat did some kind of trick and didn’t place my key the way other images do.

In the end, I created a ticket for RedHat and explained the situation. They confirmed that it’s not possible and that I have to create the full stack using automation. Only that way can I reach my goal.

After the 8 hours I spent debugging ACLs and security groups, I want to share this pure knowledge with you.

Result: CoreOS images do not add your public key to the instance if you launch them without automation:
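The mechanism behind this: RHCOS boots with Ignition instead of cloud-init, so the key pair you select in the AWS Console is never read. Keys get injected through an Ignition config passed as instance user data, which is, as far as I can tell, what the installer’s automation does for you. A minimal sketch of such a config, assuming Ignition spec version 3.1.0 and RHCOS’s default core user (the key string is a placeholder):

```json
{
  "ignition": {
    "version": "3.1.0"
  },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-ed25519 AAAA... you@example.com"
        ]
      }
    ]
  }
}
```

User data of this shape is the only channel RHCOS reads keys from, which is why a plain console launch leaves authorized_keys empty.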