-
Hi, I’m a new user of Smallstep and its associated CLI tools, and I’m looking into the best way to implement step SSH certificates with SSO across the many hosts I connect to regularly. One of these hosts is particularly unique, however: it is a high-performance computing (HPC) cluster where I am only one of hundreds (or thousands?!) of users who connect regularly via SSH. Ultimately, this means that I don’t have permission to install custom applications or scripts, with rare exceptions, and definitely cannot modify global SSH configs in /etc/ and other “core Linux” directories. Looking forward to hearing your thoughts on this, and best wishes for the New Year to the Smallstep team! Cheers, Michael
-
Hi @mshamash, the docs you link are part of our SaaS offering (https://smallstep.com/sso-ssh/). It lets you link your identity provider (IdP) and give the users in your IdP access to your servers under some policy. To be able to use certificates you need to change /etc/ssh/sshd_config; you need three parameters:
TrustedUserCAKeys defines the file containing the public part of the key used to sign the user certificates. HostCertificate is the SSH certificate for that host, which clients will verify against the CA. HostKey is the private key used by sshd; there are some defaults, and you can omit it if you use one of them.

Our open-source CA allows you to create the host and user certificates. The easiest way to do it is to bootstrap your CA using:

```
step ca init --ssh
```

That command will create root and intermediate certificates for TLS, as well as user and host keys for signing SSH certificates. It will also create some templates used to automatically configure clients. To sign a host certificate, run:

```
step ssh certificate -f --sign --host <hostname> <path-to-ssh-public-key>
```
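As a minimal sketch, the three sshd_config parameters fit together like this (the file paths are assumptions; use whatever paths your CA public key and host keys actually live at):

```
# Public key of the user CA; sshd trusts user certificates signed by it
TrustedUserCAKeys /etc/ssh/ssh_user_ca.pub

# Certificate presented to connecting clients for this host
HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

# Private key backing the host certificate (often already a default)
HostKey /etc/ssh/ssh_host_ed25519_key
```

After editing, reload sshd for the changes to take effect.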
For example:

```
step ssh certificate -f --sign --host tynyca.local /etc/ssh/ssh_host_ed25519_key.pub
```

Once you configure your hosts, you will need an automated way to refresh the host certificates; by default they expire after 30 days. You can do it with a cron job or a systemd timer. Here are some docs, but they are for X.509 (TLS) certificates rather than SSH ones; at the end of that script there's an example of configuring crond for this.

The default certificate lifetime for users is 16 hours. The best way to refresh user certificates is to configure an IdP and use OAuth/OIDC, although you can also use the default JWK provisioner created in the bootstrap.

The other problem to solve is the user accounts on the hosts. If you already have one, there is no problem: you should be able to log in with your certificate. If not, you need a way to provision it, and that's where our SaaS comes in handy, although our open-source version also comes with a simple way to do it. When you create your user certificate, you need to pass a principal that matches your account on the host.

In this blog post you can find some helpful instructions for some of the above, as well as a handy script to bootstrap a server in AWS, but you should be able to adapt it to your needs.
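As a sketch of the cron approach, a root crontab entry along these lines could renew the host certificate before the 30-day expiry; the certificate paths, schedule, and reload command are assumptions to adapt to your setup:

```
# Illustrative only: renew the host certificate weekly (Mondays at 03:00)
# and reload sshd so it picks up the new certificate.
0 3 * * 1  step ssh renew --force /etc/ssh/ssh_host_ed25519_key-cert.pub /etc/ssh/ssh_host_ed25519_key && systemctl reload sshd
```

A systemd timer plus a oneshot service running the same command is an equivalent alternative on systemd-based hosts.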
-
Hi @maraino and thanks for the detailed reply. Since editing /etc/ssh/sshd_config is a must for SSH certificates to function properly, it looks like I won't be able to use it on the majority of servers where I'm currently a user (one of many) and not an admin with authority to edit global SSH configs. Hopefully certificate authentication for SSH becomes more widespread and adopted by our compute cluster provider!