I have been working on migrating a bunch of legacy applications off of Elastic Beanstalk onto a Kubernetes infrastructure. At Begin we use kOps to manage several Kubernetes clusters, and on them we run a variety of tech stacks. We use a combination of AWS Secrets Manager and K8s ConfigMaps to manage the variables that get injected into the applications (maybe I'll write about that someday), but it's fairly straightforward to manage.
One group of applications I was recently migrating were static React apps that had been running on an Apache/PHP Beanstalk stack. I didn't realize the full configuration of the apps at the time, so I started out by simply configuring the container to inject the environment variables at startup, run the build, and then serve the result via nginx on my cluster. This allowed us to use a single container image for dev, test, and prod, since it compiled the code at boot time and injected the necessary vars.
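A rough sketch of what that single image looked like — this is illustrative, not the exact Dockerfile; the base image and paths are assumptions. The key point is that the build runs in CMD (container start), not in a RUN step, so Create React App bakes whatever REACT_APP_* vars the environment provides into the bundle:

```dockerfile
# Sketch: one image that builds at boot and serves with nginx (names are illustrative)
FROM nginx:alpine
RUN apk add --no-cache nodejs npm
WORKDIR /app
COPY . .
RUN npm ci
# Build at startup so the same image works in every environment:
# CRA reads REACT_APP_* from the container's env at build time.
CMD ["sh", "-c", "npx react-scripts build && cp -r build/* /usr/share/nginx/html/ && exec nginx -g 'daemon off;'"]
```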
At first this worked. In our dev and test environments everything seemed fine, and while there was some additional tuning we could do to tell K8s to wait for the compilation to finish before serving traffic, there wasn’t much else to it.
Introducing Production

Then we went to production. Up until now our team had been small, and I had just been running a single replica for both the dev and test deployments. When we went to production I bumped that up to 3 just to have some additional redundancy, and here's where I discovered the problem: now every replica was running its own compile at boot.
The ~~Workaround~~ Solution (for now)
I decided a Job in Kubernetes would be a simple solution: compile the code once, and then the running containers themselves would basically just be nginx serving the static output. In practice this meant a Job that ran a container with the app code, injected the ENV like before, ran react-scripts build like before, and then just zipped up the contents and stored them on S3. I also set some additional variables at deploy time to denote the deployed branch, env, etc.
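The build Job looked roughly like this — the bucket, image, ServiceAccount, and variable names are all placeholders, but the shape is: one-shot pod, env injected from a ConfigMap, build, zip, push to S3:

```yaml
# Sketch of the build Job (bucket/image/names are illustrative)
apiVersion: batch/v1
kind: Job
metadata:
  name: build-assets
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: asset-builder   # OIDC-backed SA with write access to the bucket
      containers:
        - name: build
          image: my-registry/react-app-src:latest
          env:
            - name: REACT_APP_API_URL
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: api-url
            - name: DEPLOY_BRANCH          # set at deploy time
              value: "main"
          command:
            - /bin/sh
            - -c
            - |
              npx react-scripts build
              cd build && zip -r /tmp/assets.zip .
              aws s3 cp /tmp/assets.zip s3://my-assets-bucket/react-app/main/assets.zip
```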
I chose S3 because the eventual goal is to just have that be the destination - compile the assets, store them on S3, serve them on S3. Someday.
My deployment then became a custom nginx container with a script that pulls the packaged file off S3 and serves it up. The only trick was waiting for the compile to complete before attempting to pull the new code. I discovered k8s-wait-for, which I configured as an initContainer to pend until the compilation is complete. Then, when nginx starts, it can easily pull the assets!
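Wired together, the Deployment looks something like the sketch below. The Job name, image tags, and fetch script are assumptions on my part; the pattern is just an initContainer running k8s-wait-for against the build Job, then nginx pulling and serving the assets:

```yaml
# Sketch: Deployment gated on the build Job via k8s-wait-for (names are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: react-app
  template:
    metadata:
      labels:
        app: react-app
    spec:
      serviceAccountName: react-app        # needs RBAC to read the Job's status
      initContainers:
        - name: wait-for-build
          image: groundnuty/k8s-wait-for:v2.0   # pin whatever tag you've vetted
          args: ["job", "build-assets"]         # block until the Job completes
      containers:
        - name: nginx
          image: my-registry/nginx-s3-puller:latest
          # hypothetical script: pulls assets.zip from S3, unzips into the web root, starts nginx
          command: ["/fetch-and-serve.sh"]
          ports:
            - containerPort: 80
```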
- Because I'm using kOps, I used its OIDC provider to configure a ServiceAccount with permissions to write to my S3 bucket.
- k8s-wait-for needs permission to examine objects, so there needs to be a ServiceAccount/Role/RoleBinding granting the appropriate permissions.
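The RBAC side is a namespaced Role plus a RoleBinding onto the pod's ServiceAccount — a sketch with illustrative names below. I've included pods alongside jobs since k8s-wait-for can watch either; trim the rules to whatever your version actually queries:

```yaml
# Sketch: RBAC letting the initContainer read Job/Pod status (names are illustrative)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: react-app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-reader
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: react-app-job-reader
subjects:
  - kind: ServiceAccount
    name: react-app
roleRef:
  kind: Role
  name: job-reader
  apiGroup: rbac.authorization.k8s.io
```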