Why Laravel Vapor was Right for Me

I’ve been watching Jack Ellis talk about his upcoming Vapor course over the last week. I’ve seen a lot of folks wondering whether Vapor/AWS is worth the money and whether it’s right for them.

We had a use case for Vapor and adopted it at the office as soon as they started selling it. It was the right thing for a subset of our apps. I wanted to document my use case and the rationale behind it being right for us so I can share it with the curious folk.

Background

I’m a developer in a big university’s IT department. My group is responsible for something like twenty different applications of varying sizes, ages, and tech stacks. Only about seven of them are Laravel apps. We’re definitely trending up on Laravel for a lot of apps, but I expect that will slow down now. We have them deployed a few ways: on-prem with both mod_php & FPM, on AWS Elastic Container Service, and [now] on Vapor.

The big initiative right now is moving our on-prem stuff into The Cloud. The goal is the elimination of mutable infrastructure. This initiative is coming from the developers, so an unspoken secondary goal is to have as little infrastructure as possible. We aren’t being paid to think about servers, so IMO the fewer of those we have, the better!

We’ve locked our AWS accounts down so developers cannot do anything in the AWS console themselves. If they want resources, they need to write some Terraform, get it code reviewed, and run it through Jenkins. This makes it possible to enforce consistency between the dev, QA, and production environments: you can’t forget to configure something if it’s all done by scripts kept in git.

When we develop Laravel apps, we take advantage of the full range of framework features: queuing, job dispatching, broadcasting, & the scheduler see use in all of our Laravel apps.

How Does Vapor Fit?

I would have liked to get all of this working on the AWS serverless platform myself, but it’s kind of a lot. The first step is just getting PHP working on Lambda — Bref exists, but it’s built around the Serverless Framework instead of Terraform. Then I’d need to figure out the wiring for config management, SQS, API Gateway, CloudWatch Events, monitoring, and all that kind of stuff.

It would have taken a lot of time. My working knowledge of AWS isn’t stellar, so I figured I’d spend a week getting a workable first pass. It probably wouldn’t have had all the fancy stuff with SQS/DynamoDB that Vapor does. Then again: it took The Otwell eight months to make Vapor, so “eh, a week” is perhaps a severe underestimation.

The first app I wanted to deploy was for an “oh shit, vendor went out of business” project. This was being developed at 5,000mph and we didn’t have any firm plans for how to deploy it. I figured it would probably be Dockerized and deployed on ECS. Vapor helpfully went on sale right around the time we were talking about deploying dev.

So the value proposition of Vapor ended up looking like this: I would spend several days building Terraform scripts to run one Laravel app on AWS, and the salary cost of that time is well in excess of the US$400 price of a year of Vapor. The AWS costs associated with the app are something our institution had already decided to pay, with or without Vapor, because we have chosen AWS as our cloud provider.

It was a no-brainer. If I couldn’t get Vapor working, support was offering refunds, so I’d only have lost a few hours and could move on to an in-house solution.

Scalability didn’t enter into it. The low AWS costs for all the serverless stuff were a plus, but certainly not part of the decision. It was entirely down to Vapor being an easy way to deploy Laravel with all its bells and whistles on AWS.

Vapor-izing

Once we signed up, getting the first app deployed was easy. The instructions were clear and I got my app all vapor.yml-ed up.
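For reference, here’s roughly what a minimal vapor.yml ends up looking like for one of these apps. The project ID and names below are placeholders, and your build/deploy steps will differ:

# Sketch only -- project ID and names are placeholders.
id: 12345
name: my-app
environments:
    dev:
        runtime: php-7.4
        build:
            - 'composer install --no-dev'
        deploy:
            - 'php artisan migrate --force'
    production:
        runtime: php-7.4
        build:
            - 'composer install --no-dev'
        deploy:
            - 'php artisan migrate --force'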

I had some hiccups around DNS and certificates. I initially didn’t add any domains to Vapor, since university.edu has its own nameservers. Vapor wouldn’t let me create certificates for a domain it didn’t know about.

I emailed support, and they told me to add the domain even though I wasn’t planning on using Vapor/Route53 to manage it. Easy enough.

My second “oops” was creating the cert in us-east-2. I didn’t know anything about API Gateway at the time, so it didn’t occur to me that it would be fronted by CloudFront — and CloudFront only uses certificates issued in us-east-1. The problem was obvious when I went to deploy the app, and I got it sorted out myself.

I deploy from Jenkins. It was easy to wire up. Here’s my pipeline stage, updated for PHP 7.4:

stage('Deploy to Vapor') {
    agent {
        docker {
            image 'edbizarro/gitlab-ci-pipeline-php:7.4'
            args '-u root:root'
        }
    }
 
    steps {
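        // Capture the commit SHA from the checkout so it can be attached to the Vapor deployment below.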
        script {
            GIT_COMMIT=checkout(scm).GIT_COMMIT
        }
 
        sh 'composer global require --prefer-dist --no-ansi --no-interaction --no-progress --update-with-dependencies laravel/vapor-cli'
        sh "PATH=\$PATH:\$COMPOSER_HOME/vendor/bin/ vapor --version"
        sh "php --version"
 
        // Kind of annoying, but the vapor-cli needs the vendor/composer/installed.json to exist so it can
        // validate your vapor-core version is compatible w/ your vapor.yml `runtime: php-XY` selection.
        sh "composer install --quiet --no-dev"
 
        // Hack to support using two Vapor teams -- the project ID differs between non-prod and prod
        sh "[ \"${BRANCH_NAME}\" = 'production' ] && sed -i 's/id: xxxx/id: xxxx/' vapor.yml || true"
 
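        // The Vapor API token lives in a Jenkins secret-text credential and is only exposed to the deploy command.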
        withCredentials([string(credentialsId: 'vapor-api-key', variable: 'VAPOR_API_TOKEN')]) {
            sh "PATH=\$PATH:\$COMPOSER_HOME/vendor/bin/ vapor deploy --no-ansi ${BRANCH_NAME} --commit='${GIT_COMMIT}'"
        }
    }
 
}     

And with that, I had eliminated the infrastructure diversion from the project plan, saving something like 89 hours.

After I had deployed my dev environment, I realized I wanted to do some event broadcasting as part of an export feature — the export could take a few minutes, so an async push notification to the browser letting the user know it was ready was ideal. Vapor didn’t have a way to do websockets on AWS out of the box, but Taylor had tweeted a suitable solution: Pusher.io.

Pusher is a SaaS websocket provider with lots of lipstick. My export was only for a handful of admin users, so their free tier was fine. Ten minutes later, the infrastructure was sorted out and I was deep in the bowels of client-side Echo code.

Later on, when I wanted to deploy a second app, I already knew the pitfalls. It took about an hour, most of which was spent waiting on my hostmaster to add DNS entries.

Shortcomings

Vapor is a good product, but I have run into some shortcomings.

The first is a restriction on assigning vanity hostnames that are more than three levels deep. Vapor will let me use my-app.university.edu, but not my-app.college-of-arts.university.edu.

This isn’t the end of the world for me, since I’m a member of our IT department and can get whatever I need added to the university.edu zone. I’ve got a lot of friends who work for the separate IT departments attached to our colleges, and it’s a much more difficult (read as: political) process for them. They can already manage their college’s zone, but Vapor won’t work with that.

I have emailed Vapor support and spoken with both Mohamed & Taylor, but fixing that is not a priority. Which is fair enough: how many Vapor customers are even using a third-level domain instead of something like coding-cool.ly, never mind going even deeper?

The second problem I have is that setting up projects initially isn’t very infrastructure-as-code-y: I create projects & DBs in the UI. Between the Vapor CLI & the vapor.yml file, I think I can do everything the UI can, but I would really like a declarative way to get DBs and caches. If I could define those in vapor.yml, I think I’d be happy.
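To make that wish concrete, here’s a purely hypothetical sketch of what declarative resources in vapor.yml could look like — this is not syntax Vapor supports today, just the shape I’d like:

# Hypothetical syntax -- Vapor does not create these from vapor.yml today.
environments:
    dev:
        database:
            name: my-app-dev
            engine: mysql
            instance: db.t3.micro
        cache:
            name: my-app-dev-cache
            engine: redis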

I may be able to address that with some clever Jenkins-ing. It hasn’t been a big enough problem (yet) to spend time on. But I am [allegedly] the IaC champion, so somebody will eventually call me on my bullshit and I’ll have to come up with a solution.

My final complaints are more about Vapor the SaaS product itself. The billing account being the super-admin that can do everything is kind of a gross SaaS thing: I can’t have our accounts-payable person logging in with that level of access.

I think it’s also the only account that can invite users to teams. I kind of expected better, since they have a product for that. Adding users to your team first requires them to make a Vapor account, so I have to reach out to people and have them do that before I can add them. It’s a nuisance. I can’t enforce MFA for them, either.

If anybody has questions, hit me up on Twitter. I’m happy to talk about my experience with the product. But you might be better off talking to Jack Ellis?