Wake up your RDS Aurora Serverless before running your migrations

When you’re using an RDS Aurora Serverless DB instance with Laravel Vapor, you have the option to scale it down to zero capacity units when it’s been idle. This is great for development environments — it only takes a few seconds to come back up, and while it’s hibernating, you’re saving loads of money.

One downside is that your migrations may fail during deployments. When Vapor goes to run the php artisan migrate command, RDS often won’t wake up before Laravel’s DB connection attempt times out:

An error occurred during deployment.

Message: Deployment hook failed.
Hook: migrate --force

In Connection.php line 678:

  SQLSTATE[08006] [7] timeout expired (SQL: select * from information_schema.tables where table_schema = public and table_name = migrations and table_type = 'BASE TABLE')

In Connector.php line 70:

  SQLSTATE[08006] [7] timeout expired

I wrote an artisan command to poke the DB a couple of times, and added it to my vapor.yml as the first deployment step.

<?php

// app/Console/Commands/WakeUpDatabase.php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class WakeUpDatabase extends Command
{
    protected $signature = 'db:wake {retries=5 : attempts to make} {wait=5 : time to wait between retries, in seconds}';
    protected $description = 'Wakes up a potentially-inactive serverless RDS database';

    public function handle()
    {
        $retries = (int) $this->argument('retries');
        $wait_between = ((int) $this->argument('wait')) * 1000;

        retry($retries, fn () => DB::select('SELECT 1'), $wait_between);

        $this->info('Database is up');
    }
}

Add php artisan db:wake to your vapor.yml deploy: section, right above the migrate command, in each environment where you have an Aurora Serverless DB.
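
Here’s a sketch of how that ends up looking in vapor.yml (the project ID, app name, and environment are placeholders):

id: 12345
name: my-app
environments:
    staging:
        build:
            - 'composer install --no-dev'
        deploy:
            - 'php artisan db:wake'
            - 'php artisan migrate --force'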

Problem solved!

Bootstrap vs. Tailwind

People occasionally get into slap-fights over the tried-and-true Bootstrap vs. newer CSS frameworks like Tailwind. I’ve worked with both — and each one has a place in the toolkit. Maybe not your individual toolkit, depending on what your job requires, but at $UNIVERSITY there was a place for both.

SUNDAY, SUNDAY, SUNDAY…

Do I need a CSS framework?

I think so! Not necessarily Bootstrap with all of its opinions on how your site should look, but you want a couple things and it doesn’t make sense to build them yourself.

The first thing you want is a CSS reset/normalization. Browsers have different default styles (lookin’ at you, form elements), and you want them to all be consistent so your changes tested in Chrome aren’t a trainwreck in Safari.

The next thing you want is a unit system. There shouldn’t be any situation in which you’re specifying dimensions/margins/padding with pixels at this point — there are too many devices and too many screen sizes.

Instead, you want relative units, but you also want your “size 5 font” to be something relatively close to your “size 5 padding”, for your own sanity. You may be able to get by with rem units, but I’ve never actually tried that, so I can’t say if everything behaves consistently.

The final thing you want in a framework is for it to deal with screen size breakpoints. I don’t want to write a bunch of CSS media queries for everything. Please give me col-12 and col-lg-6 so I can get on with my life.

Of course, there’s a lot more frameworks can give you. But then we start moving into the realm of frameworks having opinions, and you need to think about what you’re picking.

Bootstrap

We all see Bootstrap’s distinctive style on websites every single day. It became ubiquitous because it’s easy to use. You can copy-and-paste the official template or any component from the docs and you’re 80% of the way done with whatever you’re doing.

As an Enterprise Application Developer, having a pile of ready-made components is a boon. You don’t necessarily care about your branding in the Enterprise — maybe it’s internal software, or you have captive customers — so a logo and changing the primary brand colour covers your needs.

You’ve then got access to the 24 pre-designed Bootstrap components. You do not need to think about web design very much. All your time can be spent worrying about implementing your app and not “hmmm, fuck, the left padding on my card is off by 3px”.

Bootstrap has an almost-invisible feature: all of the components are made with accessibility in mind. This isn’t something a lot of developers think about at the get-go, but by using Bootstrap, you’re getting 80% of the work right out of the box. When somebody does say “well what about screen readers” at the end of a project, fixing the Bad Part of your app can be achieved in a sprint instead of requiring a major re-engineering effort.

As a developer, you can be on web-design-auto-pilot with Bootstrap. It’s a huge time-saver. It’s particularly valuable if you’re hiring folks out of those coding bootcamps — they may not have spent long on CSS, so “here’s copy-and-paste components” keeps them moving right along.

I know all us old folks are like, “hah, whatever, CSS isn’t that hard”, but it is a huge topic with a lot of nuance to understand. The IE era being over doesn’t automatically make CSS simple.

Of course, there are downsides. Bootstrap still depends on jQuery, so that’s a 30kb asset that you might use for one dropdown. In the world of Enterprise Software, this might not be a big deal.

I’ve been using more Vue widgets; having Vue + jQuery loaded feels unnecessary, and there’s a temptation for developers to reach for jQuery (since it’s already there). This entrenches it in your app, when you might not wanna do that. Bootstrap v5 will drop the jQuery dependency, so there’s an opportunity to shave some page load time off coming soon.

You might catch flak when your site looks like every other Bootstrap site. This is a totally fair criticism, and a result of Bootstrap being so opinionated that it has control over 95% of your website’s design.

If “looking like Bootstrap” is a problem for your site, do not try to customize Bootstrap’s components. Adjust the variables all day long — some new colours and different corners go a long way — but Bootstrap’s opinions are its strength. Once you try to fundamentally change the look of a card, you’ve left the happy path and are deep in the woods. It’d probably be better to pick a different framework at this point.
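
If theming is all you need, the variables route is a few lines of Sass (a minimal sketch, assuming you build Bootstrap 4 from its Sass sources; the colour is a made-up brand purple):

// Override Bootstrap's defaults *before* importing it.
$primary: #5c2d91;
$border-radius: 0;

@import "~bootstrap/scss/bootstrap";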

Tailwind

Tailwind stands in stark contrast to Bootstrap: it’s a utility framework. There’s no .card class for a div that gives you an accessible pre-designed card — you’ve gotta design that yourself.

I think the easiest way to explain Tailwind is with an example.
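
Here’s a rough illustration of the utility-first style, in the spirit of the card example from the Tailwind docs (the markup below is mine, not theirs):

<!-- Every visual detail comes from single-purpose utility classes -->
<div class="max-w-sm mx-auto flex items-center p-6 bg-white rounded-lg shadow-md">
  <img class="h-12 w-12 rounded-full" src="avatar.jpg" alt="">
  <div class="ml-4">
    <p class="text-lg font-semibold text-gray-900">Jane Doe</p>
    <p class="text-sm text-gray-500">You have a new message!</p>
  </div>
</div>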

This feels a lot like writing in-line style="width: 100%" attributes on every element. That is essentially what Tailwind is, but you’re using the utility classes instead of style attributes, so you get the responsive stuff and a unit system. Tailwind comes with an adjustable colour palette too, so all of the utilities feel very consistent and logical.

If you’re staring at a blank web page and you want to build your app, Tailwind might seem daunting. It’s on you to design every single aspect of your pages. The framework is only going to help you insofar as it keeps your size 1 padding relative to your size 2 padding.

You, the developer, will need a strong command of CSS to use Tailwind.

Now I know what you’re thinking, “this is an atrocity, what a horrible mess!” and you’re right, it’s kind of ugly. In fact it’s just about impossible to think this is a good idea the first time you see it — you have to actually try it.

Tailwind Documentation: Utility First

If you know what you want your page to look like, Tailwind will get you there, because there are no wrong opinions baked into it that you’ve gotta fight.

While I mostly used Bootstrap at $UNIVERSITY for the consistency/ease, our fundraising group had their own marketing team and they provided us with bespoke designs. They knew having a button be 3px to the left resulted in an additional 0.25% of revenue, and they didn’t care that Bootstrap wanted the button to be 3px to the right instead — they needed it their way.

In that situation, having Tailwind provide me with consistent units and otherwise getting out of my way is exactly what I need. A CSS framework’s opinions are never going to match the marketing team’s opinions, so being “close to the metal” (so to speak) with CSS is perfect.

The UI Trap in Microsoft Teams

I have moved orgs and no longer live in Microsoft’s Slack competitor, Teams. So before I forget the frustration, I wanted to write up the severe, crippling flaw in Teams’ UI.

Here’s a screenshot. Note the leftmost sidebar: a “chat” tab, and then a separate “teams” tab:

Ignore the un-compacted chat layout. After a few months, you get used to it 🤷

“Teams” is the chatroom function, whereas “Chat” is where your DMs are. If you’re in one tab, there’s not a great way to see what’s happening in the other. You have the “Activity” section too, but that’s another area that swaps you out of chat entirely, so it doesn’t really help.

This causes a huge problem — people live in the Chat tab, or they live in the Teams tab. Over time, the fact that people only wanted to live in one meant they’d stay in the most flexible tab: chat.

This degraded the entire product for me. I tried really hard to get people to use Teams so folks could follow the discussions they were interested in. In one group, I was successful — but everyone else ended up DMing me or setting up small group chats.

The DMs hurt information availability, since you need to be invited to a group chat, and the search functionality in DMs isn’t as good as in the team chat rooms.

On the Language of Job Posts

One of the articles in this week’s Diversify Tech newsletter was interesting: Not Applicable: What Your Job Post is Really Saying.

The whole post is great, and you should read it. Here’s what really struck me:

I recently had the opportunity to participate in the recruiting process for early- and mid-career developers at my company. The first thing that I did was to review the language that was typically copied-and-pasted into each job listing. I immediately saw words like “drive”, “influence”, “solve”, “impact”, and “lead”. What these words communicate is that we were looking for very confident and influential engineers whose primary focus would be on problem-solving. We further related that many of our key engineers had worked at companies like Google, Netflix, and eBay.

What would all this mean to an early-career developer? Someone who lacked the confidence of their more advanced peers? Someone who was looking for the opportunity to learn and grow, maybe under the guidance of a mentor? Someone who had enthusiasm but lacked experience?

It’s simple. The message is, “You don’t belong here.”

 Coraline Ada Ehmke

I think I’ve only written a job post a couple of times, but I definitely switched to this other “job post” language. Until I read this post, I didn’t even realize I was code-switching.

I guess once you’ve read a bunch of ads written like this, you just assume that’s how they should be written. When you stop to think about it — who the hell speaks like this in normal life?

Anyways, now that the blind spot has been pointed out, I can work on fixing it. I don’t expect I’ll be writing job posts in the near future, but I can at least whine about them being bad (and explain why they’re bad) to the powers that be 😛

Adding stubs to your Laravel project

The project that I am currently working on is primarily not using Eloquent — instead, I’m using a JSON:API item class from a package, spiced up with some additional code I’ve mixed in.

I’m making lots of these models, so I did a quick source-dive into the framework to figure out how the make:something commands are implemented. Turns out, writing your own is really easy.

<?php

namespace App\Console\Commands;

use Illuminate\Console\GeneratorCommand;

class JsonModelMakeCommand extends GeneratorCommand
{
    protected $name = 'make:json-model';
    protected $description = 'Create a new JSON:API model';
    protected $type = 'Model';

    protected function getStub()
    {
        return $this->laravel->basePath('/stubs/json-model.stub');
    }

    protected function getDefaultNamespace($rootNamespace)
    {
        return $rootNamespace.'\Models';
    }
}

For the stub, you just have to create stubs/json-model.stub at the root of your project. It has a couple of substitution tokens; the {{ rootNamespace }} one expands to the App\ namespace. The ItemIsArrayable concern the stub pulls in is the extra code I mixed in.

<?php

namespace {{ namespace }};

use {{ rootNamespace }}Models\Concerns\ItemIsArrayable;
use Illuminate\Contracts\Support\Arrayable;
use Swis\JsonApi\Client\Item;

class {{ class }} extends Item implements Arrayable
{
    use ItemIsArrayable;

    protected $type = '{{ class }}';
}
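
With the command registered, generating one of these models works like any other artisan generator (the model name here is hypothetical):

php artisan make:json-model Flight

That writes app/Models/Flight.php from the stub, courtesy of the getDefaultNamespace() override.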

You should add a note to the README about your new make command, so other developers notice it.

It’s worth noting that Laravel 7 added stub customization. What I showed above takes things a step beyond, into new types of classes that the framework does not have out of the box. If I had entirely eschewed using Eloquent, I could have updated the model stub instead of adding a whole new command.

Starting a new Laravel app

I’ve started working on a new Laravel app. That isn’t uncommon, but it is a good opportunity for some blogging!

I wanted to do more restaurant reviews — but the world has other ideas — so here’s some thoughts on how I’m setting the new app up instead.

Oracle

The database is unusual for this app: I am replacing a web app using a legacy Oracle database. There’s still tons of business logic external to the app (yay, batch jobs) written against this DB, so moving it to postgres isn’t in the cards right now.

This usually means it’s time for another oci8 or pdo_oci adventure. I’ve been building these extensions on-and-off for the last 15 years or so, and it’s never fun.

  • Oracle [used to] make it difficult to download the instant client (but credit where it’s due, this no longer needs an account / EULA acceptance, so scripting setup is possible now)
  • The PHP extension build scripts usually fail to detect where the instant client is installed to, so I have to go source-diving to figure out what it’s expecting
  • If you’re using a product that comes with PHP in a non-standard location (lookin’ at you, Zend Server), the build scripts may fail to find some of the PHP devel stuff, like the php-config command
  • You have to rebuild whenever you do a minor PHP version upgrade, since no vendor is shipping pre-built Oracle extensions for PHP (thanks, Oracle licensing)

Even when you do have oci8 or pdo_oci set up, you run into random bugs that nobody has bothered fixing, because who the hell is even using this nonsense?

But deploying to Vapor offers me a blessing in disguise: getting the Oracle driver built as an additional Lambda layer sounds like a HUGE pain in the ass, so the team came up with something totally different instead:

The Laravel UI talks to a JSON:API implemented with NodeJS & Express. Node talks to Oracle using the node-oracledb package.

This neatly solves a second problem I would have had to deal with — the Oracle DB is on-premises, so the code needs to run in our VPC with a VPN connection back to the datacenter. Vapor uses its own VPC with no VPN set up. The Express app will live in my VPC, and Laravel will make API calls to it over the internets.

Installing the node-oracledb package is MUCH easier than building a PHP extension. I’m even able to bundle the Linux instant client with the repo, so there’s zero extra setup — yarn install && yarn start and you’re in business.

“But why do you even need Laravel”, you ask yourself, “if you have a perfectly good Node app…”

Well, a couple of reasons. First of all, because we’re Laravel devs. Second, we want to use Livewire and eventually build in some async jobs. Down the road, we want to migrate the Oracle stuff to postgres and tie it directly into the Laravel app, so this Node piece is a transitory shim that will go away in a year or two.

Eloquent for Oracle

Before we decided on using Express, we had considered Laravel -> Oracle directly. Once you clear the driver hurdle, there’s still a problem: Eloquent doesn’t have an Oracle grammar.

Yajra has a package that fixes this: laravel-oci8. Even though we did not go this route, I’m confident that package would have worked out well — I use another of Yajra’s packages, laravel-datatables.

Yajra is a great developer 💓

JSON:API

If you attended Laracon Online 2020, you saw Matt Stauffer’s talk on the JSON:API spec. It’s a very reasonable standard for making REST APIs, so I decided to run with it.

That does introduce some complexity — my models need to be decorated by a bunch more Stuff before the API sends them to the client.

I was writing an OpenAPI spec by hand for a couple of the API endpoints, so the first thing I needed was some OpenAPI models for the JSON:API “stuff”. I found exactly what I needed in a gist, and that got me started.

On the Express side, I went with Yayson to make things easier. It’s not a complete solution though — the first time I wanted to emit an error response, the library left me hanging.

I would have preferred something more complete like ethanresnick/json-api, but it REALLY wants you to be using a supported ORM, and there’s not much love for Oracle in the developing-ORMs-for-Node community.

Writing my own adapter looked tedious, so Yayson was the better choice: all I needed to do was write an error model & presenter.

On the Laravel side, I’m working with swisnl/json-api-client. It’s still early days, but this seems like a very thorough client implementation. I was going to write repositories for all my models; this has a base class for a repository that Just Works.

Laravel Authentication

Authenticating users for this project is very different from my usual approach of plugging into our SSO system and generating user models for people who aren’t in the database yet.

All of the auth is instead outsourced to the Express API, and I needed to plug authentication into the JSON:API client’s repository base class.

My API has two authentication options: forwarding a cookie for our SSO system, or providing an API token header (for a few select admin users). Once you authenticate, the API exposes your user info & permissions, so Laravel can grab those.

I did this with a custom guard & user provider. The default session guard almost did what I wanted, but it had a bunch of stuff for a ‘remember me’ token and other types of auth that I didn’t want to deal with.

I rolled my own simple guard. The GuardHelpers trait made it painless — if you’re writing a guard, you probably want to mix that in.

The user provider is just a shim to hit the JSON:API repository. You need to implement five methods; four of mine just throw a NotImplemented exception. It’s only being called from the guard I wrote, so I know those extra methods won’t ever be called.
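
For illustration, here’s roughly what such a guard can look like. The class name is invented and the details are a sketch rather than my exact code, but the GuardHelpers-plus-provider shape is the point:

<?php

namespace App\Auth;

use Illuminate\Auth\GuardHelpers;
use Illuminate\Contracts\Auth\Guard;
use Illuminate\Contracts\Auth\UserProvider;
use Illuminate\Http\Request;

class SsoGuard implements Guard
{
    use GuardHelpers;

    protected $request;

    public function __construct(UserProvider $provider, Request $request)
    {
        $this->provider = $provider;
        $this->request = $request;
    }

    public function user()
    {
        if (! is_null($this->user)) {
            return $this->user;
        }

        // Defer to the provider, which asks the JSON:API who this
        // SSO cookie belongs to.
        $token = $this->request->cookie(config('my-app.cookie-name'));

        return $this->user = $token
            ? $this->provider->retrieveById($token)
            : null;
    }

    public function validate(array $credentials = [])
    {
        return ! is_null($this->provider->retrieveByCredentials($credentials));
    }
}

You register it with Auth::extend() in a service provider and point a guard at the new driver in config/auth.php.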

Wiring up the JSON:API client so it forwards the SSO cookie took me a while to figure out, because what I actually wanted was more complex: forward the cookie for web requests, but use an admin API token (from my env vars) for tinker, console commands, and async jobs.

Tinker is important to our teams’ development workflow, and having to go copy-paste a cookie to make it work would have made for a poor developer experience.

I ended up with a two-piece solution: bind the JSON:API ClientInterface to a class that extends the package’s client, but adds an extra API token header as a default. Admin API token auth becomes the “default” for the Laravel app.

Then, to get the cookie forwarding working, I added a middleware to the web route group, right before the authentication middleware. This becomes the default for web requests. It felt like a very clean approach, since middleware receives the request and can grab the cookie value off that.

Here’s the cookie one, as an example:

<?php

namespace App\Repositories\ApiAuth;

use Psr\Http\Client\ClientInterface as HttpClientInterface;
use Psr\Http\Message\RequestFactoryInterface;
use Psr\Http\Message\StreamFactoryInterface;
use Swis\JsonApi\Client\Client;

/**
 * SSO cookie authenticated JSON:API client, for general use by the UI.
 */
class SsoClient extends Client
{
    public function __construct(
        string $api_url,
        string $sso_token,
        string $cookie_name,
        HttpClientInterface $client = null,
        RequestFactoryInterface $requestFactory = null,
        StreamFactoryInterface $streamFactory = null
    ) {
        parent::__construct($client, $requestFactory, $streamFactory);

        $this->setBaseUri($api_url);
        $this->setDefaultHeaders($this->mergeHeaders([
            'Cookie' => sprintf('%s=%s', $cookie_name, $sso_token),
        ]));
    }
}

And the binding, which I’ve placed in a dedicated middleware and applied to the web group:

<?php

namespace App\Http\Middleware;

use Closure;
use App\Repositories\ApiAuth\SsoClient;
use Swis\JsonApi\Client\Interfaces\ClientInterface;

class BindSsoCookieToApiClient
{
    public function handle($request, Closure $next)
    {
        // The admin API token client is the default (bound in AppServiceProvider),
        // but for HTTP requests we want to use the SSO client so it forwards the cookie.

        $cookie_name = config('my-app.cookie-name');

        $ssoClient = app()->makeWith(SsoClient::class, [
            'api_url' => config('jsonapi.base_uri'),
            'cookie_name' => $cookie_name,
            'sso_token' => $request->cookie($cookie_name),
        ]);

        app()->instance(ClientInterface::class, $ssoClient);

        return $next($request);
    }
}

Since my SSO cookie isn’t set by Laravel, there is one last step: add it to the EncryptCookies exception list, so Laravel doesn’t blank the value out. I somehow had failed to notice this default middleware for the last two years, and I had been grabbing it from the $_COOKIE superglobal instead … doh!
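
The fix is a one-liner in the stock middleware (the cookie name here is hypothetical):

<?php

namespace App\Http\Middleware;

use Illuminate\Cookie\Middleware\EncryptCookies as Middleware;

class EncryptCookies extends Middleware
{
    /**
     * The names of the cookies that should not be encrypted.
     *
     * @var array
     */
    protected $except = [
        'my_sso_cookie',
    ];
}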

Cosmetics

I have a standard Bootstrap theme & site layout that I’ve been using in our Laravel apps. Until now, I’ve just been copy-pasting it from app to app.

That seemed like a maintenance time-bomb. With Bootstrap 5 drawing closer, I took steps to defuse the bomb and packaged my UI up as a UI preset. User-definable presets became a thing in Laravel 6, but the laravel/ui package has been on my mind a lot more since 7 launched.

My initial package ejected everything into the app. This didn’t feel like a good fix: devs would customize things and upgrading would still be impossible.

I looked at a few other packages to see if they handled that problem. Turns out, Laravel has a solution and I’d missed it: packages can register a view namespace. I was ejecting a couple views for the layout & some standard UI components, so I pulled them back into the package.
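
Registering the namespace is a couple of lines in the package’s service provider (the names below are illustrative); consuming apps then reference views like view('ui::layouts.app') and never need their own copy:

<?php

namespace Vendor\UiPreset;

use Illuminate\Support\ServiceProvider;

class UiPresetServiceProvider extends ServiceProvider
{
    public function boot()
    {
        // Blade files under the package's resources/views resolve
        // with the 'ui::' prefix in any app that installs the package.
        $this->loadViewsFrom(__DIR__.'/../resources/views', 'ui');
    }
}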

Unfortunately, the same functionality doesn’t exist for the SASS/JS stuff, so that’s still being ejected. If anybody has a good solution for that, ping me @owlmanatt … some of it should be ejected, but I don’t want stuff like my social icon styles to be changed, since they’re done per our organization’s branding guide.

Developer Experience

My role increasingly involves creating positive developer experiences: writing Terraform modules for our different infrastructure patterns, doing the early bits of development projects to set the scene for other teams to flesh out, and writing loads of documentation.

Just like we give user experience a lot of thought, I’m constantly evaluating the developer experience (DX) that my decisions result in.

Auto-wiring the authentication stuff to the JSON:API repositories is one less thing future developers have to think about. The new UI package, plus my tools package, means our dev teams can go from generating a new Laravel app skeleton to the meat of their projects immediately.

When I was setting up the Express project, I tried to make it “feel” like a Laravel app. Express is a barebones framework, so there’s not much structure. Setting it up with controllers, putting the routes file where you’d see it in Laravel, and adding dotenv will make our developers more comfortable crossing over.

Both the Laravel app & API include setup steps in the README that have been tested on Windows, OS X, and our usual Linux VM. This is a critical step — nothing kills momentum like “I spent all week figuring out how to set up my dev environment”.

Laravel’s easy thanks to Homestead/Valet, but Express offers a pretty shite DX out-of-the-box. It doesn’t detect code changes, so you have to stop/start the app constantly. I went through and set up Nodemon so nobody has to think about this when they’re working.
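
The setup is nothing fancy, something like this in package.json (the entry-point name is an assumption):

{
  "scripts": {
    "start": "nodemon server.js"
  },
  "devDependencies": {
    "nodemon": "^2.0.2"
  }
}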

It’s all minor stuff, but the lifespan of our apps is usually around 10 – 15 years. Every minor thing that I can do to make developing & maintaining the app easier adds up to a lot of saved time over such a long lifespan!

Why Laravel Vapor was Right for Me

I’ve been watching Jack Ellis talk about his upcoming Vapor course over the last week. I’ve seen a lot of folks curious if Vapor/AWS was worth the money and if it was right for them.

We had a use case for Vapor and adopted it at the office as soon as they started selling it. It was the right thing for a subset of our apps. I wanted to document my use-case and the rationale behind it being right for us so I can share it with the curious folk.

Background

I’m a developer in a big university’s IT department. My group is responsible for something like twenty different applications, all of varying size, age, and tech stacks. Only about seven of them are Laravel apps. We’re definitely trending up on Laravel for a lot of apps, but I expect that will slow down now. We have them deployed in a couple ways: on-prem with both mod_php & FPM, on AWS Elastic Container Service, and [now] on Vapor.

The big initiative right now is moving our on-prem stuff into The Cloud. The goal is the elimination of mutable infrastructure. This initiative is coming from the developers, so an unspoken secondary goal is to have as little infrastructure as possible. We aren’t being paid to think about servers, so IMO the fewer of those we have, the better!

We’ve locked our AWS accounts down so developers cannot do anything in the AWS console themselves. If they want resources, they need to write some Terraform, get it code reviewed, and run it through Jenkins. This makes it possible to enforce consistency between the dev, QA, and production environments: you can’t forget to configure something if it’s all done by scripts kept in git.

When we develop Laravel apps, we’re taking advantage of the full range of framework features: queuing, job dispatching, broadcasting, & scheduler see use in all of our Laravel apps.

How Does Vapor Fit?

I would have liked to get all of this working on the AWS serverless platform myself, but it’s kind of a lot. The first step is just getting PHP working on Lambda — Bref exists, but it’s built around the Serverless Framework instead of Terraform. Then I’d need to figure out the wiring for config management, SQS, API gateway, CloudWatch events, monitoring, and all that kind of stuff.

It would have taken a lot of time. My working knowledge of AWS isn’t stellar, so I figured I’d spend a week getting a workable first-pass. It probably wouldn’t have had all the fancy stuff with SQS/DynamoDB that Vapor does. Then again: it took The Otwell eight months to make Vapor, so “eh a week” is perhaps a severe under-estimation.

The first app I wanted to deploy was for an “oh shit, vendor went out of business” project. This was being developed at 5,000mph and we didn’t have any firm plans for how to deploy it. I figured it would probably be Dockerized and deployed on ECS. Vapor helpfully went on sale right around the time we were talking about deploying dev.

So the value proposition of Vapor ended up looking like this: I would spend several days building Terraform scripts to run one Laravel app on AWS, the salary cost of which is well in excess of the US$400 price of a year of Vapor. The AWS costs associated with the app are something our institution had already decided to pay, with or without Vapor, because we have chosen AWS as our cloud provider.

It was a no-brainer. If I couldn’t get Vapor working, support was doing refunds. I’d only have lost a few hours and could move on to an in-house solution.

Scalability didn’t enter into it. The low AWS costs for all the serverless stuff were a plus, but certainly not part of the decision. It was entirely down to Vapor being an easy way to deploy Laravel with all its bells and whistles on AWS.

Vapor-izing

Once we signed up, getting the first app deployed was easy. The instructions were clear and I got my app all vapor.yml-ed up.

I had some hiccups around DNS and certificates. I initially didn’t add any domains to Vapor, since university.edu has its own nameservers. Vapor wouldn’t let me create certificates for a domain it didn’t know about.

I emailed support, and they told me to add the domain even if I was not planning on using Vapor/Route53 to manage it. Easy enough.

My second “oops” was creating the cert in us-east-2. I didn’t know anything about API Gateway at the time, so it didn’t occur to me that it would be using CloudFront, which only accepts certificates issued in us-east-1. The problem was obvious when I went to deploy an app; I got it sorted out myself.

I deploy from Jenkins. It was easy to wire up. Here’s my pipeline stage, updated for PHP 7.4:

stage ('Deploy to Vapor') {
    agent {
        docker {
            image 'edbizarro/gitlab-ci-pipeline-php:7.4'
            args '-u root:root'
        }
    }

    steps {
        script {
            GIT_COMMIT=checkout(scm).GIT_COMMIT
        }

        sh 'composer global require --prefer-dist --no-ansi --no-interaction --no-progress --update-with-dependencies laravel/vapor-cli'
        sh "PATH=\$PATH:\$COMPOSER_HOME/vendor/bin/ vapor --version"
        sh "php --version"

        // Kind of annoying, but the vapor-cli needs the vendor/composer/installed.json to exist so it can
        // validate your vapor-core version is compatible w/ your vapor.yml `runtime: php-XY` selection.
        sh "composer install --quiet --no-dev"

        // Hack to support using two Vapor teams -- the project ID differs between non-prod and prod
        sh "[ \"${BRANCH_NAME}\" = 'production' ] && sed -i 's/id: xxxx/id: xxxx/' vapor.yml || true"

        withCredentials([string(credentialsId: 'vapor-api-key', variable: 'VAPOR_API_TOKEN')]) {
            sh "PATH=\$PATH:\$COMPOSER_HOME/vendor/bin/ vapor deploy --no-ansi ${BRANCH_NAME} --commit='${GIT_COMMIT}'"
        }
    }

}     

And with that, I had eliminated the infrastructure diversion from the project plan, saving something like 89 hours.

After I had deployed my dev environment, I realized I wanted to do some event broadcasting as part of an export feature — the export could take a few minutes, so an async push notification to the browser letting the user know it was ready was ideal. Vapor didn’t have a way to do websockets out of the box on AWS, but Taylor had tweeted a suitable solution: Pusher.io.

Pusher is a SaaS websocket provider with lots of lipstick. My export was only for a handful of admin users, so their free tier was fine. Ten minutes later, the infrastructure was sorted out and I was deep in the bowels of client-side Echo code.

Later on, when I wanted to deploy a second app, I already knew the pitfalls. It took about an hour, most of which was spent waiting on my hostmaster to add DNS entries.

Shortcomings

Vapor is a good product, but I have run into some shortcomings.

The first is a restriction on assigning vanity hostnames that are more than three levels deep. Vapor will let me use my-app.university.edu, but not my-app.college-of-arts.university.edu.

This isn’t the end of the world for me, since I’m a member of our IT department and can get whatever I need added to the university.edu zone. I’ve got a lot of friends who work for separate IT departments attached to our colleges, and it’s a much more difficult (read as: political) process for them. They can manage their college’s zone already, but Vapor won’t work with that.

I have emailed Vapor support and spoken with both Mohamed & Taylor, but fixing that is not a priority. Which is fair enough: how many Vapor customers are even using a third-level domain instead of something like coding-cool.ly, nevermind going even deeper?

The second problem I have is that setting up projects initially isn’t very infrastructure-as-code-y. I create projects & DBs in the UI. Between the Vapor CLI & vapor.yml file, I think I can do everything the UI can. But I would really like a declarative way to get DBs and caches. If I could do these from the vapor.yml, I think I’d be happy.

I may be able to address that with some clever Jenkins-ing. It hasn’t been a big enough problem (yet) to spend time on. But I am [allegedly] the IaC champion, so somebody will eventually call me on my bullshit and I’ll have to come up with a solution.

My final complaints are more about the Vapor product. The billing account being the super-admin that can do everything is kind of a gross SaaS thing. I can’t have our AP person logging in with that level of access.

I think it’s the only account that can invite users to teams. I kind of expected better, since they have a product for that. Adding users to your team first requires them to make a Vapor account, so I have to reach out to people and have them do that before I can add them. It’s a nuisance. I can’t enforce enabling MFA for them, either.

If anybody has questions, hit me up on Twitter. I’m happy to talk about my experience with the product. But you might be better off talking to Jack Ellis?

Extracting Jenkins Credentials for Use in Another Place

I support a bunch of Jenkins servers for CI/CD. One of the things we wanted to do was stuff all our credentials into Jenkins so devs could manage them there instead of giving them rights in the AWS console to set secrets in SSM parameter store.

On its face, this might sound kinda crazy: the parameter store UI is really nice. But since we already have to set up role-based privileges for the Jenkins folders (for managing jobs), it’s easy to re-use those for controlling access to the secrets.

The approach: Terraform creates SSM parameters with dummy values. These have a TF lifecycle rule set so a param won’t be reset to the dummy value once the real one is in place. The Terraform module ends up emitting a map of the parameters as an output, and terraform output can give you that map as JSON.
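
A sketch of what one of those parameters looks like on the Terraform side (names and paths are illustrative, not from the real module):

# One dummy parameter per secret; Jenkins writes the real value later.
resource "aws_ssm_parameter" "api_key" {
  name  = "/my-app/dev/api-key"
  type  = "SecureString"
  value = "placeholder"

  lifecycle {
    # Don't reset the value once Jenkins has overwritten it.
    ignore_changes = [value]
  }
}

# Map of Jenkins credential ID => SSM parameter name, for the pipeline.
output "parameters" {
  value = {
    "my-app-api-key" = aws_ssm_parameter.api_key.name
  }
}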

I wanted to build a reusable Jenkins step that any developer could include in their pipeline for synchronizing Jenkins credential values to the parameters in SSM. I figured I could make something happen in a script block with a plain-old Groovy loop once I parsed the TF output.

I ran into tons of roadblocks:

  • Parsing JSON in a script { } block has a pitfall: its JSON lib can return a LazyMap. Every step in a script block needs to be serializable so Jenkins can pause at any point, and that kind of map wasn’t compatible with Jenkins’ variable serialization.
    • There is a workaround: you can annotate a function with @NonCPS. This tells Jenkins it needs to execute the entire function atomically, instead of saving state after every line.
  • You normally call a helper method in withCredentials, like string(credentialsId: 'something'). Those aren’t available in a script block.
    • The workaround here is to use an array with $class: 'StringBinding' in it. Some magic happens that is equivalent to calling a helper method.

The last caveat was the hardest to get around: the withCredentials() directive is not designed to extract a variable number of credentials. In fact, I couldn’t get it to work at all inside the script, since it returned something unserializable.

I got it working with a gnarly solution. The script block parses the parameter output from TF, loops through, and builds two lists: one for withCredentials() in a normal steps block, and another injected as an environment variable for Bash to loop through.

With that done, I just needed a little shell script for parsing the env var, looping through, and calling aws ssm put-parameter.

Here’s a demo of the pipeline:

@NonCPS
def parseJson(jsonString) {
    def lazyMap = new groovy.json.JsonSlurper().parseText(jsonString)
    
    // JsonSlurper returns a non-serializable LazyMap, so copy it into a regular map before returning
    def m = [:]
    m.putAll(lazyMap)
    return m
}

pipeline {
    agent any

    stages {
        stage ('Terraform') {
          // terraform init && terraform apply -auto-approve && etc . . .
        }

        stage ('Publish Secrets to SSM') {
            steps {
                withCredentials([[
                    $class: 'AmazonWebServicesCredentialsBinding',
                    credentialsId: 'aws',
                    accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                    secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
                ]]) {
                    script {
                        def params_json = sh(label: 'Param map', returnStdout: true, script: 'terraform output -json parameters').trim()
                        def params = parseJson(params_json)

                        credentialsToResolve = []
                        def CREDS_FOR_BASH = ""
                        def bashIndex = 0;

                        for (param in params) {
                            credentialsToResolve << [$class: 'StringBinding', credentialsId: param.key, variable: "SECRET_VALUE_${bashIndex}"]
                            CREDS_FOR_BASH = CREDS_FOR_BASH + "${param.key}\t${param.value}\t${bashIndex}\n"
                            bashIndex++;
                        }

                        env.CREDS_FOR_BASH = CREDS_FOR_BASH;
                    }

                    withCredentials(credentialsToResolve) {
                        sh '''
                        IFS='\n'
                        for line in $CREDS_FOR_BASH; do
                            param_name=$(echo $line | awk -F'\t' '{print $1}')
                            param_arn=$(echo $line | awk -F'\t' '{print $2}')
                            secret_index=$(echo $line | awk -F'\t' '{print $3}')
                            secret_var="SECRET_VALUE_${secret_index}"
                            aws ssm put-parameter --name ${param_arn} --type "SecureString" --value "${!secret_var}" --region us-east-2 --overwrite
                        done;
                        '''
                    }
                }
            }
        }
    }
}

The final version got refactored into a Jenkins shared library. My developers just have to include two lines in their pipelines to get that functionality.
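
Consumption ends up looking something like this (the library and step names are invented for illustration):

@Library('our-shared-library') _

// Somewhere in a pipeline, after `terraform apply`:
publishSecretsToSsm(outputName: 'parameters', region: 'us-east-2')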

Using Amazon SES with Laravel Vapor

If you are deploying your Laravel apps with Vapor, you might want to use Amazon SES as your mail driver.

There are two set-up steps for SES itself:

  1. Set your domain up with appropriate SPF/DKIM/DMARC records so Amazon & email recipients know everything is on the up-and-up
  2. Ask Amazon to take your AWS account out of the SES sandbox (aka email jail)

The first step is pretty standard: Sparkpost/Mailgun/Postmark/etc all require this as well. The second step is unique to Amazon – when I used Sparkpost, I never had to submit a ticket to get started emailing. In my experience, it takes Amazon ~24h to take you out of email jail.

Once your AWS account is set up, you just have to set MAIL_DRIVER=ses in your app environment through the Vapor CLI or console.

Caveat: SES isn’t everywhere

A big caveat is that SES is only available in a couple of regions. If you’re deployed to us-east-1, everything is going to work fine.

I am not deployed to us-east-1. Instead, I use us-east-2, which is closest to 90% of my customers. There are no SES API endpoints in this region.

To work around this, you need to change the services.ses.region config value in Laravel. This currently requires editing the config file: the stock config reads AWS_DEFAULT_REGION, which Vapor injects, and you don’t want to change that env var, or everything else (DynamoDB/SQS/etc) will break, since those resources are in your default region.

I change it out for AWS_SES_REGION:

'ses' => [
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_SES_REGION', 'us-east-1'),
],

Then in your Vapor environment config, set MAIL_DRIVER=ses and AWS_SES_REGION=us-east-1. Redeploy, and your emails will work!