Instantiating an Abstract Class with Dependency Injection

In my Laravel app, I wrote an abstract class that has a fair bit of stand-alone behaviour. I wanted to test this directly instead of via the implementations.

PHP has anonymous classes, so instantiating it in a unit test isn’t very tricky. But the constructor also has about a dozen dependencies it’s asking Laravel’s service container for, and I wanted [almost] all of those to be injected for me.

Problem is, I don’t think PHP lets you define an anonymous class without instantiating it at the same time, so I can’t just hand the class over to the service container and let it resolve the constructor dependencies for me.

Instead, I found a clever trick: I can implement my own constructor and use App::call() on parent::__construct.

new class extends WorkflowFromJsonRepository {
    public function __construct()
    {
        App::call(parent::__construct(...), [
            'someParamToStub' => $stubbedThing,
        ]);
    }
};
If you need to pass stubs in, you can pick and choose by passing them in call()’s second parameter. Y’know, standard Laravel stuff at that point.

I am not sure if this trick works on anything below PHP 8.1. The syntax with the three dots — parent::__construct(...) — is the new first-class callable feature. I’m not sure whether you can get a handle on that using the older callable syntaxes.
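If you are stuck below 8.1, my guess is that Closure::fromCallable() with the string form of the callable would behave the same way, since it resolves relative to the calling scope and preserves the constructor’s signature (which App::call needs for injection). This is an untested sketch, not something I’ve run:

```php
new class extends WorkflowFromJsonRepository {
    public function __construct()
    {
        // Closure::fromCallable() is scope-aware, so the 'parent::__construct'
        // string should resolve here and bind to $this (untested by me).
        App::call(\Closure::fromCallable('parent::__construct'), [
            'someParamToStub' => $stubbedThing,
        ]);
    }
};
```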

Wake up your RDS Aurora Serverless before running your migrations

When you’re using an RDS Aurora Serverless DB instance with Laravel Vapor, you have the option to scale it down to zero capacity units when it’s been idle. This is great for development environments — it only takes a few seconds to come back up, and while it’s hibernating, you’re saving loads of money.

One downside is that your migrations may fail during deployments. When Vapor goes to run the php artisan migrate command, RDS often won’t wake up before Laravel’s DB connection attempt times out:

An error occurred during deployment.

Message: Deployment hook failed.
Hook: migrate --force

In Connection.php line 678:
  SQLSTATE[08006] [7] timeout expired (SQL: select * from information_schema.tables where table_schema = public and table_name = migrations and table_type = 'BASE TABLE')

In Connector.php line 70:
  SQLSTATE[08006] [7] timeout expired

I wrote an artisan command to poke the DB a couple of times, and added it to my vapor.yml as the first deployment step.


// app/Console/Commands/WakeUpDatabase.php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class WakeUpDatabase extends Command
{
    protected $signature = 'db:wake {retries=5 : attempts to make} {wait=5 : time to wait between retries, in seconds}';

    protected $description = 'Wakes up a potentially-inactive serverless RDS database';

    public function handle()
    {
        $retries = (int) $this->argument('retries');
        $wait_between = ((int) $this->argument('wait')) * 1000;

        retry($retries, fn () => DB::select('SELECT 1'), $wait_between);

        $this->info('Database is up');
    }
}

Add php artisan db:wake to the deploy: section of your vapor.yml, right above the migrate command, in each environment that has an Aurora Serverless DB.
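For reference, the relevant slice of vapor.yml would look something like this — the project id and environment name are placeholders, not my real values:

```yaml
id: 1234
name: my-app
environments:
  staging:
    deploy:
      - 'php artisan db:wake'
      - 'php artisan migrate --force'
```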

Problem solved!

Adding stubs to your Laravel project

The project that I am currently working on is primarily not using Eloquent — instead, I’m using a JSON:API item class from a package, spiced up with some additional code I’ve mixed in.

I’m making lots of these models, so I did a quick source-dive into the framework to figure out how the make:something commands are implemented. Turns out, writing your own is really easy.


namespace App\Console\Commands;

use Illuminate\Console\GeneratorCommand;

class JsonModelMakeCommand extends GeneratorCommand
{
    protected $name = 'make:json-model';

    protected $description = 'Create a new JSON:API model';

    protected $type = 'Model';

    protected function getStub()
    {
        return $this->laravel->basePath('/stubs/json-model.stub');
    }

    protected function getDefaultNamespace($rootNamespace)
    {
        return $rootNamespace.'\Models';
    }
}

For the stub, you just have to create stubs/json-model.stub at the root of your project. It has a couple of substitution tokens. The {{ rootNamespace }} one resolves to the App\ namespace — the concern it imports is the code I mixed in.


namespace {{ namespace }};

use {{ rootNamespace }}Models\Concerns\ItemIsArrayable;
use Illuminate\Contracts\Support\Arrayable;
use Swis\JsonApi\Client\Item;

class {{ class }} extends Item implements Arrayable
{
    use ItemIsArrayable;

    protected $type = '{{ class }}';
}
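With the command and stub in place, generating a model works like any other artisan generator — the model name here is just an example:

```
php artisan make:json-model Customer
# should create app/Models/Customer.php from stubs/json-model.stub
```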

You should add a note to the README about your new make command, so other developers notice it.

It’s worth noting that Laravel 7 added stub customization. What I showed above goes a step beyond that, into new types of classes that the framework does not have out of the box. If I had entirely eschewed Eloquent, I could have updated the model stub instead of adding a whole new command.

Starting a new Laravel app

I’ve started working on a new Laravel app. That isn’t uncommon, but it is a good opportunity for some blogging!

I wanted to do more restaurant reviews — but the world has other ideas — so here’s some thoughts on how I’m setting the new app up instead.


The database is unusual for this app: I am replacing a web app using a legacy Oracle database. There’s still tons of business logic external to the app (yay, batch jobs) written against this DB, so moving it to postgres isn’t in the cards right now.

This usually means it’s time for another oci8 or pdo_oci adventure. I’ve been building these extensions on-and-off for the last 15 years or so, and it’s never fun.

  • Oracle [used to] make it difficult to download the instant client (but credit where it’s due, this no longer needs an account / EULA acceptance, so scripting setup is possible now)
  • The PHP extension build scripts usually fail to detect where the instant client is installed to, so I have to go source-diving to figure out what they’re expecting
  • If you’re using a product that comes with PHP in a non-standard location (lookin’ at you, Zend Server), the build scripts may fail to find some of the PHP devel stuff, like the php-config command
  • You have to rebuild whenever you do a minor PHP version upgrade, since no vendor is shipping pre-built Oracle extensions for PHP (thanks, Oracle licensing)

Even when you do have oci8 or pdo_oci set up, you run into random bugs that nobody has bothered fixing, because who the hell is even using this nonsense?

But deploying to Vapor offers me a blessing in disguise: getting the Oracle driver built as an additional Lambda layer sounds like a HUGE pain in the ass, so the team came up with something totally different instead:

The Laravel UI talks to a JSON:API implemented with NodeJS & Express. Node talks to Oracle using the node-oracledb package.

This neatly solves a second problem I would have had to deal with — the Oracle DB is on-premises, so the code needs to run in our VPC with a VPN connection back to the datacenter. Vapor uses its own VPC with no VPN set up. The express app will live in my VPC, and Laravel will make API calls to it over the internets.

Installing the node-oracledb package is MUCH easier than building a PHP extension. I’m even able to bundle the Linux instant client with the repo, so there’s zero extra setup — yarn install && yarn start and you’re in business.

“But why do you even need Laravel”, you ask yourself, “if you have a perfectly good Node app…”

Well, a couple of reasons. First of all, because we’re Laravel devs. We also want to use Livewire and eventually build in some async jobs. Down the road, we want to migrate the Oracle stuff to postgres and tie it directly into the Laravel app, so this Node piece is a transitory shim that will go away in a year or two.

Eloquent for Oracle

Before we decided on using Express, we had considered Laravel -> Oracle directly. Once you clear the driver hurdle, there’s still a problem: Eloquent doesn’t have an Oracle grammar.

Yajra has a package that fixes this: laravel-oci8. Even though we did not go this route, I’m confident that package would have worked out well — I use another of Yajra’s packages, laravel-datatables.

Yajra is a great developer 💓


If you attended Laracon Online 2020, you saw Matt Stauffer’s talk on the JSON:API spec. It’s a very reasonable standard for making REST APIs, so I decided to run with it.

That does introduce some complexity — my models need to be decorated by a bunch more Stuff before the API sends them to the client.

I was writing an OpenAPI spec by hand for a couple of the API endpoints, so the first thing I needed were some OpenAPI models for the JSON:API “stuff”. I found exactly what I needed in a gist, and that got me started.

On the Express side, I went with Yayson to make things easier. It’s not a complete solution though — the first time I wanted to emit an error response, the library left me hanging.

I would have preferred something more complete like ethanresnick/json-api, but it REALLY wants you to be using a supported ORM, and there’s not much love for Oracle in the developing-ORMs-for-Node community.

Writing my own adapter looked tedious, so Yayson was the better choice: all I needed to do was write an error model & presenter.

On the Laravel side, I’m working with swisnl/json-api-client. It’s still early days, but this seems like a very thorough client implementation. I was going to write repositories for all my models; this has a base class for a repository that Just Works.

Laravel Authentication

Authenticating users for this project is very different from my usual approach of plugging into our SSO system and generating user models for people who aren’t in the database yet.

All of the auth is instead outsourced to the Express API, and I needed to plug authentication into the JSON:API client’s repository base class.

My API has two authentication options: forwarding a cookie for our SSO system, or providing an API token header (for a few select admin users). Once you authenticate, the API exposes your user info & permissions, so Laravel can grab those.

I did this with a custom guard & user provider. The default session guard almost did what I wanted, but it had a bunch of stuff for a ‘remember me’ token and other types of auth that I didn’t want to deal with.

I rolled my own simple guard. The GuardHelpers trait made it painless — if you’re writing a guard, you probably want to mix that in.

The user provider is just a shim to hit the JSON:API repository. You need to implement five methods; four of mine just throw a NotImplemented exception. It’s only being called from the guard I wrote, so I know those extra methods won’t ever be called.
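For the curious, a guard along the lines of what I described might look like this. The class name, credential key, and config key are all made up for illustration — user() and validate() are the only two Guard methods that GuardHelpers doesn’t cover for you:

```php
namespace App\Auth;

use Illuminate\Auth\GuardHelpers;
use Illuminate\Contracts\Auth\Guard;
use Illuminate\Contracts\Auth\UserProvider;
use Illuminate\Http\Request;

class ApiGuard implements Guard
{
    // Supplies check(), guest(), id(), setUser(), etc. on top of $this->user.
    use GuardHelpers;

    protected $request;

    public function __construct(UserProvider $provider, Request $request)
    {
        $this->provider = $provider;
        $this->request = $request;
    }

    public function user()
    {
        if (! is_null($this->user)) {
            return $this->user;
        }

        // Ask the user provider (the shim over the JSON:API repository)
        // to resolve a user from the forwarded SSO cookie.
        return $this->user = $this->provider->retrieveByCredentials([
            'sso_token' => $this->request->cookie(config('my-app.cookie-name')),
        ]);
    }

    public function validate(array $credentials = [])
    {
        return ! is_null($this->provider->retrieveByCredentials($credentials));
    }
}
```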

Wiring up the JSON:API client so it forwards the SSO cookie took me a while to figure out, because what I actually wanted was more complex: forward the cookie for web requests, but use an admin API token (from my env vars) for tinker, console commands, and async jobs.

Tinker is important to our teams’ development workflow, and having to go copy-paste a cookie to make it work would have made for a poor developer experience.

I ended up with a two-piece solution: bind the JSON:API ClientInterface to a class that extends the package’s client, but adds an extra API token header as a default. Admin API token auth becomes the “default” for the Laravel app.
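The first piece looks roughly like this in the service provider. AdminTokenClient and the config keys are stand-ins for my real class and env vars, not anything from the package:

```php
// app/Providers/AppServiceProvider.php (sketch)

namespace App\Providers;

use App\Repositories\ApiAuth\AdminTokenClient; // hypothetical sibling of SsoClient
use Illuminate\Support\ServiceProvider;
use Swis\JsonApi\Client\Interfaces\ClientInterface;

class AppServiceProvider extends ServiceProvider
{
    public function register()
    {
        $this->app->bind(ClientInterface::class, function ($app) {
            // Default client: authenticates with the admin API token, so
            // tinker, console commands, and queued jobs work with no cookie.
            return $app->makeWith(AdminTokenClient::class, [
                'api_url' => config('jsonapi.base_uri'),
                'api_token' => config('my-app.admin-api-token'),
            ]);
        });
    }
}
```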

Then, to get the cookie forwarding working, I added a middleware to the web route group, right before the authentication middleware. This becomes the default for web requests. It felt like a very clean approach, since middleware receives the request and can grab the cookie value off that.

Here’s the cookie one, as an example:


namespace App\Repositories\ApiAuth;

use Psr\Http\Client\ClientInterface as HttpClientInterface;
use Psr\Http\Message\RequestFactoryInterface;
use Psr\Http\Message\StreamFactoryInterface;
use Swis\JsonApi\Client\Client;

/**
 * SSO cookie authenticated JSON:API client, for general use by the UI.
 */
class SsoClient extends Client
{
    public function __construct(
        string $api_url,
        string $sso_token,
        string $cookie_name,
        HttpClientInterface $client = null,
        RequestFactoryInterface $requestFactory = null,
        StreamFactoryInterface $streamFactory = null
    ) {
        parent::__construct($client, $requestFactory, $streamFactory);

        $this->setBaseUri($api_url);

        // Merge the SSO cookie into the package's default headers.
        $this->setDefaultHeaders(array_merge($this->getDefaultHeaders(), [
            'Cookie' => sprintf('%s=%s', $cookie_name, $sso_token),
        ]));
    }
}

And the binding, which I’ve placed in a dedicated middleware and applied to the web group:


namespace App\Http\Middleware;

use Closure;
use App\Repositories\ApiAuth\SsoClient;
use Swis\JsonApi\Client\Interfaces\ClientInterface;

class BindSsoCookieToApiClient
{
    public function handle($request, Closure $next)
    {
        // The admin API token client is the default (bound in AppServiceProvider),
        // but for HTTP requests we want to use the SSO client so it forwards the cookie.
        $cookie_name = config('my-app.cookie-name');

        $ssoClient = app()->makeWith(SsoClient::class, [
            'api_url' => config('jsonapi.base_uri'),
            'cookie_name' => $cookie_name,
            'sso_token' => $request->cookie($cookie_name),
        ]);

        app()->instance(ClientInterface::class, $ssoClient);

        return $next($request);
    }
}
Since my SSO cookie isn’t set by Laravel, there is one last step: add it to the EncryptCookies exception list, so Laravel doesn’t blank the value out. I somehow had failed to notice this default middleware for the last two years, and I had been grabbing it from the $_COOKIE superglobal instead … doh!
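That exception list lives in the default middleware that ships with the app skeleton; the cookie name here is a stand-in for whatever your SSO system actually sets:

```php
// app/Http/Middleware/EncryptCookies.php

namespace App\Http\Middleware;

use Illuminate\Cookie\Middleware\EncryptCookies as Middleware;

class EncryptCookies extends Middleware
{
    /**
     * The names of the cookies that should not be encrypted.
     *
     * @var array
     */
    protected $except = [
        'my_sso_cookie', // hypothetical: your SSO cookie's real name
    ];
}
```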


I have a standard Bootstrap theme & site layout that I’ve been using in our Laravel apps. Until now, I’ve just been copy-pasting it from app to app.

That seemed like a maintenance time-bomb. With Bootstrap 5 drawing closer, I took steps to defuse the bomb and packaged my UI up as a UI preset. User-definable presets became a thing in Laravel 6, but the laravel/ui package has been on my mind a lot more since 7 launched.

My initial package ejected everything into the app. This didn’t feel like a good fix: devs would customize things and upgrading would still be impossible.

I looked at a few other packages to see if they handled that problem. Turns out, Laravel has a solution and I’d missed it: packages can register a view namespace. I was ejecting a couple views for the layout & some standard UI components, so I pulled them back into the package.

Unfortunately, the same functionality doesn’t exist for the SASS/JS stuff, so that’s still being ejected. If anybody has a good solution for that, ping me @owlmanatt … some of it should be ejected, but I don’t want stuff like my social icon styles to be changed, since they’re done per our organization’s branding guide.

Developer Experience

My role increasingly involves creating positive developer experiences: writing Terraform modules for our different infrastructure patterns, doing the early bits of development projects to set the scene for other teams to flesh out, and writing loads of documentation.

Just like we give user experience a lot of thought, I’m constantly evaluating the developer experience (DX) that my decisions result in.

Auto-wiring the authentication stuff to the JSON:API repositories is one less thing future developers have to think about. The new UI package, plus my tools package, means our dev teams can go from generating a new Laravel app skeleton to the meat of their projects immediately.

When I was setting up the Express project, I tried to make it “feel” like a Laravel app. Express is a barebones framework, so there’s not much structure. Setting it up with controllers, putting the routes file where you’d see it in Laravel, and adding dotenv will make our developers more comfortable crossing over.

Both the Laravel app & API include setup steps in the README that have been tested on Windows, OS X, and our usual Linux VM. This is a critical step — nothing kills momentum like “I spent all week figuring out how to set up my dev environment”.

Laravel’s easy thanks to Homestead/Valet, but Express offers a pretty shite DX out-of-the-box. It doesn’t detect code changes, so you have to stop/start the app constantly. I went through and set up Nodemon so nobody has to think about this when they’re working.
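Here’s roughly what that looks like in the API’s package.json — the entry point path is whatever your Express scaffold produced, and the version constraint is just a plausible one:

```json
{
  "scripts": {
    "start": "nodemon ./bin/www"
  },
  "devDependencies": {
    "nodemon": "^2.0.4"
  }
}
```

With that, yarn start watches for code changes and restarts the app automatically.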

It’s all minor stuff, but the lifespan of our apps is usually around 10 – 15 years. Every minor thing that I can do to make developing & maintaining the app easier adds up to a lot of saved time over such a long lifespan!

Why Laravel Vapor was Right for Me

I’ve been watching Jack Ellis talk about his upcoming Vapor course over the last week. I’ve seen a lot of folks curious if Vapor/AWS was worth the money and if it was right for them.

We had a use case for Vapor and adopted it at the office as soon as they started selling it. It was the right thing for a subset of our apps. I wanted to document my use-case and the rationale behind it being right for us so I can share it with the curious folk.


I’m a developer in a big university’s IT department. My group is responsible for something like twenty different applications, all of varying size, age, and tech stacks. Only about seven of them are Laravel apps. We’re definitely trending up on Laravel for a lot of apps, but I expect that will slow down now. We have them deployed in a couple ways: on-prem with both mod_php & FPM, on AWS Elastic Container Service, and [now] on Vapor.

The big initiative right now is moving our on-prem stuff into The Cloud. The goal is the elimination of mutable infrastructure. This initiative is coming from the developers, so an unspoken secondary goal is to have as little infrastructure as possible. We aren’t being paid to think about servers, so IMO the fewer of those we have, the better!

We’ve locked our AWS accounts down so developers cannot do anything in the AWS console themselves. If they want resources, they need to write some Terraform, get it code reviewed, and run it through Jenkins. This makes it possible to enforce consistency between the dev, QA, and production environments: you can’t forget to configure something if it’s all done by scripts kept in git.

When we develop Laravel apps, we’re taking advantage of the full range of framework features: queuing, job dispatching, broadcasting, & scheduler see use in all of our Laravel apps.

How Does Vapor Fit?

I would have liked to get all of this working on the AWS serverless platform myself, but it’s kind of a lot. The first step is just getting PHP working on Lambda — Bref exists, but it’s using the Serverless framework instead of Terraform. Then I’d need to figure out the wiring for config management, SQS, API Gateway, CloudWatch events, monitoring, and all that kind of stuff.

It would have taken a lot of time. My working knowledge of AWS isn’t stellar, so I figured I’d spend a week getting a workable first-pass. It probably wouldn’t have had all the fancy stuff with SQS/DynamoDB that Vapor does. Then again: it took The Otwell eight months to make Vapor, so “eh a week” is perhaps a severe under-estimation.

The first app I wanted to deploy was for an “oh shit, vendor went out of business” project. This was being developed at 5,000mph and we didn’t have any firm plans for how to deploy it. I figured it would probably be Dockerized and deployed on ECS. Vapor helpfully went on sale right around the time we were talking about deploying dev.

So the value proposition of Vapor ended up looking like this: I would spend several days building Terraform scripts to run one Laravel app on AWS, the salary cost of which is well in excess of the US$400 price on a year of Vapor. The AWS costs associated with the app are something our institution had already decided to pay, with or without Vapor, because we have chosen AWS as our cloud provider.

It was a no-brainer. If I couldn’t get Vapor working, support was doing refunds. I’d only have lost a few hours and could move on to an in-house solution.

Scalability didn’t enter into it. The low AWS costs for all the serverless stuff were a plus, but certainly not part of the decision. It was entirely down to Vapor being an easy way to deploy Laravel with all its bells and whistles on AWS.


Once we signed up, getting the first app deployed was easy. The instructions were clear and I got my app all vapor.yml-ed up.

I had some hiccups around DNS and certificates. I initially didn’t add any domains to Vapor, since our university has its own nameservers. Vapor wouldn’t let me create certificates for a domain it didn’t know about.

I emailed support, and they told me to add the domain even if I was not planning on using Vapor/Route53 to manage it. Easy enough.

My second “oops” was creating the cert in us-east-2. I didn’t know anything about API Gateway at the time, so it didn’t occur to me that it would be using CloudFront — which only accepts ACM certificates from us-east-1. The problem was obvious when I went to deploy an app; I got it sorted out myself.

I deploy from Jenkins. It was easy to wire up. Here’s my pipeline stage, updated for PHP 7.4:

stage ('Deploy to Vapor') {
    agent {
        docker {
            image 'edbizarro/gitlab-ci-pipeline-php:7.4'
            args '-u root:root'
        }
    }
    steps {
        script {
            sh 'composer global require --prefer-dist --no-ansi --no-interaction --no-progress --update-with-dependencies laravel/vapor-cli'
            sh "PATH=\$PATH:\$COMPOSER_HOME/vendor/bin/ vapor --version"
            sh "php --version"

            // Kind of annoying, but the vapor-cli needs the vendor/composer/installed.json to exist so it can
            // validate your vapor-core version is compatible w/ your vapor.yml `runtime: php-XY` selection.
            sh "composer install --quiet --no-dev"

            // Hack to support using two Vapor teams -- the project ID differs between non-prod and prod
            sh "[ \"${BRANCH_NAME}\" = 'production' ] && sed -i 's/id: xxxx/id: xxxx/' vapor.yml || true"

            withCredentials([string(credentialsId: 'vapor-api-key', variable: 'VAPOR_API_TOKEN')]) {
                sh "PATH=\$PATH:\$COMPOSER_HOME/vendor/bin/ vapor deploy --no-ansi ${BRANCH_NAME} --commit='${GIT_COMMIT}'"
            }
        }
    }
}

And with that, I had eliminated the infrastructure diversion from the project plan, saving something like 89 hours.

After I had deployed my dev environment, I realized I wanted to do some event broadcasting as part of an export feature — the export could take a few minutes, so an async push notification to the browser letting them know it was ready was ideal. Vapor didn’t have a way to do websockets out of the box on AWS, but Taylor had tweeted a suitable solution:

Pusher is a SaaS websocket provider with lots of lipstick. My export was only for a handful of admin users, so their free tier was fine. Ten minutes later, the infrastructure was sorted out and I was deep in the bowels of client-side Echo code.

Later on, when I wanted to deploy a second app, I already knew the pitfalls. It took about an hour, most of which was spent waiting on my hostmaster to add DNS entries.


Vapor is a good product, but I have run into some shortcomings.

The first is a restriction on assigning vanity hostnames that are more than three levels. Vapor will let me use, but not

This isn’t the end of the world for me, since I’m a member of our IT department and can get whatever I need added to the zone. I’ve got a lot of friends who work for separate IT departments attached to our colleges, and it’s a much more difficult (read as: political) process for them. They can already manage their college’s zone, but Vapor won’t work with that.

I have emailed Vapor support and spoken with both Mohamed & Taylor, but fixing that is not a priority. Which is fair enough: how many Vapor customers are even using a third-level domain instead of something like, nevermind going even deeper?

The second problem I have is that setting up projects initially isn’t very infrastructure-as-code-y. I create projects & DBs in the UI. Between the Vapor CLI & vapor.yml file, I think I can do everything the UI can. But I would really like a declarative way to get DBs and caches. If I could do these from the vapor.yml, I think I’d be happy.

I may be able to address that with some clever Jenkins-ing. It hasn’t been a big enough problem (yet) to spend time on. But I am [allegedly] the IaC champion, so somebody will eventually call me on my bullshit and I’ll have to come up with a solution.

My final complaints are more about the Vapor product. The billing account being the super-admin that can do everything is kind of a gross SaaS thing. I can’t have our AP person logging in with that level of access.

I think it’s the only account that can invite users to teams. I kind of expected better, since they have a product for that. Adding users to your team first requires them to make a Vapor account, so I have to reach out to people and have them do that before I can add them. It’s a nuisance. I can’t enforce enabling MFA for them, either.

If anybody has questions, hit me up on Twitter. I’m happy to talk about my experience with the product. But you might be better off talking to Jack Ellis?

Using Amazon SES with Laravel Vapor

If you are deploying your Laravel apps with Vapor, you might want to use Amazon SES as your mail driver.

There are two set-up steps for SES itself:

  1. Set your domain up with appropriate SPF/DKIM/DMARC records so Amazon & email recipients know everything is on the up-and-up
  2. Ask Amazon to take your AWS account out of the SES sandbox (aka email jail)

The first step is pretty standard: Sparkpost/Mailgun/Postmark/etc all require this as well. The second step is unique to Amazon – when I used Sparkpost, I never had to submit a ticket to get started emailing. In my experience, it takes Amazon ~24h to take you out of email jail.
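As a sketch, the SPF piece is a TXT record on your sending domain; the DKIM records are account-specific CNAMEs that the SES console hands you, and the DMARC policy below is the bare minimum (domain and policy are placeholders):

```
example.com.          TXT   "v=spf1 include:amazonses.com ~all"
_dmarc.example.com.   TXT   "v=DMARC1; p=none;"
```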

Once your AWS account is set up, you just have to set MAIL_DRIVER=ses in your app environment through the Vapor CLI or console.

Caveat: SES isn’t everywhere

A big caveat is that SES is only available in a couple of regions. If you’re deployed to us-east-1, everything is going to work fine.

I am not deployed to us-east-1. Instead, I use us-east-2, which is closest to 90% of my customers. There are no SES API endpoints in this region.

To work around this, you need to change the config value in Laravel. This currently requires editing the config file: Vapor will inject AWS_DEFAULT_REGION, and you don’t want to change that to us-east-1 just for SES, or everything else (DynamoDB/SQS/etc.) will break, since those resources live in your default region.

I change it out for AWS_SES_REGION:

'ses' => [
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_SES_REGION', 'us-east-1'),
],

Then, in your Vapor environment config, set MAIL_DRIVER=ses and AWS_SES_REGION=us-east-1. Redeploy and your emails will work!