There is a magical creature we've been chasing throughout multiple projects, plotting and planning with some beautiful minds, and it has a name: The 3 Layer Architecture.
In short, the 3 Layer Architecture allows small development teams to build an API-centric, scalable and secure project. The structure is based on compartmentalised communication through message queueing.
All you need are three VPS servers and some open source software. The beauty lies in the logic and minimal cost; the proof is in how easy it is to explain.
The architecture is outspokenly API-centric (RESTful), with MQ (message queuing) as its backbone. What makes it special is the introduction of a synchronous, foreground MQ dialect into an otherwise regular architecture. Think of RPC, but better.
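To make the foreground idea concrete, here is a minimal, self-contained sketch of a synchronous RPC-style call carried over plain message queues. It uses Python's standard library in place of a real MQ server, and the names (`call_over_mq`, `requests`) are illustrative, not from any particular broker.

```python
import queue
import threading
import uuid

requests = queue.Queue()  # shared request queue: the MQ backbone

def worker():
    """A worker consumes a job and posts the result to the caller's reply queue."""
    while True:
        job = requests.get()
        result = job["payload"].upper()  # stand-in for real business logic
        job["reply_to"].put({"id": job["id"], "result": result})

def call_over_mq(payload, timeout=2):
    """Publish a job, then block on a private reply queue: RPC semantics over MQ."""
    reply_to = queue.Queue()
    requests.put({"id": str(uuid.uuid4()), "payload": payload, "reply_to": reply_to})
    return reply_to.get(timeout=timeout)["result"]

threading.Thread(target=worker, daemon=True).start()
print(call_over_mq("ping"))  # the caller blocks until a worker answers
```

The caller never learns which worker answered; it only sees its reply queue. That is the whole trick.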
We approach the structure from an end-user point of view, beginning with the App Cloud.
Layer 1, a.k.a. App Cloud
A modern service consists of many apps, both self-maintained and integrated third-party parts of your service.
Most applications benefit from a static, CDN-based deployment, where authentication and data retrieval come from the API instead of a backend implementation.
This approach creates room for deep-level UX integrations, e.g. IndexedDB-based offline data management.
Layer 2, a.k.a. API
Your API is the central hub. What you need here is a super light router that dispatches all requests and responses – endpoints as well as authentication – to a message queue (foreground enabled). You might also consider storing the API docs and the authentication HTML templates on this machine, to prevent fragmentation.
Never connect your API to your DB! Your API logic should be deployed from a repo, so you can disable FTP and any other access to the machine, because your API will be the favourite address for intrusion attempts. Automated attacks won't do dramatic harm thanks to the logic segmentation, but still, let's keep security simple.
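Conceptually, the router does nothing more than this Python sketch: validate the route, drop the job on the queue, and block for the reply. It holds no business logic and no database connection. The route names and stub worker are invented for illustration; a real deployment would use an MQ server such as Gearman instead of an in-process queue.

```python
import queue
import threading

job_queue = queue.Queue()
ROUTES = {"GET /tickets", "POST /auth/login"}  # endpoints the router accepts

def dispatch(method, path, body=None):
    """Turn an HTTP request into a queue job; reject unknown routes up front."""
    route = f"{method} {path}"
    if route not in ROUTES:
        return {"status": 404, "body": "Not Found"}
    reply = queue.Queue()
    job_queue.put({"route": route, "body": body, "reply_to": reply})
    # A real router would map a timeout here to a 504,
    # so a dead worker can't hang the connection forever.
    return reply.get(timeout=2)

def stub_worker():
    """Stand-in for Layer 3: echoes the route back with a 200."""
    while True:
        job = job_queue.get()
        job["reply_to"].put({"status": 200, "body": {"route": job["route"]}})

threading.Thread(target=stub_worker, daemon=True).start()
```

Note that `dispatch` knows nothing about where the worker runs, which is exactly the point.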
Layer 3, a.k.a. Workers
The business logic (brains) of your service is contained in mass-deployable, identical Workers.
You can scale this layer in any direction you want, tailored to your project: one little machine running a few worker nodes, or multiple VPSs paired with a cluster of DB machines for central data storage – in any flavour you like.
Your worker connects to the MQ server on the API machine, handles the job, and sends the response back. Since the API doesn't know anything about the workers (it only cares about its message queue), there is no way for voyeurs to find out where your business-critical logic and DB are running. Thus, safety by simplicity.
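A worker in this layer boils down to a registry of named job handlers, loosely modelled on how a Gearman worker registers functions. This Python sketch is illustrative only: the job names, payloads and helpers are invented, and a real worker would pull jobs from the MQ server rather than take them as a function argument.

```python
import json

HANDLERS = {}

def handler(name):
    """Register a function under a job name, Gearman-style."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@handler("tickets.list")
def list_tickets(payload):
    # Real code would query the central DB cluster here; the API never can.
    return [{"id": 1, "title": "Demo ticket"}]

def handle_job(raw):
    """One unit of the worker loop: decode, dispatch to a handler, encode reply."""
    job = json.loads(raw)
    fn = HANDLERS.get(job["name"])
    if fn is None:
        return json.dumps({"error": f"unknown job {job['name']}"})
    return json.dumps({"result": fn(job.get("payload"))})
```

Because every worker carries the same handler registry, the whole layer stays mass-deployable and identical, as described above.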
In short, you get unlimited, easy-to-maintain scalability at both frontend and backend level, with a lightweight gatekeeper in the middle.
Let's take a look at how it's done with Laravel.
- Set up your local battleground
- Create a Frontend App in Backbone
- Use Laravel's Lumen as API Framework
- Create your Worker sandbox
- Set up your servers
- Set up the repo flow
- Set up the Gearman MQ
For Python lovers, the former Tick.ee Project (now open source) provides a rough reference.
We've dedicated a separate post to all the open source magic we used for this project.