Building in the Cloud with Low Code
Luddy Harrison, July 23, 2023
This is the first article in our Introduction series
At The Intersection of Two Trends
If software has eaten the world, then web and mobile applications ate the most and burped the loudest. We now routinely reach for our telephones when planning an evening, navigating, shopping, talking to our friends, and so on. Using a computer for long stretches without an internet connection is becoming less and less tenable. How long can we get by without access to Zoom, Figma, GitHub, Slack and the other online tools that have transformed how people work with one another? In fact, the more we have retreated into our individual digital corners, the more we reach out from those corners to meet one another in the cloud. And perhaps that's not so surprising after all. In the modern world, we need the flexibility to separate where we are physically from what we want to do and who we want to do it with.
Building a modern application means building both the mobile or web interface that the user sees, and the cloud back end that users don't see directly, but without which the application would be nothing but an empty wrapper.
As it turns out, however, building in the cloud is difficult, time-consuming, and requires a lot of training. We want to move fast and break things, try out our new ideas for applications quickly, and navigate this exciting space of possibilities with aplomb. Instead we find ourselves wading through technical quicksands of various kinds, trying to get our footing.
Where might we turn to help solve this problem? Low code is the name that has been given to a new generation of tools that make software development faster, easier and more widely accessible by, in effect, doing most of the code-writing automatically for the user. In domain after domain, low-code tools have popped up, allowing people to quickly create websites and games, automate business processes, and so on, dramatically reducing the amount of code they must write. Low code sounds ideal for building in the cloud!
What then is required of a low-code system for cloud development? For starters, it must provide for rapid development and deployment. Those are table stakes. If we look more closely, however, we find that there are many more requirements. It must allow our back end to grow and scale over time as our business grows. It must support the best of continuous integration and deployment, so that we can keep it running while we update and extend it. It must allow us to take advantage of the veritable universe of cloud-based services, software libraries, front end technologies, standards, protocols, data formats, and so on that are available today.
In a word, our low-code system must not cut off possibilities for our application. It must not be that in buying into the attractive aspects of low code, fast and easy development above all, we are buying out of the best of current cloud development and new technologies, what we might call the pro code side of cloud development. We are on the most solid footing if our low-code tool fits hand in glove with pro code, so that we are assured of smooth transit back and forth across the boundary between the current way of doing things, with its code-centric, laborious but enormously flexible and powerful methods, and the newer approach, in which we create faster and move more fluidly but bump up against limits of what can be expressed.
To solve this challenge, to square the circle of low code and pro code for cloud development, is the reason we have created Coreograph.
Coreograph is a labor of love for us. We use the system every day for our work, building back ends for our customers, creating new architectural structures, pushing the boundaries of what the system can do, and how easily and quickly it can do it. It is a big system, with a lot of parts, and some time is required to drink it in and get oriented. We have prepared a number of demonstrations, tutorials, videos, articles and documents to help you explore the system. You can sign up for a free account here.
This article, however, isn't about Coreograph. Instead, it lays out our view of the problem of building for the cloud, and considers what kind of role low code can and should play in this development. In other words, it tries to frame the problem of low code and cloud development.
Basic Elements of Cloud Computing
Most modern applications run in part on a user's device — phone, tablet or laptop — and in part in the cloud. The part that runs in the cloud is ordinarily called the back end of the application, in contrast to the front end that runs on the user's device.
Cloud back ends come in every shape and size. Some are quite modest, serving a relative handful of users from a simple program running on a computer that is physically located in a cloud center. Others are worldwide, with resources of many kinds spread across many time zones. In spite of this great variety, cloud back ends generally have a few key elements in common. At the risk of generalizing, we can list what we might call the essential characteristics and ingredients of cloud back ends.
Network Presence
A cloud back end presents itself to the outside world — most especially the front end of the application it belongs to — by having a presence on the internet. The back end lives at a URL, an internet address. For example, the excellent Lucidchart application lives at the URL https://lucid.app. Several cloud services are necessary to make this magic association between the URL and the cloud back end happen:
- DNS, which maps a URL like https://lucid.app to an IP address, the numerical form of the internet address (a minimal sketch of this lookup follows the list)
- A gateway, Content Delivery Network (CDN), or other service that receives requests sent to the back end, and sends responses back in return
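For instance, here is roughly what the DNS step looks like from code, using nothing but the Python standard library, with the URL from our example above:

```python
# Resolving the hostname in a URL to an IP address, the job DNS performs.
import socket
from urllib.parse import urlparse

url = "https://lucid.app"
hostname = urlparse(url).hostname  # "lucid.app"

# Ask DNS for the IP address behind the hostname.
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```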
The interfaces that are presented to the outside world by this network point of presence are called APIs: Application Programming Interfaces. There are in fact many kinds of APIs, going well beyond cloud back ends. Every manner of software library and programmable device presents APIs of various kinds, to organize how software is supposed to interact with it. But in modern terminology, API is most often used to mean an interface presented by a cloud back end. The easiest way to think of an API is this:
API
An API is an interface to a cloud back end from the outside world. It defines a question one may ask of the cloud back end. Each API expects a request in a certain format, and returns a response in a certain format.
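To make this concrete, here is a minimal sketch of calling such an API from Python. The endpoint, request fields, and response shape are hypothetical, purely for illustration:

```python
# Asking a question of a cloud back end over HTTPS.
import requests

# The request, in the format the API expects (JSON, in this example).
response = requests.post(
    "https://api.example.com/v1/orders",  # hypothetical API endpoint
    json={"customer_id": "c-123", "item": "widget", "quantity": 2},
    timeout=10,
)

# The response, in the format the API promises to return.
response.raise_for_status()
print(response.json())  # e.g. {"order_id": "...", "status": "accepted"}
```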
The gateway or content delivery network that faces the outside world doesn't do the work of the back end, or at least not all of it. Typically, between receiving a request and sending back a response, various other cloud services are pulled into play to do the heavy lifting, that is, to implement the APIs that the back end presents. Two of the most common are databases and computing elements.
Database
The standout characteristic of a cloud back end is centralization. Many users — sometimes millions or more — use an application from their devices. They wish to interact with the world beyond their device, with other users of the app, with businesses and people beyond the app, with information that is available in the cloud. They need a stable and safe place to store their information, in case they change or lose their device. They need things that can only be accomplished by a central figure who can communicate both with them, the end-users, and the resources and other people they wish to interact with.
That central figure is the cloud back end, and the cloud services that are available to it. And the archetype of all centralized cloud resources is the database.
Databases store information and allow it to be searched, retrieved, and updated. Databases that are shared by all users of an application must in some sense reside in the cloud, because they must do two things that are fundamentally in tension with one another (a concrete sketch follows the list):
- supply and receive information from users all over the world
- keep a single coherent picture of all that information without losing or bungling the data
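As one concrete illustration, here is a minimal sketch of that centralized read-and-update pattern against Amazon DynamoDB via boto3; the table name and key schema are hypothetical:

```python
# A centralized database shared by all users of an application.
import boto3

table = boto3.resource("dynamodb").Table("UserProfiles")  # hypothetical table

# Any user, anywhere in the world, can update their record...
table.put_item(Item={"user_id": "u-42", "display_name": "Ada", "theme": "dark"})

# ...and every other client sees one coherent picture of that state.
item = table.get_item(Key={"user_id": "u-42"})["Item"]
print(item["display_name"])  # "Ada"
```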
The tension is between many users interacting with and updating this information, and the single coherent picture of the state of all that information at any moment: many versus one, the parts versus the coherent whole. That's the central tension or balance in cloud computing, and as we will see with more examples below, it exemplifies a basic truth about cloud computing:
Note
Building in the cloud is fundamentally about balancing opposing considerations:
- Expensive versus affordable (a tradeoff between performance and reliability on the one hand, and cost on the other).
- Localized versus distributed (a tradeoff between simplicity of state management, and scale and performance).
- Uniform versus heterogeneous (a tradeoff between simplicity of structure and technical capability).
- Legacy versus modernized (a tradeoff between business continuity and access to modern technology).
And so on and so on. There are almost endless such balances and contrasts lurking in cloud computing, which is to say, we very seldom have the luxury of choosing absolute extremes like infinite and unlimited scaling unless we are on top of the world financially and operate where expense is no object. (And no one is really in that position; the cloud eventually humbles all its practitioners.)
Compute
While many cloud services are self-contained, and we can do a great deal of work simply by calling their interfaces, inevitably a realistic cloud back end needs to do specialized computing that is particular to that back end. Even in the case where a back end is nothing more than an orchestration of calls to various network services, still some kind of compute resource must be used to perform the calls, prepare the inputs to them, and collect the outputs from them.
The various kinds of compute available in the cloud are basically distinguished by how long they run before they vanish. You could say that computing in the cloud is a constant go-round of starting up compute resources, doing some computation on them, and releasing them again, over and over. At one end of the spectrum, we have serverless functions that run for 100ms or so and then vanish. When they vanish, so does any state they built up while they were running. This means that when we start them up again, we must retrieve or rebuild the necessary state again. At the other end of the spectrum, we have long-running servers that are bootstrapped and stay up for months at a time, ideally shutting down only when software on them needs to be updated. These servers can accumulate memories and disks full of long-term state. And once again, we have a classic cloud tradeoff: the cost of centralizing all that state inside a single server (vulnerability to a crash, limits on how many clients can access that state) versus the benefit of centralizing that state (no need to constantly write and read the state to databases outside the server).
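To make the short-lived end of the spectrum concrete, here is a minimal sketch of an AWS Lambda-style handler in Python; the event shape assumes an API Gateway proxy integration, and the logic is purely illustrative:

```python
# A serverless function: it runs briefly, answers one request, and vanishes.
import json

def handler(event, context):
    # Any state built up here disappears with the function instance,
    # so durable state must live outside, in a database or object store.
    body = json.loads(event.get("body") or "{}")
    greeting = f"Hello, {body.get('name', 'world')}!"
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": greeting}),
    }
```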
Oceans of ink are spilled and endless battles fought over this relatively simple tradeoff: centralizing state in a long-running server, versus distributing that state into databases and cloud storage and accessing it from ephemeral serverless functions. The reason for all the controversy and upheaval is not that the concept is fundamentally difficult. It is rather that moving between these two alternatives is typically a terrible, herculean undertaking that requires breaking apart software systems and re-assembling them in other forms, or rewriting them altogether. In other words, it's not so much that we can't comprehend what the state of our application is and what our choices are. It's that exercising those choices in practice is terribly difficult. So we end up stuck at one end or the other of this tradeoff with no easy way to visit the other end to see what it might offer us.
Authentication
Let's face it, the internet isn't the nicest neighborhood these days. Thieves and bad actors of every kind take advantage of its interconnected nature to probe, scrape, break in, misrepresent, disrupt, invade, extort, and worse. One of the very best things about good cloud service providers is that they provide an environment in which it is at least possible to build truly secure cloud back ends. There is ample room to go wrong of course. But there are good practices that, if followed properly, eliminate a great many risks. Among these are things like virtual private clouds (VPCs), key management systems, identity and access management services and so on. The key to making use of all these effectively is automation. To the extent that good practices are automated into place, the biggest security risk of all — simple human error — is reduced dramatically.
Among all these security services, one stands out from the others for its role in applications that involve user devices and cloud back ends: authentication. Authentication is how we establish that a user is who he claims to be. It is how we distinguish a paying customer from a would-be hacker of our site. It is familiar to all of us as a simple user name and password login, or a login that piggybacks on another service we already belong to, like Google or Facebook.
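As a sketch of what authentication often boils down to mechanically, here is a token check using the PyJWT library. The shared secret and claims are placeholders; a real back end would keep keys in a key management service and typically use asymmetric signing:

```python
# Token-based authentication: who does this token say the user is?
import jwt  # pip install PyJWT

SECRET = "replace-with-a-managed-key"  # placeholder, never hard-code keys

def authenticate(token: str) -> str:
    """Return the user id if the token is valid, else raise an exception."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims["sub"]  # "subject": the authenticated user's identity

# Issuing side (e.g. at login), for illustration:
token = jwt.encode({"sub": "u-42"}, SECRET, algorithm="HS256")
print(authenticate(token))  # "u-42"
```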
Third Party APIs
As the cloud has grown, so have the number and variety of services and APIs available in it.
A cloud service provider can be thought of as a collection or constellation of APIs. AWS for example is said to have more than 200 cloud services. This is far more than any particular cloud back end will use. But the point is that each of these cloud services is effectively a collection of related APIs in and of itself. And indeed, AWS grew very deliberately into this shape thanks to a famous memo from Jeff Bezos in which he directed the various departments at Amazon to communicate with one another solely through APIs! The rest is history, as they say.
But there are many services on the internet that are far less sweeping and comprehensive than AWS. The United States Census Bureau for example provides a set of APIs. Their APIs are not intended to be general purpose or to cover all the needs of people who want to build applications that use the cloud. They are narrowly focused on their area of expertise: U.S. census data. This is typical of the vast majority of APIs available on the internet. There are APIs that search, APIs that provide financial data, APIs that deliver fonts, APIs that create various kinds of diagrams, APIs that provide storage, APIs that process natural language, and so on ad infinitum. Some are free, but typically they are paid and one must subscribe to make regular use of them.
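As an illustration, here is a sketch of querying one such narrowly focused API, the Census Bureau's data API, from Python; the dataset path and variable names reflect our reading of the Bureau's published examples, so treat them as illustrative rather than authoritative:

```python
# Querying a narrowly focused third-party API: U.S. census data.
import requests

resp = requests.get(
    "https://api.census.gov/data/2020/dec/pl",
    params={"get": "NAME,P1_001N", "for": "state:*"},  # state name + population
    timeout=10,
)
resp.raise_for_status()
for row in resp.json()[1:]:  # the first row is the header
    print(row)
```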
Many cloud back ends use such APIs at one point or another. Early in the development of a cloud back end, it is often convenient, if somewhat costly, to call a third party API that provides a needed service, rather than to build the same capability out of less expensive components. At a later date, it might be decided to replace the third party solution with one that lives in the back end itself.
As a rule, third party APIs speak HTTP, which stands for Hypertext Transfer Protocol. All frameworks for working in the cloud provide extensive support for communicating by HTTP. Occasionally, one encounters an API that speaks some other protocol. For example, MQTT is a standard messaging protocol for IoT (Internet of Things) applications.
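For example, publishing a sensor reading over MQTT with the paho-mqtt client looks roughly like this; the broker host and topic are placeholders:

```python
# Speaking a non-HTTP protocol: publishing a reading over MQTT.
# (This uses the paho-mqtt 1.x constructor; the 2.x release requires
# a callback_api_version argument.)
import json
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # hypothetical broker
client.publish("sensors/kitchen/temperature", json.dumps({"celsius": 21.5}))
client.disconnect()
```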
Cloud services and APIs are the bread and butter of getting things done in the cloud.
The Role of Low Code
These basic elements — network presence, databases, compute, authentication, and third party APIs — are practically universal in cloud back ends. We could express this a little differently, and refer to them as networking, data, compute, security, and services.
What then do we need and expect from a low-code environment that purports to help us build in the cloud?
Architecture
First and foremost, it must make it easy for us to choose components from these essential elements, and assemble them into a working back end. For common tasks, like allowing users to sign up and sign in to our application, populating and querying a database, performing a function in response to a request to our APIs, and calling cloud services, it should make as many of the detailed configuration choices as possible, leaving us to make only big choices that truly make a difference to our application.
Nor should the tool concern itself only with resources — the nouns of cloud computing, like databases and network configurations — and leave to us the job of creating all of the computing logic, the verbs. Crafting the architecture of our application should leave us with the basic frame of business logic for getting things done already in place, just as the basic structure of a building already has a great many useful systems built into it, like electricity, plumbing and HVAC.
It should be flexible enough that we are not bound forever by architectural choices we make initially. The architecture we create is to be a living, breathing structure that will grow as our application grows.
We might call this activity architecture. It means crafting the structure of our back end. It corresponds roughly to working at the level of an architectural diagram.
Refinement and Customization
Secondly, the low-code environment must make it possible to modify, specialize, customize and refine the structure we have put together. Resources that are introduced and configured during the architectural process should have carefully designed configurations that provide good security and performance, but should not be carved in stone. If it should happen that we need to impose new constraints on these resources, or enhance their capabilities through additional configuration, it should be straightforward to find where this change is to be made, as well as to make the change.
Similarly, business logic that is generated automatically during the architecture process should be well-organized and open to modification. If an additional processing step is required in order to shape data to our needs, it should be easy to insert operations to do so at the appropriate spot in the flow.
The guiding principle here is accessibility, visibility, transparency. A good low-code tool must not generate an opaque mass that is inscrutable from the point of view of the user. A low-code tool is effectively a translator, or even a series of translators, from the high-level world of big choices the user makes to create the application he envisions, to the low-level world of details that make up cloud services. It is crucial that the result of this translation be comprehensible in its own right. Someone who is expert in working at this lower level should be comfortable with the result of the translation and the low-level view of the application that it presents. Most especially, the linkage and correspondence between the high-level way of looking at the architecture, and the low-level view of its operation, should be evident.
One must be able to climb up and down the ladder of abstraction, as it were, and choose the right level at which to work at any moment.1
Integration
Thirdly, the low-code environment must provide integration with pro code up and down its hierarchy. It must allow existing containers that have been shaken out to be used seamlessly in the low-code setting, i.e., deployed and orchestrated alongside business logic that is native to the low-code platform. It must allow libraries in other languages to be used without breaking the low-code paradigm. The indispensable AI libraries available in Python are a good example. Some of these libraries require managed runtimes, others are compiled to conventional object code and need good linker support.
Generally speaking, the low-code environment needs to be open and technically welcoming to containers, libraries and foreign programming languages. It should be easy and performant to marshal and convert data between these components in the low-code setting.
Deployment
In a similar way, a good low-code environment for cloud development should make it easy to deploy applications.
This means making it straightforward to deploy an application in development, test, and production environments while holding substantially everything about the application fixed.
It means that the resources of the application should be created, updated and deleted as a group, so that cloud accounts are not left littered with the remains of past development and test exercises. Things should be kept tidy. Messes in cloud resources not only make it hard to see what is going on, they often cost money, frequently for no reason other than that it is hard to know whether a resource is still needed, and hard to get rid of it.
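Infrastructure-as-code tools are one well-trodden way to get this tidiness. As a sketch, here is a minimal AWS CDK (v2, Python) stack that stamps out identical dev, test, and production deployments, each created and deleted as a group; the resources and naming scheme are illustrative:

```python
# Grouped, repeatable deployments per stage with the AWS CDK.
from aws_cdk import App, RemovalPolicy, Stack
from constructs import Construct
import aws_cdk.aws_s3 as s3

class BackEndStack(Stack):
    def __init__(self, scope: Construct, stack_id: str, *, stage: str) -> None:
        super().__init__(scope, stack_id)
        # Every resource is owned by the stack: create, update, and
        # delete happen as a group, per stage, leaving nothing behind.
        s3.Bucket(self, "AppData",
                  bucket_name=f"myapp-data-{stage}",  # hypothetical naming scheme
                  removal_policy=RemovalPolicy.DESTROY)

app = App()
for stage in ("dev", "test", "prod"):
    BackEndStack(app, f"MyAppBackEnd-{stage}", stage=stage)
app.synth()
```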
Observation, Monitoring, Debugging
A good low-code environment for cloud development must allow the user to watch the application running. It should be straightforward to watch data entering the cloud back end in the form of a cloud request, transiting the structures of the back end, and returning in the form of a response.
It should likewise be possible to get good readings of performance, because time is money in the cloud. When a back end is slow, it is expensive too. You pay twice for poor cloud performance: once because the application is sluggish and gives a bad user experience, and again because you get billed by the cloud service for all the time the back end spends doing its job. The first key to improving performance is understanding it; you can't improve what you can't measure.
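Even something as simple as timing every handler and logging the result goes a long way. A minimal sketch, with illustrative names:

```python
# Measuring where time goes: wrap handlers with a timer and log the
# latency, so slow (and therefore expensive) paths become visible.
import functools
import logging
import time

logger = logging.getLogger("backend.metrics")

def timed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logger.info("%s took %.1f ms", fn.__name__, elapsed_ms)
    return wrapper

@timed
def lookup_profile(user_id: str) -> dict:
    ...  # database call, service call, etc.
```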
A good low-code environment provides good visibility on behavior and performance.
Change, Iteration, Experimentation
One of the great frustrations of cloud development has traditionally been the sheer momentum and inertia of past choices. A framework is chosen, structures are created around the choice, code is built on top of those structures, and a whole Tower of Babel rises that soon has such mass and momentum that any attempt to change it has such profound and sweeping implications that it is effectively impossible, or at least quite difficult and time-consuming.
A good low-code system should make it possible to change big structural and architectural decisions quickly. How can it do this?
One way is by organizing the low-code system so that components plug freely into one another. A consistent set of types is used at important interface boundaries, so that modules that speak those types can be plugged in and out without breaking their interfaces. Another way is adopting conventions for business logic that play well with all the compute settings the low-code tool offers. The difference between serverless-function code and long-running-server code need not be turned into an unbridgeable gulf. It should be straightforward to move business logic between the two (or to compute settings in between) without having to rebuild the logic. In other words, plug and play should be a thing, a way of working in low code, just as it is when playing with children's building blocks.
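Here is a sketch of that idea: business logic written as a pure function over plain data, with thin adapters for two different compute settings. All names are illustrative, and Flask stands in for any long-running server framework:

```python
# Portable business logic with thin per-setting adapters.
import json

def quote_price(request: dict) -> dict:
    """The business logic proper: no knowledge of where it runs."""
    quantity = int(request["quantity"])
    return {"total": round(quantity * 9.99, 2)}

# Adapter 1: an AWS Lambda-style serverless function.
def lambda_handler(event, context):
    result = quote_price(json.loads(event["body"]))
    return {"statusCode": 200, "body": json.dumps(result)}

# Adapter 2: a long-running server (Flask, as one example).
from flask import Flask, jsonify, request as http_request  # pip install flask

app = Flask(__name__)

@app.post("/quote")
def quote_route():
    return jsonify(quote_price(http_request.get_json()))
```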
Into the Future
This then is the problem of low-code and building in the cloud that we have put before ourselves, and that informs and motivates the development of Coreograph, our low-code offering.
We are passionate about architecture, about low code, and about developers both experienced and new to cloud computing. We hope that if you share our enthusiasm for this adventure, you'll join us in making Coreograph a world-class tool for building in the cloud.
1. Gregor Hohpe has written a number of interesting articles and books exploring levels of abstraction in cloud architecture.