

CORE CONCEPTS

 

A cloud-native application with our technology consists of three major layers: business logic, D-ASYNC middle layer, and a cloud platform.

Development of an app, or one of its parts, starts with defining a domain boundary. The surface of its functional responsibilities defines the contract of a service. The core business logic can then be coded first, without addressing most of the non-functional requirements. A service can consume another service simply by referencing its contract, with no extra code.

D-ASYNC currently works with C# and .NET.

The D-ASYNC layer translates the abstractions of a programming language into service-oriented patterns and delegates execution to a platform. Unlike a framework, D-ASYNC references the code of your business logic, not the other way around.
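As a rough illustration of the idea (the service names and wiring below are hypothetical, not D-ASYNC's actual API), the business logic stays plain C#: a service contract is an ordinary interface, and consuming another service is an ordinary `await` on one of its methods:

```csharp
using System;
using System.Threading.Tasks;

// The contract of a service: a plain C# interface (names are illustrative).
public interface IPaymentService
{
    Task<bool> Charge(string customerId, decimal amount);
}

// Another service consumes the contract with no extra plumbing code.
// In the D-ASYNC model, the middle layer decides at runtime whether this
// call is an in-process invocation, an HTTP request, or a queued message.
public class CheckoutService
{
    private readonly IPaymentService _payments;

    public CheckoutService(IPaymentService payments) => _payments = payments;

    public async Task PlaceOrder(string customerId, decimal total)
    {
        bool paid = await _payments.Charge(customerId, total);
        if (!paid)
            throw new InvalidOperationException("payment declined");
    }
}
```

Nothing in `CheckoutService` refers to transport, serialization, or hosting; those concerns belong to the platform layer.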

The cloud platform can be either public or private and can be customized later, independently of your application's core functionality. The customization options provided by the D-ASYNC platform can be switched on the fly with a single operation.


WHERE IS THE INFRASTRUCTURE?

DELAYED DESIGN

 

Non-functional requirements come second. The business logic remains exactly the same regardless of whether it is hosted on Windows or Linux, whether it uses containers and a container orchestrator, whether it runs on a serverless platform or an ordinary VM, whether methods are invoked via HTTP or a message-passing mechanism, and whether API calls are asynchronous or not.

You may delay design decisions and swap the underlying hosting and communication infrastructure later on without changing the code of your application.

OPTIMIZED TRIGGERS

The same method of a service can have multiple triggers, such as HTTP and a message bus. One can be used from the front end with more relaxed consistency guarantees, while another can be used from the back end in a stricter mode.

The methods of a single service do not all have to use the same communication mechanism. You may configure one as a synchronous HTTP call and another to use a message bus only, depending on how you want to balance scalability, reliability, and latency.
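Conceptually, per-method communication settings might look like the fragment below. This configuration schema is invented for illustration only; it is not D-ASYNC's actual format.

```yaml
# Hypothetical per-method trigger configuration (illustrative only)
services:
  CheckoutService:
    methods:
      GetOrderStatus:
        trigger: http          # synchronous, low-latency, for the front end
      PlaceOrder:
        trigger: message-bus   # reliable, at-least-once, for the back end
```

The point is that the choice lives in configuration, not in the business logic, so it can be revisited without touching the code.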

DEPLOYMENT SPLIT

In some cases, we tend to create multiple services due to differences in hardware requirements. For example, a managing service receives an HTTP request and puts a message on a queue, which is then consumed by a long-running processing service.

Instead, those two services can logically be a single one. A single method, or a group of methods (a workflow), can be configured to run as a separate deployment, or even to spin up a container for isolated execution. In that case, the unit of deployment can be as small as one method.
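For example (again using an invented configuration schema, not D-ASYNC's actual one), the two hardware profiles above could be expressed as deployment settings of one logical service:

```yaml
# Hypothetical deployment-split configuration (illustrative only)
services:
  VideoService:
    deployment: web-pool           # default: handles incoming HTTP requests
    methods:
      TranscodeVideo:
        deployment: worker-pool    # long-running work on beefier hardware
        isolation: container       # spin up a container per execution
```

The service remains one logical unit in code, while the platform decides where each method physically runs.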