This is a guest post for the Computer Weekly Developer Network written by Chris Carreiro, CTO of ParkView at Park Place Technologies, a data center maintenance specialist with staff across every continent.
Carreiro believes the IT trade press has painted a picture of an ongoing power struggle between the cloud and the edge, with one or the other destined for supremacy.
He argues that the explosion in data volume driven by the IoT, our mobile lifestyles and a raft of upcoming low-latency, data-intensive technologies (such as augmented reality) requires that traditional compute, storage, networking and computational-accelerator resources sit closer to the end user, that is, at the edge.
But, as we know, cloud technologies are unlikely to be displaced.
Instead, he suggests, we are entering a period of distributed architecture enforced at varying tiers, which will complement centralized capabilities in the cloud and/or the enterprise data center. Carreiro writes as follows from here on…
The hierarchy of edge levels
It is important to bear in mind that the edge is not a specific location; it is shorthand for any relocation out of the data center that processes data closer to the end user. There are many different edge levels, and they will vary by industry.
In a smart city, for example, a hierarchy of edge levels might comprise regional, neighborhood, street and building nodes. The levels would differ for mobile consumer technology, autonomous vehicles and so on. Micro-data centers sprinkled around various corporate facilities could prove advantageous in certain cases. Broadly speaking, there will be an "edge spectrum" stretching from the device level through gateways to the cloud.
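As a loose illustration of that spectrum, the tiers can be modeled as an ordered list that is walked outward from the end user until some tier can serve the request. This is only a sketch: the tier names, capabilities and the `nearest_tier_with` helper are all hypothetical and will differ by industry.

```python
# Hypothetical tier names for a smart-city deployment; real hierarchies vary by industry.
EDGE_SPECTRUM = ["device", "building", "street", "neighborhood", "regional", "cloud"]

def nearest_tier_with(capability, capabilities_by_tier):
    """Walk outward from the end user and return the first tier offering the capability."""
    for tier in EDGE_SPECTRUM:
        if capability in capabilities_by_tier.get(tier, set()):
            return tier
    return None  # nothing on the spectrum can serve this request

# Invented capability placement, purely for illustration.
capabilities = {
    "building": {"video-analytics"},
    "regional": {"model-training"},
    "cloud": {"archival-storage", "model-training"},
}
```

Under this invented placement, a video-analytics request resolves to the building node, while model training falls back to the regional tier.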
Practicing for the perfect shift
To optimize applications for edge computing, it will be vital to be deliberate about where object instances are created and on which machine (physical or virtual) memory is allocated. Relocating parts of business-logic processes from monolithic applications hosted on a central server in the data center out to the edge can potentially create scope-resolution problems or, more likely, new granularity-control problems.
If interdependent objects that previously lived on the same virtual or physical server in the data center are now separated, one at the edge and one in the data center, there may be no way for those objects to reference each other or access a shared memory space. Moreover, when declaring variables and objects in this distributed environment, developers must consider which process, and which namespace, an object belongs to in order to properly handle issues such as object persistence, data integrity and read/write permissions.
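A minimal sketch of the situation, with the network boundary simulated by serialization. The class names (`InventoryCache`, `CentralCatalog`) are invented for illustration: the point is that once the two objects live on different machines, state must travel as serialized payloads rather than shared references, and the edge side must decide what it caches locally.

```python
import json

class CentralCatalog:
    """Runs in the data center: the authoritative object."""
    def __init__(self):
        self._items = {"sku-1": {"price": 9.99}}
        self.round_trips = 0  # counts simulated network hops

    def fetch(self, sku):
        self.round_trips += 1
        return json.dumps(self._items[sku])  # state must be serialized to travel

class InventoryCache:
    """Runs at the edge: no shared memory with the data center any more."""
    def __init__(self, central):
        self._central = central  # this reference now crosses the network
        self._local = {}         # edge-local copy of remote state

    def get_price(self, sku):
        if sku not in self._local:  # cache miss: one network round trip
            payload = self._central.fetch(sku)
            self._local[sku] = json.loads(payload)
        return self._local[sku]["price"]

central = CentralCatalog()
edge = InventoryCache(central)
edge.get_price("sku-1")  # first call travels to the data center
edge.get_price("sku-1")  # second call is served from the edge-local copy
```

After both calls, only one round trip has been made; the persistence and integrity questions in the text amount to deciding when that local copy is allowed to go stale.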
Control costs, do not let costs control you
There are costs associated with moving data: pushing it upstream for processing in a central data center and sending it back out to the edge for the user. It will be essential to ensure that, wherever an object or variable is instantiated or a memory declaration is made, a trip across the network is not required for it to do its job. Without careful thought about where instantiation happens within the business process and the application design, there is no point moving processes to the edge; doing so with a traditional application architecture could easily multiply the workload across the network, as edge processes constantly refer back to centralized servers.
In some ways, the skill of efficient edge development will depart from current practice. Fewer lines of code will not be the measure of success. Building an application with fewer variables or more shared memory may result in a less bulky application, but it will hamper efficiency once the application moves to the edge.
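To make the round-trip cost concrete, here is a sketch in which a simulated central server counts network hops. The per-item "chatty" pattern costs one trip per object touched, while batching the same work costs a single trip; the class and method names are hypothetical.

```python
class CentralServer:
    """Stands in for a data-center service; counts simulated network round trips."""
    def __init__(self, prices):
        self._prices = prices
        self.round_trips = 0

    def lookup(self, sku):
        self.round_trips += 1  # one hop per call: the chatty pattern
        return self._prices[sku]

    def lookup_batch(self, skus):
        self.round_trips += 1  # one hop for the whole batch
        return {s: self._prices[s] for s in skus}

prices = {f"sku-{i}": float(i) for i in range(100)}

# Edge process referring back to the center once per item: 100 round trips.
chatty = CentralServer(prices)
total_chatty = sum(chatty.lookup(s) for s in prices)

# Same work, batched into a single request: 1 round trip.
batched = CentralServer(prices)
total_batched = sum(batched.lookup_batch(list(prices)).values())
```

Both versions compute the same total; only the number of trips across the network differs, which is exactly the cost the chattier design multiplies.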
After the balancing act, who is responsible?
The move to the edge can also raise significant security and compliance problems.
For example, a particular business process that previously lived in the data center may require administrator access to function. When that process is decoupled from the cloud and pushed to the edge, which security profile will it use? The process may still require administrator permissions, but it is no longer housed within the relative safety of the data center.
Now it is outside the walls and out in the wild.
In a simplified example, a retailer may currently host a sensitive process in the cloud, where it is protected by physical measures, such as biometric access control at the data center, as well as extensive network security. Moving that process to the edge, perhaps to a closet at the back of a retail store, would represent a substantial change in how it is "locked down".
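One way to frame the fix is least privilege: rather than shipping the data-center administrator profile out to the store closet, the edge node carries a credential scoped to only the actions it needs. The sketch below is illustrative only; the credential names and scope strings are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    name: str
    scopes: frozenset

# Hypothetical profiles: full admin stays in the data center,
# the edge node gets a deliberately narrow scope.
DATA_CENTER_ADMIN = Credential("dc-admin", frozenset({"read", "write", "delete", "configure"}))
EDGE_STORE_NODE = Credential("store-node", frozenset({"read", "write"}))

def authorize(cred, action):
    """Raise unless the credential's scopes cover the requested action."""
    if action not in cred.scopes:
        raise PermissionError(f"{cred.name} may not {action}")
    return True
```

An edge process holding `EDGE_STORE_NODE` can read and write, but an attempt to reconfigure the system fails with a `PermissionError`, so a compromised closet node exposes far less than a stolen admin profile would.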
The various edge-related problems covered above can be considered most easily during greenfield implementations, where applications can be optimized for the edge model from scratch. Unfortunately, as edge computing takes hold, most developers will not enjoy the luxury of starting over. They will be responsible for bending and stretching existing business processes, designed for older monolithic technologies, into this new distributed topology.
There is no free lunch
Variable declaration, object instantiation, memory allocation and security-profile problems must all be resolved as such applications are refactored to make the leap to the edge.