Edge Computing as Antidote to Remote Engineering Challenges

Cloud and edge, when used in combination, yield a novel, cost-efficient IoT deployment solution for smart products.

Reliable edge computing solutions make real-time insights a reality in extremely remote locations like the International Space Station. Image courtesy of NASA.


To all the accolades given to the Mars Perseverance Rover, add one more: most distant successful deployment of edge computing.

Edge computing is a distributed computing model that brings computation and data storage closer to where data is collected. In the case of Mars exploration, “closer” spans millions of miles: processing on site sidesteps a one-way signal latency measured in minutes rather than milliseconds.

Edge computing offers local computation, making for faster decisions. Cloud computing offers fast computation of large data sets, and the ability to run complex artificial intelligence and machine learning algorithms. Working together, cloud and edge computing offer a new and cost-effective IoT deployment solution for smart products. 
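
To make that division of labor concrete, here is a minimal Python sketch of an edge node; the sensor reading, temperature limit and cloud endpoint are hypothetical stand-ins, not any particular vendor’s API.

```python
# A minimal sketch of the edge/cloud division of labor. The sensor read,
# the temperature limit and the cloud endpoint are hypothetical stand-ins,
# not any particular vendor's API.
import json
import urllib.request

CLOUD_ENDPOINT = "https://cloud.example.com/telemetry"  # placeholder URL
TEMP_LIMIT_C = 85.0                                     # illustrative limit


def read_sensor() -> float:
    """Stand-in for a real sensor driver."""
    return 72.4


def act_locally(temp_c: float) -> None:
    # Fast path: the edge node decides in milliseconds, no cloud round trip.
    if temp_c > TEMP_LIMIT_C:
        print("overtemperature: shutting down actuator")


def forward_to_cloud(temp_c: float) -> None:
    # Slow path: raw readings stream to the cloud, where large-scale
    # training and analytics run without blocking local decisions.
    payload = json.dumps({"temp_c": temp_c}).encode()
    req = urllib.request.Request(CLOUD_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass  # tolerate a flaky uplink; the local decision already happened


temp = read_sensor()
act_locally(temp)       # immediate, on-device
forward_to_cloud(temp)  # latency-tolerant, off-device
```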

NASA’s Perseverance and its sidekick helicopter Ingenuity must operate without direct control from Earth; a one-way signal takes roughly 12 minutes, making real-time remote operation impossible. Most of the data analysis is done on site using a radiation-hardened PowerPC 750, the CPU best known as the processor in the 1998 iMac, with all data eventually transmitted back to Earth.

Even before Perseverance went to work exploring Mars, an array of sensors in its heat shielding gathered data during the descent. Sent back to Earth after landing, this data allows NASA engineers to upgrade heat shields and other essential landing equipment based on flight experience, not just simulation.

The Mars Perseverance Rover shows just how far edge computing can take us. Image courtesy of NASA.

International Space Station on Edge 

The edge computing environment on Mars is not the first extraterrestrial deployment of its kind. Hewlett Packard Enterprise (HPE) and NASA are testing a new computer to run artificial intelligence routines on the International Space Station. “Spaceborne Computer-2” will allow astronauts to process data locally, in minutes instead of the months required with the station’s previous low-power computing resources.

“The most important benefit to delivering reliable in-space computing is making real-time insights a reality,” Dr. Mark Fernandez, HPE’s principal investigator for Spaceborne Computer-2, recently told news site FedScoop. “Space explorers can now transform how they conduct research based on readily available data and improve decision-making.”

Such local-and-remote computing working in tandem is growing rapidly for more down-to-earth applications. Engineering organizations are finding benefit in shifting from datacenter and workstation-centric operations to embracing remote collaboration, using computing resources on site and in the cloud. 

Working together, cloud and edge computing offer a new and cost-effective IoT deployment solution for smart products—even in outer space. Image courtesy of NASA.

Computational Immediacy

“Edge computing gives immediacy,” notes Nick Brackney, senior consultant for cloud at Dell Technologies. “Workflows that get pushed to the Edge have volatile data; its use is required immediately.” 

Such immediacy is essential for consistency across a dispersed ecosystem. Applications at the remote site can make real-time decisions based on deep learning models that are trained, and continually retrained, in the cloud.

Spaceborne Computer-2 will allow astronauts to process data locally, in minutes instead of months as with previous low-power computing resources on board. Image courtesy of HPE.

“Real-time operations at the Edge; training and optimization at the cloud,” notes Brackney. “It is a virtuous cycle for autonomous applications.” 

New technologies such as autonomous vehicles generate terabytes of data per day, a processing challenge no matter where the data lives. 5G improves latency, but there is still too much data on the device to make real-time operational decisions remotely.

“The challenge [for engineering] is to balance how much to send to the cloud and how much to process at the edge,” notes Brackney. “Each workload is different.” 
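
One hedged sketch of that balancing act, with a purely illustrative threshold: the edge node could forward anomalous readings to the cloud in full while collapsing routine readings into a tiny summary.

```python
# Hedged sketch of one possible edge-side data budget: ship anomalous
# readings raw, reduce everything else to a summary. The threshold is
# purely illustrative and would be tuned per workload.
from statistics import mean

ANOMALY_LIMIT = 3.0  # illustrative cutoff


def split_workload(readings: list[float]) -> tuple[list[float], dict]:
    """Return (raw anomalies to send to the cloud, summary of the rest)."""
    anomalies = [r for r in readings if abs(r) > ANOMALY_LIMIT]
    normal = [r for r in readings if abs(r) <= ANOMALY_LIMIT]
    summary = {"count": len(normal), "mean": mean(normal) if normal else 0.0}
    return anomalies, summary


raw, digest = split_workload([0.2, -0.4, 5.1, 0.3])
print(raw)     # [5.1] goes to the cloud in full
print(digest)  # {'count': 3, 'mean': 0.03...} replaces the routine data
```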

Data Gravity 

At a macro level, this trend of dividing data processing between a central and a remote location is an example of what experts call “data gravity.” The new generation of computationally intensive products is the gravitational force drawing applications, services and other data just as a planet draws everything toward its center. 

Edge computing offers local computation, making for faster decisions. Cloud computing offers fast computation of large data sets, and the ability to run complex artificial intelligence and machine learning algorithms. Image courtesy of Dell Technologies.

“The theory is that data acts like gravity,” notes Matt Trifiro, CMO of Vapor IO, a company working on what it calls the Kinetic Edge, described as a wide-scale network for solving edge computing issues.

“A petabyte takes a month to send on today’s internet,” says Trifiro. Data gravity is when the application to process this data is sent to where the petabyte of data resides. “There is no one edge,” notes Trifiro. “You must be able to access the edge everywhere as one common set of infrastructure, [one in which] companies bring their technology to the common infrastructure.”
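
Trifiro’s petabyte figure is easy to sanity-check with back-of-envelope arithmetic; the sustained throughput assumed below, about 3 gigabits per second, is an illustrative figure for an optimistic wide-area link.

```python
# Back-of-envelope check of the quoted claim. The ~3 Gbit/s sustained
# rate is an illustrative assumption, not a measured figure.
PETABYTE_BITS = 8 * 10**15   # one petabyte, in bits
RATE_BPS = 3 * 10**9         # ~3 Gbit/s sustained throughput

seconds = PETABYTE_BITS / RATE_BPS
print(f"{seconds / 86_400:.0f} days")  # ~31 days, i.e. about a month
```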

On a practical level, data gravity can become a source of ongoing contention between the information technology (IT) and the operational technology (OT) teams. 

“These are all snowflake deployments; every use case is different,” notes Dell’s Brackney. “IT has its issues. The OT people who own the factory are more device oriented. To succeed at the Edge, IT and OT must come together and digitize all their workflows. This is the chasm to cross.” 

Containers and Kubernetes 

Two newer technologies coming to the fore with the rise of edge devices are containers and Kubernetes. Containers are like virtual machines, but lighter weight and scoped to a narrow set of capabilities. A container has its own file system and its own share of the local CPU, and it is decoupled from the underlying infrastructure, making it portable across clouds and OS distributions.
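
For readers who have not touched containers directly, here is a minimal sketch using the Docker SDK for Python (the docker package); the image and command are arbitrary examples, not part of any product discussed here.

```python
# A minimal sketch using the Docker SDK for Python (pip install docker);
# the image and command are arbitrary examples.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a small, single-purpose container. Its file system and CPU share
# are isolated from the host, and the same image runs unchanged on a
# workstation, an edge gateway or a cloud VM.
output = client.containers.run(
    "python:3.12-slim",  # image is pulled automatically if not cached
    ["python", "-c", "print('hello from an isolated container')"],
    remove=True,         # delete the container when it exits
)
print(output.decode())
```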

Containers enable agile application creation and deployment, which is crucial in building IoT-enabled products. They decouple development from operational issues: the application image is built at release time, rather than at initial product deployment. Containers are loosely coupled in operation and let users easily deploy distributed, elastic services; there is no monolithic OS stack as on the typical workstation or server.

Kubernetes is a portable, extensible open source platform for container orchestration and management that facilitates declarative configuration and automation. Google developed the original system and open-sourced it; the project is now maintained by a growing, robust open source community.

Kubernetes provides a way to run containers as elements of a distributed system. The platform takes care of deployment, service discovery, load balancing and storage orchestration, and if a container fails, Kubernetes can replace or isolate it. However, Kubernetes is not a complete Platform-as-a-Service offering; it is better understood as a kit of IT building blocks for composing and deploying independent services.
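
As a small taste of that declarative style, the following sketch uses the official Kubernetes Python client to declare a three-replica Deployment of a hypothetical edge service; it assumes kubectl-style access to a cluster, and the service name and image are placeholders. Kubernetes then keeps three copies running, replacing any that fail.

```python
# A sketch using the official Kubernetes Python client (pip install
# kubernetes); assumes kubectl-style access to a cluster. The service
# name and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # read credentials from ~/.kube/config
apps = client.AppsV1Api()

labels = {"app": "edge-telemetry"}  # hypothetical edge service

# Declare the desired state: three replicas of one container. Kubernetes
# schedules them and replaces any replica whose container fails.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-telemetry"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="telemetry",
                    image="registry.example.com/edge-telemetry:1.0"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```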

Edge and the Developing World 

The ideas behind edge computing and its relationship to cloud computing may seem fairly straightforward in countries with established internet infrastructure. By contrast, in countries with limited infrastructure, the national telecommunications companies (telcos) are the primary data providers.

“Add 5G and you have a natural last-mile answer to end users,” notes Michael DeNeffe, director of product development for Cloud at AMD.

As a vendor heavily invested in graphics, AMD sees edge computing as a great way to enable graphics-intensive workflows that use virtual reality or augmented reality (VR/AR) in a more location-independent fashion.

“VR/AR in engineering workflows are awesome, but unless you are directly connected to the cloud at high bandwidth it gets dicey,” according to DeNeffe. The solution, DeNeffe says, is fast edge networks using 5G. 

“Telcos are trying to figure out how to monetize this,” DeNeffe notes. “They are looking at workloads in various applications, including high-performance engineering. Companies can now hire engineers in time zones all over the world. [With edge computing] they can share data sets and take advantage of local capabilities. There is no need for centralized work.”

Edge and the Cost of Engineering

Companies weigh two factors when considering engineering talent, DeNeffe says: head count and the cost to deploy engineers.

“Rather than a team of ten engineer[s] in California designing a product, hire 40 engineers globally collaborating over edge networks,” says DeNeffe, who adds that the result is “quicker time to money and more efficiency. Network availability means hiring engineers anywhere.”

Local-and-remote computing working in tandem is a more common scenario, as engineering organizations see the benefits in shifting from datacenter and workstation-centric operations to embracing remote collaboration. Image courtesy of Dell Technologies.

How should engineering teams evaluate their edge computing needs? DeNeffe says they need to focus on the problem to solve. 

“Edge brings you closer to the actual compute,” DeNeffe says. “It is a honeycomb of aspects that makes edge [computing] exciting. If you provide capability on networking or hardware, you always find people taking advantage of it for software or engineering workflows.” 

Fast networking is opening use cases previously thought to be impractical, such as using VR/AR technology in remote worksites. 

“When virtual reality first came out, we realized you needed a direct connection to a computer or an extremely fast network,” says DeNeffe. “Use cases broke down. But now they are picking up again thanks to fast networking.”



About the Author

Randall Newton

Randall S. Newton is principal analyst at Consilia Vektor, covering engineering technology. He has been part of the computer graphics industry in a variety of roles since 1985.
