
Getting started with edge computing

Edge computing practitioners answer your questions about when and why to build applications at the edge.

The Microsoft Azure cloud computing dictionary describes edge computing as a framework that “allows devices in remote locations to process data at the ‘edge’ of the network, either by the device or a local server. And when data needs to be processed in the central datacenter, only the most important data is transmitted, thereby minimizing latency.”

There’s quite a bit to unpack there. How does building edge computing software differ from writing other cloud applications, what do you need to know to get started, and does Microsoft’s definition hold up in the first place? The ReadME Project Senior Editor Klint Finley gathered three experts to answer these and other questions.

Let’s meet our experts:

Jerome Hardaway is a senior software engineer at Microsoft, where he works in Industry Solutions Engineering. He’s also a U.S. Air Force veteran and the executive developer of Vets Who Code, a tuition-free, open source, coding-immersive non-profit that specializes in training veterans.

Kate Goldenring is co-chair of the Cloud Native Computing Foundation IoT Edge Working Group and a senior software engineer at Fermyon Technologies.

Alex Ellis is the founder of OpenFaaS, a former CNCF ambassador, and creator of the Linux Foundation’s Introduction to Kubernetes on Edge with K3s course.

Klint: Let’s start by getting on the same page about what we’re talking about. I shared the Microsoft Azure cloud computing dictionary definition of edge computing. Does that definition work? Would you change anything about it?

Jerome: I would make the definition more human-centric. It’s not just about devices, it’s about the person. You want data processed and updated at the edge of the network as close to the person using it as possible, because, without a person to answer it, a cell phone is just a block of electricity.

Kate: I think it’s a good definition, given how concise it is. I would add more to it. When the CNCF IoT Edge Working Group was working to define edge computing, we found that definitions tend to fall into three main categories. The most common, and the one that Microsoft seems to be using, is geography-based—the distance between devices and servers, for example. The second is a resource-based definition, which prioritizes the resource constraints faced in edge computing. The third is connectivity-based.

Alex: Likewise, I’d change the definition to reflect how broad a topic edge computing can be. Just like with cloud computing, you can have two industry experts with a wealth of experience talking about two very different things when you ask them about edge computing.

Klint: I could see there being some confusion between edge computing and private or hybrid cloud, since all three typically involve some on-premises computing power. What are the main differences between edge computing architectures and more traditional architectures?

Jerome: A big part of the difference is about the intent, and that will affect how you architect your solution. Private and hybrid cloud computing is usually more about controlling where your data can go. For example, a healthcare company might need to make sure that patient data never leaves their premises. Edge computing is more about specific requirements, like the need to have an extremely responsive application, for example. Edge computing is about ensuring you have the right resources in the right places.

Kate: One way to think about it is that edge computing is a continuum that includes the downstream devices; upstream cloud resources, whether those are part of a public or private cloud; and whatever nodes you might have in between. You have to think about what sort of storage and computing resources you will have available at each point in the continuum. Network connectivity is a big constraint for much of what we talk about when we talk about edge computing.

Alex: You’re not always necessarily working around resource constraints in edge computing. Sometimes you might be working with rather capable devices and servers. But resources and environment are certainly something you have to consider when designing an edge computing solution in a way you might not have to for a more traditional scenario. For example, in a hybrid cloud scenario, you might be able to assume that devices will maintain a traditional TCP/IP network connection at all times. But what if you have a remote sensor powered by a battery that has to be changed manually? You might want to plan to have that sensor only intermittently connect to a network, rather than maintaining a constant connection, to conserve power and reduce the number of trips someone has to make to change the batteries. The device might only support a low-power wireless protocol to communicate with the intermediary device, so you’ll need to accommodate that as well.
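
To make that pattern concrete, here is a minimal sketch of an intermittently connected sensor, assuming a Linux-class device running Python with the paho-mqtt client (1.x API). The broker address, topic, and read_sensor() stub are hypothetical stand-ins, not part of any product discussed here:

```python
import random  # stands in for real sensor I/O in this sketch
import time

import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

BROKER_HOST = "gateway.local"          # hypothetical intermediary device
TOPIC = "sensors/field-unit-42/temp"   # hypothetical topic
REPORT_INTERVAL_S = 15 * 60            # wake every 15 minutes


def read_sensor() -> float:
    """Stub for a real sensor read (e.g., over I2C or GPIO)."""
    return 20.0 + random.random() * 5.0


def publish_one_reading() -> None:
    """Connect, publish a single reading, and disconnect immediately."""
    client = mqtt.Client()  # paho-mqtt 1.x constructor
    client.connect(BROKER_HOST, 1883, keepalive=30)
    client.loop_start()  # background thread services the network
    info = client.publish(TOPIC, f"{read_sensor():.2f}", qos=1)
    info.wait_for_publish()  # block until the broker acknowledges
    client.loop_stop()
    client.disconnect()


if __name__ == "__main__":
    while True:
        try:
            publish_one_reading()
        except OSError:
            pass  # no connectivity this cycle; try again on the next one
        time.sleep(REPORT_INTERVAL_S)  # real hardware would deep-sleep here
```

The point is the shape of the loop: the device spends almost all of its time disconnected, and a failed attempt simply waits for the next cycle rather than holding a connection open.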

Klint: What applications are NOT a good fit for edge computing?

Jerome: Adding more intermediaries between a device and a data store creates a bigger attack surface that you have to secure, so some industries, healthcare for example, will need to pay extra attention to the possible trade-offs. You have to think about the requirements and the benefits versus the challenges for your specific use case. You need to make sure you understand the business problems you’re trying to solve for your organization.

Alex: I don’t want to pigeonhole edge computing by saying it’s good for certain things and not others. It’s more about building an appropriate solution, which can differ greatly depending on the requirements. To continue with the healthcare example, building an edge computing system for medical devices will be quite different from building one for restaurant point-of-sale systems, which will be different from a solution for airlines. A POS device might run a full installation of Android and you might need to reboot it periodically to install updates. For medical devices, you’re probably going to want something like a real-time operating system that can run continuously without the need to take it offline to install updates.

Kate: It comes down to the characteristics of the application. If you have a highly stateful application that needs constant connectivity and lots of storage, then maybe you’d be better off running that application in the cloud. But you still might want to have some intermediaries near the edge to handle some lighter processing.

Klint: How portable between platforms do edge computing applications tend to be? Any tips on making them more portable?

Kate: It depends on what you mean by platform. Edge computing software tends not to be very portable between scenarios because of how customized it is to its circumstances. There are many different ways to connect to different devices, so there often needs to be a lot of custom logic to facilitate that. But one thing you can make consistent is what you do after ingesting data from your devices. Focusing on that post-ingestion layer is one way to make things more portable.
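
As a rough illustration of that split (the device types, field names, and helper functions here are all hypothetical), the device-specific logic can live in small adapters that normalize everything into one shared reading format, so the processing downstream of ingestion never has to change:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Reading:
    """Common, platform-neutral shape for any ingested data point."""
    device_id: str
    metric: str
    value: float
    timestamp: datetime


def from_modbus_register(device_id: str, raw: int) -> Reading:
    """Device-specific adapter: a hypothetical Modbus temperature register."""
    return Reading(device_id, "temperature_c", raw / 10.0,
                   datetime.now(timezone.utc))


def from_ble_packet(device_id: str, payload: bytes) -> Reading:
    """Device-specific adapter: a hypothetical BLE advertisement payload."""
    value = int.from_bytes(payload[0:2], "little") / 100.0
    return Reading(device_id, "temperature_c", value,
                   datetime.now(timezone.utc))


def process(reading: Reading) -> None:
    """Everything from here down is portable: it never sees device details."""
    if reading.value > 30.0:
        print(f"ALERT {reading.device_id}: {reading.metric}={reading.value}")


# The custom logic stays at the edges; the pipeline after it is shared.
process(from_modbus_register("plc-7", raw=312))
process(from_ble_packet("tag-19", payload=(2450).to_bytes(2, "little")))
```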

Jerome: The more features of a platform you use, the less portable your application is. To use an analogy: the more you use the built-in functionality of a framework like Ruby on Rails, as opposed to implementing your own solutions, the harder it will be to move. But implementing your own solutions is also more work on your end. The tradeoff is that the more you leverage the technology, the more dependent you are on it. That’s not always bad, but you need to be aware of it.

Alex: Again, it depends on what you’re running at the edge and what resources and capabilities are available. Embedded software for bespoke devices might not be very portable, but if your hardware can run a container or a virtual machine, your solution can be very portable.

Klint: What sorts of skills should developers learn to prepare to work in edge computing development?

Alex: I have a free course on Kubernetes and K3s at the edge. It has a list of associated skills that are useful in this space, such as MQTT, shell scripting, and Linux. Of course, what you need to learn depends on what sort of edge computing you will be doing. In some cases you might be making an otherwise traditional web or mobile application more responsive by putting resources closer to the user, but in others you might be working with industrial equipment or automobiles. Either way, Kubernetes isn’t a bad skill to have.

Jerome: Language-wise, I recommend Python, because you’ll be working with many different platforms and environments, and Python plays well with just about everything. It’s one of the most transferable technical skills you can learn. Edge computing is also one of the few areas where I recommend getting professional certifications for the technologies you use, because it showcases that you’re really taking the time to learn them. And as always, work on your communication skills, because you’re going to FUBAR a thing or two.

Kate: Edge computing is a really broad field. It’s also fairly new, so you’re not alone in figuring out what you need to learn. Learning about networking technologies and all the various protocols that devices use to communicate might be a good starting point. And you can always just get a Raspberry Pi and build something. Connect it to an edge computing platform and start learning the terminology. Have some fun, that’s the best way to learn.
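
If you want to follow Kate’s suggestion, a first step can be very small. This sketch assumes a Raspberry Pi (or any Linux machine) with Python and just polls the CPU temperature through the standard kernel thermal interface; swapping in a real sensor, or publishing the value to an edge platform instead of printing it, would be natural next steps:

```python
import time
from pathlib import Path

# Standard Linux thermal interface; present on Raspberry Pi OS.
SENSOR = Path("/sys/class/thermal/thermal_zone0/temp")


def read_cpu_temp_c() -> float:
    """The kernel reports millidegrees Celsius as plain text."""
    return int(SENSOR.read_text()) / 1000.0


while True:
    # Print locally for now; a next step would be publishing this
    # reading to a broker or an edge platform's ingestion endpoint.
    print(f"cpu_temp_c={read_cpu_temp_c():.1f}")
    time.sleep(5)
```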

