
Fog computing: Balancing the best of local processing with the cloud


The mobile computing era changed the expectations we place on our personal computers and internet services. These days, we like our computing on the fly, moving between areas of variable connectivity as we go about our lives while expecting that email, spreadsheet, or cat video at the instant it is requested.

And we’re bringing basic connectivity to far-flung places around the planet through satellites, which provide a basic connection to the rest of the world but are subject to all kinds of atmospheric interference and latency issues.


As a result, a generation of technology deployments based around the idea of a powerful client computing device (the PC) and a continuous internet connection (home broadband) is evolving to handle the more demanding realities of mobile computing, in which the devices you’re trying to serve tend to go through tunnels at inconvenient moments.

We used to process everything locally; then we eagerly embraced the idea of processing everything remotely (cloud computing); but a third model is evolving that tries to balance the best of each world in response to the unpredictability of computing on the go.

The marketing people want to call this “fog computing,” which makes me a little sad inside, but this approach to infrastructure architecture has long been practiced by the largest tech companies of our day, and it is starting to make more and more sense for smaller companies whose customers use a mixture of on-premises technology and cloud services. It’s a topic we plan to explore thoroughly at Structure 2016, our annual showcase of the best people and hottest topics in cloud computing.

Two of our Structure speakers — Urs Hölzle, the man who literally built Google (he would insist that I’m overstating this, but it’s true), and Jay Parikh, who is responsible for making sure Facebook’s infrastructure can handle an ever-expanding list of demands — would consider this shift old news. Google and Facebook have designed their infrastructure to apply computing where it makes the most sense, which helps Google offer fast search results and global compute services, and helps Facebook serve video to billions of mobile devices while ushering the virtual reality era into the mainstream.

But this approach is starting to make more and more sense for other companies. Take GE Digital, for example, which has started to put more processing power into sensors on trains, because even millisecond delays in relaying sensor data back to the datacenter to request train routing instructions can have catastrophic effects. Seth Bodnar, GE Digital’s chief digital officer, will be at Structure 2016 to talk about how this approach is improving train performance and safety.
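As a rough illustration of the pattern GE describes, here is a minimal sketch, in Python, of processing readings at the edge and only calling back to the datacenter when a reading actually demands a routing decision; the threshold, window size, and function names are all hypothetical, not anything from GE’s stack.

```python
import statistics
from collections import deque

ALERT_THRESHOLD_C = 90.0   # hypothetical: smoothed readings above this need a routing decision
WINDOW_SIZE = 50           # smooth over the last N readings to avoid noise-driven alerts

recent_readings = deque(maxlen=WINDOW_SIZE)

def handle_reading_on_edge(temperature_c: float) -> str:
    """Runs on the sensor/edge node: decide locally, escalate rarely."""
    recent_readings.append(temperature_c)
    smoothed = statistics.mean(recent_readings)
    if smoothed >= ALERT_THRESHOLD_C:
        # Only this rare case pays the round-trip latency to the datacenter.
        return request_routing_instructions(smoothed)
    return "continue"  # common case: handled entirely on the edge

def request_routing_instructions(smoothed_temp: float) -> str:
    # Placeholder for the remote call (e.g. an HTTPS request to the datacenter).
    return f"reroute requested (smoothed temp {smoothed_temp:.1f} C)"
```

The point is simply that the latency-sensitive decision loop never waits on the network unless it has to.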

Another example of a company taking this approach is Uber, which stores some data locally on drivers’ phones in case of a local network outage so that its service remains up and ferrying VCs between SoMa and The Battery in San Francisco. On the enterprise side, HP is building more sophisticated networking gear for Internet of Things applications. And speaking of IoT, Plum recently released a light switch for your home based around the Erlang programming language, which allows it to do much more than a simple connected switch could accomplish.
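The Uber example is essentially a store-and-forward pattern: writes land in a local queue on the phone first and are replayed to the backend whenever connectivity returns. Here is a minimal sketch of the idea; the file name and upload hook are invented for illustration and have nothing to do with Uber’s actual implementation.

```python
import json
from pathlib import Path

QUEUE_FILE = Path("pending_events.jsonl")  # local storage that survives a network outage

def record_event(event: dict) -> None:
    """Always append locally first, so the network is never in the critical path."""
    with QUEUE_FILE.open("a") as f:
        f.write(json.dumps(event) + "\n")

def flush_queue(upload) -> None:
    """When a connection is available, replay queued events to the backend in order."""
    if not QUEUE_FILE.exists():
        return
    remaining = []
    for line in QUEUE_FILE.read_text().splitlines():
        try:
            upload(json.loads(line))   # e.g. an HTTP POST to the backend
        except ConnectionError:
            remaining.append(line)     # keep anything that still failed to send
    QUEUE_FILE.write_text("\n".join(remaining) + ("\n" if remaining else ""))
```

Because every write succeeds locally, the app stays responsive during an outage, and consistency is restored later by draining the queue.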

The nice thing about this approach is that you can balance the ratio of computing on the endpoint device, on the edge of the network, or in the data center in whatever way makes the most sense for your product or service and your users. Modern data center management tools allow you to think of your whole infrastructure as a single computer, regardless of how complicated the actual reality is, which means you can provide your users with a better overall experience by applying the most appropriate computing resources wherever your application needs them most.
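In practice, “balancing the ratio” often reduces to a per-workload placement decision. As a hedged sketch, with purely illustrative cutoffs rather than any vendor’s actual scheduler, the choice might hinge on the latency budget and the volume of data involved:

```python
def choose_tier(latency_budget_ms: float, data_volume_mb: float) -> str:
    """Pick where a piece of work should run; the cutoffs are illustrative only."""
    if latency_budget_ms < 10:
        return "device"      # must answer faster than any network round trip allows
    if data_volume_mb > 100:
        return "edge"        # too much data to ship upstream economically
    return "datacenter"      # ample time and modest data: centralize it

# A real-time control loop, a bulky local data stream, and a nightly report land in different tiers.
print(choose_tier(latency_budget_ms=5, data_volume_mb=1))      # -> device
print(choose_tier(latency_budget_ms=200, data_volume_mb=500))  # -> edge
print(choose_tier(latency_budget_ms=1000, data_volume_mb=2))   # -> datacenter
```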

We’ll explore this topic in depth with several speakers, in addition to the ones mentioned above, over two days at Structure 2016. The event will take place November 8 and 9 at the UCSF Mission Bay Conference Center, and if you need to know about the evolution of cloud computing to make decisions about your business, you need to be there. The schedule and tickets are available on the Structure 2016 website.
