Military Embedded Systems

Cloud, fog, mist, fluid, blockchain, and other things that irritate me

Blog

September 28, 2016

Ray Alderman

VITA Standards Organization

We are being inundated with new computing and networking models for all the wrong reasons -- from cloud computing to blockchain. This is a topic that distorts my normally congenial and pleasant demeanor, so let's take a look at what is really going on and clear the air.

Data centers

-Cloud computing is a centralized computing model where all the data goes from the user at the edge, up through the cloud, to the data center. It then gets processed and reports are sent back down to the edge. If you look at it closely, this is the old GEISCO (General Electric Information Services Company) RJE-to-mainframe (Remote Job Entry) services model from the 1970s using HASP software (Houston Automatic Spooling Priority) and punched cards. Today's cloud architecture assumes that (1) we have the bandwidth on the Internet to handle billions of transactions per second and (2) the users don't mind the massive latencies. CPU utilization in data center servers is running about 10 percent. It was 6-7 percent just a couple of years back. All the servers in a data center are waiting for I/O: they are I/O-bound (the CPU can process more data than the communication channels can deliver) while sucking up lots of power and dissipating lots of heat. <http://embedded-computing.com/guest-blogs/the-iot-needs-fog-computing/>

-Fog computing is a distributed computing model where a new tier of smaller, regional data centers is placed between the cloud servers and the user at the edge. User data only needs to travel to these intermediate data centers to be processed, reducing the network latency and increasing CPU utilization in the fog servers. Then, these fog servers send some consolidated data or reports up to the cloud servers. <http://embedded-computing.com/guest-blogs/defining-fog-computing-for-those-who-thought-it-was-just-deploying-some-logic-on-an-edge-gateway/>

-Mist computing is a mixture of both centralized and distributed computing models. It moves another set of servers, in yet smaller data centers (in a city, for example), closer to the edge and the user. Mist servers communicate with the fog servers, which in turn communicate with the cloud servers. <http://www.thinnect.com/mist-computing/>

-Fluid computing is a distributed computing model where the embedded computers controlling machines or processes share their individual resources among themselves (i.e., storage, computing power, etc.). Somewhere in that local network, one of the machines sends data up to the mist servers, or maybe to the fog servers, or maybe to the cloud servers. <http://embedded-computing.com/guest-blogs/fluid-computing-unifying-cloud-fog-and-mist-computing/#> It is hard to tell in this model which level of data-center servers is needed. (A rough latency comparison of these tiers appears in the sketch after this list.)

-There is a fifth computing model, blockchain, that is used primarily in the financial industry and for Bitcoin. It operates like a broadcast-based cache-coherent network and is not relevant to the majority of cloud/fog/mist/fluid computing applications. <http://blogs.wsj.com/cio/2016/02/02/cio-explainer-what-is-blockchain/> There are no centralized data centers in a blockchain network. Any message from one computer is sent to all the computers in the network (i.e., broadcast). It's pretty interesting how it works, which is why it's mentioned in this discussion.
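To make the latency argument behind these tiers concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an assumption I invented for illustration (none come from the articles linked above); the only point is that each tier you move closer to the edge cuts the round trip.

    # Toy latency model for the cloud/fog/mist tiers described above.
    # All numbers are invented for illustration, not measured.
    TIER_ONE_WAY_MS = {
        "cloud (national data center)": 50.0,
        "fog (regional data center)":   10.0,
        "mist (city data center)":       2.0,
    }
    SERVER_PROCESSING_MS = 5.0  # assumed time to process one request

    def round_trip_ms(tier):
        # Request up to the tier, processing there, report back down.
        return 2 * TIER_ONE_WAY_MS[tier] + SERVER_PROCESSING_MS

    for tier in TIER_ONE_WAY_MS:
        print(f"{tier:30s} round trip = {round_trip_ms(tier):6.1f} ms")

With these made-up numbers, the cloud round trip is 105 ms, fog is 25 ms, and mist is 9 ms; fluid computing tries to keep most traffic from leaving the local network at all.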

Which to use?

If you read the papers written about these new computing architectures, the justification for them involves cost (to operate a data center), CPU utilization, bandwidth bottlenecks on the network, “softwarization”, unpredictable paths with very high latencies, and a host of other ridiculous explanations. All of these reasons are self-serving, after-the-fact rationalizations by uninformed people, most of whom are selling hardware. These architectures have nothing to do with efficient networking or computing theory.

There are basically two ways to make networks and data centers operate more efficiently: (1) prioritize the traffic into critical, prompt, and routine transactions or (2) move the servers closer and closer to the user. Why do the cloud/fog/mist/fluid computing architectures all move the servers closer to the user instead of using prioritization?
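For reference, option (1) is plain-vanilla computer science. Here is a minimal Python sketch of a scheduler using the three traffic classes named above; the class names come from this paragraph, and everything else is my own illustrative scaffolding.

    # Sketch of option (1): classify traffic as critical, prompt, or
    # routine, and always serve the highest class first.
    import heapq
    import itertools

    PRIORITY = {"critical": 0, "prompt": 1, "routine": 2}

    class TrafficScheduler:
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()  # FIFO tie-breaker within a class

        def enqueue(self, klass, payload):
            heapq.heappush(self._heap, (PRIORITY[klass], next(self._seq), payload))

        def dequeue(self):
            return heapq.heappop(self._heap)[2]

    sched = TrafficScheduler()
    sched.enqueue("routine", "nightly backup chunk")
    sched.enqueue("critical", "targeting update")
    sched.enqueue("prompt", "sensor telemetry")
    print(sched.dequeue())  # -> targeting update, ahead of everything else

As the next paragraph explains, this is precisely what the FCC's order forbids on the public Internet.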

The singular reason is U.S. Federal Communications Commission (FCC) order 15-24, dated February 2015, Section II, Subsection A, paragraph 18. <https://apps.fcc.gov/edocs_public/attachmatch/FCC-15-24A1_Rcd.pdf> That paragraph states that no specific traffic on the Internet may be given a higher priority than any other traffic. If this order did not exist, I think the present Internet could handle maybe twice as much traffic, and CPU utilization in data centers would jump up to 50 percent or more. Whether the ultimate architectural structure looks like a set of nested concentric circles (like Russian matryoshka dolls), a bewildering set of overlapping Venn diagrams, or a horribly complex and grotesque fat-tree is yet to be seen, thanks to the FCC's order and the perverse and depraved minds at the telecom companies. Remember the deep-packet inspection idea, which would have allowed the telecommunication companies (telecoms) to charge by the byte? Remember station-to-station, person-to-person, and collect long-distance telephone calls?

If the Internet allowed prioritization of traffic, the Internet service providers and telecoms would certainly come up with a new pricing model, charging the sender and receiver more for the faster traffic than for the slower messages. Telecom people will sell their children's organs to make another penny or two, and cannot be trusted in a prioritized-traffic environment. The Stanford University folks have come up with a method that would allow end users to establish the priorities of the data they receive, making the Internet more efficient. But that idea will just open the can of pricing worms, inspire more egregious behavior and malfeasance by telecom companies, and flood the streets of America with rabid telecom zombies again.

Cellphone CPU architectures

The cellphone CPU makers are showing much higher levels of engineering intelligence than the data center engineers. But those CPU companies don't have to tolerate the FCC (except for their RF chips). They have the exact same problem as the data center guys: the CPU is always waiting on I/O (cellphone CPUs are also I/O-bound, due to all the internal data transfers between cores). With the distances involved in the data center, the servers are required to move the data through routers, across internal networks, into server main memory, and then into the CPU cache. They must move the data at least five times before it gets processed. The cellphone guys don't have to move the data once it's inside the device. They can use cache-coherent memory architectures.

These CPU makers are moving the data from the I/O straight into shared virtual cache memory, using the ACE protocol (the AXI Coherency Extensions). <https://community.arm.com/groups/processors/blog/2016/05/29/exploring-how-cache-coherency-accelerates-heterogeneous-compute> Any processor (CPU, core, or GPU) that needs the data simply reads the shared cache. With the new Bifrost architecture, the CPU and GPU share a fast cache-coherent memory, independent of the shared virtual memory cache used by the other cores. <https://community.arm.com/groups/arm-mali-graphics/blog/2016/08/31/bitesize-bifrost-2-system-coherency> These CPU makers know that every time they move the data, they induce latencies and consume lots of power, so they don't move the data. The data center guys, however, don't have that option. At least not yet. I/O to cache (instead of I/O to main memory) in a server could be beneficial to CPU utilization and cut out at least one data-movement operation.
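As a loose software analogy to that shared-cache idea (this is not the ACE protocol, just an illustration of why sharing one buffer beats copying), here is a Python sketch in which the "I/O side" writes data once into shared memory and the "processor side" reads it in place:

    # Analogy only: one buffer, two readers, zero copies of the payload.
    from multiprocessing import shared_memory

    # The "I/O side" writes the data once into a shared buffer...
    shm = shared_memory.SharedMemory(create=True, size=16)
    shm.buf[:5] = b"hello"

    # ...and the "processor side" attaches to the same buffer by name
    # and reads it in place, with no second copy of the payload.
    reader = shared_memory.SharedMemory(name=shm.name)
    print(bytes(reader.buf[:5]))  # -> b'hello'

    reader.close()
    shm.close()
    shm.unlink()

Contrast that with the data center path above, where the payload is physically copied at every hop before a CPU ever touches it.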

There are a number of new concepts concerning memory and data-sharing schemes in cellphone processors. More recently, the people at North Carolina State University have developed a core-to-core communications acceleration framework (CAF) using a new queue management device (QMD) in silicon.
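The NCSU work describes hardware, but the concept is easy to see in software. Below is a rough Python sketch of the idea behind a core-to-core queue: a producer hands items to a consumer through a managed queue instead of bouncing them through shared main memory. The QMD does this in dedicated silicon; nothing in this sketch comes from the actual paper.

    # Software stand-in for a hardware core-to-core queue.
    import queue
    import threading

    q = queue.Queue(maxsize=8)  # bounded, like a hardware FIFO

    def producer_core():
        for i in range(4):
            q.put(i)      # blocks when the FIFO is full (back-pressure)
        q.put(None)       # sentinel: end of stream

    def consumer_core():
        while True:
            item = q.get()
            if item is None:
                break
            print(f"consumer core got {item}")

    t1 = threading.Thread(target=producer_core)
    t2 = threading.Thread(target=consumer_core)
    t1.start(); t2.start()
    t1.join(); t2.join()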

Einstein and entanglements

There is another potential future solution to the I/O-bound and CPU-utilization problems in the data center: quantum-entangled cache-coherent I/O. This goes back to a paper written by Albert Einstein and others in 1935. When certain particles are created together (entangled), each of the particles will maintain the same state, no matter how far apart they are. If an entangled particle on one side of the universe changes spin direction, its entangled partner will change its spin instantly, on the other side of the universe. This correlation appears faster than the speed of light, a characteristic that bothered Einstein. <http://www.livescience.com/56076-entangled-particles-remain-spooky.html> Scientists at the U.S. National Institute of Standards and Technology (NIST) successfully accomplished quantum teleportation/communication over a distance of 100 kilometers in late 2015, breaking the old record of 25 kilometers. So, the concept works. If you are still skeptical, the Chinese have just launched their first satellite that uses a quantum-entangled communications link to Earth, accomplishing quantum communication over a distance of 1,200 kilometers (746 miles). In a few decades, data centers and the Internet could be using quantum-entangled data links. The I/O-bound and CPU-utilization problems will go away, and we will become CPU-bound again (i.e., the quantum communications channel can deliver more data than the CPU can process), just like we were back in the 1970s with mainframes.
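For the mathematically inclined, the standard textbook way to write such an entangled pair is a Bell state (this equation is ordinary quantum mechanics, not from any of the sources above):

    \[ |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left( |\uparrow\uparrow\rangle + |\downarrow\downarrow\rangle \right) \]

Measuring either particle yields up-up or down-down with equal probability, which is exactly the perfect correlation described above.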

Organic computing

A lot of work is also being done on organic computing (using DNA and bacteria), neuromorphic computing (implementing the synapses of the brain in silicon), and quantum computing (quantum and optical computing are merging). Aside from the computing spectrum, we are also manipulating human DNA in ways never previously imagined. What we are really doing today is reading the mind of God, and that has both beneficial and catastrophic consequences. But even the catastrophic effects on the planet and the human race, if they occur, are probably much better than what is happening with cloud computing.

In closing

Denis Diderot (1713-1784), a French philosopher, once said about the nefarious association of church and state in France: “Men will never be free until the last king is strangled with the entrails of the last priest.” It is my solemn duty, imposed upon me by my rigorous Southern education, to properly adapt and apply what he said to today's technology situation: “Computing and networking will never be efficient until the last FCC commissioner is strangled to death with the innards of the last telecom engineer.”

Now, I feel much better. I can get back to work on the article I promised last time, about cyberwars.