IF YOU WANT to control devices that require precise, coordinated movements, protocols like TCP/IP have too much overhead and latency, making life difficult or impossible. Luckily, a standard called EtherCAT aims to fix all the things that make TCP/IP unsuitable while still running over low-cost 802.3 Ethernet hardware.
The idea is simple: use existing hardware and lower-level protocols to contain costs, then replace what you need higher up the OSI stack. If TCP/IP is too slow, too inefficient, and has too many latency problems, just replace it. That is exactly what EtherCAT did.
If you are driving a high-speed robot, milliseconds count. Feedback from one device may be needed on the next machine on the assembly line in very short order, so stopping to do the TCP/IP equivalent of pondering your navel is not acceptable.
EtherCAT has three main goals: high bandwidth utilization, predictable timing, and extremely low latency. TCP/IP misses those targets because of packet overhead, switching inefficiencies, and susceptibility to interrupts. Plain old 802.3, Ethernet for the non-IEEE trivia buffs, is just fine, so it is used.
This means all the hard stuff, cables, connectors, switches, and controllers, is already done; from here on out, it is simply software. The network people at every company are already familiar with the nuances of Cat5 cable, so the physical side won’t necessitate an expensive consultant.
The first huge difference is that EtherCAT networks are one big loop. There is a master controller and a large number of slaves, up to 65,535 of them, the limit of the protocol’s 16-bit address field. That should be enough for most people to squeak by. Each vertical white slice below is an EtherCAT slave, as is the big silver box. The blue one is the master, controlling all 10 slaves.
EtherCAT controller and slaves
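That ceiling isn’t arbitrary: EtherCAT station addresses are 16-bit fields, which is what caps the slave count. A one-line sanity check in Python, purely illustrative:

```python
ADDRESS_BITS = 16  # EtherCAT station addresses are 16-bit fields

def max_slaves(bits: int = ADDRESS_BITS) -> int:
    """Addressable slaves for a given address width."""
    return (1 << bits) - 1  # 65,535, the commonly quoted limit

print(max_slaves())  # 65535
```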
A single Cat5 cable connecting two devices may not seem like a loop, but it is. Of the cable’s four wire pairs, one carries data in each direction, so every link has a forward path and a return path. The master device is the start, and the designated last device on the network becomes a loopback as well as a node. Because of this, just about any physical topology can be designed, stars, trees, or loops, but there is only one logical path, and it is a big loop.
Lots of topologies on one network
If you are wondering how this all works, the topology is preset. When you design an EtherCAT network, you set up the path that the data packets take, and they then follow that path. If you add devices, they won’t be seen until you program them into the network.
If you pull one out, depending on how things are set up, it will either shut down the network entirely or, more likely, turn the last node before the break into a loopback. This self-healing ability can be very important for many industrial networks. Additionally, if you add something into the network, it can be detected and an administrator flagged. This can be seen either as hotplugging or as a very rudimentary security measure.
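Both behaviors, the fixed logical path and the loopback-on-break, can be sketched in a few lines of Python (a toy model with made-up device names; real masters do this in the fieldbus stack, not application code):

```python
def frame_path(slaves, broken=None):
    """Logical path of one frame: out through each configured slave
    and back to the master. If `broken` is unplugged, the node just
    before the break loops the frame back early."""
    if broken in slaves:
        slaves = slaves[:slaves.index(broken)]  # nodes before the break
    return ["master"] + list(slaves) + ["master"]

ring = ["drive1", "io1", "drive2"]
print(frame_path(ring))                   # the full loop
print(frame_path(ring, broken="drive2"))  # self-healed, shorter loop
```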
So, one big loop, and the packets start and end at the same place, the master node, after they pass through every one of the nodes. There is no broadcasting of packets; every one goes through every node, in order, one at a time. The packets are not stored and forwarded, they use cut-through routing, and that is the magic behind EtherCAT.
In an EtherCAT network, the packets never stop and are never delayed; they go right on through each slave node. The node controller reads each packet as it goes by and parses the data on the fly. What it needs to read, it reads, and it ignores the rest, all in real time. The magic is that a slave does the same with writes. When a device needs to write something, it also does so on the fly, inserting the data into the packet in real time. The last thing it does is write the CRC on the tail, again on the fly, completing the packet as it passes by on the wire.
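The read-modify-recompute dance can be sketched in Python. This is a toy model with a made-up frame layout, payload plus a trailing CRC32, not the real EtherCAT datagram format, and real slaves do it in hardware as the bits pass, not on a buffered copy:

```python
import zlib

def make_frame(payload: bytes) -> bytearray:
    """Payload followed by a 4-byte CRC32, a stand-in for Ethernet's FCS."""
    return bytearray(payload + zlib.crc32(payload).to_bytes(4, "little"))

def slave_pass(frame: bytearray, offset: int, data: bytes) -> bytes:
    """One slave's pass: read its slice of the payload, insert fresh
    output data into the same region, then recompute the trailing CRC
    so the frame is still valid when it leaves."""
    read = bytes(frame[offset:offset + len(data)])  # read as it flies by
    frame[offset:offset + len(data)] = data         # write on the fly
    frame[-4:] = zlib.crc32(frame[:-4]).to_bytes(4, "little")
    return read

frame = make_frame(b"\x00" * 12)
slave_pass(frame, 0, b"abcd")         # slave 1 inserts its input data
print(slave_pass(frame, 0, b"wxyz"))  # slave 2 sees what slave 1 wrote: b'abcd'
```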
The packets, messy and tight timings
If you are thinking this is not just compute and timing intensive but also somewhat insane, you are right. Think about how long it takes a few KB to pass by on a 100BASE-TX wire; that is all the time an EtherCAT node has to read the frame, write to the frame, and recompute the CRC. The frame never stops, it is all real-time cut-through routing.
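To put a number on that window, here is the back-of-the-envelope arithmetic, assuming a 100 Mbit/s line rate and ignoring preambles and inter-frame gaps:

```python
LINE_RATE = 100e6  # 100BASE-TX, bits per second

def wire_time_us(frame_bytes: int) -> float:
    """Microseconds a frame takes to pass by at 100 Mbit/s; this is
    the entire processing budget a slave gets."""
    return frame_bytes * 8 / LINE_RATE * 1e6

print(round(wire_time_us(1500), 1))  # 120.0, for a full-size frame
```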
If you remember, earlier on we said that the goals of EtherCAT were low latency and precise timing. You can see how latency is lowered, the packet basically never stops moving, but timing is also taken into account. Since the packet starts and ends with the master, you can calculate round-trip time, and if you want, the time between each device can be calculated as well. EtherCAT controllers can learn exactly how long each hop takes and recalculate as needed if there are changes.
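In sketch form, the per-hop arithmetic is just successive differences of the timestamps recorded as the frame passes each point (the nanosecond values below are made up for illustration):

```python
def hop_delays(timestamps):
    """Per-hop delays from receive timestamps recorded in path order:
    master out, each slave in turn, master back in."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

# Hypothetical nanosecond timestamps: master out, 3 slaves, master in
print(hop_delays([0, 340, 655, 980, 1320]))  # [340, 315, 325, 340]
```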
The last goal, bandwidth utilization, is obvious. TCP/IP, in the simplest case, sends out a packet, waits for an ACK to come back, and only then sends the next one; even with windowing, headers and acknowledgments eat into the usable bandwidth. The longer the cable, the lower the utilization, especially at higher speeds. EtherCAT avoids that by writing packet after packet, back to back, leading to a claimed 97% bandwidth utilization, one of the highest the author has ever heard of.
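Here is the arithmetic behind that claim, using a deliberately simple stop-and-wait model of TCP. Real TCP pipelines with windows, so this is the worst case, and the microsecond figures are made up:

```python
def stop_and_wait_util(frame_us: float, rtt_us: float) -> float:
    """Fraction of time the wire carries data when the sender must
    wait a full round trip for an ACK before sending the next frame."""
    return frame_us / (frame_us + rtt_us)

# 120 us to clock out a full frame, 50 us of ACK round trip:
print(round(stop_and_wait_util(120, 50), 2))  # 0.71
# EtherCAT sends frames back to back, so the wire stays busy,
# hence the claimed ~97%, with the rest lost to framing overhead.
```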
So, why would you ever want EtherCAT? If you are running a manufacturing line, there is little else you can use; nothing else seems to offer the same set of features. When you are running a robot that has to optically align a laser and blow 10K+ fuses a minute, precise timing matters, and it matters a lot. Passing the results to the sorting machine that is next on the assembly line matters almost as much. A network interrupt could cause a lot of product returns weeks later.
The list of EtherCAT devices is very long, and the number of member companies is equally impressive at over 1,000. Let’s just say there are lots of options to choose from if you want to run your factory with an EtherCAT backbone, no lock-in here.
The best sign of how well a standard is doing is how many companies have adopted it. If you look at joke ‘standards’ like MS’s OOXML, there is only one company that uses them. EtherCAT seems to have delivered on all three of its promised goals, and its adoption is the opposite of things like OOXML. It could be a lot of fun to play around with when building your T800 factory. S|A
Latest posts by Charlie Demerjian