Local Networks

How Computers Learned to Share

In 1973, Bob Metcalfe sketched the first Ethernet design on a napkin at Xerox PARC. His idea was simple: connect all computers in an office using a single cable, like a telephone party line where everyone could hear everyone else. The name "Ethernet" came from the luminiferous ether, the hypothetical medium once thought to carry light waves through space.

Metcalfe's ether would carry data packets through offices, warehouses, and eventually the entire world. That napkin sketch would become the foundation of nearly every computer network built in the following five decades.

The original Ethernet, later called 10Base5 or "thick Ethernet," used coaxial cable as thick as a garden hose. The cable could stretch up to 500 meters and required special vampire taps to connect computers. These taps literally pierced the cable's outer shielding to make contact with the inner conductor. Installing thick Ethernet was like installing plumbing—cables ran through walls, across ceilings, and under floors. A single break anywhere in the cable would bring down the entire network, making fault-finding an exercise in detective work.

The early 1980s brought 10Base2, or "thin Ethernet," which used thinner coaxial cable similar to television cable. Computers connected using T-connectors and 50-ohm terminators at each end of the cable segment. The system was cheaper and easier to install than thick Ethernet, but it was also more fragile. A loose connector could create signal reflections that corrupted data for the entire network. Network administrators learned to carry spare terminators and BNC connectors, as these small components caused a disproportionate number of network outages.

IBM's response to Ethernet came in 1985 with Token Ring, a networking technology that used a completely different approach. Instead of allowing all computers to transmit whenever they wanted, Token Ring passed a special data packet called a token around the network. Only the computer holding the token could transmit data. This eliminated the collision problems that plagued early Ethernet networks, but it also made Token Ring more complex and expensive. IBM positioned Token Ring as the enterprise solution, while Ethernet was seen as suitable only for small workgroups.

The token passing mechanism of Token Ring created predictable network behavior. Network administrators could calculate exactly how long it would take for any computer to gain access to the network, making Token Ring ideal for time-sensitive applications. The technology operated at 4 Mbps initially, later upgraded to 16 Mbps. Token Ring used expensive shielded twisted pair cables and required specialized equipment at every connection point. A complete Token Ring installation could cost three times more than equivalent Ethernet equipment.
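That predictability can be sketched with a little arithmetic: in the worst case, a station waits while every other station sends one maximum-size frame, plus a small repeat latency at each station around the ring. A minimal Python sketch, with illustrative numbers (the frame size, station count, and 2.5 µs per-station delay are assumptions for the example, not figures from the 802.5 specification):

```python
def worst_case_rotation_ms(stations, frame_bytes, ring_mbps, station_delay_us=2.5):
    """Upper bound on how long a station may wait for the token:
    every other station transmits one maximum-size frame, plus a
    small per-station repeat latency around the ring."""
    frame_us = frame_bytes * 8 / ring_mbps        # bits / (Mbit/s) = microseconds
    return (stations * frame_us + stations * station_delay_us) / 1000

# 40 stations on a 4 Mbps ring, 4500-byte frames: about 360 ms worst case
print(round(worst_case_rotation_ms(40, 4500, 4), 1))
```

Being able to compute a hard upper bound like this, rather than a statistical average, is what made Token Ring attractive for time-sensitive applications.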

Ethernet's breakthrough came with 10BaseT in 1990, which replaced coaxial cables with ordinary telephone wire. Each computer connected to a central hub using a dedicated cable, creating a star topology that was much easier to troubleshoot and maintain. If one cable failed, only that computer lost connectivity. The hub repeated all data to every connected port, maintaining Ethernet's shared medium approach while improving reliability. 10BaseT used RJ-45 connectors, wider cousins of the RJ-11 plugs used for telephones, making installation familiar to electricians and cable installers.

Early Ethernet hubs were essentially multiport repeaters that amplified and retransmitted every signal to all connected ports. A 12-port hub created a single collision domain in which all 12 computers competed for network access. As more computers were added, collisions became more frequent and network performance degraded. The CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol handled these collisions by having computers wait a random number of slot times before retransmitting, doubling the range of the random wait after each successive collision, but busy networks spent more time recovering from collisions than transmitting useful data.
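The retransmission scheme is known as truncated binary exponential backoff: after the nth collision, a station waits a random number of slot times between 0 and 2^n - 1, with the range capped at 2^10 and the frame dropped after 16 attempts. A minimal sketch:

```python
import random

SLOT_US = 51.2  # one slot time on 10 Mbps Ethernet: 512 bit times

def backoff_slots(attempt):
    """Truncated binary exponential backoff, as in CSMA/CD: after the
    nth collision on a frame, pick a random wait of 0..2^n - 1 slot
    times (range capped at 2^10); give up after 16 attempts."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    return random.randrange(2 ** min(attempt, 10))

# After a third collision, wait somewhere between 0 and 7 slot times
print(backoff_slots(3) * SLOT_US, "microseconds")
```

The doubling range is why lightly loaded networks recovered quickly while heavily loaded ones spiraled: each new collision pushed stations toward longer average waits, trading throughput for stability.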

The collision problem led to the development of bridges, devices that learned the MAC addresses of computers on each network segment and only forwarded packets when necessary. A bridge could connect two Ethernet segments, effectively doubling the available bandwidth by reducing the collision domain. Bridges were expensive, often costing $2,000 or more, but they provided dramatic performance improvements for busy networks. Network administrators strategically placed bridges to isolate heavy traffic and improve overall network efficiency.
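The bridge's core logic fits in a few lines: note which port each source address arrives on, then forward only when the destination is unknown or lives on a different segment. A minimal sketch of a transparent learning bridge (port numbers and MAC strings here are illustrative):

```python
class LearningBridge:
    """Sketch of a transparent learning bridge: records which port
    each source MAC was seen on, filters same-segment traffic, and
    floods frames whose destination it has not yet learned."""

    def __init__(self, ports):
        self.ports = ports
        self.table = {}  # MAC address -> port it was last seen on

    def handle(self, src, dst, in_port):
        self.table[src] = in_port            # learn the sender's segment
        out = self.table.get(dst)
        if out == in_port:
            return []                        # same segment: filter, don't forward
        if out is not None:
            return [out]                     # known destination: forward to one port
        return [p for p in self.ports if p != in_port]  # unknown: flood

bridge = LearningBridge(ports=[1, 2])
print(bridge.handle("aa:01", "bb:02", 1))  # unknown destination: floods to [2]
print(bridge.handle("bb:02", "aa:01", 2))  # "aa:01" already learned: [1]
```

Real bridges also aged out stale table entries, but the learn-filter-flood cycle above is the mechanism that let a $2,000 box double a busy network's usable bandwidth.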

Network interface cards (NICs) evolved rapidly during the 1980s and 1990s. Early cards were full-length ISA adapters that consumed significant CPU resources. Each received packet generated an interrupt that the processor had to handle, limiting network performance on slower computers. The introduction of bus mastering and buffer memory on NICs reduced CPU overhead and improved performance. By the mid-1990s, Ethernet cards cost less than $50 and were standard equipment on business computers.

The switch marked the next major evolution in local networking. Unlike hubs, which created a single collision domain, switches created a separate collision domain for each port. This meant that a computer connected to a switch port had dedicated 10 Mbps bandwidth, regardless of how many other computers were connected. Early switches were expensive, costing $100 or more per port, but they provided immediate performance benefits for busy networks. The first switches were essentially bridges with multiple ports, learning MAC addresses and forwarding packets only when necessary.

Full-duplex Ethernet, introduced in the mid-1990s, eliminated collisions entirely by using separate wire pairs for transmitting and receiving data. A computer could send and receive data simultaneously, effectively doubling the available bandwidth. Full-duplex required switches rather than hubs, as hubs couldn't provide the separate transmit and receive channels. This development made 10 Mbps Ethernet feel like 20 Mbps and eliminated the performance degradation associated with network congestion.

Fast Ethernet (100BaseT) arrived in 1995, providing 100 Mbps speeds over the same Category 5 cables used for 10BaseT. The technology used the same CSMA/CD protocol and frame format as traditional Ethernet, making it backward compatible with existing equipment. Fast Ethernet switches could auto-negotiate speeds, automatically selecting 10 Mbps when connected to older equipment and 100 Mbps when connected to Fast Ethernet devices. This smooth migration path helped Fast Ethernet achieve rapid market adoption.

Cable management became a significant challenge as networks grew. Early installations often resembled spaghetti, with cables running in every direction without organization. The introduction of structured cabling systems, patch panels, and cable management hardware improved network reliability and maintainability. Professional installers learned to label every cable, create cable maps, and use proper bend radii to prevent signal degradation. Good cable management practices reduced troubleshooting time and improved network aesthetics.

Network protocols were layered on top of these physical connections. IPX/SPX dominated Novell NetWare networks, while NetBEUI was common on Microsoft networks. TCP/IP was primarily used in Unix environments and academic institutions. Each protocol had different characteristics and performance profiles. Network administrators often ran multiple protocols simultaneously, creating complex routing and bridging configurations. The eventual dominance of TCP/IP simplified network management but required years of migration planning.

The concept of VLANs (Virtual Local Area Networks) emerged in the early 1990s as networks grew larger and more complex. VLANs allowed network administrators to create logical network segments without changing physical connections. A single switch could support multiple VLANs, each with its own broadcast domain and security policies. VLAN technology enabled more flexible network designs and improved security by isolating different types of traffic.
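The mechanism that made this work, later standardized as IEEE 802.1Q, was a 4-byte tag inserted into each Ethernet frame: a 16-bit tag protocol identifier, then a 3-bit priority field, a 1-bit drop-eligible flag, and a 12-bit VLAN ID. A minimal sketch of packing and reading such a tag:

```python
import struct

TPID = 0x8100  # 802.1Q tag protocol identifier

def make_tag(vlan_id, priority=0):
    """Pack the 4-byte 802.1Q tag inserted after the source MAC:
    16-bit TPID, then 3-bit priority, 1-bit DEI, 12-bit VLAN ID."""
    assert 0 <= vlan_id < 4096, "VLAN ID is a 12-bit field"
    tci = (priority << 13) | vlan_id
    return struct.pack("!HH", TPID, tci)

def read_vlan_id(tag):
    """Recover the VLAN ID from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return tci & 0x0FFF

print(read_vlan_id(make_tag(42)))  # → 42
```

A switch simply refuses to forward a frame between ports whose configured VLAN IDs differ, which is how one physical box hosts several isolated broadcast domains.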

Network monitoring and management tools evolved alongside the hardware. SNMP (Simple Network Management Protocol) allowed administrators to monitor switch performance, track utilization, and receive alerts about network problems. Early network management systems required dedicated workstations and specialized software. As networks became more complex, these tools became essential for maintaining performance and diagnosing problems.

The transition from shared media to switched networks represented a fundamental shift in networking philosophy. Instead of all computers sharing a single communication channel, each computer gained dedicated bandwidth and collision-free communication. This change enabled the high-speed networks that would eventually carry internet traffic and support distributed computing applications. The principles established by these early local networks—packet switching, MAC addressing, and hierarchical network design—still govern how data flows through modern networks supporting cloud computing, artificial intelligence workloads, and robotic control systems.