
Most Popular Articles

  • IoT devices will overtake mobile by 2018 with Europe leading the way – Ericsson

    By Scott Bicheno  

    The latest Ericsson Mobility Report forecasts such rapid growth in the number of global IoT devices that they will overtake mobile phones as the largest category of connected device by 2018. Ericsson reckons Western Europe will be the biggest growth driver for IoT devices, forecasting a 5x increase by 2021. This won’t necessarily be the result of a greater appetite for IoT among European consumers, however, with Ericsson saying directives such as eCall for cars and smart meters are compelling the continent to increase its number of connected devices.

    “IoT is now accelerating as device costs fall and innovative applications emerge,” said Rima Qureshi, Chief Strategy Officer at Ericsson. “From 2020, commercial deployment of 5G networks will provide additional capabilities that are critical for IoT, such as network slicing and the capacity to connect exponentially more devices than is possible today.”

    While the majority of IoT devices will be connected via non-cellular means (presumably wired or wifi), cellular IoT devices are forecast to be the fastest-growing category. Ericsson reckons a major reason for that growth will be 3GPP standardization of cellular IoT technologies, by which it’s presumably referring to NB-IoT.

    Other notable findings from the latest report include the fact that global smartphone subscriptions are expected to overtake those of basic phones in Q3 of this year, and that the use of cellular data for smartphone video has doubled among teens in the past year, in contrast to a significant fall in the amount of time they spend watching traditional TV. Additionally, the first devices supporting 1 Gbps LTE download speeds are expected later this year.

    Lastly, Ericsson used the report to draw attention to the need to harmonise 5G spectrum in the frequencies above those currently licensed for mobile but below the 24 GHz+ range addressed at WRC-15, including better accommodation for microwave backhaul. It said the 3.1-4.2 GHz range is considered essential for early deployments of 5G and offered a chart to illustrate how un-harmonised the global microwave backhaul picture currently is.

  • Cisco StackWise and StackWise Plus Technology

    This white paper provides an overview of the Cisco StackWise and Cisco StackWise Plus technologies and the specific mechanisms that they use to create a unified, logical switching architecture through the linkage of multiple, fixed configuration switches. This paper focuses on the following critical aspects of the Cisco StackWise and Cisco StackWise Plus technologies: stack interconnect behavior; stack creation and modification; Layer 2 and Layer 3 forwarding; and quality-of-service (QoS) mechanisms. The goal of the paper is to help the reader understand how the Cisco StackWise and StackWise Plus technologies deliver advanced performance for voice, video, and Gigabit Ethernet applications. This paper first discusses the Cisco Catalyst 3750 Series Switches and StackWise, and then the Cisco Catalyst 3750-E and Catalyst 3750-X Series Switches with StackWise Plus, highlighting the differences between the two. Please note that the Cisco Catalyst 3750-E and Catalyst 3750-X will run StackWise Plus when connected to a stack of all Cisco Catalyst 3750-E and Catalyst 3750-X switches, while they will run StackWise if there are one or more Cisco Catalyst 3750 switches in the stack. (See Figures 1 and 2.)

    Figure 1. Stack of Cisco Catalyst 3750 Series Switches with StackWise Technology

    Figure 2. Stack of Cisco Catalyst 3750-E Series Switches with StackWise and StackWise Plus Technologies

    Technology Overview

    Cisco StackWise technology provides an innovative method for collectively utilizing the capabilities of a stack of switches. Individual switches intelligently join to create a single switching unit with a 32-Gbps switching stack interconnect. Configuration and routing information is shared by every switch in the stack, creating a single switching unit. Switches can be added to and deleted from a working stack without affecting performance.

    The switches are united into a single logical unit using special stack interconnect cables that create a bidirectional closed-loop path. This bidirectional path acts as a switch fabric for all the connected switches. Network topology and routing information is updated continuously through the stack interconnect. All stack members have full access to the stack interconnect bandwidth. The stack is managed as a single unit by a master switch, which is elected from one of the stack member switches.

    Each switch in the stack has the capability to behave as a master or subordinate (member) in the hierarchy. The master switch is elected and serves as the control center for the stack. Both the master and member switches act as forwarding processors. Each switch is assigned a number. Up to nine separate switches can be joined together. The stack can have switches added and removed without affecting stack performance.

    Each stack of Cisco Catalyst 3750 Series Switches has a single IP address and is managed as a single object. This single IP management applies to activities such as fault detection, virtual LAN (VLAN) creation and modification, security, and QoS controls. Each stack has only one configuration file, which is distributed to each member in the stack. This allows each switch in the stack to share the same network topology, MAC address, and routing information. In addition, it allows for any member to become the master, if the master ever fails.

    The Stack Interconnect Functionality

    Cisco StackWise technology unites up to nine individual Cisco Catalyst 3750 switches into a single logical unit, using special stack interconnect cables and stacking software. The stack behaves as a single switching unit that is managed by a master switch elected from one of the member switches. The master switch automatically creates and updates all the switching and optional routing tables. A working stack can accept new members or delete old ones without service interruption.

    Bidirectional Flow

    To efficiently load balance the traffic, packets are allocated between two logical counter-rotating paths. Each counter-rotating path supports 16 Gbps in both directions, yielding a traffic total of 32 Gbps bidirectionally. The egress queues calculate path usage to help ensure that the traffic load is equally partitioned.

    Whenever a frame is ready for transmission onto the path, a calculation is made to see which path has the most available bandwidth. The entire frame is then copied onto this half of the path. Traffic is serviced depending upon its class of service (CoS) or differentiated services code point (DSCP) designation. Low-latency traffic is given priority.
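    As a toy illustration of this path choice (a sketch only, not Cisco's implementation; the capacity and frame values are assumed):

```python
# Sketch: pick whichever half of the bidirectional stack path has the most
# available bandwidth, then copy the entire frame onto it.

def pick_path(paths):
    """Return the index of the path with the most available bandwidth."""
    return max(range(len(paths)),
               key=lambda i: paths[i]["capacity"] - paths[i]["load"])

def transmit(paths, frame_bits):
    """Place a whole frame on the least-loaded path and report which one."""
    i = pick_path(paths)
    paths[i]["load"] += frame_bits
    return i

# Two counter-rotating 16-Gbps paths (units here are arbitrary).
paths = [{"capacity": 16_000, "load": 0},
         {"capacity": 16_000, "load": 0}]
transmit(paths, 1500)  # first frame lands on path 0
transmit(paths, 1500)  # second frame lands on path 1, balancing the load
```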

    When a break is detected in a cable, the traffic is immediately wrapped back across the single remaining 16-Gbps path to continue forwarding.

    Online Stack Adds and Removals

    Switches can be added to and deleted from a working stack without affecting stack performance. When a new switch is added, the master switch automatically configures the unit with the currently running Cisco IOS ® Software image and configuration of the stack. The stack will gather information such as switching table information and update the MAC tables as new addresses are learned. The network manager does not have to do anything to bring up the switch before it is ready to operate. Similarly, switches can be removed from a working stack without any operational effect on the remaining switches. When the stack discovers that a series of ports is no longer present, it will update this information without affecting forwarding or routing.

    Physical Sequential Linkage

    The switches are physically connected sequentially, as shown in Figure 3. A break in any one of the cables will result in the stack bandwidth being reduced to half of its full capacity. Subsecond timing mechanisms detect traffic problems and immediately institute failover. This mechanism restores dual path flow when the timing mechanisms detect renewed activity on the cable.

    Figure 3. Cisco StackWise Technology Resilient Cabling

    Subsecond Failover

    Within microseconds of a breakage of one part of the path, all data is switched to the active half of the bidirectional path (Figure 4).

    Figure 4. Loopback After Cable Break

    The switches continually monitor the stack ports for activity and correct data transmission. If error conditions cross a certain threshold, or there is insufficient electromagnetic contact of the cable with its port, the switch detecting this then sends a message to its nearest neighbor opposite from the breakage. Both switches then divert all their traffic onto the working path.

    Single Management IP Address

    The stack receives a single IP address as a part of the initial configuration. After the stack IP address is created, the physical switches linked to it become part of the master switch group. When connected to a group, each switch will use the stack IP address. When a new master is elected, it uses this IP address to continue interacting with the network.

    Stack Creation and Modification

    Stacks are created when individual switches are joined together with stacking cables. When the stack ports detect electromechanical activity, each port starts to transmit information about its switch. When the complete set of switches is known, the stack elects one of the members to be the master switch, which will be responsible for maintaining and updating configuration files, routing information, and other stack information. The entire stack will have a single IP address that will be used by all the switches.

    1:N Master Redundancy

    1:N master redundancy allows each stack member to serve as a master, providing the highest reliability for forwarding. Each switch in the stack can serve as a master, creating a 1:N availability scheme for network control. In the unlikely event of a single unit failure, all other units continue to forward traffic and maintain operation.

    Master Switch Election

    The stack behaves as a single switching unit that is managed by a master switch elected from one of the member switches. The master switch automatically creates and updates all the switching and optional routing tables. Any member of the stack can become the master switch. Upon installation, or reboot of the entire stack, an election process occurs among the switches in the stack. There is a hierarchy of selection criteria for the election.

    1. User priority - The network manager can select a switch to be master.

    2. Hardware and software priority - This will default to the unit with the most extensive feature set. The Cisco Catalyst 3750 IP Services (IPS) image has the highest priority, followed by Cisco Catalyst 3750 switches with IP Base Software Image (IPB).

    Catalyst 3750-E and Catalyst 3750-X run the Universal Image. The feature set on the universal image is determined by the purchased license. The "show version" command will list the operating license level for each switch member in the stack.

    3. Default configuration - If a switch has preexisting configuration information, it will take precedence over switches that have not been configured.

    4. Uptime - The switch that has been running the longest is selected.

    5. MAC address - Each switch reports its MAC address to all its neighbors for comparison. The switch with the lowest MAC address is selected.
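    The election order above can be sketched as a sort key (a hypothetical model for illustration; the field names are not Cisco data structures):

```python
# Sketch: rank candidate switches by the criteria listed above, in order.

def election_key(sw):
    # Higher user priority, richer feature set, existing configuration and
    # longer uptime all win; the lowest MAC address is the final tie-breaker.
    return (-sw["user_priority"], -sw["feature_rank"],
            -sw["has_config"], -sw["uptime"], sw["mac"])

def elect_master(switches):
    return min(switches, key=election_key)

stack = [
    {"name": "sw1", "user_priority": 1, "feature_rank": 2, "has_config": 1,
     "uptime": 500, "mac": "0000.aaaa.0002"},
    {"name": "sw2", "user_priority": 1, "feature_rank": 2, "has_config": 1,
     "uptime": 500, "mac": "0000.aaaa.0001"},
]
elect_master(stack)["name"]  # "sw2" - lowest MAC wins when all else ties
```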

    Master Switch Activities

    The master switch acts as the primary point of contact for IP functions such as Telnet sessions, pings, command-line interface (CLI), and routing information exchange. The master is responsible for downloading forwarding tables to each of the subordinate switches. Multicast and unicast routing tasks are implemented from the master. QoS and access control list (ACL) configuration information is distributed from the master to the subordinates. When a new subordinate switch is added, or an existing switch removed, the master will issue a notification of this event and all the subordinate switches will update their tables accordingly.

    Shared Network Topology Information

    The master switch is responsible for collecting and maintaining correct routing and configuration information. It keeps this information current by periodically sending copies or updates to all the subordinate switches in the stack. When a new master is elected, it reapplies the running configuration from the previous master to help ensure user and network continuity. Note that the master performs routing control and processing. Each individual switch in the stack will perform forwarding based on the information distributed by the master.

    Subordinate Switch Activities

    Each switch has tables for storing its own local MAC addresses as well as tables for the other MAC addresses in the stack. The master switch keeps tables of all the MAC addresses reported to the stack. The master also creates a map of all the MAC addresses in the entire stack and distributes it to all the subordinates. Each switch becomes aware of every port in the stack. This eliminates repetitive learning processes and creates a much faster and more efficient switching infrastructure for the system.

    Subordinate switches keep their own spanning trees for each VLAN that they support. The StackWise ring ports will never be put into a Spanning Tree Protocol blocking state. The master switch keeps a copy of all spanning tree tables for each VLAN in the stack. When a new VLAN is added or removed, all the existing switches will receive a notification of this event and update their tables accordingly.

    Subordinate switches wait to receive copies of the running configurations from the master and begin to start transmitting data upon receipt of the most current information. This helps ensure that all the switches will use only the most current information and that there is only one network topology used for forwarding decisions.

    Multiple Mechanisms for High Availability

    The Cisco StackWise technology supports a variety of mechanisms for creating high resiliency in a stack.

    Cross-Stack EtherChannel® technology - Multiple switches in a stack can create an EtherChannel connection. Loss of an individual switch will not affect connectivity for the other switches.

    Equal cost routes - Switches can support dual homing to different routers for redundancy.

    1:N master redundancy - Every switch in the stack can act as the master. If the current master fails, another master is elected from the stack.

    Stacking cable resiliency - When a break in the bidirectional loop occurs, the switches automatically begin sending information over the half of the loop that is still intact. If the entire 32 Gbps of bandwidth is being used, QoS mechanisms will control traffic flow to keep jitter and latency-sensitive traffic flowing while throttling lower priority traffic.

    Online insertion and removal - Switches can be added and deleted without affecting performance of the stack.

    Distributed Layer 2 forwarding - In the event of a master switch failure, individual switches will continue to forward information based on the tables they last received from the master.

    RPR+ for Layer 3 resiliency - Each switch is initialized for routing capability and is ready to be elected as master if the current master fails. Subordinate switches are not reset so that Layer 2 forwarding can continue uninterrupted. Layer 3 Nonstop Forwarding (NSF) is also supported when two or more nodes are present in a stack.

    Layer 2 and Layer 3 Forwarding

    Cisco StackWise technology offers an innovative method for the management of Layer 2 and Layer 3 forwarding. Layer 2 forwarding is done with a distributed method. Layer 3 is done in a centralized manner. This delivers the greatest possible resiliency and efficiency for routing and switching activities across the stack.

    Forwarding Resiliency During Master Change

    When one master switch becomes inactive and while a new master is elected, the stack continues to function. Layer 2 connectivity continues unaffected. The new master uses its hot standby unicast table to continue processing unicast traffic. Multicast tables and routing tables are flushed and reloaded to avoid loops. Layer 3 resiliency is protected with NSF, which gracefully and rapidly transitions Layer 3 forwarding from the old to new master node.

    High-Availability Architecture for Routing Resiliency Using Routing Processor Redundancy+

    The mechanism used for high availability in routing during the change in masters is called Routing Processor Redundancy+ (RPR+). It is used in the Cisco 12000 and 7500 Series Routers and the Cisco Catalyst 6500 Series Switch products for high availability. Each subordinate switch with routing capability is initialized and ready to take over routing functions if the master fails. Each subordinate switch is fully initialized and connected to the master. The subordinates have identical interface addresses, encapsulation types, and interface protocols and services. The subordinate switches continually receive and integrate synchronized configuration information sent by the current master and monitor their readiness to operate through the continuous execution of self-tests. Reestablishment of routes and links happens more quickly than in normal Layer 3 devices because of the lack of time needed to initialize the routing interfaces. RPR+ coupled with NSF provides the highest performance failover forwarding.

    Adding New Members

    When the switching stack has established a master, any new switch added afterward automatically becomes a subordinate. All the current routing and addressing information is downloaded into the subordinate so that it can immediately begin transmitting traffic. Its ports become identified with the IP address of the master switch. Global information, such as QoS configuration settings, is downloaded into the new subordinate member.

    Cisco IOS Software Images Must Be Identical

    The Cisco StackWise technology requires that all units in the stack run the same release of Cisco IOS Software. When the stack is first built, it is recommended that all of the stack members have the same software feature set - either all IP Base or all IP Services. This is because later upgrades of Cisco IOS Software mandate that all the switches be upgraded to the same version as the master.

    Automatic Cisco IOS Software Upgrade/Downgrade from the Master Switch

    When a new switch is added to an existing stack, the master switch communicates with the switch to determine if the Cisco IOS Software image is the same as the one on the stack. If it is the same, the master switch sends the stack configuration to the device and the ports are brought online. If the Cisco IOS Software image is not the same, one of three things will occur:

    1. If the hardware of the new switch is supported by the Cisco IOS Software image running on the stack, the master will by default download the Cisco IOS Software image in the master's Flash memory to the new switch, send down the stack configuration, and bring the switch online.

    2. If the hardware of the new switch is supported by the Cisco IOS Software image running on the stack and the user has configured a Trivial File Transfer Protocol (TFTP) server for Cisco IOS Software image downloads, then the master will automatically download the Cisco IOS Software image from the TFTP server to the new switch, configure it, then bring it online.

    3. If the hardware of the new switch is not supported by the Cisco IOS Software image running on the stack, the master will put the new switch into a suspended state, notify the user of a version incompatibility, and wait until the user upgrades the master to a Cisco IOS Software image that supports both types of hardware. The master will then upgrade the rest of the stack to this version, including the new switch, and bring the stack online.
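    The three outcomes can be summarised in a small decision sketch (function and argument names are assumptions for illustration, not a Cisco API):

```python
# Sketch: what the master does when a new switch joins the stack.

def join_action(hardware_supported, image_matches, tftp_configured):
    if image_matches:
        return "send stack config and bring the switch online"
    if not hardware_supported:
        return "suspend the switch and notify the user of the incompatibility"
    if tftp_configured:
        return "download the image from the TFTP server, configure, bring online"
    return "download the image from the master's flash, configure, bring online"
```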

    Upgrades Apply to All Devices in the Stack

    Because the switch stack behaves like a single unit, upgrades apply universally to all members of the stack at once. This means that if an original stack contains a combination of IP Base and IP Services software feature sets on the various switches, the first time a Cisco IOS Software upgrade is applied, all units in the stack will take on the characteristics of the image applied. While this makes it much more efficient to add functionality to the stack, it is important to make sure all applicable upgrade licenses have been purchased before allowing units to be upgraded from IP Base to IP Services functions. Otherwise, those units will be in violation of Cisco IOS Software policy.

    Smart Unicast and Multicast - One Packet, Many Destinations

    The Cisco StackWise technology uses an extremely efficient mechanism for transmitting unicast and multicast traffic. Each data packet is put on the stack interconnect only once. This includes multicast packets. Each data packet has a 24-byte header with an activity list for the packet as well as a QoS designator. The activity list specifies the port destination or destinations and what should be done with the packet. In the case of multicast, the master switch identifies which of the ports should receive a copy of the packets and adds a destination index for each port. One copy of the packet is put on the stack interconnect. Each switch port that owns one of the destination index addresses then copies this packet. This creates a much more efficient mechanism for the stack to receive and manage multicast information (Figure 5).

    Figure 5. Comparison of Normal Multicast in Stackable Switches and Smart Multicast in Cisco Catalyst 3750 Series Switches Using Cisco StackWise Technology
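    The mechanism can be sketched as follows (illustrative names only; a model of the behaviour described above, not Cisco code):

```python
# Sketch of "one packet, many destinations": the master places a single copy
# on the stack interconnect together with a destination index per target
# port; each port that owns one of those indexes copies the packet locally.

def smart_multicast(packet, dest_indexes, stack_ports):
    ring_transmissions = 1  # the packet is put on the interconnect only once
    delivered = {port: packet for port in dest_indexes if port in stack_ports}
    return ring_transmissions, delivered

sent, delivered = smart_multicast(b"frame", dest_indexes=[1, 4, 7],
                                  stack_ports={1, 4, 7, 9})
# sent == 1 no matter how many ports receive a copy
```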

    QoS Mechanisms

    QoS provides granular control where the user meets the network. This is particularly important for networks migrating to converged applications where differential treatment of information is essential. QoS is also necessary for the migration to Gigabit Ethernet speeds, where congestion must be avoided.

    QoS Applied at the Edge

    Cisco StackWise supports a complete and robust QoS model, as shown in Figure 6.

    Figure 6. QoS Model

    The Cisco Catalyst 3750-E, Catalyst 3750-X and Cisco Catalyst 3750 support 2 ingress queues and 4 egress queues. Thus the Cisco Catalyst 3750-E, Catalyst 3750-X and Cisco Catalyst 3750 switches can not only limit the traffic destined for the front-side ports, but also limit the amounts and types of traffic destined for the stack ring interconnect. Both the ingress and egress queues can be configured for one queue to be serviced as a priority queue that gets completely drained before the other weighted queue(s) get serviced, or each queue set can be configured to have all weighted queues.

    StackWise employs Shaped Round Robin (SRR). SRR is a scheduling service for specifying the rate at which packets are dequeued. SRR has two modes, Shaped and Shared (the default). Shaped mode is only available on the egress queues. Shaped egress queues reserve a set amount of port bandwidth and then send evenly spaced packets per the reservation. Shared egress queues are also guaranteed a configured share of bandwidth, but do not reserve the bandwidth: in Shared mode, if a higher priority queue is empty, instead of the servicer waiting for that reserved bandwidth to expire, the lower priority queue can take the unused bandwidth.

    Neither Shaped SRR nor Shared SRR is better than the other. Shared SRR is used to get the maximum efficiency out of a queuing system, because unused queue slots can be used by queues with excess traffic; this is not possible in a standard Weighted Round Robin (WRR). Shaped SRR is used to shape a queue or set a hard limit on how much bandwidth a queue can use, and queues can be shaped within a port's overall shaped rate. In addition to queue shaping, the Cisco Catalyst 3750-E can rate limit a physical port, so queues can be shaped within an overall rate-limited port value.
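    As a concrete illustration, SRR is configured per interface on the Catalyst 3750 family with commands of the following form (the interface and weight values are examples only, not recommendations):

```
interface GigabitEthernet1/0/1
 srr-queue bandwidth shape 10 0 0 0     ! egress queue 1 shaped to 1/10 of the port rate
 srr-queue bandwidth share 10 20 30 40  ! queues with shape 0 share remaining bandwidth by weight
```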

    As stated earlier, SRR differs from WRR. In the examples shown in Figure 7, strict priority queuing is not configured and Q4 is given the highest weight, Q3 lower, Q2 lower, and Q1 the lowest. With WRR, queues are serviced based on their weight: Q1 is serviced for the Weight 1 period of time, Q2 for the Weight 2 period of time, and so forth. The servicing mechanism moves from queue to queue, servicing each for its weighted amount of time. With SRR, weights are still followed; however, SRR services Q1, Q2, Q3, and Q4 in a different way. It does not wait at each queue and service it for a weighted amount of time before moving on to the next queue. Instead, SRR makes several rapid passes at the queues; in each pass, each queue may or may not be serviced. For a given pass, the more highly weighted queues are more likely to be serviced than the lower priority queues. Over a given time, the number of packets serviced from each queue is the same for SRR and WRR, but the ordering is different. With WRR, one sees a bunch of packets from Q1, then a bunch of packets from Q2, and so on; with SRR, one sees a weighted interleaving of packets. In the example in Figure 7, for WRR, all packets marked 1 are serviced, then 2, then 3, and so on through 5. In SRR, all A packets are serviced, then B, C, and D. SRR is an evolution of WRR that protects against overwhelming buffers with huge bursts of traffic by using a smoother round-robin mechanism.

    Figure 7. Queuing
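    The ordering difference can be demonstrated with a toy scheduler (a simplification for illustration; weights here are treated as per-round packet counts):

```python
# Sketch: WRR drains each queue's weighted share in a burst; SRR takes at
# most one packet per queue per pass, interleaving the same totals.

def wrr_order(weights):
    order = []
    for q, w in enumerate(weights):
        order += [q] * w  # queue q is serviced for its whole weight at once
    return order

def srr_order(weights):
    remaining, order = list(weights), []
    while any(remaining):
        for q in range(len(remaining)):  # one rapid pass over the queues
            if remaining[q]:
                order.append(q)
                remaining[q] -= 1
    return order

wrr_order([2, 1, 3])  # [0, 0, 1, 2, 2, 2] - bursts, queue by queue
srr_order([2, 1, 3])  # [0, 1, 2, 0, 2, 2] - same totals, interleaved
```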

    In addition to advanced queue servicing mechanisms, congestion avoidance mechanisms are supported. Weighted tail drop (WTD) can be applied on any or all of the ingress and egress queues. WTD is a congestion-avoidance mechanism for managing the queue lengths and providing drop precedences for different traffic classifications. Configurable thresholds determine when to drop certain types of packets. The thresholds can be based on CoS or DSCP values. As a queue fills up, lower priority packets are dropped first. For example, one can configure WTD to drop CoS 0 through 5 when the queue is 60% full. In addition, multiple thresholds and levels can be set on a per queue basis.
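    On the Catalyst 3750 family, WTD thresholds of this kind are set per queue-set with commands such as the following (the threshold and CoS values are illustrative):

```
mls qos queue-set output 1 threshold 2 40 60 100 100        ! queue 2: drop thresholds at 40% and 60% full
mls qos srr-queue output cos-map queue 2 threshold 1 0 1 2  ! map CoS 0-2 to queue 2, threshold 1
```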

    Jumbo Frame Support

    The Cisco StackWise technology supports granular jumbo frames up to 9 KB on the 10/100/1000 copper ports for Layer 2 forwarding. Layer 3 forwarding of jumbo packets is not supported by the Cisco Catalyst 3750. However, the Cisco Catalyst 3750-E and Catalyst 3750-X do support Layer 3 jumbo frame forwarding.

    Smart VLANs

    VLAN operation is the same as multicast operation. If the master detects information that is destined for multiple VLANs, it creates one copy of the packet with many destination addresses. This enables the most effective use of the stack interconnect (Figure 8).

    Figure 8. Smart VLAN Operations

    Cross-Stack EtherChannel Connections

    Because all the ports in a stack behave as one logical unit, EtherChannel technology can operate across multiple physical devices in the stack. Cisco IOS Software can aggregate up to eight separate physical ports from any switches in the stack into one logical channel uplink. Up to 48 EtherChannel groups are supported on a stack.
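    A cross-stack channel is configured like any other EtherChannel; the member ports simply reside on different stack members (the interface numbers below are illustrative):

```
interface range GigabitEthernet1/0/25 , GigabitEthernet2/0/25
 channel-group 1 mode active   ! one port from stack member 1, one from member 2, bundled with LACP
```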

    StackWise Plus

    StackWise Plus is an evolution of StackWise. StackWise Plus is only supported on the Cisco Catalyst 3750-E and Catalyst 3750-X switch families. The main differences between StackWise Plus and StackWise are as follows:

    1. For unicast packets, StackWise Plus supports destination stripping, unlike StackWise, which uses source stripping. Figure 9 shows a packet being sent from Switch 1 to Switch 2 under both schemes. Source stripping means that when a packet is sent on the ring, it is passed to the destination, which copies it and then lets it continue all the way around the ring; once the packet has traveled all the way around and returns to the source, it is stripped off the ring. This means bandwidth is used up all the way around the ring, even if the packet is destined for a directly attached neighbor. Destination stripping means that when the packet reaches its destination, it is removed from the ring and continues no further, leaving the rest of the ring bandwidth free to be used. Thus, the throughput performance of the stack is multiplied to a minimum value of 64 Gbps bidirectionally. This ability to free up bandwidth is sometimes referred to as spatial reuse. Note: even in StackWise Plus, broadcast and multicast packets must use source stripping, because a packet may have multiple targets on the stack.

    Figure 9. Stripping

    2. StackWise Plus can locally switch. StackWise cannot. Furthermore, in StackWise, since there is no local switching and since there is source stripping, even locally destined packets must traverse the entire stack ring. (See Figure 10.)

    Figure 10. Switching

    3. StackWise Plus supports up to two line-rate 10 Gigabit Ethernet ports per Cisco Catalyst 3750-E.
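    The bandwidth effect of the two stripping schemes can be modelled crudely (a toy sketch; hop counts stand in for consumed ring segments):

```python
# Sketch: ring segments consumed by one unicast packet on an N-switch ring.

def source_stripping_hops(n_switches, src, dst):
    # The packet circles the whole ring and is stripped back at the source,
    # so every segment is consumed regardless of where the destination sits.
    return n_switches

def destination_stripping_hops(n_switches, src, dst):
    # The packet is removed at the destination; the rest of the ring stays
    # free for other traffic (spatial reuse).
    return (dst - src) % n_switches

source_stripping_hops(9, src=0, dst=1)       # 9 - whole ring consumed
destination_stripping_hops(9, src=0, dst=1)  # 1 - only the neighbor hop used
```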

    Combining StackWise Plus and StackWise in a Single Stack

    Cisco Catalyst 3750-E and Catalyst 3750-X StackWise Plus and Cisco Catalyst 3750 StackWise switches can be combined in the same stack. When this happens, the Cisco Catalyst 3750-E and Catalyst 3750-X switches negotiate down from StackWise Plus mode to StackWise mode; that is, they no longer perform destination stripping. However, the Cisco Catalyst 3750-E and Catalyst 3750-X will retain their ability to perform local switching.


    Management Options

    Products using the Cisco StackWise and StackWise Plus technologies can be managed by the CLI or by network management packages. Cisco Cluster Management Suite (CMS) Software has been developed specifically for management of Cisco stackable switches. Special wizards for stack units in Cisco CMS Software allow the network manager to configure all the ports in a stack with the same profile. Predefined wizards for data, voice, video, multicast, security, and inter-VLAN routing functions allow the network manager to set all the port configurations at once.

    The Cisco StackWise and StackWise Plus technologies are also manageable by CiscoWorks.


    Conclusion

    Cisco StackWise and StackWise Plus technologies allow you to increase the resiliency and versatility of your network edge to accommodate the evolution toward higher speeds and converged applications.
  • Polarization-Maintaining Fiber Tutorial

    Introduction to Polarization

    As light passes through a point in space, the direction and amplitude of the vibrating electric field traces out a path in time. A polarized lightwave signal is represented by electric and magnetic field vectors that lie at right angles to one another in a transverse plane (a plane perpendicular to the direction of travel). Polarization is defined in terms of the pattern traced out in the transverse plane by the electric field vector as a function of time.

    Polarization can be classified as linear, elliptical, or circular, of which linear polarization is the simplest. Any of these polarization states can become a problem in fiber optic transmission.

    FiberStore Polarization Coordinate System

    More and more telecommunication and fiber optic measuring systems rely on devices that analyse the interference of two optical waves. The information given by the interference cannot be used unless the combined amplitude is stable in time, which means that the waves are in the same state of polarization. In those cases it is necessary to use fibers that transmit a stable state of polarization, and polarization-maintaining fiber was developed for this purpose. (Polarization-maintaining fiber will be called PM fiber for short in the following sections.)


    What Is PM Fiber?

    The polarization of light propagating in an ordinary fiber gradually changes in an uncontrolled (and wavelength-dependent) way, which also depends on any bending of the fiber and on its temperature. Specialised fibers are therefore required for applications whose optical performance is affected by the polarization of the light travelling through the fiber. Many systems, such as fiber interferometers and sensors, fiber lasers and electro-optic modulators, also suffer from Polarization-Dependent Loss (PDL) that can affect system performance. This problem can be addressed by using a specialty fiber, the so-called PM fiber.


    Principle of PM Fiber

    Provided that the polarization of light launched into the fiber is aligned with one of the birefringent axes, this polarization state will be preserved even if the fiber is bent. The physical principle behind this can be understood in terms of coherent mode coupling. The propagation constants of the two polarization modes are different due to the strong birefringence, so the relative phase of the copropagating modes drifts away rapidly. Therefore, any disturbance along the fiber can effectively couple the two modes only if it has a significant spatial Fourier component with a wavenumber that matches the difference between the propagation constants of the two polarization modes. If this difference is large enough, the usual disturbances in the fiber vary too slowly to cause effective mode coupling. The principle of PM fiber, therefore, is to make this difference large enough.
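    The "large enough" difference has a standard measure: the beat length, the distance over which the two polarization modes accumulate a full 2π of relative phase. A short beat length means a disturbance must vary on a correspondingly short spatial scale to couple the modes. A minimal sketch, with the 5e-4 birefringence value assumed as a typical order of magnitude rather than a quoted specification:

```python
def beat_length_m(wavelength_m, modal_birefringence):
    """Beat length L_b = lambda / B, where B = |n_slow - n_fast|.
    Disturbances varying much more slowly than L_b cannot couple
    the two polarization modes."""
    return wavelength_m / modal_birefringence

# Assumed figures: 1550 nm light in a PM fiber with B ~ 5e-4.
lb = beat_length_m(1550e-9, 5e-4)
print(f"beat length = {lb * 1e3:.1f} mm")  # beat length = 3.1 mm
```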

    In the most common optical fiber telecommunications applications, PM fiber is used to guide light in a linearly polarised state from one place to another. To achieve this result, several conditions must be met. Input light must be highly polarised to avoid launching both slow and fast axis modes, a condition in which the output polarization state is unpredictable.

    The electric field of the input light must be accurately aligned with a principal axis (the slow axis by industry convention) of the fiber for the same reason. If the PM fiber path consists of segments of fiber joined by fiber optic connectors or splices, rotational alignment of the mating fibers is critical. In addition, connectors must be installed on the PM fibers in such a way that internal stresses do not cause the electric field to be projected onto the unintended axis of the fiber.


    Types of PM Fibers

    Circular PM Fibers

    It is possible to introduce circular birefringence in a fiber so that its two orthogonally polarized modes are clockwise and counter-clockwise circularly polarized; such a fiber is called a circular PM fiber. The most common way to achieve circular birefringence in a round (axially symmetrical) fiber is to twist it to produce a difference between the propagation constants of the clockwise and counterclockwise circularly polarized fundamental modes, thereby decoupling the two circular polarization modes. It is also possible to apply external stress whose direction varies azimuthally along the fiber length, causing circular birefringence in the fiber. If a fiber is twisted, a torsional stress is introduced and leads to optical activity in proportion to the twist.

    Circular birefringence can also be obtained by making the core of a fiber follow a helical path inside the cladding. This forces the propagating light, constrained to move along a helical path, to experience an optical rotation. The birefringence achieved is due purely to geometrical effects. Such fibers operate in a single mode while suffering high losses for higher-order modes.

    Circular PM fiber with a helical core finds applications in sensing electric current through the Faraday effect. These fibers have been fabricated from composite rod-and-tube preforms, where the helix is formed by spinning the preform during the fiber drawing process.


    Linear PM Fibers

    There are mainly two types of linear PM fibers: the single-polarization type and the birefringent type. The single-polarization type is characterized by a large difference in transmission loss between the two polarizations of the fundamental mode, while the birefringent type is such that the propagation constants of the two polarizations of the fundamental mode are significantly different. Linear polarization may be maintained using various fiber designs, which are reviewed next.

    Linear PM Fibers With Side Pits and Side Tunnels

    Side-pit fibers incorporate two pits of refractive index lower than the cladding index, one on each side of the central core. This type of fiber has a W-type index profile along the x-axis and a step-index profile along the y-axis. A side-tunnel fiber is a special case of the side-pit structure. In these linear PM fibers, a geometrical anisotropy is introduced in the core to obtain a birefringent fiber.


    Linear PM Fibers With Stress Applied Parts

    An effective method of introducing high birefringence in optical fibers is to apply an asymmetric stress with two-fold geometrical symmetry to the core of the fiber. Through the photoelastic effect, the stress changes the refractive index of the core seen by the modes polarized along the principal axes of the fiber, resulting in birefringence. The required stress is obtained by introducing two identical and isolated Stress Applied Parts (SAPs), positioned in the cladding region on opposite sides of the core. As long as the refractive index of the SAPs is less than or equal to that of the cladding, no spurious mode propagates through the SAPs.

    The most common shapes used for the SAPs are the bow-tie shape and the circular shape; these fibers are referred to as Bow-tie Fiber and PANDA Fiber respectively. The cross sections of these two types of fibers are shown in the figure below. The modal birefringence introduced by these fibers includes both geometrical and stress-induced birefringence. In the case of a circular-core fiber, the geometrical birefringence is negligibly small. It has been shown that placing the SAPs close to the core improves the birefringence of these fibers, but they must not be placed so close that the fiber loss increases, especially since the SAPs are doped with materials other than silica. The PANDA fiber has been further improved to achieve high modal birefringence, very low loss and low cross-talk.

    PANDA Fiber and Bow-tie Fiber

    PANDA Fiber (left) and Bow-tie Fiber (right). The built-in stress elements made from a different type of glass are shown with a darker gray tone.

    Tips: At present the most popular PM fiber in the industry is the circular PANDA fiber. One advantage of PANDA fiber over most other PM fibers is that its core size and numerical aperture are compatible with regular single mode fiber. This ensures minimum losses in devices using both types of fibers.


    Linear PM Fibers With Elliptical Structures

    The first proposal for a practical low-loss single-polarization fiber was studied experimentally for three fiber structures: elliptical-core, elliptical-clad, and elliptical-jacket fibers. Early research on elliptical-core fibers dealt with the computation of the polarization birefringence. In the first stage, the propagation characteristics of rectangular dielectric waveguides were used to estimate the birefringence of elliptical-core fibers. In the first experiment with PM fiber, a fiber having a dumbbell-shaped core was fabricated. The beat length can be reduced by increasing the core-cladding refractive index difference. However, the index difference cannot be increased too much due to practical limitations: increasing it raises the transmission loss, and splicing becomes difficult because the core radius must be reduced. Typical birefringence values for elliptical-core fibers are higher than those for elliptical-clad fibers; however, losses are also higher in elliptical-core fibers than in elliptical-clad fibers.


    Linear PM Fibers With Refractive Index Modulation

    One way to increase the bandwidth of single-polarization fiber, which separates the cutoff wavelength of the two orthogonal fundamental modes, is by selecting a refractive-index profile which allows only one polarization state to be in cutoff. High birefringence was achieved by introducing an azimuthal modulation of the refractive index of the inner cladding in a three-layer elliptical fiber. A perturbation approach was employed to analyze the three-layer elliptical fiber, assuming a rectangular-core waveguide as the reference structure. Examination of birefringence in three-layer elliptical fibers demonstrated that a proper azimuthal modulation of the inner cladding index can increase the birefringence and extend the wavelength range for single-polarization operation.

    Another refractive-index profile of this kind is the Butterfly profile. It is an asymmetric W profile, consisting of a uniform core surrounded by a cladding in which the profile has a maximum value of ncl and varies both radially and azimuthally, with maximum depression along the x-axis. This profile has two attributes that enable single-mode, single-polarization operation. First, the profile is not symmetric, which makes the propagation constants of the two orthogonal fundamental modes dissimilar; second, the depression within the cladding ensures that each mode has a cutoff wavelength.

    The butterfly fiber is weakly guiding, so modal fields and propagation constants can be determined from solutions of the scalar wave equation. The solutions involve trigonometric and Mathieu functions describing the transverse coordinate dependence in the core and cladding of the fiber. These functions are not orthogonal to one another, which requires an infinite set of each to describe the modal fields in the different regions and satisfy the boundary conditions. Plots of the geometrical birefringence versus the normalized frequency V show that increasing the asymmetry, through the depth of the refractive-index depression along the x-axis, increases the maximum value of the birefringence and the value of V at which it occurs. A peak value of birefringence is characteristic of noncircular fibers. The modal birefringence can be increased by introducing anisotropy into the fiber, which can be described by attributing different refractive-index profiles to the two polarizations of a mode. The geometric birefringence is smaller than the anisotropic birefringence. However, the depression in the cladding of the butterfly profile gives the two polarizations of the fundamental mode cutoff wavelengths, which are separated by a wavelength window in which single-polarization single-mode operation is possible.


    Applications of PM Fibers

    PM fibers are applied in devices where the polarization state cannot be allowed to drift, e.g. as a result of temperature changes. Examples are fiber interferometers and certain fiber lasers. A disadvantage of using such fibers is that usually an exact alignment of the polarization direction is required, which makes production more cumbersome. Also, propagation losses are higher than for standard fiber, and not all kinds of fibers are easily obtained in polarization-preserving form.

    PM fibers are used in special applications, such as in fiber optic sensing, interferometry and quantum key distribution. They are also commonly used in telecommunications for the connection between a source laser and a modulator, since the modulator requires polarized light as input. They are rarely used for long-distance transmission, because PM fiber is expensive and has higher attenuation than single mode fiber.


    Requirements for Using PM Fibers

    Termination: When PM fibers are terminated with fiber connectors, it is very important that the stress rods line up with the connector, usually in line with the connector key.

    Splicing: PM fiber also requires a great deal of care when it is spliced. Not only must the X, Y and Z alignment be perfect when the fiber is melted together; the rotational alignment must also be perfect, so that the stress rods align exactly.

    Another requirement is that the launch conditions at the optical fiber end face must be consistent with the direction of the transverse major axis of the fiber cross section.
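    The cost of imperfect rotational alignment can be estimated. If the launch field is rotated by an angle θ from the slow axis, a fraction sin²θ of the power lands on the fast axis, bounding the achievable polarization extinction ratio at roughly −10·log10(tan²θ). A sketch under that simple model (the function name is illustrative):

```python
import math

def extinction_ratio_db(misalignment_deg):
    """Best-case polarization extinction ratio for a rotational
    misalignment theta between the launch field and the slow axis:
    ER = -10 * log10(tan(theta)^2)."""
    t = math.tan(math.radians(misalignment_deg))
    return -10.0 * math.log10(t * t)

for deg in (0.5, 1.0, 2.0, 5.0):
    print(f"{deg:4.1f} deg -> {extinction_ratio_db(deg):5.1f} dB")
```

    Even a 1 degree rotational error caps the extinction ratio near 35 dB, which is why keyed connectors and careful rotational splice alignment matter so much.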

  • Fiber Optic Patch Panels Tutorial

    What Is Fiber Optic Patch Panel?

    A fiber optic patch panel, or fiber optic patch bay, is a common cable-management facility. It provides a series of connection points for electronic equipment, and the connections are mainly made with fiber optic patch cables. The patch panel allows circuits to be easily arranged and rearranged by simply plugging and unplugging the patch cables, or changing the circuit of selected signals, without the use of expensive dedicated switching equipment. It can be an open box used to protect the bare fiber and the optical fiber cables, while also providing space for fusion splicing and component connections through fiber adapters. When not in use, all fiber optic connectors, patch cables and adapters should be kept away from dust. Fiber optic patch panels increase the installation density of fiber optic cabling and provide more convenient organization and management.

    A typical fiber optic patch panel has jacks on the front side to receive short patch cables, while on the back of the panel there are either jacks or punch-down blocks that receive the connections of longer and more permanent cables. Patch panels are often used to connect several computers by linking them via the panel, which enables the LAN to connect to the Internet or another WAN.

    Types of Fiber Optic Patch Panels

    Depending on how they are installed, there are mainly two types of fiber optic patch panels: wall-mounted patch panels and rack-mounted patch panels.

    Wall-Mounted Fiber Optic Patch Panel
    Wall-mounted fiber optic patch panels basically keep 12 different fibers separated from one another. If the amount of the fiber is more than 12, the extra fibers can be moved to a second panel or an engineer can use a panel that is designed to hold more fibers separately. The wall-mounted patch panels can be constructed to hold up to 144 fibers at once.
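    Using the figures above (12 fibers per panel segment and a 144-fiber ceiling per wall-mounted enclosure; both taken from the text, not from a specific datasheet), the panel count for a given fiber run is simple ceiling arithmetic. The `panels_needed` helper is illustrative:

```python
import math

def panels_needed(fiber_count, fibers_per_panel=12, max_fibers=144):
    """How many 12-fiber panel segments a wall-mounted enclosure needs.
    Defaults mirror the figures quoted in the text above."""
    if fiber_count > max_fibers:
        raise ValueError("exceeds one enclosure; split across enclosures")
    return math.ceil(fiber_count / fibers_per_panel)

print(panels_needed(12))   # 1
print(panels_needed(13))   # 2
print(panels_needed(144))  # 12
```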

    Wall-mounted fiber optic patch panels use internal fiber optic adapter panels, patch cables and pigtails to perform optical fiber distribution. They provide protective connections for the fiber cables and pigtails in fiber optic cabling and user terminal applications. The patch panels are installed on indoor walls and provide a flexible fiber management system for transitioning outside-plant cable to inside cable and connector assemblies.

    Wall-Mounted Fiber Optic Patch Panel


    Rack-Mounted Fiber Optic Patch Panel
    Rack-mounted fiber optic patch panels hold the fibers horizontally and are often designed to open like a drawer. The slide-out structure offers engineers easy access to the optical fibers inside. Rack-mounted patch panels are available with different kinds of fiber optic adapter ports and pre-installed inner trays and accessories. Fiber optic pigtails of different connector types, such as SC, FC, ST, LC and E2000, are also optional. In addition, rack-mounted patch panels can be customized for the quantity of optical fibers.

    Rack-mounted fiber patch panels are used to terminate and distribute optical fiber cables. They make it convenient to organize and connect fiber optic links. These patch panels are used with many fiber optic products, such as DWDM MUX/DEMUX units, rack chassis splitters, Optical Distribution Frames (ODF), etc. They are fully stable, with no risk of movement, and offer a secure environment for fiber optic adapters, patch cables and pigtails.

    Rack-Mounted Fiber Optic Patch Panel


    According to different applications, fiber optic patch panels can be classified as following:

    Loaded Fiber Optic Patch Panel
    Loaded fiber optic patch panels are usually designed to fit a standard 19" rack and can provide the best protection for fiber optic applications. Rubber grommets on the back of loaded fiber optic patch panels protect the fiber cables from damage. Each loaded patch panel has fiber splice trays and cable routing spools, and includes zip ties, cable routing clamps, mounting screws, fiber splice sleeves and installation instructions. A special black textured finish gives these loaded fiber patch panels a sleek look in the server rack.

    Loaded Fiber Patch Panel


    Swing-Out Fiber Optic Patch Panel
    Swing-out fiber optic patch panels are lightweight and robust patch panels designed for the installation of up to 48 standard optical fibers. These patch panels offer an economical alternative to metal fiber enclosures. The lower tray is designed to give access to preformed fiber and splice management areas, while making installation and dressing of fiber optic cables easy. All common adapter panel/plate types, including LC, SC, and ST, can be exchanged using front plates that accommodate 6 or 12 duplex, or 12 simplex, adapters each.

    Swing-Out Patch Panel


    Fixed Fiber Optic Patch Panel
    Fixed fiber optic patch panels are 19" rack-mountable panels for connecting up to 24 optical fibers. They are suitable for use with manufactured pigtails or field-installed connectors. The installation size is adapted by using appropriate mounting brackets, while the chassis itself does not change. Fixed patch panels are easy to use and more attractive while retaining a rugged design. These patch panels are easier to terminate, providing greater capacity and easier fiber cable management. Generally, fixed rack-mounted patch panels do not slide out, but we do offer sliding patch panels for even quicker access to the fiber terminations.

    Fixed Fiber Optic Patch Panel


    Slide-Out Fiber Optic Patch Panel
    Slide out fiber optic patch panels fit with standard 19" or 23" racks and are designed to support both patching and splicing in one unit. Each slide-out patch panel has a slide-out master panel with an integrated tray stop to prevent over extension of fiber cables. The slide-out patch panel also has a two-piece top and swell latch door to allow for easy access to the adapter panels.

    Slide-Out Patch Panel


    High-Density Fiber Optic Patch Panel
    High-density fiber optic patch panels have been engineered to significantly increase density for both patching and splicing. High-density patch panels maximize the number of adapter panels per rack unit of height, so by utilizing LC quad-style adapters you can effectively terminate up to 96 fibers per rack unit of space. Several other features make a high-density fiber patch panel an exceptionally nice product to work with. The sliding tray has locking positions to prevent over-extending the fibers. High-density patch panels also have a split-top design, which allows for easier cable management and improved strain relief for the cable ingress.
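    The 96-fibers-per-rack-unit figure is density arithmetic. The breakdown below is hypothetical (the text does not give a per-panel split): assuming 3 adapter-panel slots per rack unit, 4 LC quad adapters per panel, and 8 fiber strands per quad adapter (4 duplex positions), the product matches the quoted figure:

```python
def fibers_per_rack_unit(panel_slots=3, adapters_per_panel=4,
                         fibers_per_adapter=8):
    """Terminated fibers per rack unit = slots x adapters x fibers each.
    All three defaults are assumed values, not datasheet figures."""
    return panel_slots * adapters_per_panel * fibers_per_adapter

print(fibers_per_rack_unit())  # 96
```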

    High Density Fiber Patch Panel


    Fiber Optic Patch and Splice Combos Patch Panel
    Fiber optic patch and splice combo patch panels fit standard 19" or 23" racks and are designed to support both patching and splicing in one unit. Patch and splice combos house termination panels in an upper slide-out shelf, with a lower slide-out compartment for splice trays. This allows for a full front-access application. Blank panels are available to fill unused panel positions.

    Fiber Optic Patch and Splice Combos Patch Panel


    Signature Series Patch Panel
    The Signature Series fiber patch panel offers a solution in which you can adapt the patch panel to many fiber adapter panels or fiber module footprints in a way that has never been offered before. This new fiber patch panel series is engineered to allow adaptation to a wide variety of fiber patching applications as well as fiber module installations, including a bulkhead that fits Corning adapter plates. You can adjust to any of these scenarios with a simple swap of the fiber bulkhead bracket. The unique master panel design allows for easy and secure routing of fiber without obstructions or compromising your bend radius.

    Signature Series Patch Panel


    LGX Fiber Optic Patch Panel
    LGX fiber optic patch panels are rack-mountable patch panels designed to support the storage of splice trays. They provide high-density fiber connectivity solutions. LGX patch panels have universal mounting hardware to hold fully terminated LGX cassettes. This maximizes the performance of networking space while saving valuable installation time.

    LGX Patch Panel
  • Optics and Cables Selection for Storage Area Network (SAN)

    Optics and cables are among the most important infrastructure for network connectivity. In a storage area network (SAN), switches sit between servers and storage devices. This means you need optics and cables to connect server to switch, storage to switch, and switch to switch. Of course, for different application environments you should choose different optics and cables in order to get the best performance. Furthermore, you may need to consider the future expansion of your network. Thus, an economical and effective optics and cables solution is very necessary.

    Key Factors Influencing Your Decision

    First, there are some key factors that will influence your decision, so you must determine what your network really requires. As mentioned above, a SAN has servers, storage devices and switches. What, then, should we consider in each section of the network?

    1. Server
    Bandwidth: Depending on the application load requirements, customers typically decide whether they want 1GbE, 10GbE, or 40GbE. In some cases, the decision may also be dictated by the type of traffic, e.g. DCB (Data Center Bridging) requires 10GbE or higher.
    Cost: Servers claim the highest share of devices deployed in any data center. Choosing a lower-cost connectivity option results in a much lower initial deployment cost.
    Power: In any high-density server deployment, a connectivity option that consumes less power results in much lower OpEx.
    Distance: Servers are typically connected to a switch over a very short distance, i.e. typically within the same rack or, in some cases, within the same row.
    Cabling Flexibility: Some customers prefer to make their own copper cables due to variable distance requirements. This requirement limits the choice of connectivity to copper cables only.


    2. Storage
    Reliability: Typical storage traffic is very sensitive to loss. Even a minor loss of traffic may result in a major impact on application performance.
    Qualification: Storage vendor qualification or recommendation plays an important role in this decision due to reasons such as customer support, peace of mind, etc.
    Latency: Any time spent in transition is time taken away from data processing. Reducing transition time results in much faster application performance. The result may have a direct impact on customers' bottom line, e.g. faster processing of online orders.


    3. Switch
    Bandwidth: On server-facing ports, servers typically dictate the per-port bandwidth requirement. However, the per-port bandwidth requirement for the network-facing (switch-to-switch) ports depends on multiple factors, including the amount of traffic generated by the servers, oversubscription ratios, fiber limitations, etc.
    Distance: An inter-switch or switch-to-router connection could range from a few inches to tens of kilometers. Generally, the price of optics increases as the distance increases.
    Latency: The network topology and application traffic profile (east-west, HPC (High Performance Computing), computer cluster, etc.) influence the minimum latency that can be tolerated in the network.
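    The distance and cost factors above can be condensed into a rough first-pass rule of thumb for a 10GbE link. The cutover distances below are common ballpark values, not vendor specifications, and the helper name is illustrative:

```python
def suggest_10g_link(distance_m, field_terminated_copper=False):
    """First-pass 10GbE media suggestion from link distance.
    Thresholds are rough rules of thumb, not specifications."""
    if field_terminated_copper and distance_m <= 30:
        return "10GBASE-T over field-terminated copper"
    if distance_m <= 7:
        return "SFP+ direct attach copper (DAC)"    # in-rack: lowest cost/power
    if distance_m <= 300:
        return "10GBASE-SR over OM3/OM4 multimode"  # within row or hall
    if distance_m <= 10_000:
        return "10GBASE-LR over single-mode"        # campus / metro
    return "10GBASE-ER/ZR or DWDM over single-mode" # long haul

print(suggest_10g_link(3))      # SFP+ direct attach copper (DAC)
print(suggest_10g_link(150))    # 10GBASE-SR over OM3/OM4 multimode
print(suggest_10g_link(8000))   # 10GBASE-LR over single-mode
```

    A real selection would also weigh the reliability, qualification and latency factors listed above, which are harder to reduce to a single threshold.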


    • Server to Switch Connectivity Solution

    • Storage to Switch Connectivity Solution



    • Switch to Switch Connectivity Solution


     COMPUFOX Solutions

    COMPUFOX offers a comprehensive selection of optics and cables supporting your network from 1GbE to 100GbE. We have a great selection of 1000BASE-T/SX/LX SFP, BiDi SFP, 10GBASE-SR/LR SFP+ and DWDM SFP+ modules, the whole series of 40G QSFP+ optics and cables, as well as 100G CFP2 and CFP4 modules, etc., which help you solve the cost issue in your fiber project. The 40G QSFP+ cables in particular, with their passive optical design, are compatible with equipment from all major brands. In addition, most of them are in stock. See the links below:
