JUNOS | Filter-Based Forwarding

Alright, so Filter-Based Forwarding is nothing new. The technology has been around for a while and is relatively well documented. However, I wanted to share a specific use case where Filter-Based Forwarding can be extremely useful. In this scenario, we’re going to use Filter-Based Forwarding to forward traffic to a dedicated VRF, where it is then pushed through a DDoS appliance and back to the router via a different VRF.

This construct is very useful when you only need to pass specific ingress traffic through the DDoS appliance. For example, destination prefixes of customers who are paying for a DDoS service, or traffic from source prefixes that are known to be malicious. Return traffic in either scenario is not passed via the appliance and is routed directly back to the source.
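As a rough illustration of the construct, here’s a minimal Junos sketch. All names (PROTECTED-PREFIXES, DDOS-DIRTY) and the appliance next-hop are hypothetical, not taken from the original deployment:

```
/* Filter matches protected destinations and diverts them into the "dirty" VRF */
firewall {
    family inet {
        filter FBF-DDOS {
            term PROTECTED {
                from {
                    destination-prefix-list PROTECTED-PREFIXES;
                }
                then routing-instance DDOS-DIRTY;
            }
            term DEFAULT {
                then accept;    /* everything else follows normal routing */
            }
        }
    }
}
routing-instances {
    DDOS-DIRTY {
        instance-type forwarding;
        routing-options {
            static {
                /* placeholder next-hop: the dirty leg towards the appliance */
                route 0.0.0.0/0 next-hop 198.51.100.1;
            }
        }
    }
}
```

The filter would be applied with `family inet filter input` on the ingress interface, the prefix-list defined under `policy-options`, and a rib-group is typically needed to copy interface routes into the forwarding instance. Clean traffic returning from the appliance lands in a second VRF and is routed on normally.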

Keep on reading!

EVPN-VXLAN | Virtual Gateway | QFX5k Forwarding | JUNOS

In this post, I want to discuss how to verify Virtual Gateway forwarding behaviour on Broadcom based Juniper QFX switches.

The general assumption with EVPN Anycast Gateway is that gateway flows are load-balanced across all gateway devices. And whilst EVPN provides the mechanism to support this behaviour, there is a requirement for the forwarding hardware to also support it.

The mechanism for an EVPN device to load-balance gateway flows is to install the virtual gateway ESI as a next-hop for the virtual gateway MAC address. However, Broadcom-based QFX switches do not support this behaviour and can only install a single VTEP as a next-hop. This means that traffic flows heading towards the virtual gateway will only ever traverse a single gateway device. This behaviour is well documented, and there is some talk of Broadcom working with vendors to improve gateway load-balancing with ESI functionality.

Now that we understand the characteristics, let’s look at the steps to verify forwarding behaviour on a Broadcom-based QFX switch. Here we’ll look at how to identify which VTEP is being used to reach the virtual-gateway MAC address and how the underlay is transporting the traffic.
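On a Broadcom-based QFX, those verification steps roughly map to the following show commands (`<vgw-mac>` and `<remote-vtep>` are placeholders for the virtual-gateway MAC and the VTEP IP you find):

```
show ethernet-switching table | match <vgw-mac>          # which single VTEP the VGW MAC resolves to
show ethernet-switching vxlan-tunnel-end-point remote    # inventory of remote VTEPs
show route <remote-vtep>                                 # underlay path towards that VTEP
```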

Lab Setup

The lab setup is a typical EVPN-VXLAN fabric with central routing on Core1 and Core2. The leaf switches are a mix of QFX5100 (Trident II) and QFX5110 (Trident II+) devices. I’ve used physical hardware for this lab because the behaviour of the vQFX is different: the vQFX emulates the Q5 PFE and therefore supports ESI for forwarding.



EVPN-VXLAN | Layer 3 Gateway | IRB | JUNOS

I often get asked about EVPN Layer 3 gateway options, and more specifically about the differences between IRB with a Virtual Gateway Address (VGA) and IRB without VGA. There are many different options and configuration knobs available when configuring an EVPN L3 gateway, but I’ve focused on the three most popular options that I see with my customers in centralised EVPN-VXLAN environments.

Each IRB option can be considered an Anycast gateway solution, since the same IP address is duplicated across all IRB gateways. However, there are some subtle, yet significant, differences between each option.

Regardless of the transport technology used, whether it be MPLS or VXLAN, a layer 3 gateway is required to route beyond a given segment. I’m only covering the initial configuration required to get up and running. There are many different configuration knobs that are well explained in the following (thanks Luciano):

Comparing Layer 3 Gateway & Virtual Machine Traffic Optimization (VMTO) For EVPN/VXLAN And EVPN/MPLS

EVPN VXLAN Configuration Knobs and Caveats

This Week: Data Center Deployment with EVPN/VXLAN by Deepti Chandra provides in-depth analysis and examples of EVPN-VXLAN. I highly recommend reading this book!

IRB Option 1

Duplicate IP | Unique MAC | No VGA

IRB Option 1
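A minimal sketch of Option 1 on one gateway; the subnet and unit number are assumed, and the second gateway would be configured with the identical address:

```
interfaces {
    irb {
        unit 100 {
            family inet {
                address 10.0.100.1/24;    /* same address duplicated on every gateway */
            }
        }
    }
}
```

With no virtual-gateway-address configured, each device answers ARP for the shared IP with its own unique system MAC.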


Juniper QFX10k | EVPN-VXLAN | IRB Routing | BGP

IRB Routing over EVPN-VXLAN on Juniper QFX10K is now officially supported by Juniper! EVPN master Tom Dwyer informed me that support had been added, and a quick check on the VXLAN Constraints page confirms the same. Note, however, that IS-IS is still not officially supported.

So, what’s the use case? Let’s say there’s a vSRX, hosted in a blade chassis environment, which requires dynamic route updates from the core network. The blade chassis is connected to an EVPN-VXLAN fabric via multi-homed layer 2 uplinks utilising EVPN ESI. In order to establish a BGP peering with the core network, the vSRX must peer with the SPINE devices via the EVPN-VXLAN fabric. 


This article explains how to establish an EBGP peering between a QFX10K IRB interface and a vSRX in an EVPN-VXLAN environment. The vSRX is hosted in an ESXi hypervisor, which is connected via LEAF1 and LEAF2. A /29 network is used on VNI300 and an EBGP peering is established between SPINE1 & vSRX1 and between SPINE2 & vSRX1. A default route is advertised by both QFX10K SPINEs towards vSRX1. 
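A hedged sketch of the SPINE1 side of that peering. The /29 subnet, AS numbers and policy name are assumptions for illustration, not the lab’s actual values:

```
interfaces {
    irb {
        unit 300 {
            family inet {
                address 192.168.30.1/29;    /* assumed /29 on VNI300 */
            }
        }
    }
}
policy-options {
    policy-statement EXPORT-DEFAULT {
        term DEFAULT {
            from {
                route-filter 0.0.0.0/0 exact;
            }
            then accept;
        }
        term REJECT {
            then reject;
        }
    }
}
protocols {
    bgp {
        group VSRX {
            type external;
            peer-as 65010;              /* assumed vSRX AS */
            export EXPORT-DEFAULT;
            neighbor 192.168.30.6;      /* assumed vSRX1 address on VNI300 */
        }
    }
}
```

irb.300 also has to be mapped to the VNI300 VLAN as its `l3-interface`, and SPINE2 would carry the near-identical configuration with its own IRB address.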




Juniper QFX5110 | VMware NSX ESG | BGP Route Policy

This post follows on from a previous article which detailed how to establish a BGP peering session between Juniper QFX and VMware NSX Edge Gateway. This time we’ll take a look at how to configure BGP route policy and BGP filters.


When working with BGP, it’s important to consider how BGP routes are imported and exported. In certain scenarios, you may find that the default BGP import and export behaviour is sufficient. But more often than not, you will want to implement an import and export policy in order to control how traffic flows through your network. Here’s a quick reminder of the default behaviours:

Default Import Policy

  • Accept all BGP routes learned from configured BGP neighbors and import them into the relevant routing table.

Default Export Policy

  • Do not advertise routes learned from IBGP neighbors to any other configured IBGP neighbor, unless acting as a route reflector.
  • Readvertise all active BGP routes to all configured BGP neighbors.

In the following scenario, we’re going to configure BGP import and export policies on the Juniper QFX switches and the VMware NSX Edge Gateways. The Juniper QFX switches will be configured to export a default route towards the NSX Edges and to import the NSX internal network. The NSX Edges will be configured to export the NSX internal network and to import the default route received from the QFXs.

Note: this example details the necessary steps for QFX1 and ESG1; the steps for QFX2 and ESG2 are almost identical.
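On the QFX side, the policies might look something like this. The NSX internal prefix (172.16.10.0/24) and all names are placeholders, not the real values:

```
policy-options {
    policy-statement IMPORT-NSX {
        term NSX-INTERNAL {
            from {
                route-filter 172.16.10.0/24 exact;    /* placeholder NSX internal prefix */
            }
            then accept;
        }
        term REJECT {
            then reject;
        }
    }
    policy-statement EXPORT-DEFAULT {
        term DEFAULT {
            from {
                route-filter 0.0.0.0/0 exact;
            }
            then accept;
        }
        term REJECT {
            then reject;
        }
    }
}
protocols {
    bgp {
        group NSX-ESG {
            import IMPORT-NSX;
            export EXPORT-DEFAULT;
        }
    }
}
```

The explicit REJECT terms override the permissive defaults described above, and a default route must of course exist locally (static or generated) for EXPORT-DEFAULT to have anything to match.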




Juniper QFX | VMware NSX Edge Gateway | BGP Peering

In this post, I’m going to explain how to establish a BGP peering session between Juniper QFX Series Switches and VMware NSX Edge Service Gateway. VMware NSX provides many features and services, one of which is dynamic routing via the use of an ESG. Typically, ESGs are placed at the edge of your virtual infrastructure to act as a gateway. There are two primary deployment options, stateful HA or non-stateful ECMP. In this example, we’re looking at the ECMP deployment option.


We have a pair of Juniper QFX5110 switches that we will configure to enable EBGP peering with each NSX Edge Gateway. We also have a pair of NSX Edge Gateway devices that are placed at the edge of a virtualized infrastructure. Each QFX has a /31 point-to-point network to each ESG. These networks are enabled via 802.1q subinterfaces which provide connectivity across the underlying blade chassis interconnect modules.
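One of the QFX-to-ESG links might be configured along these lines; the interface name, VLAN ID and /31 addressing are assumed for illustration:

```
interfaces {
    xe-0/0/10 {
        vlan-tagging;                     /* enable 802.1q subinterfaces */
        unit 101 {
            vlan-id 101;
            family inet {
                address 10.255.1.0/31;    /* ESG end would be 10.255.1.1/31 */
            }
        }
    }
}
```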


NSX Juniper BGP


Using JUNOS Firewall Filters for Troubleshooting & Verification | QFX5110

The Junos firewall filter feature can be a really useful tool for troubleshooting and verification. I was recently troubleshooting a packet-loss fault and was fairly sure it was an asymmetric routing issue, but I needed a quick way of verifying. Then a colleague said, “hey, how about a firewall filter?”. Of course, assuming IP traffic, we can use a Junos firewall filter to capture specific traffic flows.


In this scenario, we have a pair of Juniper QFX5110 switches that are both connected to an upstream IP transit provider. They are also connected to a local network via a VMware NSX edge. We’re going to use a firewall filter on QFX1 and QFX2 to identify which QFX is used for egress traffic and which is used for ingress traffic. More specifically, the flow is an ICMP flow between a local host and Cloudflare’s 1.1.1.1 DNS service.
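A sketch of such a filter; 1.1.1.1 is Cloudflare’s DNS, while the filter, counter and term names are assumptions:

```
firewall {
    family inet {
        filter TRACK-ICMP {
            term CF-DNS {
                from {
                    destination-address {
                        1.1.1.1/32;
                    }
                    protocol icmp;
                }
                then {
                    count icmp-to-cloudflare;    /* count the flow, don't drop it */
                    accept;
                }
            }
            term DEFAULT {
                then accept;                     /* pass all other traffic untouched */
            }
        }
    }
}
```

Applied as `family inet filter output` (or `input` for the return direction) on the transit-facing interface of each QFX, the counters from `show firewall filter TRACK-ICMP` reveal which box is actually carrying each direction of the flow.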


Junos Firewall Filters topology





This article is the second post in a series all about EVPN-VXLAN and Juniper QFX technology. This particular post is focused specifically on EVPN Anycast Gateway and how to verify the control plane and data plane on Juniper QFX10k series switches.


In my first post, I explained how to verify MAC learning behaviour in a single-homed host scenario. This time we’re going to look at how to verify control plane and data plane when using EVPN Anycast Gateway. As explained in my previous post, verifying and troubleshooting EVPN-VXLAN can be very difficult. Especially when you consider all the various elements that build up the control plane and data plane.

So, what is EVPN Anycast Gateway?



This article is all about EVPN-VXLAN and Juniper QFX technology. I’ve been working with this tech quite a lot over the past few months and figured it would be useful to share some of my experiences. This particular article is probably going to be released in 2 or 3 parts and is focused specifically on the MAC learning process and how to verify behaviour. The first post focuses on a single-homed endpoint connected to the fabric via a single leaf switch. The second part will look at a multihomed endpoint connected via two leaf switches that are utilising the EVPN multihoming feature. And, lastly, the third part will focus on Layer 3 Virtual Gateway at the QFX10k Spine switches. The setup I’m using is based on Juniper vQFX for spine and leaf functions with a vSRX acting as a VR device. I also have a Linux host that is connected to a single leaf switch.


When verifying and troubleshooting EVPN-VXLAN it can become pretty difficult to figure out exactly how the control plane and data plane are programmed and how to verify behaviours. You’ll find yourself looking at various elements such as the MAC table, EVPN database, EVPN routing table, inet0 routing table, BGP RIB-IN, BGP RIB-OUT, default-switch instance and so on. It can get a little overwhelming when trying to ascertain the significance of all these various components. My objective is to provide a set of verification steps to help make sense of it all.
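As a starting point, these are the show commands that map to the elements listed above (`<peer>` is a placeholder for a BGP neighbour address):

```
show ethernet-switching table                    # MAC table (default-switch instance)
show evpn database                               # EVPN-learned MAC / MAC-IP state
show route table bgp.evpn.0                      # EVPN routing table (Type 2 routes etc.)
show route advertising-protocol bgp <peer>       # BGP RIB-OUT towards a given peer
show route receive-protocol bgp <peer>           # BGP RIB-IN from a given peer
```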



In this post, I’m going to explain how we can use an SDN controller to provision traffic-engineered MPLS LSPs between PE nodes situated in different traffic-engineering domains.

The SDN controller that we’re going to use is NorthStar from Juniper Networks running version 3.0. For more information regarding NorthStar check here. There is also a great Day One book available: NorthStar Controller Up and Running.


Typically, in order to provision traffic-engineered MPLS LSPs between PE nodes, it is necessary for the PEs to be situated in a common TE domain. There are some exceptions to this such as inter-domain LSP or LSP stitching. However, these options are limited and do not support end-to-end traffic-engineering as the ingress PE does not have a complete traffic-engineering view. I personally haven’t seen many deployments utilising these features.

So, what’s the use case? Many service providers and network operators use RSVP to signal traffic-engineered LSPs in order to control how traffic flows through their environments. There are many reasons for doing this, such as steering certain types of traffic via optimal paths or achieving better utilisation of network bandwidth. In many RSVP deployments, you will see RSVP used in the core of the network with islands of LDP at the edge, and you will often find that the IGP adopts a similar architecture, meaning separate traffic-engineering domains. But what if I want to signal a traffic-engineered LSP between PEs located in different domains?
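For context, pointing a Junos PE at a stateful PCE such as NorthStar looks roughly like this. The controller address is a placeholder, and the exact statements should be checked against your Junos release:

```
protocols {
    pcep {
        pce NORTHSTAR {
            destination-ipv4-address 192.0.2.100;    /* placeholder controller IP */
            destination-port 4189;                   /* standard PCEP port */
            pce-type active stateful;
            lsp-provisioning;                        /* allow PCE-initiated LSPs */
        }
    }
    mpls {
        lsp-external-controller pccd;                /* delegate LSP control to the PCC daemon */
    }
}
```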