OCI Virtual Test Access Point (VTAP) is a network traffic mirroring service. It captures a copy of network traffic from a specified source, applies filters to focus on relevant data, and sends it to a target for analysis. This enables use cases such as network troubleshooting, security monitoring, network performance analysis, and compliance auditing.
For compliance or for troubleshooting elusive/intermittent network issues, you might prefer to archive your network traffic rather than perform continuous live monitoring. You can then selectively analyze the network capture of past production traffic as needed. For such scenarios, this solution demonstrates how to archive your mirrored traffic from the VTAP to an Object Storage bucket in OCI.
The solution is self-contained. Terraform will set up all the resources required within your OCI tenancy.
The Terraform configuration will create a VCN with three subnets: one public and two private. The public subnet will have a single host, which acts as both an HTTP file server and a jumpbox to access nodes in the two private subnets.
One private subnet hosts nodes that download a dummy file from the HTTP file server to create HTTP traffic. These nodes act as sources for the VTAP, and their traffic is mirrored by the VTAP. We'll refer to these nodes as VTAP Source nodes. Each VTAP Source node has its own separate VTAP.
Another private subnet will contain a Network Load Balancer (NLB) that acts as the target for the VTAPs. The NLB will have backend nodes that perform network capture of the VTAP traffic as pcap files and archive them to a bucket. We call these nodes VTAP Sink nodes. The VTAP Sink nodes and NLB reside in the same private subnet.
Each VTAP is configured with a capture filter so that it mirrors only the HTTP GET requests sent by the VTAP Source nodes to the HTTP file server in our public subnet. Please see vtap.tf for details on the capture filter. Specifically, each VTAP is set on the primary VNIC of its VTAP Source node.
You can choose the region and compartment for your deployment. All resources will be created in the specified region and compartment. The Object Storage bucket to archive the pcap files will also be created for you.
This solution is developed and tested only for IPv4 traffic.
Please see variables.tf to view all the configurable parameters.
We assume you have the requisite OCI IAM permissions in the chosen compartment and region to create all the necessary OCI resources for this deployment. For help with IAM permissions, please refer to Common OCI Network IAM Policies.
- We use `tcpdump` to perform the traffic capture on VTAP Sink nodes. The `tcpdump` command is in the cloud-init script for VTAP Sink nodes. It creates a rotating buffer of 50 capture files in pcap format, each of size 10 MB. Another script picks up each pcap file, compresses it, renames it with packet capture duration timestamps, and uploads it to the bucket. After the upload, it deletes the local zip file. Hence, storage consumption on VTAP Sink nodes is capped at 500 MB. Feel free to fine-tune these parameters as per your requirements (see the sketch after this list).
- VTAP traffic consists of the original packets "as seen" by the VTAP source, with VXLAN encapsulation. You can choose whether to decapsulate the VTAP traffic, leaving only the original packet. Decapsulation will reduce the storage needed for the pcap files. Please refer to the cloud-init script for VTAP Sink nodes for details on decapsulation with a virtual VXLAN interface (also illustrated in the sketch after this list).
- Please adjust the size, shape, and count of VTAP Sink nodes depending on the volume of your mirrored traffic.
- If your traffic analysis only requires header information, you can set a lower value for `Max Packet Size`, say ~200, in vtap.tf. Note that `Max Packet Size` determines the size of the capture VTAP performs on the original packets on the VTAP Source and does not include any headers added by the VXLAN encapsulation.
- You can potentially have the source of your VTAP in any VCN that is peered with the VCN containing your NLB (acting as the target for VTAP). With a few tweaks, this solution can easily be adapted to your environment!
- You can check the status of the VTAP capture service on your VTAP Sink nodes with standard `systemd` commands like `journalctl -u vtaparchiver.service` or `systemctl status vtaparchiver.service`, as shown after this list.
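To make the capture, decapsulation, and upload flow concrete, here is a minimal sketch of what the VTAP Sink node setup could look like. It is not the exact cloud-init script of this repo; the interface name, VXLAN ID, bucket name, helper script path, and file paths are illustrative assumptions.

```sh
#!/bin/bash
# Illustrative sketch only -- see the cloud-init script in this repo for the real version.
# Assumed values: capture interface ens3, VXLAN VNI 1, bucket vtap-archive.
set -euo pipefail

# 1. (Optional) decapsulation: terminate the VXLAN tunnel on a virtual interface so
#    that only the original (inner) packets are captured and written to disk.
ip link add vtap-vxlan type vxlan id 1 dev ens3 dstport 4789
ip link set vtap-vxlan up

# 2. Upload hook run for each capture file once tcpdump rotates it out.
cat > /usr/local/bin/upload-pcap.sh <<'EOF'
#!/bin/bash
pcap="$1"
gzip "$pcap"                                   # compress before upload
oci os object put --auth instance_principal \
    --bucket-name vtap-archive \
    --file "${pcap}.gz" --name "$(basename "${pcap}").gz"
rm -f "${pcap}.gz"                             # free local storage after upload
EOF
chmod +x /usr/local/bin/upload-pcap.sh

# 3. Rotating buffer: at most 50 files of ~10 MB each, capping local usage at ~500 MB.
#    -C 10 rotates at ~10 MB, -W 50 limits the file count, -z runs the hook per file.
mkdir -p /var/vtap
tcpdump -i vtap-vxlan -w /var/vtap/capture.pcap -C 10 -W 50 -z /usr/local/bin/upload-pcap.sh
```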
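And to check on the archiver service (the `vtaparchiver.service` name comes from this solution's setup):

```sh
systemctl status vtaparchiver.service                     # is the service active?
journalctl -u vtaparchiver.service --since "1 hour ago"   # recent logs
journalctl -u vtaparchiver.service -f                     # follow logs live
```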
You have two easy options!
This Quick Start uses OCI Resource Manager to make deployment easy. Please log in to the OCI Web Console, select the appropriate region and compartment, and then just click the button below:
The OCI Web Console will take you through setup of all the variables required for the deployment.
- Install Terraform
- Access to Oracle Cloud Infrastructure
- Download or clone the repo to your local machine
git clone git@github.com:oracle-quickstart/oci-vtap-archiver.git
- Replace variable values in `local.tfvars.example` with values as applicable to your OCI tenancy and rename the file to `local.tfvars`.
- Run Terraform
terraform init
terraform plan -var-file=local.tfvars
terraform apply -var-file=local.tfvars
After deployment: turn on the VTAP for each VTAP Source node. Please note that VTAPs can only be started from the OCI Web Console. After applying the Terraform configuration, you need to start all your VTAPs in the OCI Web Console, as shown below.
- Using log collectors like FluentBit or Vector may provide a better way to transfer network capture data to OCI Object Storage. FluentBit and Vector can handle backpressure and resume failed uploads from saved checkpoints.
The pre-conditions for this would be:
- S3 API compatibility needs to be enabled for OCI Object Storage so that the S3 output plugin of these log collectors can be used, and
- Network capture output should be in a text format like CSV or JSON. Please note that `tshark` can output network captures in CSV or JSON but `tcpdump` cannot (see the sketch after this list).
- Using `tshark` for the pcapng format.
- Splitting and merging the pcap files by VTAP Source. In the current setup, a single pcap file on a VTAP Sink node might contain captured traffic from multiple VTAP Source nodes.
- Packet capture with PacketBeat and then analysis with OCI OpenSearch Service!
- Support for IPv6.
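As a reference for the CSV/JSON pre-condition above, here is a hedged sketch of how `tshark` could emit text output suitable for such log collectors. The input file name and field list are illustrative:

```sh
# JSON output of an existing capture file (file names are examples)
tshark -r capture.pcap -T json > capture.json

# CSV-style output with a chosen set of fields
tshark -r capture.pcap -T fields -E separator=, -E header=y \
    -e frame.time_epoch -e ip.src -e ip.dst -e tcp.srcport -e tcp.dstport -e frame.len \
    > capture.csv
```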
- If the `Max Packet Size` setting for the VTAP is lower than the maximum packet size in your mirrored traffic, and you are using Wireshark, Wireshark will display `TCP Previous segment not captured` and `TCP ACKed unseen segment`. This is because Wireshark performs its TCP Flow Analysis based on the number of `bytes on the wire` recorded for captured packets. More details below.
- For each packet in the pcap files, `tcpdump` records the count of `bytes on the wire` it sees during the capture.
- For most packets, count(`bytes on the wire`) < length(`original packet`), because the packets are truncated in the VTAP itself, before reaching `tcpdump`, whenever `Max Packet Size` is set to a lower value.
- Therefore, from the perspective of `tcpdump`, the VTAP-truncated packet is the full original packet.
- This occurs regardless of whether VXLAN decapsulation is performed or not.
- This occurs regardless of any truncation applied by the `tcpdump` command with its `snaplen` parameter.
- With `editcap`, it might be possible to correct the number of `bytes on the wire` in the pcap files using the `IP Length` header field of the original packet, but I am yet to explore this.
- For the curious, please refer to my discussion with the Wireshark community.
- If you are decapsulating the VXLAN header from the VTAP traffic and there is no truncation at the VTAP for the mirrored traffic, you may see packets in the capture with lengths well above 9k, even though the maximum allowed MTU in an OCI VCN is 9k! This happens when generic receive offload (a Linux OS feature) is enabled on the network interface used for the capture: the interface merges multiple TCP segments and sends the aggregated TCP segment to the upper layer in one go to save on CPU cycles. You can turn it off with `ethtool -K <interface> gro off`. You might want to disable all offloading features of the network interface used for capturing, as in the sketch after this list.
- When a VTAP Sink node reboots, any `pcap` capture files that are being processed at that time can get abandoned. These unfortunate `pcap` capture files will not be reprocessed after the reboot and will remain on the node until manually cleaned up. However, as the `tcpdump` running on the VTAP Sink node is configured as a `systemd` service, it will restart automatically after the reboot and continue with the archival of the VTAP traffic.
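A hedged example of disabling offload features on the capture interface; the interface name `ens3` is an assumption, check yours with `ip link`:

```sh
# Disable offload features that can merge or split packets before they are captured.
ethtool -K ens3 gro off   # generic receive offload
ethtool -K ens3 lro off   # large receive offload
ethtool -K ens3 gso off   # generic segmentation offload
ethtool -K ens3 tso off   # TCP segmentation offload
```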
- Mayur Raleraskar - [email protected]