DEV Community

Yuya Takeyama

Posted on Dec 6, 2017

How I measure Response Times of Web APIs using curl

There are plenty of specialized tools for benchmarking HTTP requests: ab, JMeter, wrk... So why still use curl for the purpose?

Because curl is widely used, and it is a kind of common language for web developers.

Also, some tools have a feature to retrieve an HTTP request as a curl command.

copy as curl command

It's quite useful because it copies not only the URL and parameters but also request headers including Authorization or Cookie .

In this article, I use these tools: curl, ntimes, and percentile.

Measure response time using curl

First, let's prepare a curl command. This time, I copied the command for a request to my personal blog using Google Chrome (the Cookie header is removed).
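A stand-in for that command (the URL is a placeholder; the real copy-as-cURL output also carries the request headers Chrome copied):

$ curl 'https://example.com/'   # plus the -H request headers copied by Chrome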

It just outputs the response body from the server.

Let's append these options.
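-s -o /dev/null -w "%{time_starttransfer}\n"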

-s silences the progress meter, and -o discards the response body by sending it to /dev/null.

And what is important is -w. We can specify a variety of format variables, and this time I used time_starttransfer to retrieve the response time (time to first byte).

It shows output like below (a single number from the -w format):
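0.188947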

The response time is 0.188947 seconds (188 msec).

To simplify, I also created a wrapper command curlb :
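A minimal sketch of such a wrapper (the real curlb may differ slightly):

#!/bin/sh
# curlb: print the time to first byte, in seconds, and discard the response body
exec curl -s -o /dev/null -w "%{time_starttransfer}\n" "$@"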

Measure the percentile of the response times

It's not proper to benchmark with just a single request.

So let's measure percentiles over 100 requests.

ntimes is useful for such purposes.

  • https://github.com/yuya-takeyama/ntimes

You can install it with go get github.com/yuya-takeyama/ntimes, or download a pre-built binary from the repository.

Let's append ntimes 100 -- at the beginning of the curl command.
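For example, with the curlb wrapper and the placeholder URL from above:

$ ntimes 100 -- curlb 'https://example.com/'   # placeholder URL

This runs the command 100 times, printing one time-to-first-byte value per run.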

And to compute percentiles of those numbers, the command called percentile may be the easiest option.

  • https://github.com/yuya-takeyama/percentile

Install it with go get github.com/yuya-takeyama/percentile or download a pre-built binary from the repo.

And append | percentile to the end of the command.
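The whole pipeline then looks like this:

$ ntimes 100 -- curlb 'https://example.com/' | percentile   # placeholder URL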

Top comments (6)


emilienmottet:

Good article! For zsh users, you could use repeat, as in the sketch below.
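For example (a sketch assuming the curlb wrapper and percentile from the article):

repeat 100; do curlb 'https://example.com/'; done | percentile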

welll:

by the way, why curlb on the last two commands? is it a typo?

yuyatakeyama:

Hi, did you see this section?

To simplify, I also created a wrapper command curlb:

-s -o /dev/null -w "%{time_starttransfer}\n" is toooo long to type or to remember. So I always use curlb and recommend using it.

neo:

Interesting approach

zinssmeister:

This was an interesting post. I got curious to see how this measures up against how we record response time with our product templarbit.com/sonar, and found that it works similarly!

zeyuanchen23:

Hi! Is it possible to use your tool on Mac? If so, how to install the ntimes?

Cloud Foundry Documentation


Troubleshooting slow requests in Cloud Foundry

  • App requests overview
  • Experiment 1: Measure total round-trip app requests
  • Experiment 2: View request time in access logs
  • Experiment 3: Duplicate latency on another endpoint
  • Experiment 4: Remove the load balancer from the request path
  • Experiment 5: Remove Gorouter from the request path
  • Experiment 6: Test the network between the router and the app
  • Use app logs to locate delays in Cloud Foundry
  • Causes for Gorouter latency
  • Operations recommendations


What part of the Cloud Foundry request flow adds latency to your requests? Run the experiments in this topic to find out.

Cloud Foundry recommends running the procedures in this article in the order presented.

App requests typically transit the following components. Only the Gorouter and the app are within the scope of Cloud Foundry.

See the following diagram:

There are four boxes from left to right: Client, Load Balancer, Router, and Backend. A larger box labeled CF encompasses the Router and Backend boxes.

Any of the components in the diagram above can cause latency. Delays can also come from the network itself.

To troubleshoot slow requests and diagnose what might be causing latency to your app requests, work through the experiments below in order.

After you determine the cause of latency, see the following sections for more information:

  • Use App Logs to Locate Delays in Cloud Foundry
  • Causes for Gorouter Latency
  • Operations Recommendations

To measure the total round-trip time for your deployed app that is experiencing latency, run:
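$ time curl -v APP-ENDPOINT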

Where APP-ENDPOINT is the URL endpoint for the deployed app.

For example:
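$ time curl -v http://app1.app_domain.com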

The real time output shows that the request to http://app1.app_domain.com took approximately two minutes, round-trip. This seems like an unreasonably long time, so it makes sense to find out where the delay is occurring.

To narrow down the cause of latency, see the following table for information about the output that you see after running time curl -v APP-ENDPOINT :

In Cloud Foundry, delays can occur within Gorouter, within the app, or within the network between the two. If you suspect that you are experiencing latency, the most important logs are the access logs. The cf logs command streams log messages from Gorouter as well as from apps. This section describes how to find and understand the access log timestamps.

To view request time in access logs:

(Optional) Run cf apps to determine the name of the app.
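Start streaming the app's logs by running:

$ cf logs APP-NAME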

Where APP-NAME is the name of the app.

From another command line window, send a request to your app.

After your app returns a response, enter Ctrl-C to stop streaming cf logs .

For example: $ cf logs app1

2019-12-14T00:33:32.35-0800 [RTR/0] OUT app1.app_domain.com - [14/12/2019:00:31:32.348 +0000] "GET /hello HTTP/1.1" 200 0 60 "-" "HTTPClient/1.0 (2.7.1, ruby 2.3.3 (2019-11-21))" "10.0.4.207:20810" "10.0.48.67:61555" x_forwarded_for:"52.3.107.171" x_forwarded_proto:"http" vcap_request_id:"01144146-1e7a-4c77-77ab-49ae3e286fe9" response_time:120.00641734 gorouter_time:0.000217 app_id:"13ee085e-bdf5-4a48-aaaf-e854a8a975df" app_index:"0" x_b3_traceid:"3595985e7c34536a" x_b3_spanid:"3595985e7c34536a" x_b3_parentspanid:"-"
2019-12-14T00:32:32.35-0800 [APP/PROC/WEB/0] OUT app1 received request at [14/12/2019:00:32:32.348 +0000] with "vcap_request_id": "01144146-1e7a-4c77-77ab-49ae3e286fe9"
^C

In the example above, the first line contains the following timestamps:

  • 14/12/2019:00:31:32.348: Gorouter receives request
  • response_time:120.00641734 : Time measured from when Gorouter receives the request to when Gorouter finishes sending the response to the end user
  • gorouter_time:0.000217 : Gorouter response time, not including the time that Gorouter spent sending the request to the back end app or the time waiting for the response from the app.

The next step to debugging latency is finding an endpoint that consistently experiences delays. Use a test app that does not make any internal or external requests or database calls. For example, see dora on GitHub.

If you cannot push any apps to your foundation, find an API endpoint that does not make any external calls to use for the rest of the experiments. For example, use a health or information endpoint.

To duplicate latency on another endpoint:

Push an example app.

Measure a request's full round-trip time from the client and back by running:
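$ time curl -v TEST-APP-ENDPOINT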

Where TEST-APP-ENDPOINT is the URL endpoint for the test app. While every network is different, this request should take less than 0.2 seconds.

See the following table for information about the output that you see after running time curl -v TEST-APP-ENDPOINT :

If this experiment shows that something in your app is causing latency, use the following questions to start troubleshooting your app:

  • Did you recently push any changes?
  • If so, have your database queries changed?
  • If so, is there a problem in a downstream app?
  • Does your app log where it spends time? For more information, see Use App Logs to Locate Delays in Cloud Foundry .

The next step is to remove the load balancer from the test path by sending the request directly to Gorouter. You can do this by accessing the network where Gorouter is deployed, sending the traffic directly to the Gorouter IP address, and adding the route in the host header.

To remove the load balancer from the request path:

Choose a router VM from your deployment and get its IP address. Record this value and use it as the ROUTER-IP when you run the command in a later step.
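Then get onto the network where Gorouter runs by SSHing into the router VM. The source omits the command here; the sketch below mirrors the one used in the later experiments:

$ bosh ssh router/ROUTER-GUID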

Where ROUTER-GUID is the unique identifier for the router VM.

To determine the amount of time a request takes when it skips the load balancer, run:
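The exact command is not reproduced in the source; based on the description above (send the request to the Gorouter IP and put the app's route in the Host header), it is along these lines:

$ time curl -v -H "Host: TEST-APP-ENDPOINT" http://ROUTER-IP

Where: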

  • ROUTER-IP is the router VM IP address that you located in the first step.
  • TEST-APP-ENDPOINT is the URL endpoint for the test app.

See the following table for information about the output that you see after removing the load balancer from the app request path:

The next step is to remove Gorouter from the request path. You can SSH into the router VM and send a request directly to the app.

To remove Gorouter from the app request path:

To retrieve the IP address and port number of the Diego Cell where your test app instance runs, run:
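$ cf ssh TEST-APP -c "env | grep CF_INSTANCE_ADDR"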

Where TEST-APP is the name of the test app.

For example: $ cf ssh my-app -c "env | grep CF_INSTANCE_ADDR"

Choose any router VM from your deployment and SSH into it by running:
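$ bosh ssh router/ROUTER-GUID

Where ROUTER-GUID is the unique identifier for the router VM.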

To determine the amount of time a request takes when it skips Gorouter, run time curl CF_INSTANCE_ADDR .

See the following table for information about the output that you see after removing Gorouter from the app request path:

The next step is to time how long it takes for your request to make it from the router VM to the Diego Cell where your app is deployed. You can do this by using tcpdump on both VMs.

To test the network between the router and the app:

  • Choose a router VM from your deployment and record its IP address. Use this value as the ROUTER-IP in later steps.
  • To get the IP address of the Diego Cell where your test app instance is running, run cf ssh TEST-APP -c "env | grep CF_INSTANCE_IP" , where TEST-APP is the name of the test app.
  • To get the port number of the Diego Cell where your test app instance is running, run cf ssh TEST-APP -c "env | grep CF_INSTANCE_PORT" , where TEST-APP is the name of the test app.
  • On the command line, locate the router VM that matches the ROUTER-IP value from the first step.
  • To SSH into the router VM, run bosh ssh router/ROUTER-GUID , where ROUTER-GUID is the unique identifier for the router VM.
  • On the router VM, log in as root.
  • To capture all packets going to your app, run tcpdump 'dst CF_INSTANCE_IP and dst port CF_INSTANCE_PORT' .
  • In a second command line window, SSH into the Diego Cell that corresponds with CF_INSTANCE_IP. Run bosh ssh diego-cell/DIEGO-CELL-GUID, where DIEGO-CELL-GUID is the unique identifier for the Diego Cell where your app is running.
  • On the Diego Cell, log in as root.
  • To capture all packets going to your app, run tcpdump 'dst port CF_INSTANCE_PORT and src ROUTER-IP' , where ROUTER-IP is the router VM IP address that you recorded in the first step.
  • In a third command line window, run ssh ROUTER-IP , where ROUTER-IP is the router VM IP address.
  • To make a request to your app, run curl CF_INSTANCE_IP:CF_INSTANCE_PORT .

Compare the first packet captured on the router VM with the first packet captured on the Diego Cell. The packets should match. Use the timestamps to determine how long it took the packet to traverse the network.

If you use a test app, this might be the only traffic to your app. If you are not using a test app and there is traffic to your app, these tcpdump commands can produce many packet captures. If the tcpdump results are too verbose to track, you can write them to a pcap file and use Wireshark to find the important packets. To write tcpdump output to a file, use the -w flag. For example: tcpdump -w router.pcap.

See the following table for information about the output that you see after testing the network between the router and the app:

To gain a more detailed picture of where delays exist in your request path, augment the logging that your app generates. For example, call your logging library from the request handler to generate log lines when your app receives a request and finishes processing it:
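For example, the app might then emit log lines like these (reconstructed to match the timeline below; the exact format depends on your logging library):

2019-12-14T00:32:32.35-0800 [APP/PROC/WEB/0] OUT app1 received request GET /hello
2019-12-14T00:32:32.50-0800 [APP/PROC/WEB/0] OUT app1 finished processing request GET /hello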

By comparing the router access log messages from Experiment 2: View Request Time in Access Logs with the new app logs above, you can construct the following timeline:

  • 2019-12-14T00:32:32.35: App receives request
  • 2019-12-14T00:32:32.50: App finishes processing request
  • 2019-12-14T00:33:32.35: Gorouter finishes processing request

The timeline indicates that Gorouter took close to 60 seconds to send the request to the app and another 60 seconds to receive the response from the app. This suggests either of the following:

  • A delay with Gorouter. See Causes for Gorouter Latency .
  • Network latency between Gorouter and the Diego Cells that host the app.

Two potential causes for Gorouter latency are:

Routers are under heavy load from incoming client requests.

Apps are taking a long time to process requests. This increases the number of concurrent threads held open by Gorouter, reducing capacity to handle requests for other apps.

Monitor CPU load for Gorouters. At high CPU (70%+), latency increases. If the Gorouter CPU reaches this threshold, consider adding another Gorouter instance.

Monitor latency of all routers using metrics from the Firehose. Do not monitor the average latency across all routers. Instead, monitor them individually on the same graph.

Consider using Pingdom against an app on your Cloud Foundry deployment to monitor latency and uptime. For more information, see the Pingdom website.

Consider enabling access logs on your load balancer. To enable access logs, see your load balancer documentation. Just as you use Gorouter access log messages above to determine latency from Gorouter, you can compare load balancer logs to identify latency between the load balancer and Gorouter. You can also compare load balancer response times with the client response times to identify latency between client and load balancer.

Deploy a nozzle to the Loggregator Firehose to track metrics for Gorouter. For more information, see Deploying a Nozzle to the Loggregator Firehose . Available metrics include:

  • CPU utilization
  • Requests per second

Timing Page Responses With Curl

Timing web requests is possible in curl using the -w or --write-out flag. This flag takes a number of different options, including several time based options.

These timing options are useful for testing the raw speed of requests from a web server and can be an important tool when improving performance and quickly getting feedback on the response.

The -w or --write-out flag in curl has a number of different options, far more than I can cover here. You can use these options by surrounding them in a "%{parameter}" structure and passing this as a string to the -w flag.

For example, if we wanted to return just the HTTP status code of the response, we would use the following curl command.
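A sketch of such a command (the URL is a placeholder):

$ curl -sL -o /dev/null -w "%{http_code}" https://example.com/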

This returns the string "200" and nothing else.

To explain the parameters used above:

  • -w is the write out flag and we have passed in the http_code option, which means that the output will contain this information.
-o will send the output of the response to /dev/null. In other words, we are just throwing it away.
-sL is a dual flag: the "s" runs curl in silent mode, without any error messages or progress bars, and the "L" lets curl follow any redirects, so we are actually measuring the final destination and not the initial response.

More important for measuring how long a response takes to complete are the time parameters available to the write-out flag.

The documentation describes the time-based variables as follows.

  • time_appconnect  - The time, in seconds, it took from the start until the SSL/SSH/etc connect/handshake to the remote host was completed.
  • time_connect - The time, in seconds, it took from the start until the TCP connect to the remote host (or proxy) was completed.
  • time_namelookup  - The time, in seconds, it took from the start until the name resolving was completed.
  • time_pretransfer - The time, in seconds, it took from the start until the file transfer was just about to begin. This includes all pre-transfer commands and negotiations that are specific to the particular protocol(s) involved.
  • time_redirect  - The time, in seconds, it took for all redirection steps including name lookup, connect, pretransfer and transfer before the final transaction was started. time_redirect shows the complete execution time for multiple redirections.
  • time_starttransfer  - The time, in seconds, it took from the start until the first byte was just about to be transferred. This includes time_pretransfer and also the time the server needed to calculate the result.
  • time_total -  The total time, in seconds, that the full operation lasted.

Any of these variables can be added to the request to get an indication of how long a request took to run.
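For example, using time_total (placeholder URL again):

$ curl -sL -o /dev/null -w "%{time_total}" https://example.com/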

This will return a single value of how long the request took to complete, in seconds.

You can combine these variables to produce more detailed information about the request.

The time_starttransfer option gives us access to the "time to first byte" metric, which is the time between the request being made and the server sending back the first byte of the response. This is an important consideration when benchmarking server response times.
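A sketch of such a combined command (the URL and labels are placeholders):

$ curl -sL -o /dev/null -w "connect: %{time_connect}\nttfb: %{time_starttransfer}\ntotal: %{time_total}\n" https://example.com/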

Running this command prints each of those timing values for the request. This does, however, become a little unwieldy, as most of the command is now taken up by the -w argument.

The good news is that instead of supplying a string to this flag we can create a template file and supply the filename to the flag using the @ symbol. This is called a readFile macro, and will inject the contents of the file into the flag arguments.

To recreate the above we can create a template file called "curl-firstbyte.txt" and add the following contents.
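A sketch of what the file might contain, matching the combined command above (newlines in the file are written to the output as they are, so the \n escapes are not needed):

connect: %{time_connect}
ttfb: %{time_starttransfer}
total: %{time_total}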

The .txt file extension here just makes the file easy to edit; it isn't actually required.

We can then change the command to reference the file using the readFile macro syntax. This means that the curl command is simplified to the following.
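Again with a placeholder URL:

$ curl -sL -o /dev/null -w "@curl-firstbyte.txt" https://example.com/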

This works in exactly the same way and is much easier to type into the command line.

For completeness, we can create a template file that contains all of the available time based parameters in one go. Create a file called curl-timings.txt and add the following content to it.
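A sketch covering all of the time-based variables listed above:

time_namelookup:    %{time_namelookup}
time_connect:       %{time_connect}
time_appconnect:    %{time_appconnect}
time_pretransfer:   %{time_pretransfer}
time_redirect:      %{time_redirect}
time_starttransfer: %{time_starttransfer}
time_total:         %{time_total}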

And then reference this template file in the same way as before.
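For example:

$ curl -sL -o /dev/null -w "@curl-timings.txt" https://example.com/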

This makes it clear that most of the time taken is the web server responding to the request.

Make sure you check out the curl man page on the write out flag for a detailed breakdown of every option that you can pass to this template.

There are better tools out there to test the performance of your site, but this technique can be useful for getting quick feedback on how much you have changed the performance of a single request, and it nicely complements full-stack testing solutions.

More in this series

  • Some Useful Curl Snippets


Stack Exchange

Difference between `Tracert` and `Traceroute`

The program used to determine the round-trip delay between a workstation and a destination address is: (A) Tracert (B) Traceroute (C) Ping (D) Pop

My attempt:

When I googled it:

Traceroute is a utility that records the route (the specific gateway computers at each hop) through the Internet between your computer and a specified destination computer. It also calculates and displays the amount of time each hop took.

The tracert command is a Command Prompt command that's used to show several details about the path that a packet takes from the computer or device you're on to whatever destination you specify. You might also sometimes see the tracert command referred to as the trace route command or traceroute command.

It seems to me both can.

What is the difference between them? Can you explain, please?


4 Answers

The ping command, ping [ 1 ],[ 2 ], is the basic tool that sends a packet to the destination and waits for the answer. In the output it shows the delays (min/avg/max/mdev). You can check different ports too, with other programs (see below). Among the many options you can select -p to specify the packet you send, useful for diagnosing data-dependent problems in a network.

Tracert and traceroute give you the delay time for each node between origin and destination. Some servers reserve a different amount of bandwidth for different services (UDP, HTTP, ...), so testing different ports or protocols (the -p option; UDP, TCP, ICMP, ...) gives different information. It is useful for understanding where you spend more time, and when you can change your routing to avoid those bottlenecks. It is slower than ping because, as said, it queries each node between origin and destination.

As far as I know, POP is a protocol for email, and it is possible there is a set of commands to test the speed of that service too...

To "ping" a specific port (it is not really pinging) you can use tools such as tcping [ 3 ] or tcpping [ 4 ].

A common way to measure network latency to a remote host is by using ping utility which uses ICMP echo request and reply packets. In some cases, however, ICMP traffic is blocked by firewalls, which renders ping utility useless with hosts behind restrictive firewalls. In such case, you will need to rely on layer-3 measurement tools that use TCP/UDP packets since these layer-3 packets are more likely to bypass common firewall rules.
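As a quick illustration (a minimal sketch; option names vary slightly between platforms):

$ ping -c 5 example.com        # prints an rtt min/avg/max/mdev summary at the end
$ traceroute example.com       # prints the per-hop delay times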


  • 3 You can't ping specific ports but you can use other tools to check if a port is being answered on (i.e. nmap). –  dotancohen Sep 15, 2016 at 10:34
  • Ping has a constant TTL value during one run, whilst tracer(ou)t(e) must increase the TTL while running for finding the hops. Both, Ping and Traceroute use ICMP, and TTL is a property of ICMP. Details in en.wikipedia.org/wiki/Internet_Control_Message_Protocol –  rexkogitans Sep 15, 2016 at 14:05
  • @dotancohen Perfectly right; to ping other ports you need another program, such as one of the versions of tcpping or tcping, based on a different protocol, to obtain a similar result. Sorry (battery died before I could finish...). –  Hastur Sep 16, 2016 at 7:11
  • 1 @Hastur: Those other programs do not ping . –  dotancohen Sep 17, 2016 at 7:18

They should be the same. Here is what I found:

Both commands are basically the same thing. The main difference is the operating system and how the command is implemented in the background. In the foreground you see the same kind of information in both cases. Traceroute is a computer network diagnostic tool, displaying the route and measuring transit delays of packets across the network. The command is available in Unix OS as 'traceroute', while it is available as 'tracert' in Windows NT based OS. For IPv6 it is often known as 'tracert6'. In Linux the command sends a sequence of User Datagram Protocol packets to the destination host by default, while in the case of Windows it sends ICMP echo requests instead of UDP packets.

https://www.quora.com/What-is-the-difference-between-traceroute-and-tracert


  • Note that some versions of Unix traceroute have a -i option to send ICMP instead of UDP. This can be useful when there are packet filters that block UDP to random ports. –  Barmar Sep 16, 2016 at 17:31

This is the difference:

  • traceroute : uses UDP by default (on Unix-like systems)
  • tracert : uses ICMP (on Windows)

I have to say that tracert has lately become my favourite tool.

  • traceroute has lots of different options: icmp, tcp, udp, udplite, raw, etc. –  EML Jul 3, 2021 at 12:20

I'm sure the answer they'd want is (C) Ping . If I just wanted to check round trip time from a workstation, that's what I'd use. It's quick, it's simple, and it's available in some form on just about every network-capable device in the world .

(A) Tracert and (B) Traceroute are very similar utilities on Windows and *nix, respectively. Deciding between those two answers would depend on the OS of the workstation, which is not specified. Either would also tell you the round trip time to the endpoint, but also to every other router along the way. So they would work, but be slower and longer to type, and maybe not available on as many machines as ping.

(D) Pop is probably just a play on ping in this case. POP is an email protocol (and a sound, and a nickname for a father, etc.), but is very unrelated to this question.



CURLOPT_TCP_FASTOPEN explained

CURLOPT_TCP_FASTOPEN - TCP Fast Open

Description

Pass a long as parameter set to 1L to enable or 0 to disable.

TCP Fast Open ( RFC 7413 ) is a mechanism that allows data to be carried in the SYN and SYN-ACK packets and consumed by the receiving end during the initial connection handshake, saving up to one full round-trip time (RTT).

Beware: the TLS session cache does not work when TCP Fast Open is enabled. TCP Fast Open is also known to be problematic on or across certain networks.
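For a quick illustration outside the library API (not part of this man page): the curl command-line tool exposes the same mechanism through its --tcp-fastopen option, for example:

$ curl --tcp-fastopen -s -o /dev/null -w "%{time_total}\n" https://example.com/   # placeholder URL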

Availability

Added in 7.49.0. This option is currently only supported on Linux and macOS 10.11 or later.

Return value

Returns CURLE_OK if fast open is supported by the operating system, otherwise returns CURLE_NOT_BUILT_IN .

See also: CURLOPT_SSL_FALSESTART(3)


Paul Mitchell Round Trip (6.8 oz)

Introducing the Paul Mitchell Round Trip Defining Serum, a curl-defining solution for those seeking weightless bounce and perfectly defined waves and curls. This liquid curl definer is formulated with styling and conditioning agents to help deliver results from the comfort of your own home. Designed specifically for wavy and curly hair, this nourishing serum helps reduce drying time for faster styling while adding weightless bounce. With its innovative formula, it effortlessly transforms frizzy, unruly hair into vibrant, helping to define curls that turn heads wherever you go. Reduces Drying Time For Faster Styling: No more waiting for hours to achieve the perfect look. The Paul Mitchell Round Trip Defining Serum helps to reduce drying time, making your styling routine more efficient and convenient than ever before. Provides Detail to Waves and Curls: Help say goodbye to lackluster waves and lifeless curls. This defining serum works wonders by enhancing the natural texture of your hair, helping to add depth and dimension to your waves and curls for a more polished and defined appearance. With just a few pumps of this luxurious serum, help enjoy definition, separation, and elasticity that lasts throughout the day. It helps to tame frizz and flyaways, leaving your hair with a lustrous shine that exudes confidence and sophistication. Whether you have natural waves or tight curls, the Paul Mitchell Round Trip Defining Serum is your go-to solution for achieving results at home. Help embrace your natural texture and let this defining serum help redefine your hair routine. Experience the power of the Paul Mitchell Round Trip Defining Serum and transform your hair into a work of art with this game-changing liquid curl definer that helps add weightless bounce and detail to waves and curls. Shop now and unlock the secret to perfectly defined, luxurious locks.


Server Fault

AWS - How to handle global "Round Trip Time"?

Hey serverfault people,

Imagine a generic "Software as a Service" company offering a service running on AWS (hey, that's us). There is no rocket science involved: a standard web application doing its thing as usual, plus an end-user smartphone app. As customers are from Europe, the AWS eu-central-1 region naturally contains everything for multiple tenants.

Now Sales manages to win a customer from Australia. All good so far, as the web application can already handle different timezones, currencies and locales. But: Australia is about as far away from Europe as you can get (at least on Earth), so quite some round-trip time is now involved. Per request we see roughly 300ms - 400ms extra per direction (EDIT: this is wrong when speaking about RTT, as pointed out in the comments; we do see 2x400ms = 800ms extra for the first HTTPS request).

For the mentioned web application, which is used by the customer for management purposes, it's totally fine. The rendered HTML arrives a bit later, but thanks to CDNs (CloudFront), assets are not an issue.

But the end-user smartphone application, which makes smaller but more numerous JSON requests, is affected. There it feels at the edge of "OK-ish" but definitely not snappy.

Now the question is: how to improve the timings from an end-user perspective? We already thought about a few options here:

Clone the complete software and host it in AWS ap-southeast-2 as well

Benefit: awesome performance, easy to setup, CI/CD would allow deploying the same code simultaneously in EU and AU.

Drawbacks: we have to maintain and pay for two identical infrastructure sets, data can not be shared easily, lots of duplication in all terms.

Move only computation instances to AWS ap-southeast-2

Nope, will not work as database or redis queries would be affected by the round trip time even more.

Have a read only replica in AWS ap-southeast-2 and do writes in eu-central-1

Better than option 2, but it adds a lot of complexity in the code, plus the number of writes is usually not that small.

Spin up a load balancer in AWS ap-southeast-2 and peer connect the VPCs

Idea: users connect to the AU endpoint and traffic goes via a beefy connection to the EU instances. However, this would obviously not reduce the distance, and we are unsure about the potential improvement (if any?).

Has anybody experienced a similar issue and is willing to share some insights?

Update: only the first HTTPS request seems to be very slow. While digging into AWS Load Balancer options, I also noticed that AWS Global Accelerator might help, so we did some tests.

From local system (in EU):

From AU (EC2 instance):

From AU to AWS Global Accelerator (EC2 instance):

In a nutshell: it seems the TLS handshake is causing the biggest initial latency. If the connection can be reused, however, the extra time from AU to EU is really "just" ~277ms (0,294524s - 0,017285s) for Time To First Byte.
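For reference, timings like these can be collected with curl's write-out variables, along these lines (SERVICE-ENDPOINT is a placeholder):

$ curl -s -o /dev/null -w "namelookup %{time_namelookup}  connect %{time_connect}  tls %{time_appconnect}  ttfb %{time_starttransfer}  total %{time_total}\n" https://SERVICE-ENDPOINT/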


  • Regarding 300ms - 400ms extra per direction , that sounds strange. I would expect the full RTT to be in that range (well, I see 250-300ms RTT to Sydney hosts but depending on where in Australia it will obviously vary... but not double as you indicated). Regarding option 4, if this is about the latency it will not really matter much (while the routing will be slightly different most of that distance is inherent, and as you noted it's really the distance that adds to the latency). –  Håkan Lindqvist Jul 9, 2021 at 17:22
  • To reduce latency you need application and database in Sydney. I like #3, alter your application to use a read replica for reads and send writes to the master EU database, so long as it will actually have benefits. Otherwise you'll need the full stack in Sydney. –  Tim Jul 9, 2021 at 22:00
  • @HåkanLindqvist you are absolutely right! I measured a full HTTPS request and divided it by 2; that's not the RTT. –  Markus Jul 11, 2021 at 12:52
  • The too-many-writes part may well be insignificant compared to modern browsers' ability to shave off round trips. You may want to measure HTTP/1.1, HTTP/2, HTTP/3, 0-RTT & full-handshake separately to confirm that you really do need the database closer to your users, as opposed to, say, waiting for old smartphones and MSIE to get replaced. –  anx Jul 11, 2021 at 13:16


