If a firewall or router is configured so traceroute data can't flow through, you'll see * * instead of the traceroute data. Often, the problem lies with your ISP, rather than something at your end.
Because Qcheck is freely available, Ixia intentionally limited its capabilities so that it can't generate enough traffic to impact network performance. The current limits should be acceptable for almost any quick performance check. If you need to generate enough traffic to stress-test a network or device, consider using Ixia's IxChariot.
Not always. The Endpoints are key to Qcheck's ability to measure application-level network performance. You need them to be installed at both ends of the connection you are testing for response time, throughput, or streaming performance. Traceroute does not require endpoint software on Endpoint 2.
Qcheck was designed to be run from the Qcheck console and cannot be run from the command line. Most people who want to run Qcheck from the command line want to use Qcheck measurements in concert with other tests, or to automate repeated tests for network monitoring. For those scenarios, consider Ixia's IxChariot, which is designed for larger-scale and automated network testing.
Qcheck sets the ports so that you can run tests across firewalls. See the Qcheck online help for details.
Qcheck can only run one test at a time. If you need to run multiple tests at the same time, consider using Ixia's IxChariot, which can handle up to 10,000 simultaneous endpoint pairs.
For detailed information about Performance Endpoints, see the Performance Endpoints Support Resources.
Qcheck is free. You can point other people to the Qcheck Web site, or just send them a copy of the Qcheck install file (qcinst.exe) after you have downloaded it.
With Qcheck, the best answer is to run the response time test with a very small data size. This gives you round-trip delay. Assuming you have equal performance out and back, dividing by two gives you an approximation of one-way delay. Ixia's IxChariot uses advanced clock synchronization techniques and can show actual one-way delay, and how it varies during a test.
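The arithmetic above is simple enough to sketch. This is an illustrative Python snippet (the function name and the sample round-trip value are hypothetical, not part of Qcheck):

```python
# Approximate one-way delay from a measured round-trip response time,
# assuming the out and back paths perform equally (as noted above).
def one_way_delay(round_trip_ms: float) -> float:
    """Halve the round-trip delay to estimate one-way delay."""
    return round_trip_ms / 2

# Example: a 24 ms round trip suggests roughly 12 ms each way.
print(one_way_delay(24.0))  # -> 12.0
```

Keep in mind this is only an approximation; asymmetric routes or queuing in one direction will skew it, which is why IxChariot's clock synchronization gives a truer one-way figure.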
We tried to strike a balance between making sure Qcheck generated enough traffic to give helpful diagnostics and not letting Qcheck generate so much traffic that it could impact network performance. We didn't want the diagnostic tool to become part of the problem. The limits are:
If you are looking for a tool that can generate additional traffic in order to test the limits of your network, consider our IxChariot product.
Qcheck measures application-level data flows to and from Endpoint 1. At the sockets level, here's the transaction:
As you can see, connects and disconnects are not included in the timing.
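The timing boundaries can be sketched with ordinary sockets. The following Python sketch is illustrative only, not Qcheck's actual code; the local echo server is a hypothetical stand-in for Endpoint 2. Note that the timer starts after the connect and stops before the disconnect:

```python
import socket
import threading
import time

def echo_server(sock: socket.socket) -> None:
    """Hypothetical stand-in for Endpoint 2: accept one connection, echo once."""
    conn, _ = sock.accept()
    data = conn.recv(1024)
    conn.sendall(data)
    conn.close()

# Listen on an ephemeral local port and run the echo server in the background.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,)).start()

client = socket.socket()
client.connect(listener.getsockname())   # connect: NOT included in the timing

start = time.perf_counter()              # timing starts here
client.sendall(b"x" * 100)               # send the test data
reply = client.recv(1024)                # wait for the echoed response
elapsed = time.perf_counter() - start    # timing stops here

client.close()                           # disconnect: NOT included in the timing
listener.close()
print(f"response time: {elapsed * 1000:.3f} ms for {len(reply)} bytes")
```

Because the connect and close calls fall outside the timed region, the measurement reflects the data exchange itself rather than connection setup and teardown.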
Qcheck does simple, application-level throughput measurements. Throughput is calculated using this formula:
throughput = (bytes sent + bytes received) / measured time
The throughput value is not the same value you'd see if you used an analyzer on the wire. Qcheck does not include any of the protocol overhead, such as headers, trailers, flow control, and connection setup.
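The formula above can be expressed directly in code. This is a hedged sketch (the function name and sample numbers are illustrative, not from Qcheck):

```python
def application_throughput_bps(bytes_sent: int, bytes_received: int,
                               measured_time_s: float) -> float:
    """Application-level throughput in bits per second: payload bytes only.
    Protocol overhead (headers, trailers, flow control, connection setup)
    is excluded, so this reads lower than a wire-level analyzer would show."""
    return (bytes_sent + bytes_received) * 8 / measured_time_s

# Example: 100,000 bytes each way in 1.6 seconds -> 1,000,000 bits/s (1 Mbps).
print(application_throughput_bps(100_000, 100_000, 1.6))  # -> 1000000.0
```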
So as not to swamp your network, Qcheck is intentionally designed to generate small, brief data flows; it is limited to a single connection and sends no more than 1 MB of data.
Endpoint computer operating systems and protocol stacks can limit throughput; with today's computers it's difficult to get throughput greater than 100 Mbps with a single connection.
To maximize throughput, tests should run for a minute or more, allowing TCP/IP to swap key software pieces into memory and ramp up its flow-control windows. If you want to adjust test duration or other test conditions to maximize throughput, we recommend that you use IxChariot to construct more sophisticated tests. Using IxChariot's script editor, you can tailor script parameters such as the send_size and the buffer_size in connection with frame size adjustments to increase network efficiency. You'll also want to experiment with multiple NICs and with multiple concurrent connections.
With Qcheck, you may be able to improve throughput by tailoring the protocol stack configuration slightly at each of the endpoints. This is an advanced technique; we don't recommend it for anyone but the most technically adventurous. Changing the value of the TCP Receive Window parameter at an endpoint can affect the results you see when testing for maximum throughput. The TCP Receive Window is a TCP/IP stack configuration parameter. Many TCP/IP stacks ship with a default value of 8 KBytes. Changing to a larger value changes performance, increasing throughput on some stacks and reducing it on others. We recommend experimenting with the values you use for this parameter.
For example, in Windows NT, TcpWindowSize is a Registry value that's not present by default. To set it, go to the Windows NT endpoint and run the Registry Editor.
Go to Edit, Add Value, and add "TcpWindowSize" as a REG_DWORD. The maximum value available is 64 KBytes. Matching the TcpWindowSize to a multiple of the segment payload size (the underlying MTU minus the IP and TCP headers) should improve efficiency. This means multiples of 1460, 1457, or 1452, depending on the Ethernet implementation.
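The sizing rule above can be sketched as a small calculation. This is an illustrative helper (the function name is hypothetical), which finds the largest multiple of the segment payload size that fits under the 64 KByte cap:

```python
def tcp_window_size(segment_payload: int, cap: int = 65535) -> int:
    """Largest multiple of the per-segment payload size (e.g. 1460 bytes
    for standard Ethernet: 1500-byte MTU minus 40 bytes of IP + TCP
    headers) that fits under the 64 KByte TcpWindowSize cap."""
    return (cap // segment_payload) * segment_payload

# For the Ethernet payload sizes cited above:
print(tcp_window_size(1460))  # -> 64240
print(tcp_window_size(1452))  # -> 65340
```

Whatever value you choose, test before and after the change: as noted above, a larger window helps on some stacks and hurts on others.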
In streaming tests, lost data is often considerable. Data loss has three typical causes. The Data Rate may be higher than the maximum throughput potential, causing lost packets during transmission (check to make sure you've selected the correct units, for example). The network may be congested. Or, your network may be configured to give non-streaming traffic priority over streaming traffic, discarding datagram packets when the two compete for bandwidth. Try running a throughput test in the corresponding connection-oriented protocol for comparison. If that throughput is unexpectedly low, network congestion is the likely cause.
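As a first diagnostic step, it helps to put a number on the loss. This is a hypothetical sketch, not part of Qcheck, using illustrative sample figures:

```python
def stream_loss_percent(bytes_sent: int, bytes_received: int) -> float:
    """Percentage of streamed (datagram) data that never arrived."""
    return (bytes_sent - bytes_received) / bytes_sent * 100

# Example: 500 KB streamed, 410 KB received -> 18% loss, worth investigating.
print(stream_loss_percent(500_000, 410_000))  # -> 18.0
```

If this figure is high while a connection-oriented throughput test over the same path looks healthy, suspect prioritization of non-streaming traffic rather than raw congestion.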