
Thursday, November 6, 2025

What Is Performance Testing: Guide to Speed, Scalability and Reliability


Users don’t wait. If a page stalls, a checkout hangs, or a dashboard times out, people leave and systems buckle under the load. Performance testing is how teams get ahead of those moments. It measures how fast and stable your software is under realistic and extreme conditions. Done right, it gives you hard numbers on speed, scalability, and reliability, and a repeatable way to keep them healthy as you ship new features.

Problem:

Modern applications are a web of APIs, databases, caches, third-party services, and front-end code running across networks you don’t fully control. That complexity creates risk:

  • Unpredictable load: Traffic comes in waves—marketing campaigns, product launches, or seasonal surges create sudden spikes.
  • Hidden bottlenecks: A single slow SQL query, an undersized thread pool, or an overzealous cache eviction can throttle the entire system.
  • Cloud cost surprises: “Autoscale will save us” often becomes “autoscale saved us expensively.” Without performance data, cost scales as fast as traffic.
  • Regressions: A small code change can raise response times by 20% or increase error rates at high concurrency.
  • Inconsistent user experience: Good performance at 50 users says nothing about performance at 5,000 concurrent sessions.

Consider this real-world style example: an ecommerce site that normally handles 200 requests per second (RPS) runs a sale. Marketing expects 1,500 RPS. The team scales web servers but forgets the database connection pool limit and leaves an aggressive retry policy in the API gateway. At peak, retries amplify load, connections saturate, queue times climb, and customers see timeouts. Converting that moment into revenue required knowledge of where the limits are, how the system scales, and what fails first—exactly what performance testing reveals.

Possible methods:

Common types of performance testing

Each test type answers a different question. You’ll likely use several.

  • Load testing — Question: “Can we meet expected traffic?” Simulate normal and peak workloads to validate response times, error rates, and resource usage. Example: model 1,500 RPS with typical user think time and product mix.
  • Stress testing — Question: “What breaks first and how?” Push beyond expected limits to find failure modes and graceful degradation behavior. Example: ramp RPS until p99 latency exceeds 2 seconds or error rate hits 5%.
  • Spike testing — Question: “Can we absorb sudden surges?” Jump from 100 to 1,000 RPS in under a minute and observe autoscaling, caches, and connection pools.
  • Soak (endurance) testing — Question: “Does performance degrade over time?” Maintain realistic load for hours or days to catch memory leaks, resource exhaustion, and time-based failures (cron jobs, log rotation, backups).
  • Scalability testing — Question: “How does performance change as we add resources?” Double pods/instances and measure throughput/latency. Helps validate horizontal and vertical scaling strategies.
  • Capacity testing — Question: “What is our safe maximum?” Determine the traffic level that meets service objectives with headroom. Be specific: “Up to 1,800 RPS with p95 < 350 ms and error rate < 1%.”
  • Volume testing — Question: “What happens when data size grows?” Test with large datasets (millions of rows, large indexes, deep queues) because scale often changes query plans, cache hit rates, and memory pressure.
  • Component and micro-benchmarking — Question: “Is a single function or service fast?” Useful for hotspot isolation (e.g., templating engine, serializer, or a specific SQL statement).
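
To make the distinction concrete, here is a minimal, hypothetical Python sketch of an open-model load driver in which each test type is just a different stage profile; the aiohttp package, the URL, and the stage numbers are assumptions for illustration, not recommendations.

    # Minimal open-model load driver: each test type is just a different stage profile.
    # Assumes Python 3.8+ and the third-party aiohttp package; URL and numbers are placeholders.
    import asyncio
    import time
    import aiohttp

    STAGES = {
        "load":  [(60, 100), (300, 1500)],           # (duration_s, target_rps): ramp, then hold at peak
        "spike": [(30, 100), (30, 1000), (60, 100)], # sudden jump, then recovery
        "soak":  [(4 * 3600, 1500)],                 # hold a realistic peak for hours
    }

    async def fire(session, url, results):
        start = time.perf_counter()
        try:
            async with session.get(url) as resp:
                await resp.read()
                results.append((time.perf_counter() - start, resp.status))
        except Exception:
            results.append((time.perf_counter() - start, 0))  # record failures as status 0

    async def run(url, profile):
        results, tasks = [], []
        async with aiohttp.ClientSession() as session:
            for duration, rps in STAGES[profile]:
                stage_end = time.monotonic() + duration
                while time.monotonic() < stage_end:
                    tasks.append(asyncio.create_task(fire(session, url, results)))  # open model: do not wait
                    await asyncio.sleep(1 / rps)  # arrival interval sets the offered load
            await asyncio.gather(*tasks)  # let in-flight requests finish before closing the session
        return results

    # asyncio.run(run("https://staging.example.com/health", "spike"))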

Key metrics and how to read them

Meaningful performance results focus on user-perceived speed and error-free throughput, not just averages.

  • Latency — Time from request to response. Track percentiles: p50 (median), p95, p99. Averages hide pain; p99 reflects worst real user experiences.
  • Throughput — Requests per second (RPS) or transactions per second (TPS). Combine with concurrency and latency to understand capacity.
  • Error rate — Non-2xx/OK responses, timeouts, or application-level failures. Include upstream/downstream errors (e.g., 502/503/504).
  • Apdex (Application Performance Index) — A simple score based on a target threshold (T) where satisfied ≤ T, tolerating ≤ 4T, and frustrated > 4T.
  • Resource utilization — CPU, memory, disk I/O, network, database connections, thread pools. Saturation indicates bottlenecks.
  • Queue times — Time spent waiting for a worker, thread, or connection. Growing queues without increased throughput are a red flag.
  • Garbage collection (GC) behavior — For managed runtimes (JVM, .NET): long stop-the-world pauses increase tail latency.
  • Cache behavior — Hit rate and eviction patterns. Cold cache vs warm cache significantly affects results; measure both.
  • Open vs closed workload models — Closed: fixed users with think time. Open: requests arrive at a set rate regardless of in-flight work. Real traffic is closer to open, and it exposes queueing effects earlier.

Example: If p95 latency climbs from 250 ms to 900 ms while CPU remains at 45% but DB connections hit the limit, you’ve likely found a pool bottleneck or slow queries blocking connections—not a CPU bound issue.
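
To make the percentile arithmetic concrete, here is a small self-contained Python sketch that computes p50/p95/p99 and an error rate from a list of (latency, status) samples; the sample data is made up.

    # Compute tail-latency percentiles and an error rate from raw samples.
    # Each sample is (latency_seconds, http_status); the data below is illustrative only.
    def percentile(sorted_values, p):
        # nearest-rank percentile over an already-sorted list
        if not sorted_values:
            raise ValueError("no samples")
        k = max(0, round(p / 100 * len(sorted_values)) - 1)
        return sorted_values[k]

    def summarize(samples):
        latencies = sorted(lat for lat, _ in samples)
        errors = sum(1 for _, status in samples if status == 0 or status >= 500)
        return {
            "p50_ms": percentile(latencies, 50) * 1000,
            "p95_ms": percentile(latencies, 95) * 1000,
            "p99_ms": percentile(latencies, 99) * 1000,
            "error_rate": errors / len(samples),
            "samples": len(samples),
        }

    print(summarize([(0.120, 200), (0.180, 200), (0.950, 200), (0.300, 503)]))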

Test data and workload modeling

Good performance tests mirror reality. The fastest way to get wrong answers is to test the wrong workload.

  • User journeys — Map end-to-end flows: browsing, searching, adding to cart, and checkout. Assign realistic ratios (e.g., 60% browse, 30% search, 10% checkout).
  • Think time and pacing — Human behavior includes pauses. Without think time, concurrency is overstated and results skew pessimistic. But when modeling APIs, an open model with arrival rates may be more accurate.
  • Data variability — Use different products, users, and query parameters to avoid cache-only results. Include cold start behavior and cache warm-up phases.
  • Seasonality and peaks — Include known peaks (e.g., Monday 9 a.m. login surge) and cross-time-zone effects.
  • Third-party dependencies — Stub or virtualize external services, but also test with them enabled to capture latency and rate limits. Be careful not to violate partner SLAs during tests.
  • Production-like datasets — Copy structure and scale, not necessarily raw PII. Use synthetic data at similar volume, index sizes, and cardinality.
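
One way to encode journey ratios, think time, and data variability is a Locust user class; the sketch below assumes the locust package and uses hypothetical endpoint paths, so treat the weights and URLs as placeholders rather than a ready-made test plan.

    # Hypothetical Locust workload: roughly 60% browse, 30% search, 10% checkout, with human think time.
    # Requires the locust package; run with:  locust -f shop_load.py --host https://staging.example.com
    import random
    from locust import HttpUser, task, between

    class ShopUser(HttpUser):
        wait_time = between(1, 5)  # think time of 1 to 5 seconds between actions

        @task(6)
        def browse(self):
            # vary the data to avoid cache-only results
            self.client.get(f"/products/{random.randint(1, 100000)}", name="/products/[id]")

        @task(3)
        def search(self):
            self.client.get("/search", params={"q": random.choice(["shoes", "laptop", "mug"])})

        @task(1)
        def checkout(self):
            self.client.post("/cart/checkout", json={"payment": "test-card"})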

Environments and tools

Perfect fidelity to production is rare, but you can get close.

  • Environment parity — Mirror instance types, autoscaling rules, network paths, and feature flags. If you can’t match scale, match per-node limits and extrapolate.
  • Isolation — Run tests in a dedicated environment to avoid cross-traffic. Otherwise, you’ll chase phantom bottlenecks or throttle real users.
  • Generating load — Popular open-source tools include JMeter, Gatling, k6, Locust, and Artillery. Managed/cloud options and enterprise tools exist if you need orchestration at scale.
  • Observability — Pair every test with metrics, logs, and traces. APM and distributed tracing (e.g., OpenTelemetry) help pinpoint slow spans, N+1 queries, and dependency latencies.
  • Network realism — Use realistic client locations and latencies if user geography matters. Cloud-based load generators can help simulate this.
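
As a sketch of the observability point above, the snippet below uses the OpenTelemetry Python API to attach attributes that tie a span's latency to a query and user segment; it assumes the opentelemetry packages are installed and an SDK exporter is configured elsewhere, and the service, attribute, and db.fetch_orders names are purely illustrative.

    # Tag spans so slow requests can be sliced by endpoint, query, or user segment.
    # Assumes the opentelemetry-api package is installed and an SDK/exporter is configured at startup;
    # the service, attribute, and db.fetch_orders names are hypothetical.
    from opentelemetry import trace

    tracer = trace.get_tracer("checkout-service")

    def load_order_history(user_id, db):
        with tracer.start_as_current_span("db.load_order_history") as span:
            span.set_attribute("db.statement_id", "orders_by_user_v2")
            span.set_attribute("app.user_segment", "returning")
            return db.fetch_orders(user_id)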

Common bottlenecks and anti-patterns

  • N+1 queries — Repeated small queries per item instead of a single batched query.
  • Chatty APIs — Multiple calls for a single page render; combine or cache.
  • Unbounded concurrency — Unlimited goroutines/threads/futures compete for shared resources; implement backpressure.
  • Small connection pools — DB or HTTP pools that cap throughput; tune cautiously and measure saturation.
  • Hot locks — Contended mutexes or synchronized blocks serialize parallel work.
  • GC thrashing — Excess allocations causing frequent or long garbage collection pauses.
  • Missing indexes or inefficient queries — Full table scans, poor selectivity, or stale statistics at scale.
  • Overly aggressive retries/timeouts — Retries can amplify incidents; add jitter and circuit breakers.
  • Cache stampede — Many clients rebuilding the same item after expiration; use request coalescing or staggered TTLs.
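
For the retry anti-pattern in particular, a capped, jittered back-off is the usual remedy; below is a minimal, generic Python sketch in which the operation, attempt cap, and delays are placeholders.

    # Retry with exponential back-off, full jitter, and a hard cap on attempts,
    # so retries do not amplify load during an incident. Values are illustrative.
    import random
    import time

    def call_with_retry(operation, max_attempts=3, base_delay=0.1, max_delay=2.0):
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except Exception:
                if attempt == max_attempts:
                    raise  # fast-fail after the cap instead of piling on more load
                backoff = min(max_delay, base_delay * 2 ** (attempt - 1))
                time.sleep(random.uniform(0, backoff))  # full jitter spreads out retry storms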

Best solution:

The best approach is practical and repeatable. It aligns tests with business goals, automates what you can, and feeds results back into engineering and operational decisions. Use this workflow.

1) Define measurable goals and guardrails

  • Translate business needs into Service Level Objectives (SLOs): “p95 API latency ≤ 300 ms and error rate < 1% at 1,500 RPS.”
  • Set performance budgets per feature: “Adding recommendations can cost up to 50 ms p95 on product pages.”
  • Identify must-haves vs nice-to-haves and define pass/fail criteria per test.
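
One lightweight way to make these guardrails machine-checkable is to keep them as data next to the test and compare measured results against them; the Python sketch below uses the example thresholds from this list and made-up measurements.

    # Pass/fail evaluation of measured results against SLO-style guardrails.
    # Thresholds mirror the example SLO above; the measured values are placeholders.
    SLO = {"p95_ms": 300, "error_rate": 0.01, "min_rps": 1500}

    def evaluate(measured, slo=SLO):
        failures = []
        if measured["p95_ms"] > slo["p95_ms"]:
            failures.append(f"p95 {measured['p95_ms']} ms exceeds budget {slo['p95_ms']} ms")
        if measured["error_rate"] >= slo["error_rate"]:
            failures.append(f"error rate {measured['error_rate']:.2%} at or above {slo['error_rate']:.0%}")
        if measured["rps"] < slo["min_rps"]:
            failures.append(f"throughput {measured['rps']} RPS below required {slo['min_rps']}")
        return failures  # an empty list means the run passes

    print(evaluate({"p95_ms": 320, "error_rate": 0.004, "rps": 1510}))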

2) Model realistic workloads

  • Pick user journeys and arrival rates that mirror production.
  • Include think time, data variability, cold/warm cache phases, and third-party latency.
  • Document assumptions so results are reproducible and explainable.

3) Choose tools and instrumentation

  • Pick one primary load tool your team can maintain (e.g., JMeter, Gatling, k6, Locust, or Artillery).
  • Ensure full observability: application metrics, infrastructure metrics, logs, and distributed traces. Enable span attributes that tie latency to query IDs, endpoints, or user segments.

4) Prepare a production-like environment

  • Replicate instance sizes, autoscaling policies, connection pool settings, and feature flags. Never test only “dev-sized” nodes if production uses larger instances.
  • Populate synthetic data at production scale. Warm caches when needed, then also test cold-start behavior.

5) Start with a baseline test

  • Run a moderate load (e.g., 30–50% of expected peak) to validate test scripts, data, TLS handshakes, and observability.
  • Record baseline p50/p95/p99 latency, throughput ceilings, and resource usage as your “known good” reference.

6) Execute load, then stress, then soak

  • Load test up to expected peak. Verify you meet SLOs with healthy headroom.
  • Stress test past peak. Identify the first point of failure and the failure mode (timeouts, throttling, 500s, resource saturation).
  • Soak test at realistic peak for hours to uncover leaks, drift, and periodic jobs that cause spikes.
  • Spike test to ensure the system recovers quickly and autoscaling policies are effective.

7) Analyze results with a bottleneck-first mindset

  • Correlate latency percentiles with resource saturation and queue lengths. Tail latency matters more than averages.
  • Use traces to locate slow spans (DB queries, external calls). Evaluate N+1 patterns and serialization overhead.
  • Check connection/thread pool saturation, slow GC cycles, and lock contention. Increase limits only when justified by evidence.

8) Optimize, then re-test

  • Quick wins: add missing indexes, adjust query plans, tune timeouts/retries, increase key connection pool sizes, and cache expensive calls.
  • Structural fixes: batch operations, reduce chattiness, implement backpressure, introduce circuit breakers, and precompute hot data.
  • Re-run the same tests with identical parameters to validate improvements and prevent “moving goalposts.”

9) Automate and guard your pipeline

  • Include a fast performance smoke test in CI for critical endpoints with strict budgets.
  • Run heavier tests on a schedule or before major releases. Gate merges when budgets are exceeded.
  • Track trends across builds; watch for slow creep in p95/p99 latency.
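
To watch for slow creep across builds, one option is to append each run's p95 to a small history file and flag both budget breaches and drift from the recent median; the file name, budget, and drift ratio in this sketch are hypothetical.

    # Append this build's p95 to a JSON history file and fail the job if latency crept past
    # the budget or drifted noticeably above the recent median. Names and limits are examples.
    import json
    import statistics
    import sys
    from pathlib import Path

    HISTORY = Path("perf_history.json")

    def record_and_check(p95_ms, budget_ms=350, drift_ratio=1.2, window=10):
        history = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
        history.append(p95_ms)
        HISTORY.write_text(json.dumps(history[-100:]))  # keep a bounded history

        if p95_ms > budget_ms:
            sys.exit(f"p95 {p95_ms} ms exceeds budget {budget_ms} ms")
        recent = history[-window:-1]  # previous runs, excluding the one just recorded
        if len(recent) >= 3 and p95_ms > drift_ratio * statistics.median(recent):
            sys.exit(f"p95 {p95_ms} ms is more than {round((drift_ratio - 1) * 100)}% above the recent median")

    # record_and_check(312)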

10) Operate with feedback loops

  • Monitor in production with dashboards aligned to your test metrics. Alert on SLO burn rates.
  • Use canary releases and feature flags to limit blast radius while you observe real-world performance.
  • Feed production incidents back into test scenarios. If a cache stampede happened once, codify it in your spike test.

Practical example: Planning for an ecommerce sale

Goal: Maintain p95 ≤ 350 ms and error rate < 1% at 1,500 RPS; scale to 2,000 RPS with graceful degradation (return cached recommendations if backend is slow).

  1. Workload: 60% browsing, 30% search, 10% checkout; open model arrival rate. Include think time for browse flows and omit it for backend APIs.
  2. Baseline: At 800 RPS, p95 = 240 ms, p99 = 480 ms, error rate = 0.2%. CPU 55%, DB connections 70% used, cache hit rate 90%.
  3. Load to 1,500 RPS: p95 rises to 320 ms, p99 to 700 ms, errors 0.8%. DB connection pool hits 95% and queue time increases on checkout.
  4. Stress to 2,200 RPS: p95 600 ms, p99 1.8 s, errors 3%. Traces show checkout queries with sequential scans. Connection pool saturation triggers retries at the gateway, amplifying load.
  5. Fixes: Add index to orders (user_id, created_at), increase DB pool from 100 to 150 with queueing, add jittered retries with caps, enable cached recommendations fallback.
  6. Re-test: At 1,500 RPS, p95 = 280 ms, p99 = 520 ms, errors 0.4%. At 2,000 RPS, p95 = 340 ms, p99 = 900 ms, errors 0.9% with occasional fallbacks—meets objectives.
  7. Soak: 6-hour run at 1,500 RPS reveals memory creep in the search service. Heap dump points to a cache not honoring TTL. Fix and validate with another soak.

Interpreting results: a quick triage guide

  • High latency, low CPU: Likely I/O bound—database, network calls, or lock contention. Check connection pools and slow queries first.
  • High CPU, increasing tail latency: CPU bound or GC overhead. Optimize allocations, reduce serialization, or scale up/out.
  • Flat throughput, rising queue times: A hard limit (thread pool, DB pool, rate limit). Increase capacity or add backpressure.
  • High error rate during spikes: Timeouts and retries compounding. Tune retry policies, implement circuit breakers, and fast-fail when upstreams are degraded.

Optimization tactics that pay off

  • Focus on p95/p99: Tail latency hurts user experience. Optimize hot paths and reduce variance.
  • Batch and cache: Batch N small calls into one; cache idempotent results with coherent invalidation.
  • Control concurrency: Limit in-flight work with semaphores; apply backpressure when queues grow.
  • Right-size connection/thread pools: Measure saturation and queueing. Bigger isn’t always better; you can overwhelm the DB.
  • Reduce payloads: Compress and trim large JSON; paginate heavy lists.
  • Tune GC and memory: Reduce allocations; choose GC settings aligned to your latency targets.
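
As an example of the concurrency-control tactic above, a semaphore caps in-flight work so upstream queues stay bounded; the limit and the stand-in downstream call in this asyncio sketch are illustrative only.

    # Cap in-flight calls to a downstream dependency with a semaphore (simple backpressure).
    # The limit of 50 and the stand-in downstream call are illustrative, not recommendations.
    import asyncio

    MAX_IN_FLIGHT = 50
    semaphore = asyncio.Semaphore(MAX_IN_FLIGHT)

    async def call_downstream(payload):
        async with semaphore:          # extra callers wait here instead of flooding the dependency
            await asyncio.sleep(0.05)  # stand-in for the real I/O call
            return {"ok": True, "payload": payload}

    async def handle_batch(items):
        return await asyncio.gather(*(call_downstream(i) for i in items))

    # asyncio.run(handle_batch(range(500)))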

Governance without red tape

  • Publish SLOs for key services and pages. Keep them visible on team dashboards.
  • Define performance budgets for new features and enforce them in code review and CI.
  • Keep a living playbook of bottlenecks found, fixes applied, and lessons learned. Reuse scenarios across teams.

Common mistakes to avoid

  • Testing the wrong workload: A neat, unrealistic script is worse than none. Base models on production logs when possible.
  • Chasing averages: Median looks fine while p99 burns. Always report percentiles.
  • Ignoring dependencies: If third-party latency defines your SLO, model it.
  • One-and-done testing: Performance is a regression risk. Automate and re-run on every significant change.
  • Assuming autoscaling solves everything: It helps capacity, not necessarily tail latency or noisy neighbors. Measure and tune.

Quick checklist

  • Clear goals and SLOs defined
  • Realistic workloads with proper data variance
  • Baseline, load, stress, spike, and soak tests planned
  • Full observability: metrics, logs, traces
  • Bottlenecks identified and fixed iteratively
  • Automation in CI with performance budgets
  • Production monitoring aligned to test metrics

In short, performance testing isn’t a one-off gate—it’s a continuous practice that blends measurement, modeling, and engineering judgment. With clear objectives, realistic scenarios, and disciplined analysis, you’ll not only keep your app fast under pressure—you’ll understand precisely why it’s fast, how far it can scale, and what it costs to stay that way.

Some books about performance:

These are Amazon affiliate links, so I make a small percentage if you buy the book. Thanks.

  • Systems Performance (Addison-Wesley Professional Computing Series) (Buy from Amazon, #ad)
  • Software Performance Testing: Concepts, Design, and Analysis (Buy from Amazon, #ad)
  • The Art of Application Performance Testing: From Strategy to Tools (Buy from Amazon, #ad)



Friday, September 27, 2013

What are the parameters of QoS - Quality of Service?

With the arrival of new technologies, applications, and services in the field of networking, competition is rising rapidly. Each of these is developed with the aim of delivering QoS (quality of service) that is at least on par with, if not better than, that of legacy equipment. Network operators and service providers trade on trusted brands, and maintaining those brands is critically important to their business. The biggest challenge is to put the technology to work so that customer expectations for availability, reliability, and quality are met, while still giving network operators the flexibility to adopt new techniques quickly.

What is Quality of Service?

- Quality of service is defined by a set of parameters that play a key role in the acceptance of new technologies.
- ETSI is one of the organizations working on several QoS specifications.
- It has actively participated in organizing inter-operability events on speech quality.
- The importance of QoS parameters has grown with the increasing inter-connectivity of networks and the interaction between many service providers and network operators in delivering communication services.
- Quality of service gives you the ability to specify parameters on multiple queues in order to boost the performance and throughput of wireless traffic such as VoIP (Voice over IP) and streaming media, including different types of audio and video.
- This is also done for ordinary IP traffic over the access points.
- Configuring quality of service on these access points involves setting many parameters on the queues that already exist for various types of wireless traffic.
- The minimum and maximum wait times for transmission are also specified.
- This is done through the contention windows.
- The flow of traffic from the access point to the client stations is affected by the AP EDCA (enhanced distributed channel access) parameters.
- The flow of traffic from the client stations to the access point is controlled by the station EDCA parameters.

Below are some of the parameters:
- QoS preset: Typical options are WFA Defaults, Optimized for Voice, and Custom.
- Queue: Different queues are defined for the different types of data transmitted from the AP to the client stations:
  - Voice (data 0): High-priority queue with minimum delay. Time-sensitive data such as VoIP and streaming media is automatically put in this queue.
  - Video (data 1): High-priority queue with minimum delay. Time-sensitive video data is automatically put in this queue.
  - Best effort (data 2): Medium-priority queue with medium delay and throughput. This queue holds most traditional IP data.
  - Background (data 3): Lowest-priority, high-throughput queue. Bulk data that requires high throughput but is not time sensitive, such as FTP traffic, is queued here.

- AIFS (Arbitration Inter-Frame Space): The wait time for data frames, measured in slots. Valid values range from 1 to 255.
- Minimum contention window (cwMin): Input to the algorithm that determines the initial random back-off wait time before retransmission.
- Maximum contention window (cwMax)
- Maximum burst
- Wi-Fi Multimedia (WMM)
- TXOP limit
- Bandwidth
- Delay variation
- Synchronization
- Cell error ratio
- Cell loss ratio
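
To see how AIFS, cwMin, and cwMax interact, here is a small illustrative Python sketch of an EDCA-style random back-off computation; the slot time and per-queue values are typical 802.11 figures used only for illustration, not a device configuration.

    # Illustrative EDCA-style back-off: wait AIFS slots, then a random number of slots drawn
    # from the contention window, which roughly doubles (up to cwMax) after each failed attempt.
    # Slot time and per-queue values are typical 802.11 figures, not a device configuration.
    import random

    SLOT_TIME_US = 9  # microseconds, typical for 802.11a/g/n

    def backoff_us(aifs_slots, cw_min, cw_max, retries=0):
        cw = min(cw_max, (cw_min + 1) * (2 ** retries) - 1)  # contention window grows per retry
        return (aifs_slots + random.randint(0, cw)) * SLOT_TIME_US

    # Example queues: voice contends far more aggressively than background traffic.
    print("voice     ", backoff_us(aifs_slots=2, cw_min=3, cw_max=7))
    print("background", backoff_us(aifs_slots=7, cw_min=15, cw_max=1023))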



Friday, July 12, 2013

Sliding Window Protocols? – Part 1

- Among the many types of data transmission protocols, one important class is the packet-based protocols. 
- Many of these protocols rely on a technique called the sliding window protocol.
- Sliding window protocols are a great help wherever reliable, in-order delivery of data packets is required. 
- For example, the data link layer of the OSI model and TCP (Transmission Control Protocol) at the transport layer demand such reliability and therefore use sliding window protocols. 
- Under the sliding window concept, a unique, consecutive sequence number is assigned to each portion of the transmission, i.e., each packet.
- The receiver uses these numbers to place the packets it receives in the correct order. 
- These numbers also allow missing packets to be identified and duplicate packets to be discarded. 
- One problem is that, without further constraints, there is no limit on how large these sequence numbers need to grow. 

- By placing a limit on the number of packets that can be in transmission or awaiting acknowledgement at any instant, this problem can be avoided. 
- By this, we mean that sequence numbers of a fixed size can be used. 
- The term window refers to the transmission side. 
- It represents the logical boundary, or limit, on the number of packets that can be outstanding, i.e., sent but not yet acknowledged by the receiver. 
- With each ACK (acknowledgement) packet, the receiver informs the transmitter of the maximum size, or window boundary, of its current receive buffer. 
- A 16-bit field in the TCP header is used to report this receive window size. 
- The maximum window size is therefore 2^16 bytes, i.e., 64 KB. 
- When operating in slow-start mode, the transmitter begins with a low packet count.
- The number of packets in flight then grows gradually as ACK packets are received. 
- Whenever an ACK packet is received, the window logically slides forward by one packet, allowing a new packet to be transmitted. 
- On reaching the window threshold, the transmitter sends exactly one packet for every ACK packet received. 
- Suppose the window limit is 10 packets and the transmitter is in slow-start mode. 
- It first transmits one packet; once that packet is acknowledged, it transmits two more, then four, and so on. 
- This process continues until the limit of 10 is reached. 
- Beyond the limit, transmission is restricted: for every ACK packet received, only one new data packet is transmitted. 
- Viewed in a simulation, the window appears to shift forward by a distance of one packet whenever an ACK packet is received. 
- The sliding window protocol also goes a long way toward avoiding traffic congestion.
- The application layer does not have to worry about when to transmit the next set of data packets. 
- It can keep handing data over, because TCP implements the sliding window packet buffers on both sides, the sender's and the receiver's. 
- However, network traffic dynamically influences the window size to a great extent. 
- To achieve the highest possible throughput, care should be taken that the sliding window protocol does not force the transmitter to stop transmitting within one RTT (round-trip delay time). 
- The bandwidth-delay product of the links in the communication should be less than the amount of data that can be sent before an ACK packet is required. 
- If this condition is not met, the protocol will limit the links' effective bandwidth. 
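
A toy simulation can make the window behaviour concrete; the Python sketch below is a simplified sender that keeps at most a fixed window of unacknowledged packets in flight and slides forward as ACKs arrive on an ideal, in-order network. It illustrates the idea only and is not TCP.

    # Toy sliding-window sender: at most `window` packets may be unacknowledged at once;
    # each ACK slides the window forward so one more packet can be sent. Not real TCP.
    from collections import deque

    def simulate(total_packets, window):
        next_seq = 0         # next sequence number to send
        in_flight = deque()  # sequence numbers sent but not yet acknowledged
        acked = 0
        while acked < total_packets:
            # send while the window has room
            while next_seq < total_packets and len(in_flight) < window:
                in_flight.append(next_seq)
                print(f"send  packet {next_seq}")
                next_seq += 1
            # receive the ACK for the oldest outstanding packet (ideal, in-order network)
            seq = in_flight.popleft()
            acked += 1
            print(f"ack   packet {seq}  (window slides forward)")

    simulate(total_packets=6, window=3)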


Friday, December 28, 2012

What is the difference between Purify and traditional debuggers?


IBM Rational Purify gives developers the power to deliver a product whose quality, reliability, and performance match user expectations. PurifyPlus combines three capabilities:
  1. Bug-finding capabilities from Rational Purify,
  2. Performance tuning from Rational Quantify, and
  3. Testing rigor from Rational PureCoverage.
Together, these make Purify a different kind of debugger from the traditional debuggers we have. The benefits show up as faster development times, fewer errors, and better code. 

About IBM Rational Purify

- Purify is essentially a memory debugger, used in particular to detect memory access errors in programs written in languages such as C and C++. 
- The software was originally developed by Reed Hastings at Pure Software. 
- Rational Purify offers functionality similar to that of Valgrind, BoundsChecker, and Insure++.
- Like a debugger, Rational Purify supports dynamic verification, a process through which errors that occur during execution can be discovered.
- It also supports static verification, the opposite of dynamic verification. 
- That process works by digging out inconsistencies present in the program logic. 
- When a program is linked with Purify, verification code is automatically inserted into the executable, either by adding it to the object code or by parsing it. 
- So whenever an error occurs, the tool prints the location of the error, its memory address, and other relevant information. 
- Similarly, whenever Purify detects a memory leak, it generates a leak report as the program exits.

Difference between Rational Purify and Traditional Debuggers

- The major difference between Rational Purify and traditional debuggers is its ability to detect non-fatal errors. 
- Traditional debuggers mainly surface the sources of fatal errors, such as dereferencing a null pointer, which can crash a program; they are not effective at finding non-fatal memory errors. 
- However, there are certain things for which traditional debuggers are more effective than Rational Purify, for example:
- Debuggers can be used to step through the code line by line and to examine the program's memory at any particular instant in time. 
- It would not be wrong to say that the two tools are complementary and work well together for a skilled developer. 
- Purify also comes with functionality that serves more general purposes, whereas debuggers are used only on the code itself.
- One thing to note about Purify is that it is most effective for programming languages in which memory management is left to the program developer. 
- This is why memory leaks are less common in programs written in languages such as Java, Visual Basic, and Lisp. 
- It is not that these languages never have memory leaks; they do, caused by objects being referenced unnecessarily (which prevents the memory from being reclaimed). 
- IBM also provides a solution for these kinds of errors in another product, Rational Application Developer.
- Errors such as the following are covered by Purify:
  1. Array bounds violations
  2. Access to unallocated memory
  3. Freeing memory that was never allocated
  4. Memory leaks, and so on. 


Tuesday, July 10, 2012

What Tools are used for code coverage analysis?


Code coverage analysis is an essential part of a complete and efficient software testing process. 
The analysis consists of the following three basic activities:
  1. Identifying the areas of the software system or application that have not been exercised by the set of tests performed so far.
  2. Creating additional test cases so that code coverage can be increased.
  3. Determining a quantitative measure of code coverage, which provides an indirect (if rough) measure of the quality of the software system or application.
Apart from this, there is one more optional aspect of code coverage analysis: it helps identify redundant test cases, i.e., those that do not actually increase the measured code coverage.
In this article we discuss the tools that make this whole process of code coverage analysis considerably easier.

Tools Used for Code Coverage Analysis


- Code coverage analysis is an effort- and time-consuming process and is therefore nowadays automated using tools such as code coverage analyzers. 
- However, a code coverage analyzer cannot always be used, for example when the tests have to be run against the release candidate.
- Many different tools are available for code coverage analysis, varying by programming language.

  1. Tools for C and C++:
a) Tcov
b) BullseyeCoverage
c) Gcov
d) LDRA Testbed
e) NuMega TrueCoverage
f) Tessy
g) Trucov
h) froglogic Squish Coco
i) Parasoft C++test
j) Testwell CTC++
k) McCabe IQ
l) Insure++
m) Cantata

  2. Tools for C#:
a) McCabe IQ
b) JetBrains dotCover
c) NCover
d) Visual Studio 2010
e) Parasoft dotTEST
f) TestDriven.NET
g) Kalistick
h) DevPartner

  3. Tools for Java:
a) McCabe IQ
b) Clover
c) EMMA
d) Kalistick
e) JaCoCo
f) JMockit Coverage
g) CodeCover
h) LDRA Testbed
i) Jtest
j) DevPartner
k) Cobertura

  4. Tools for JavaScript:
a) McCabe IQ
b) JSCoverage
c) Code coverage
d) ScriptCover
e) Coveraje

  5. Tools for Perl:
a) McCabe IQ
b) Devel::Cover

  6. Tools for Haskell:
a) HPC (Haskell Program Coverage) toolkit

  7. Tools for Python:
a) McCabe IQ
b) Figleaf
c) Pester
d) Coverage.py

  8. Tools for PHP:
a) McCabe IQ
b) PHPUnit

  9. Tools for Ruby:
a) Rcov
b) McCabe IQ
c) SimpleCov
d) CoverMe

  10. Tools for Ada:
a) GNATcoverage
b) McCabe IQ
c) RapiCover

Of all the above-mentioned tools for C and C++, BullseyeCoverage has proven to be the best code coverage analyzer in terms of reliability, usability, and platform support. 
This coverage analyzer differs from the other analyzers in the following ways:
  1. Better coverage measurement
  2. Wide platform support
  3. Rigorously tested
  4. Efficient technical support
  5. Quite easy to use
- Using this tool, you can determine how much of the software system's or application's code was tested, and that information can later be used to focus your testing efforts on the areas that require improvement.
- With BullseyeCoverage, more reliable code can be created and time can be saved. 
- The function coverage provided by BullseyeCoverage gives you very high precision.

You can include or exclude the parts of the code you choose. What's more, you can merge the results obtained from distributed testing, and run-time code from custom environments can also be included. 
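
As a small, concrete example of coverage analysis in practice, here is a hedged sketch using Coverage.py (listed above for Python); the module under test, myapp, is hypothetical.

    # Minimal Coverage.py usage via its Python API: measure which lines a test run executes,
    # then print a report. Assumes the coverage package; myapp is a hypothetical module under test.
    import coverage

    cov = coverage.Coverage()
    cov.start()

    import myapp                 # code under test (hypothetical)
    assert myapp.add(2, 3) == 5  # stand-in for running the real test suite

    cov.stop()
    cov.save()
    cov.report(show_missing=True)  # per-file statement coverage, with missing line numbers

The equivalent command-line workflow is typically coverage run -m pytest followed by coverage report.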

