Thursday, July 26, 2012
How can data caching have a negative effect on load testing results?
Posted by Sunflower at 7/26/2012 01:49:00 PM | 0 comments
Labels: Application, Caching, Conditions, Data, Data Caching, Impact, Inconsistent, Load Testing, Negative, Pitfalls, Purpose, Results, Retrieve, Rules, Server, Software System, Testing, Virtual user
Friday, March 9, 2012
What is meant by Storm Worm?
Storm Worm? You may not recognize this worm at first, since you might know it by one of the following other names:
1. Small.dam, or Trojan-Downloader.Win32.Small.dam, as dubbed by the Finnish company F-Secure
2. W32/Nuwar@MM and Downloader-BAI, a specific variant (McAfee)
3. Trojan.DL.Tibs.Gen!Pac13
4. Trojan.Peacomm (Symantec)
5. Win32/Nuwar (ESET)
6. W32/Zhelatin (Kaspersky, F-Secure)
7. Trojan.Peed and Trojan.Tibs (BitDefender)
8. Win32/Nuwar.N@MM!CME-711 (Windows Live OneCare)
9. TROJ_SMALL.EDW (Trend Micro)
10. Trojan.Downloader-647
11. Troj/Dorf and Mal/Dorf (Sophos)
12. CME-711 (MITRE)
Evolution of Storm Worm
- It was recognized as a backdoor Trojan horse that had most of its impact on computer systems running Microsoft operating systems, applications, or extensions.
- The worm was first observed on January 17, 2007.
- Starting on January 19, 2007, the Storm Worm took effect in the United States and Europe, infecting thousands of computer systems.
- It was usually sent to users as an e-mail message whose subject line was a headline about a recent weather disaster, such as "230 dead as storm batters Europe".
- At the start of this cyber epidemic there were around six subsequent waves of attack.
- By the end of January 2007, the Storm Worm was said to account for 8 percent of all malware infections worldwide.
- According to PC World, the origin of the Storm Worm can be traced back to the Russian Business Network.
- The European windstorm Kyrill was most often used as the subject of the infected e-mails.
- The e-mail usually had an attachment accompanying it which, when opened, automatically installed the malware on the user's system.
Steps involved in installing the malware
The malware was installed via the following steps:
1. Installation of the wincom32 service
2. Injection of a payload
3. Passing of packets to the destinations encoded in the malware
4. Downloading and running of the W32.Mixor.Q@mm worm and the Trojan.Abwiz.F Trojan
These downloaded Trojans then attached themselves to spam under names like flashcard.exe, postcard.exe, and so on. Further changes to the original attack wave were made as the attack mutated. Listed below are some other prominent spam attachments:
1. Ecard.exe
2. FullStory.exe
3. ReadMore.exe
4. GreetingPostcard.exe
5. FullNews.exe
6. ArcadeWorld.exe
7. FullVideo.exe
8. Video.exe
9. FullClip.exe
10. MoreHere.exe
11. ClickHere.exe
12. NflStatTracker.exe
13. ArcadeWorldGame.exe
Later the Storm Worm came to be spread using love-themed subjects such as "touched by love", "love birds", and so on. These e-mails carried links to malicious web sites hosting files such as:
1. with_love.exe
2. withlove.exe
3. frommetoyou.exe
4. fck2008.exe
5. fck2009.exe
6. love.exe
7. iheartyou.exe
The Storm Worm has an exceptional ability to stay resilient. An affected machine or system becomes part of a botnet, but where a typical botnet is controlled through a central server, the botnet seeded by the Storm Worm acts as a peer-to-peer (P2P) network with no centralized control. Each connected system acts as a host and shares lists of other hosts, and one peculiarity was observed in the working of these machines: none of them ever shares the complete list of the botnet, so every node knows only a fragment of the whole network.
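To make that peer-list behaviour concrete, here is a minimal, purely illustrative Python sketch of how nodes in a decentralized overlay can gossip partial host lists. It is not the Storm Worm's actual protocol; the Peer class, the MAX_SHARED limit, and all behaviour shown are assumptions for illustration only.

import random

MAX_SHARED = 5  # a peer only ever reveals a small slice of what it knows

class Peer:
    """Illustrative node in a decentralized (P2P) overlay."""

    def __init__(self, address):
        self.address = address
        self.known_hosts = set()  # partial view, never the whole network

    def share_hosts(self):
        # Reveal at most MAX_SHARED known hosts; no peer exposes its full
        # list, so the network cannot be mapped from any single node.
        sample_size = min(MAX_SHARED, len(self.known_hosts))
        return random.sample(sorted(self.known_hosts), sample_size)

    def gossip_with(self, other):
        # Exchange partial lists; both views grow but remain incomplete.
        self.known_hosts.update(other.share_hosts())
        other.known_hosts.update(self.share_hosts())
        self.known_hosts.discard(self.address)
        other.known_hosts.discard(other.address)

Because every node sees and shares only a fragment of the network, taking down any single host (or even every host it knows about) leaves the rest of the overlay intact, which is what made such botnets so hard to dismantle.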
Posted by Sunflower at 3/09/2012 10:57:00 PM | 0 comments
Labels: Application, Computer system, Connected systems, Control, Destination, Hosts, Impact, Infection, Installation, Machines, Malware, Network, Packets, Security, Source, Storm Worm, Trojan Horse, Users
Tuesday, March 8, 2011
Software Architecture Design - why is it important?
The architecture is not the operational software; rather, it is a representation that enables a software engineer to analyze the effectiveness of the design in meeting its stated requirements, to consider architectural alternatives at a stage when making design changes is still relatively easy, and to reduce the risks associated with the construction of the software.
- Software architecture enables communication among all parties interested in the development of a computer-based system.
- Architecture highlights the early design decisions that have a profound impact on all software engineering work that follows.
- Architecture constitutes a relatively small, intellectually graspable model of how the system is structured and how its components work together.
The architectural design model and the architectural patterns contained within it are transferable. Architectural styles and patterns can be applied to the design of other systems and represent a set of abstractions that enable software engineers to describe architecture in predictable ways.
Software architecture considers two levels of the design pyramid: data design and architectural design. The software architecture of a program or computing system is the structure or structures of the system, which comprise the software components, the externally visible properties of those components, and the relationships among them.
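As a toy illustration of components, externally visible properties, and the relationships among them, consider the following Python sketch; all class and method names here are hypothetical and chosen only to make the idea concrete.

from abc import ABC, abstractmethod

class OrderStore(ABC):
    """A component described only by its externally visible property:
    the interface it exposes, not how it happens to be implemented."""

    @abstractmethod
    def save(self, order_id: str, payload: dict) -> None: ...

class InMemoryOrderStore(OrderStore):
    """One concrete component that satisfies the visible interface."""

    def __init__(self):
        self._orders = {}

    def save(self, order_id, payload):
        self._orders[order_id] = payload

class CheckoutService:
    """The relationship between components is an explicit dependency on
    the OrderStore interface, so implementations can be swapped while
    the architecture itself stays the same."""

    def __init__(self, store: OrderStore):
        self._store = store

    def place_order(self, order_id, payload):
        self._store.save(order_id, payload)

Reasoning at this level (which components exist, what they expose, and how they depend on each other) is exactly the kind of analysis the architectural representation is meant to support.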
Posted by Sunflower at 3/08/2011 06:08:00 PM | 1 comment
Labels: Architecture, Communication, Components, computers, Design, Impact, Levels, Operational, Patterns, program, Representation, Software, Software Architecture, Stages, Structures
Tuesday, January 4, 2011
What is the need to execute Network Sensitivity Tests?
The three principal reasons for executing network sensitivity tests are as follows:
- Determine the impact of a WAN link on response time.
- Determine the capacity of a system based on a given WAN link.
- Determine the impact on the system under test when it is subjected to a dirty communications load.
Executing performance and load tests for the analysis of network sensitivity requires the test system to be configured to emulate a WAN. Once a WAN link has been configured, the performance and load tests conducted over it become network sensitivity tests.
There are two ways of configuring such tests:
- Use a simulated WAN and inject appropriate background traffic
This can be achieved by putting back-to-back routers between a load generator and the system under test. The routers can be configured to allow the required level of bandwidth, and instead of connecting to a real WAN, they connect directly to each other.
When back-to-back routers are configured as part of a test, they essentially limit the bandwidth. If the test is to be realistic, additional traffic must be applied across the routers. This can be achieved by having a web server at one end of the link serve pages while another load generator issues requests against it. It is important that the mix of traffic is realistic.
For example, a few continuous file transfers may impact response time differently than a large number of small transmissions. By forcing extra traffic over the simulated WAN link, latency will increase and some packet loss may even occur. While this is much more realistic than testing over a high-speed LAN, it does not take into account many features of a congested WAN, such as out-of-sequence packets.
- Use the WAN emulation facility within LoadRunner
The WAN emulation facility within LoadRunner supports a variety of WAN scenarios. Each load generator can be assigned a number of WAN emulation parameters, such as error rates and latency. WAN parameters can be set individually, or WAN link types can be selected from a list of pre-set configurations.
It is important to ensure that measured response times incorporate the impact of WAN effects both for an individual session (as part of a performance test) and under load (as part of a load test), because a system under a WAN-affected load may have to work much harder than a system performing the same actions over a clean communications link. The rough model below illustrates the size of this effect.
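As a back-of-envelope illustration of why WAN latency and bandwidth dominate measured response times, here is a simple first-order model in Python. The formula and every parameter value are illustrative assumptions, not measurements and not part of LoadRunner.

def estimated_response_time(server_time_s, round_trips, rtt_s,
                            payload_bytes, bandwidth_bps, loss_rate=0.0):
    """Crude first-order model of one transaction over a WAN link.

    server_time_s -- time the server spends producing the response
    round_trips   -- protocol round trips (handshakes, redirects, ...)
    rtt_s         -- round-trip time of the link, in seconds
    payload_bytes -- total bytes transferred
    bandwidth_bps -- link bandwidth in bits per second
    loss_rate     -- fraction of packets lost; retransmission stalls are
                     approximated as a proportional inflation of transfer time
    """
    transfer_s = (payload_bytes * 8) / bandwidth_bps
    transfer_s *= 1.0 + loss_rate * 10  # naive penalty for retransmissions
    return server_time_s + round_trips * rtt_s + transfer_s

# The same 200 KB page over a LAN versus a 512 kbit/s WAN with 80 ms RTT:
lan = estimated_response_time(0.2, 4, 0.0005, 200_000, 100_000_000)
wan = estimated_response_time(0.2, 4, 0.080, 200_000, 512_000, loss_rate=0.005)
print(f"LAN ~{lan:.2f}s, WAN ~{wan:.2f}s")  # roughly 0.22s vs 3.8s

Even though the server does identical work in both cases, the WAN figure is an order of magnitude worse; this is exactly the effect that the background-traffic and WAN-emulation setups described above are designed to expose.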
Posted by Sunflower at 1/04/2011 04:17:00 PM | 0 comments
Labels: Configuring, Impact, Load, Load tests, LoadRunner, Network, Network Sensitivity Tests, Response time, Routers, Sensitive, Tests, traffic, WAN
Tuesday, September 14, 2010
Risk Based Testing and the strategy behind risk based testing
Risk analysis is applicable at the level of the system, a subsystem, or an individual function or module. Testing is used in software development to reduce the risks associated with a system. Risk-based testing (RBT) is a type of software testing that prioritizes the features and functions to be tested based on the risk they represent.
Risk-based testing is a skill. It’s not easy to know the ways that a product might fail, determine how important the failures would be if they occurred, and then develop and execute tests to discover whether the product indeed fails in those ways.
The main input into risk-based testing is the business requirements supplied by the customer of a software application or system, which outline all of the features that must be present and explain how they should work, how each process should function, and what the software should do.
Test managers prioritize tests to fit in with the project's schedule and the test resources available. A risk-based approach to testing takes a much deeper look at the real underlying needs of the project and what really matters to the end customer.
Risk-based testing is about carefully analyzing each requirement and each test to ensure that the most important areas of the system, and at the same time those areas most likely to experience a failure, receive the most attention from the test team. When risk-based testing is deployed, every requirement must be rated for both the likelihood of failure and the impact of failure.
By analyzing the risk of a failure occurring in a specific component or feature, together with the impact if that component or feature failed in a real-life situation, project resources can be allocated more efficiently to focus on testing what really matters in the limited time available, as the sketch below illustrates.
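As a minimal sketch of how such ratings might drive prioritization, assume each requirement is scored on a 1-to-5 scale for likelihood and for impact; the scale, the field names, and the example requirements below are illustrative assumptions, not prescriptions.

from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (trivial) .. 5 (critical)

    @property
    def risk_score(self) -> int:
        # Classic risk exposure: likelihood multiplied by impact.
        return self.likelihood * self.impact

requirements = [
    Requirement("Checkout payment flow", likelihood=4, impact=5),
    Requirement("Profile avatar upload", likelihood=3, impact=1),
    Requirement("Nightly report export", likelihood=2, impact=4),
]

# Test the riskiest requirements first; if time runs out, the items cut
# from the bottom of the list are the ones that matter least.
for req in sorted(requirements, key=lambda r: r.risk_score, reverse=True):
    print(f"{req.risk_score:>2}  {req.name}")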
A risk-based testing (RBT) approach can help save time and reduce costs on your testing project. Risk-based testing enables the test manager to make an informed choice when allocating test resources on a project.
Posted by Sunflower at 9/14/2010 05:32:00 PM | 1 comment
Labels: Approach, Development, Failure, Impact, Process, Product, Quality, RBT, Risk Analysis, Risk based testing, Risks, Software, Test Strategy, Tests
Monday, February 2, 2009
Changing requirements and implications on testing
An ideal software development cycle involves a process whereby the requirements are frozen fairly early, and the entire cycle proceeds with those frozen requirements. If requirements do need to change, a major impact analysis happens, and the change is thoroughly studied before it is accepted. However, in the real world (and as increasingly acknowledged by incremental and Agile software methodologies), requirements can and do change, and the software industry would be better served by putting far more effort into figuring out how to incorporate changing requirements. One of the groups most affected by changing requirements is the testing team, and it is worth evaluating how testers can respond to such changes. Let us start by calling it a common problem and a major headache, and then work out what we can do. Here are some steps:
• Work with the project's stakeholders early on to understand how requirements might change (stakeholders have a much better idea on whether the requirements are fully known and stable) so that alternate test plans and strategies can be worked out in advance, if possible.
• It's helpful if the application is initially designed in a manner that allows for adaptability, so that later changes do not require redoing the application from scratch, or at least so that the amount of effort required for a change is minimized.
• Coding practices of commenting and documenting, if followed religiously, make handling changes easier for the developers.
• Another way to minimize the need for changing requirements is to present a prototype to the stakeholders and end users early enough in the cycle. This helps customers feel sure of their requirements and minimizes changes.
• The project's initial schedule should allow for some extra time commensurate with the possibility of changes; it is better to build such time into the schedule up front.
• If possible, and if there is some amount of flexibility in negotiating with the client, try to move new requirements to a 'Phase 2' version of the application, while using the original requirements for the 'Phase 1' version. This, however, does not work if the changes affect the workflows directly.
• Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application. This should be possible if there is a good change control process in the project.
• Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. This is typically done by delineating a proper change control process and explaining this to the stakeholders along with examples if necessary. Only after this, let management or the customers decide if the changes are warranted.
• Changes have a major effect on test automation systems, especially if there is a change in the UI of the application. Hence, be sure that the effort put into setting up automated testing is commensurate with the rework expected if a change forces the test effort to be redone.
• Try to design some flexibility into automated test scripts (see the sketch after this list). This is not easy, but if you have an early idea of the likely changes, it should be possible.
• Focus initial automated testing on application aspects that are most likely to remain unchanged. This ensures that later test automation effort is done when there is some stability in the requirements.
• Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
• The last suggestion may seem very strange to a test manager: focus less on detailed test plans and test cases and more on ad hoc testing; keep in mind, however, that this entails a certain risk.
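One common way to build the flexibility mentioned above is to keep UI locators out of the test cases themselves, in the spirit of the page-object pattern. The sketch below is a generic Python illustration: the driver object stands for a hypothetical thin wrapper around whatever automation tool is in use, and all locators and names are assumptions, not taken from any real application.

class LoginPage:
    """Page object: the only place that knows how the login UI is built.
    When the UI changes, only these locators need updating; tests don't."""

    # Locators isolated in one spot; update here when the markup changes.
    USERNAME_FIELD = ("id", "username")
    PASSWORD_FIELD = ("id", "password")
    SUBMIT_BUTTON = ("css", "button[type='submit']")

    def __init__(self, driver):
        self._driver = driver

    def log_in(self, user, password):
        self._driver.type_into(self.USERNAME_FIELD, user)
        self._driver.type_into(self.PASSWORD_FIELD, password)
        self._driver.click(self.SUBMIT_BUTTON)

def test_valid_login(driver):
    # The test reads as intent rather than UI plumbing; a redesigned login
    # screen means editing LoginPage once, not every test that logs in.
    LoginPage(driver).log_in("alice", "s3cret")

A redesigned login form then costs one edit to LoginPage instead of a sweep through the whole automated suite, which is precisely the kind of flexibility that makes changing requirements survivable.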
Overall, when requirements are changing, teams also need to be more flexible to respond to such changes.
Posted by Ashish Agarwal at 2/02/2009 12:45:00 AM | 0 comments
Labels: Change, Impact, Requirements, Testing