Testing Blog

Performance Testing

Monday, October 08, 2007
Labels: Goranka Bjedov

24 comments:

  1. Unknown, October 8, 2007 at 11:47:00 PM PDT

    Bravo Goranka!!

    One of these days you, Rob Sabourin, and I need to do a joint piece about exploratory performance testing! (Don't worry, Rob and I have been working on this piece since the end of day 2 at WOPR1).

    The only thing I want to point out to the general public is that the two-to-four weeks you mention is a testament to the fact that you are starting with very performance-aware and performance-concerned developers and admins, that your team reacts quickly to the performance issues you do detect, and that you are quite good at what you do.

    I'm not saying that this is not achievable by other teams - it absolutely is! But depending on where a team is starting from, it may take having the performance tester on board from day 1, working side by side with the developers and admins to help them become more performance aware, etc. for a while before they get to the two-to-four-week time frame.

    The notion that you can buy a tool, send someone to three days of vendor training, and then expect them to conduct a single test cycle with the tool that generates useful results (which the team has time to react to) during the final two weeks of a project is just as bogus as it has ever been. Or, as I'm often quoted...

    Only performance testing at the conclusion of system or functional testing is like ordering a diagnostic blood test after the patient is dead.

    Again, fabulous insights. I'll reference it often.

    Cheers,
    Scott
    --

    Scott Barber
    President & Chief Technologist,
    PerfTestPlus, Inc.
    Executive Director,
    Association for Software Testing

    "If you can see it in your mind...
    you will find it in your life."

  2. Unknown, October 9, 2007 at 5:50:00 AM PDT

    So what would you recommend as an entry-level performance profiling tool for Java applications? "Entry-level", because I'd like to get the students in my undergraduate software engineering course to tune their programs, and this will be the first time most of them have had to worry about performance.
    - Greg (gvwilson@cs.toronto.edu)

  3. Unknown, October 9, 2007 at 9:09:00 AM PDT

    Thanks Scott - and all good points. Because I work on infrastructure that is really well suited for performance testing, and I have extremely interested development teams, I can turn projects around in 2 - 4 weeks. But not every project would be like that. Would love to work with you and Rob... One question - does the marsupial get writing credits as well? :)

    Greg, I would suggest JProf. It has been a while since I last taught a class, but I would not be against giving different groups different tools and asking them to write reports - what worked well, what could be better, etc. This may end up being the best lesson they get - we do this all the time in "real life."

  4. James Chang, December 29, 2007 at 7:40:00 AM PST

    Great post.

    If applications are bridges, performance testing is finding out whether or not the bridge will collapse when people use it... before people use it.

    On the comment about open source tools:
    I too have used open source tools (pretty much exclusively). I would agree that they are quite powerful; my only complaint is that the vast majority of the tools I've been using have bugs that I tend to stumble upon, which costs me a fair amount of time rechecking my work due to concerns about the validity of my data. Enterprise-level tools at least offer someone to yell at when that happens :).

    It's awesome that you have usage data to work with and a rock-solid infrastructure. I would say that for me (and probably for most companies that are looking into performance testing) this is usually not the case: identifying usage patterns can be a fair amount of work in itself, and an infrastructure that isn't as stable requires more testing, and more specific testing. In particular, on unstable systems I think the level of granularity needed to make comparative judgements between codebases simply isn't there, and so benchmarking isn't as revealing as one would hope.

    In lieu of benchmark comparisons, I've found a lot of success emulating user behavior at peak levels of load for extended periods of time and monitoring system performance for degradation during the course of the test. Another approach that has been extremely effective for me is targeted tests that exercise specific services at projected levels, combined with a peak load test.
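
    For anyone who wants to see the shape of such a test, here is a minimal sketch of a soak-style run along those lines (my own illustration in Python, not something from the post); the URL, user count, duration, and think time are made-up placeholders, and a real test would normally use a proper load tool rather than a hand-rolled script:

    # Minimal soak-test sketch: hold a steady peak-like load for an extended
    # period and watch for response-time degradation over the course of the run.
    # TARGET_URL, USERS, DURATION_S, and the think time are placeholders.
    import statistics
    import threading
    import time
    import urllib.request

    TARGET_URL = "http://test-env.example.com/"   # hypothetical system under test
    USERS = 20                                    # concurrent simulated users
    DURATION_S = 4 * 3600                         # run for four hours
    WINDOW_S = 300                                # summarize every 5 minutes

    results, errors = [], []                      # (timestamp, latency) records
    lock = threading.Lock()

    def user_loop(stop_at):
        while time.time() < stop_at:
            start = time.time()
            try:
                urllib.request.urlopen(TARGET_URL, timeout=30).read()
                ok = True
            except Exception:
                ok = False                        # failures are counted, not timed
            elapsed = time.time() - start
            with lock:
                (results if ok else errors).append((start, elapsed))
            time.sleep(1.0)                       # fixed think time between requests

    stop_at = time.time() + DURATION_S
    threads = [threading.Thread(target=user_loop, args=(stop_at,)) for _ in range(USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Median latency per window: a steady upward trend is the degradation signal.
    first = min(ts for ts, _ in results)
    windows = {}
    for ts, latency in results:
        windows.setdefault(int((ts - first) // WINDOW_S), []).append(latency)
    for w in sorted(windows):
        print("window %3d: median %.3fs over %d requests"
              % (w, statistics.median(windows[w]), len(windows[w])))
    print("%d failed requests" % len(errors))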

    On your comment about functional testing:
    I would agree that the rendering of the page is in the domain of functional testing, but that being said, I hate assuming that it just worked! I have not done this yet, but I am considering running functional tests at the same time as the peak-usage load described above to validate that remaining portion (it also provides even greater coverage of system behavior under load).

    The biggest problems I'm having involve identifying failure. I'm curious how you approach the identification of "failure". It sounds simple, but what I've been running into a lot are errors that I wasn't looking for. For example, during 100,000 data submissions, 12 of them happened to be corrupted; this was unexpected, so it was only by chance that, while running through the logs, I noticed some corrupted data.

    I would preach that log analysis is an absolutely critical facet of performance testing, but what sort of things do you do to define a "failure" in a system aside from scanning logs and monitoring behavior? As simple as it is to say "the system broke", there doesn't seem to be a very good science for identifying "broken", and as simple as it may sound, I'm curious how you know when something is broken.
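
    To illustrate the log-scanning side of that, here is a rough sketch (my own, in Python, with an invented file name and patterns) of flagging any log line that does not match a list of things you already expect, so that surprises like those 12 corrupted submissions surface as a counted bucket rather than hiding in the noise:

    # Rough sketch: surface "errors you weren't looking for" by flagging any log
    # line that does not match a known/expected pattern. The file name and the
    # patterns are made up for illustration.
    import re
    from collections import Counter

    EXPECTED = [
        re.compile(r" 200 "),                  # normal successful requests
        re.compile(r"cache refresh complete"), # known benign noise
    ]

    unexpected = Counter()
    with open("loadtest-app.log") as log:      # hypothetical application log
        for line in log:
            if not any(p.search(line) for p in EXPECTED):
                # Bucket by a crude signature so 12 similar corruption errors
                # show up as one line with a count, not 12 scattered lines.
                signature = re.sub(r"\d+", "N", line.strip())[:120]
                unexpected[signature] += 1

    for signature, count in unexpected.most_common(20):
        print("%6d  %s" % (count, signature))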

    An excellent post Goranka, more please!

    James Chang
    QA Analyst
    Parature Inc
    http://www.parature.com/

  5. Unknown, January 2, 2008 at 11:00:00 PM PST

    Performance testing does not stop with measuring response times. Though that is of great importance, there are other kinds of tests we should bring in, known as destructive testing.
    This is a distinct kind of performance testing where we analyze how a software application behaves when one or more of its back-end applications either slow down or become unresponsive. The expectation is that the front-end application should handle the back-end slowdown: it should not become unresponsive itself during that state. Instead it should fail the requests fast, rather than letting them queue at the server level and force a shutdown or restart of the application server. Present-day online services have complex architectures that talk to any number of services to fetch data, so the dependency on those other applications becomes vital, and we cannot assume there will never be even a moment of unavailability of the dependent systems. In the worst case of unavailability, the front end should still be able to cope.
    This is just a small note on destructive testing, and there is a lot more to discuss.
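
    To make that fail-fast expectation concrete, here is a tiny sketch of the front-end side of the idea; the service URL and the timeout value are placeholders, and real systems would usually layer a circuit breaker on top of a bare timeout:

    # Tiny sketch of "fail fast": bound every back-end call with a short timeout
    # so a slow dependency produces quick, degraded responses instead of a
    # growing queue of stuck requests. URL and timeout are placeholders.
    import urllib.error
    import urllib.request

    BACKEND_URL = "http://backend.internal.example/data"   # hypothetical dependency
    BACKEND_TIMEOUT_S = 0.5                                # fail fast, do not queue

    def fetch_data():
        try:
            with urllib.request.urlopen(BACKEND_URL, timeout=BACKEND_TIMEOUT_S) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError):
            # Back end is slow or down: return a degraded response right away
            # rather than holding an application-server thread.
            return None

    if __name__ == "__main__":
        data = fetch_data()
        print("served degraded page" if data is None else "served full page")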

  6. Shuaib Zahda, February 17, 2008 at 3:31:00 AM PST

    Hi, thanks for the article - and I watched your video. It is really helpful for a beginner like me.

    My question is: if I were to benchmark web servers running under virtualization, would I apply the same rules as if I were running one server on one machine? In a virtualized environment we might run, say, 5 VMs, each hosting one web server.

  7. Sargon Benjamin, April 5, 2008 at 9:46:00 PM PDT

    Nice post. It's nice to see performance guidelines that actually work in practice. I've been reading a guide called 'Performance Testing Guidance for Web Applications' (it's published by Microsoft - I know, I know :) ). It's quite verbose but really informative, and your post sums up some of their points and much more. The video from GTAC is great. Please keep the posts coming.

  8. PECATS - Performance Engineering CATS, August 12, 2008 at 12:50:00 PM PDT

    This is an excellent article exploring the nuances of performance testing.

    We would just like to mention that if performance evaluation and testing is carried on as a continuous process rather than as a one-time activity, it not only helps improve the performance of systems much more effectively, it reduces time to market too.

    Cheers
    Deepak
    PECATS

  9. raj, August 14, 2008 at 8:06:00 PM PDT

    It is a useful post. A couple of questions about user patterns and load behaviour: how do meetings with the development team help in revising the load pattern and user behaviour? I agree that the development and architecture teams can give more input on performance issues and their resolution, but load-pattern information mostly sits with the end users and the business group. Am I wrong?
    Another topic that interests me is end-user experience for web applications - something like the rendering of web pages. I have personally had requirements to test the performance (rendering time) of web pages in different browsers, and I feel it should be part of performance testing. Can anyone share their experiences with this?

    Raj
    http://performancetestingfun.googlepages.com

  10. rlonn, January 25, 2009 at 6:29:00 AM PST

    Nice post. Many people regard load testing as something you rarely have to do - they think maybe once or twice a year is enough. It is nice to see people writing about it as a natural part of the development & testing cycle.

    We have just launched a new online load testing service - http://loadimpact.com - and we have a use-case article there that might interest some people: "Iterative performance tuning with automated load testing", a case study.

    Regards,

    /Ragnar, Load Impact

  11. leheria, March 22, 2009 at 10:05:00 PM PDT

    I like the point about not investigating the code. It is really tempting to look into the code, especially if you have been a developer in a prior life. A lot of the time there is also a tendency to guess the root cause instead of taking methodical steps. Performance engineering is the art and science of narrowing down problem areas by observing system behavior and taking accurate measurements. "One accurate measurement is worth a thousand expert opinions."
    - Adm. Grace Murray Hopper (December 9, 1906 – January 1, 1992)

    my blog is at www.esustain.com

  12. Unknown, June 25, 2009 at 4:13:00 AM PDT

    What I feel is that the most important part of performance testing is gathering the application usage / usage pattern / workload mix and translating that into the design of a test scenario. But that is also the most difficult part - gathering the application usage / usage pattern / workload mix. So do you have any process for gathering that information?

  13. Adam Brown, August 12, 2009 at 3:24:00 PM PDT

    josesum - for load test specifications, I find the best approach is to go as high as you can (Project Manager, IT Manager, IT Director, etc.) and ask what the business wants the application to do; basically, get back to requirements.
    For example, a car insurance company might expect to sell 1000 policies a day, and typically they'll do that business between 6pm and 11pm. From that you can work out that the application needs to process 200 policies an hour; that's the key figure - the most important action to simulate in your load tests. Other transactions such as policy maintenance would also be important to script: even if they are not heavily used, the load they induce may have a disproportionate effect on the application.

    It’s an art to weigh up the riskiest transactions for the business (the transactions that have a high performance requirement) and those that may have the highest usage.

    You’ll then need to use a spreadsheet to work out the user think times you’ll need to get your desired transaction rate (200/hour in this case) with the number of users you plan to simulate.
    You can then increase the number of users to increase the transaction request rate to simulate particularly busy periods.
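
    In place of the spreadsheet, here is a small sketch of that pacing calculation, reusing the 200-policies-an-hour figure above; the user count and average transaction time are example inputs, not numbers from a real project:

    # Small sketch of the pacing calculation: how long each simulated user should
    # wait between iterations to hit a target transaction rate. The user count
    # and average transaction duration below are example inputs.
    def think_time_seconds(target_per_hour, virtual_users, avg_transaction_s):
        per_user_per_hour = target_per_hour / virtual_users        # iterations each user owes
        iteration_interval_s = 3600.0 / per_user_per_hour          # one iteration every N seconds
        think_time_s = iteration_interval_s - avg_transaction_s    # the rest of the interval is think time
        if think_time_s < 0:
            raise ValueError("Not enough users to reach the target rate; add users.")
        return think_time_s

    # 200 policies/hour with 25 virtual users, each policy taking ~60s of real work:
    # each user must complete 8 per hour, i.e. one every 450s, so think time is ~390s.
    print(think_time_seconds(target_per_hour=200, virtual_users=25, avg_transaction_s=60))
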
    Hope that helps.
    Adam Brown
    http://www.quotium.com

  14. Unknown, June 8, 2010 at 10:19:00 AM PDT

    I am happy to see so many enthusiastic posts on performance testing. I have been in the performance engineering business for ~12 years. Note that I refer to it as performance engineering, since I believe testing is only a part of what we do within the overall process. I love what I do, since it has given me exposure not only to various infrastructures but also to different technologies. I believe that in order to provide true value to the end user and stakeholder, the application and infrastructure have to be thoroughly analyzed from an end-to-end perspective. After all, the outcome and value of performance engineering are only as good as the requirements collected.

  15. S'linc_[catalyst], September 15, 2010 at 5:46:00 AM PDT

    I believe testing represents a significant endeavor that still has to be rethought, in terms of framing minds and forging skill sets, in order to optimize and leverage it.
    Performance testing is definitely at the forefront of gauging an application as it pertains to revenue, customer satisfaction and overall adoption rate. But the challenges of testing efforts become increasingly obvious and daunting as applications grow in complexity and sophistication. Keeping in mind that, from a functional perspective, 100% coverage is practically unattainable while testing, we can nevertheless devise predictable boundaries, and from there provide for major flexibility and resource adaptability or reconfigurability, so as to optimize performance to its highest.

    Souma Badombena Wanta
    www.livelypulse.com

  16. Unknown, September 21, 2010 at 12:50:00 PM PDT

    This is a very nice post, though I am new to this field. I recently joined an organization as a performance engineer. I have some idea about these things, but after reading this article I have a question:

    Whatever performance testing is, if you want to put it in fewer words, could you just say that performance testing is using LoadRunner, or whatever other tool you happen to be using, and that's it? Is that true?

  17. Unknown, April 2, 2011 at 3:54:00 PM PDT

    Nice article which touches on most of the performance testing fundamentals. You are lucky, Goranka, to work in an environment where performance gets a high profile - in my experience this is not typically the case. Several posters have mentioned gathering usage stats, and this is a perennial problem: so often the business really does not know how it uses its systems. I find the best approach to this is to extract data from log files (often Apache web logs), reverse-engineer the workload, and then get the business to approve and sign off on it.
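
    As a very rough illustration of that step, the sketch below (mine, not from the post) tallies requests per hour and per URL from an Apache access log in the common/combined format; the file name is a placeholder, and real logs usually need extra filtering for static assets, health checks and bots:

    # Rough sketch: derive a workload profile (requests per hour and per URL)
    # from an Apache access log in common/combined format. The file name is a
    # placeholder; real logs usually need more filtering.
    import re
    from collections import Counter

    # e.g. 10.0.0.1 - - [22/Mar/2009:14:05:01 -0700] "GET /quote HTTP/1.1" 200 512
    LINE = re.compile(r'\[(\d+/\w+/\d+):(\d+):\d+:\d+ [^\]]+\] "(\w+) (\S+)')

    by_hour = Counter()
    by_url = Counter()
    with open("access.log") as log:             # hypothetical Apache access log
        for line in log:
            m = LINE.search(line)
            if not m:
                continue
            day, hour, method, path = m.groups()
            path = path.split("?")[0]           # collapse query strings into one URL
            by_hour[(day, hour)] += 1
            by_url[(method, path)] += 1

    print("Busiest hours:")
    for (day, hour), count in by_hour.most_common(5):
        print("  %s %s:00  %d requests" % (day, hour, count))
    print("Top transactions:")
    for (method, path), count in by_url.most_common(10):
        print("  %-4s %-40s %d" % (method, path, count))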

    I have been doing performance testing for >10 years & still enjoy it. I have done the highly-controlled baselining & profiling you mention. It is a very broad field of testing, often poorly understood & can be demanding, but it provides exposure to many different technologies & tools. And getting a result - given all the difficulties involved in putting complex tests together - is very rewarding and satisfying.

    cheers
    Steve

  18. Anonymous, May 16, 2011 at 11:19:00 PM PDT

    Great post. I am new to performance testing and am looking forward to more posts like these...

  19. Chandra Mohan Dasti, July 7, 2011 at 1:44:00 PM PDT

    Hi,
    I am testing a client-server application (.exe) developed in C#; it deals with loading a number of images. Is it possible to do a performance test on an .exe?
    I have the VSTS performance tool available for testing.

  20. raj, February 12, 2012 at 4:16:00 PM PST

    Great inputs!!!

    I wanted to point out that service/server monitoring is one area that needs a bit more attention in performance testing. Maybe one reason it is not given much importance is that it is considered the system admin's or infrastructure engineer's job. I have personally seen that most application teams just look at simulating so many users to load the system and expect client-side statistics like response time. Most of them - application teams or performance test engineers - don't seem to know the importance of monitoring and the value we can get from it, and they only want to look at server statistics after the fact, once a performance problem has been identified. By being proactive with a monitoring strategy, a lot of after-the-fact testing time can be saved.

    I would try to simplify server/service performance monitoring as

    -Hardware performance statistics and its impact - CPU, Memory, IO.

    -Server software performance statistics - Web server HTTP connections, app server thread pool, connection pool, JVM heap, etc...

    -Application code performance - Method CPU time, Memory allocated by objects, ...

    As was rightly pointed out in the blog post, hardware statistics need to be monitored at a bare minimum.
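
    To illustrate the first of those levels, here is a bare-bones sketch of proactive hardware-level sampling during a test run; it assumes the third-party psutil package, and the interval and CSV output are just examples:

    # Bare-bones sketch of level one (hardware statistics): sample CPU, memory,
    # and disk I/O at a fixed interval for the length of the test run, so the
    # data already exists before a problem is reported. Assumes psutil.
    import time
    import psutil

    INTERVAL_S = 10                               # sampling interval (example)

    print("time,cpu_percent,mem_percent,disk_read_mb,disk_write_mb")
    while True:                                   # run alongside the test; stop it when the run ends
        cpu = psutil.cpu_percent(interval=None)
        mem = psutil.virtual_memory().percent
        io = psutil.disk_io_counters()
        print("%d,%.1f,%.1f,%.1f,%.1f" % (
            time.time(), cpu, mem,
            io.read_bytes / 1e6, io.write_bytes / 1e6))
        time.sleep(INTERVAL_S)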

    Hope it helps..

    thanks
    Raj
    PerformanceTestingFun

  21. easynet_search_marketing, March 29, 2012 at 3:43:00 AM PDT

    I'm also going for JMeter but, as mentioned in the post, it takes some time to learn and operate.

    So, searching for a possible 'wrapper' or online service that can help me save time, I came across this Google Groups Post http://goo.gl/uLgva

    I am now testing BlazeMeter.com (so far, the test results are quite impressive, I am now considering a larger test). Any inputs on or experience with this?

  22. lil rick, April 13, 2012 at 8:27:00 AM PDT

    I am trying to write scripts in JavaScript, but I need to know about converting them to C so that the scripts will run in VuGen 11. Can anyone throw some light on this subject?

  23. Karam, June 15, 2012 at 2:54:00 PM PDT

    Excellent work!

    I just finished a script emulating 100 virtual users and got the results.
    But I don't have any benchmark to compare my results to, nor do I have a similar existing web application to base performance metrics on.
    What should be done in such a case? How do I arrive at acceptable performance metrics?

    Karam
    QA
    TMS

  24. Anonymous, September 12, 2021 at 9:41:00 AM PDT

    What is the organization structure around your performance testing? Is the work embedded in Agile teams? If so, what about shared infrastructure?
