In addition, we recognized that there was a lot of interest in knowing more about the Java HotSpot VM internals along with some structure around how to go about tuning it.
And, mostly, we wanted to offer what we have learned over the years doing Java performance work. Our hope is that readers will find at least one thing in the book that helps them. InfoQ: In general the methodology you describe, at least in terms of the language you use, seemed to me to be a closer fit for an RUP approach to software development than, say, one of the agile approaches. Is that a valid comment, or do you think the approach you describe should adapt well to any software development approach?
In our opinion, we think the methodology can be adapted to agile approaches and in general most software development approaches. In particular, the expected performance of the application is something that should be well understood as early as possible. If there are concerns about whether that performance can be met early in the software development process, then you have the luxury to mitigate that risk by conducting some experiments to identify whether those risks are "real" along with whether you may need to make some alternative decisions, which may require a major shift in the choice of software architecture, design or implementation.
The key here is the ability to "catch" the performance issue as early as possible regardless of the software development methodology. InfoQ: What would you say are the key things that need to be in place before you start performance tuning?
The first and foremost thing to have in place is a clear understanding of exactly what it is you are trying to accomplish. Without that, you may still learn some things, but you are at risk of not achieving what you really set out to do.
Having a clear understanding of what you want to accomplish will help identify your hardware, software and other environmental needs. How much variability is acceptable depends on how much of an improvement you are looking for. If you spend some time investigating statistical methods and their equations, you will understand the reasoning behind the "variability" discussion here.
This topic is also touched upon in the book sections from Chapter 8, "Design of Experiments and Use of Statistical Methods". Another important thing, if your testing environment deviates from production, is to understand the differences and, more importantly, whether those differences will impact or impede what you are trying to accomplish in your performance tuning.
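To make the "variability" point concrete, here is a minimal sketch (ours, not an example from the book) of summarizing repeated benchmark runs with a mean and sample standard deviation; the run times below are hypothetical. If the improvement you hope to detect is smaller than the observed spread, you need more runs or a quieter environment before you can draw a conclusion.

```java
import java.util.Arrays;

// Illustrative sketch: judging run-to-run variability of a benchmark by
// computing the mean and sample standard deviation of repeated measurements.
public class BenchmarkStats {
    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(Double.NaN);
    }

    static double sampleStdDev(double[] xs) {
        double m = mean(xs);
        double sumSq = Arrays.stream(xs).map(x -> (x - m) * (x - m)).sum();
        // Divide by (n - 1): sample, not population, standard deviation.
        return Math.sqrt(sumSq / (xs.length - 1));
    }

    public static void main(String[] args) {
        // Hypothetical response times (ms) from five identical runs.
        double[] runs = {102.0, 98.5, 101.2, 99.8, 100.5};
        System.out.printf("mean=%.2f ms, stddev=%.2f ms%n",
                mean(runs), sampleStdDev(runs));
    }
}
```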
It depends on what you want to learn from the performance test and it also depends on how the environment deviates from the production environment. Ideally, to ensure the highest probability of success with your performance goals, it is important to ensure that the test machines use the same CPU architecture as the production machines.
Also, it is important to account for differences in architecture between the different CPU families from the same manufacturer, e.g. Intel Xeon vs. Intel Itanium. More on chip differences below. However, keep in mind that you need to be able to convince yourself, and your stakeholders, that the differences between your testing environment and the production environment do not introduce performance differences.
That can be a difficult task. There are two points we should also make here.
Then there is the task of identifying a test environment that can satisfy the "design of experiments" without introducing bias or variability that jeopardizes what you want to learn.
The reason this assumption and approach is flawed is the difference in CPU architecture. The motivation for including these sections is so that folks who are evaluating systems understand that this traditional approach has flaws, and why it is flawed. In addition, our hope is that readers will also question whether any other differences between their testing environment and production may introduce some unexpected or unforeseen flaws.
Another topic that is applicable is scalability. Testing on different hardware will most often not show scalability issues if the test hardware has fewer virtual processors than the production hardware. This can be illustrated with some of the example programs used in Chapter 6, "Java Application Profiling Tips and Tricks".
If you happen to run some of those example programs on hardware with a small number of virtual processors, you may not observe the same performance issues as you would on a test system with a large number of virtual processors. This is also pointed out in Appendix B, where the example source code listings for those programs exist. So, if what you want to learn from performance testing is related to how well a Java application scales, having a testing environment that replicates the production environment as closely as possible is important.
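The kind of scalability issue described above can be sketched as follows. This is our own illustrative example, not one of the book's Chapter 6 programs: a coarsely locked shared counter. On a machine with many virtual processors, the threads pile up contending for the single lock and throughput flattens; on a small test box the same code can appear to scale just fine.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: a shared counter guarded by one lock. Lock contention
// only becomes visible when enough hardware threads compete for it.
public class ContendedCounter {
    private long count;

    synchronized void increment() { count++; }
    synchronized long get() { return count; }

    static long run(int threads, long incrementsPerThread) {
        ContendedCounter c = new ContendedCounter();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (long i = 0; i < incrementsPerThread; i++) c.increment();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.get();
    }

    public static void main(String[] args) {
        int threads = Runtime.getRuntime().availableProcessors();
        long start = System.nanoTime();
        long total = run(threads, 1_000_000);
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(threads + " threads, count=" + total + ", " + ms + " ms");
    }
}
```

Comparing the elapsed time at 1, 2, 4, ... threads on large and small machines is one way to see the effect; alternatives such as `java.util.concurrent.atomic.LongAdder` exist precisely to reduce this kind of contention.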
InfoQ: Reading your book, you seem to advocate a somewhat different approach, that is more use case centric. So you suggest taking a look at what use case is being executed that includes this particular method, and consider if there are alternative approaches or algorithms that could be used to implement that particular use case, that might perform better.
Is that a fair summary, and if so why do you favour this approach? To be a little more specific, we advocate first identifying whether you need to profile the application. For example, a Java developer tends to go immediately to the code, and some will profile it right away. Systems administrators will look at operating system data, attempt to tune the operating system, or tell the Java developers that their application is behaving badly and report what is being observed at the operating system level. A person with JVM knowledge will tend to want to tune the JVM first.
What we advocate is using monitoring tools at the operating system, JVM and application level. Then analyze that data to formulate a hypothesis as to the next step. If we assume that we have sufficient evidence to suggest the next step is to do application profiling, then we advocate the idea of first stepping back at the call path level, which oftentimes maps to a use case, and asking yourself what it is that the application is really trying to do.
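At the JVM level, some of that monitoring evidence can be gathered from within the application itself. A minimal sketch using the standard `java.lang.management` API (our example; the book also covers external tools): heap usage plus per-collector GC counts and times are often enough to form a first hypothesis about whether garbage collection deserves a closer look before profiling code.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Minimal JVM-level monitoring sketch using the standard management beans.
public class JvmMonitor {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long heapUsed = memory.getHeapMemoryUsage().getUsed();
        System.out.println("heap used: " + heapUsed + " bytes");

        // One bean per garbage collector (names vary by JVM and GC choice).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": count=" + gc.getCollectionCount()
                    + ", time=" + gc.getCollectionTime() + " ms");
        }
    }
}
```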
Most modern profilers offer a "hottest call path" view. You will almost always be able to realize greater improvement by changing to, for example, a better algorithm in the "hottest call path" than you will by improving the performance of the "hottest method". If, however, you only need a small improvement to meet your performance goals, then looking at the hottest method and improving it will likely offer a quicker means to your end goal than the "hottest call path" approach.
We think most folks would agree that stepping back and looking at alternative algorithms or data structures offers the potential for a bigger performance improvement than making changes to the implementation of a method or several methods.
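A small example of what "stepping back" can look like in practice (ours, not taken from the book): a profiler might report `ArrayList.contains` as the hottest method, but the bigger win comes from changing the data structure used in the call path, turning each O(n) membership test into an O(1) one.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: same use case, two algorithms.
public class MembershipExample {
    // O(n) per lookup: a profiler would show contains() as the hot method.
    static int countHitsList(List<Integer> haystack, List<Integer> needles) {
        int hits = 0;
        for (int n : needles) if (haystack.contains(n)) hits++;
        return hits;
    }

    // Step back in the call path: build a HashSet once, then O(1) lookups.
    static int countHitsSet(List<Integer> haystack, List<Integer> needles) {
        Set<Integer> set = new HashSet<>(haystack);
        int hits = 0;
        for (int n : needles) if (set.contains(n)) hits++;
        return hits;
    }

    public static void main(String[] args) {
        List<Integer> haystack = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) haystack.add(i);
        List<Integer> needles = List.of(5, 50_000, 200_000);
        System.out.println(countHitsList(haystack, needles)); // 2
        System.out.println(countHitsSet(haystack, needles));  // 2
    }
}
```

Micro-optimizing `countHitsList` might shave a constant factor; switching to `countHitsSet` changes the complexity class.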
InfoQ: You also advocate thinking about performance during the requirements gathering phase. Again this is much earlier than is common in my experience. Why do you think it should be done there? There are several reasons. It follows from the well-understood idiom, "The earlier a bug is found in the software development lifecycle, the less costly it is to fix it".
And, we consider a performance issue a bug. It can also potentially be used or incorporated as part of an acceptance test plan with the users of the application. InfoQ: You suggest integrating performance tuning into a continuous integration cycle, in addition to the unit and other functional testing that is typically automated.
Given that, would you advocate hiring performance specialists, or does your recommendation that performance tuning be addressed early and as part of the build cycle, push the task towards developers? The reason for recommending performance testing to also be included as part of unit and other functional testing is to catch performance issues as soon as they are introduced. The need for hiring performance specialists should come out of not being able to find a performance issue, or perhaps with advising on how to go about making performance testing part of unit and functional testing.
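One way the above could look in a build is a performance "smoke test" that runs alongside unit tests. This is our own minimal sketch, not a recommendation from the book: the measured operation, warmup count and budget are placeholders, and real builds would typically track trends over time rather than a single hard-coded threshold.

```java
// Minimal performance smoke test sketch for a continuous integration build.
public class PerfSmokeTest {
    static long timeMillis(Runnable op, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) op.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        Runnable op = () -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1_000; i++) sb.append(i);
            sb.toString();
        };
        timeMillis(op, 100);       // warmup pass, let the JIT settle
        long elapsed = timeMillis(op, 1_000);
        long budgetMs = 5_000;     // deliberately generous placeholder budget
        if (elapsed > budgetMs) {
            throw new AssertionError("perf regression: " + elapsed
                    + " ms > " + budgetMs + " ms");
        }
        System.out.println("ok: " + elapsed + " ms");
    }
}
```

The generous budget is intentional: a tight threshold on shared CI hardware produces flaky failures, which is exactly the variability concern discussed earlier.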
Again, the motivation here is to minimize the amount of time and effort in tracking down when a performance issue is introduced. InfoQ: Are there other books on the subject that you would recommend as a companion volume to yours? Again, although it is Solaris-specific, generally speaking, if you understand the most important pieces of a modern operating system, those concepts apply to other modern operating systems too.
Binu John is a senior performance engineer at Ning, Inc.
Book Review and Interview: Java Performance, by Charlie Hunt and Binu John